How might we develop best practices for the creation and adaptation of content for OERs using tools that are viable, accessible and actively support the Open community and its values in the long term?
Confused? New to GitHub? Visit the GitHub help page on our site for more information!
At a glance
- Submission Name: How might we develop best practices for the creation and adaptation of content for OERs using tools that are viable, accessible and actively support the Open community and its values in the long term?
- Contact Lead: [email protected]
- Region: #NorthernAmerica
- Issue Area: #OpenEducation
- Issue Type: #Challenge
Description
One major problem faced by faculty and higher education instructors in the creation of Open textbooks and other forms of OER is the logistics of creating diagrams and schematics. The daunting task of preparing illustrations and other supporting media can act as a barrier and dissuade otherwise interested parties from creating educational content that is effective, affordable and tailored to the instructor's specific needs.
I approach this problem as a digital media co-op librarian at a mid-size Canadian university who wants to support educators in the pursuit of Open Education in ways that are collaborative and involve the efforts of students.
As librarians, we want to recommend tools that are usable, viable and teachable. As Open advocates, we want to recommend tools that advance the mission and values of the Open movement.
What are we working on during the do-a-thon? What kinds of support do we need?
The goal of this do-a-thon is to brainstorm evaluation criteria for media-creation tools that foreground accessibility, usability and the ethos of Open; the most appropriate tools will be included in a set of discipline-specific toolkits for the creation of OER.
How can others contribute?
Stage 1 (In Progress): A Google Doc will be updated in this thread with the evaluation criteria drafted during the session. Feel free to add your comments to the document.
Possible thread responses in Stage 1:
- Examples of evaluation schema
- Criteria to consider
- Questions/concerns
This post is part of the OpenCon Do-A-Thon. Not sure what's going on? Head here.
Some existing evaluation schema to consider:
Another side of "open educational resources" that isn't always in the conversation is how to avoid creating resources owned/maintained by only a single party, which can be prone to "rotting" or atrophying after funding dries up. An alternative is building "net-centric resources" that are built AND maintained by a larger network. This involves employing thoughtful tactics from the early planning phases -- it is not just an afterthought of dissemination, but an aspect of creation.
This conversation already has at least one community of practice that I'm aware of: https://www.fabriders.net/network-centric/
Also: Yaaaaay OpenCon! Cheering y'all on from afar.
h/t @dcwalk (for introducing me to this community)
@patcon
Thanks for the heads-up about Network-Centric Resources! People-powered sustainability should be a major factor when evaluating tools. Reading the FabRiders page also reminded me to incorporate these design principles into the evaluation resource and to ensure that it continues to benefit librarians and instructors in the long term. I have adjusted the agenda for the do-a-thon to start us off with a discussion about the rubric's place in the conversation, its scope, and how we might keep this resource collaborative, inclusive of different voices and widely maintained.
Summary of design principles, from the FabRiders page:
- To get users involved early and as equal partners.
- To be transparent during development and solicit review and feedback.
- To have clear inroads and encourage contribution.
- To appreciate those that do, and guide those that may have something to contribute but don’t know it.
- To articulate the value of participating.
- To encourage reuse and modification through outreach and licensing.
We're meeting at Table 4 in the Collaboratory, starting now!
Update: December 2, 2018
Whew, it's been a while.
Here's everything that's been happening with CreateOpen:
- By the end of the Do-a-thon, we had created a draft of a digital toolkit rubric for evaluating media creation tools (WIP). We're hoping to solicit feedback on the rubric in the near future by making it available online.
- We "met" on December 2 for the first time since OpenCon to discuss how things will look going forward. It's a busy time of year for everyone, so we'll be taking a short break over the end of the year and will resume again in January.
Some questions that came up:
- How do we ensure that the rubric is complete to the best of our ability before sending it out for feedback?
- How can we structure the feedback process so that participants volunteering their time aren't overwhelmed by the amount of content?
- How can we promote this project to involve instructors, librarians, students and other relevant stakeholders in the feedback process?
We now have an official email address! If you want to get involved with the project, please shoot us an email at [email protected]. Also, join in on the conversation with the CreateOpen hashtag on Twitter!
EDIT: Redacted. I'm a dolt. (I thought I was in the meta thread about do-a-thons.)
That's what I get for starting a reply on mobile :/
Sorry for the noise!
How do we ensure that the rubric is complete to the best of our ability before sending it out for feedback?
Left some inline comments related to this. (Sorry, I know it's still in pre-feedback phase!)
How can we structure the feedback process so that participants volunteering their time aren't overwhelmed by the amount of content?
The g0v community in Taiwan has the concept of "tofu tasks", which they use to talk about bite-sized pieces of work for crowd-sourcing. I wonder if a similar consideration could be applied here...? Like, what's the smallest unit that people could offer feedback on? And would it be possible to serve it somehow in a way where people could jump in and contribute a bit, but maybe not grapple with the full context of the rubric?