Meeting 2 - 2021-04-20
- Eric Jansson
Agenda
- Review the suggested canonical use cases that were extrapolated from Meeting #1 – these can be found in the meeting notes: https://edfi.atlassian.net/wiki/display/ESIG/Meeting+1+-+2021-03-30
- Gather feedback and direction on the overlap of LMS and Gradebook data and processes. This includes answering questions such as:
- Should the gradebook data in the LMS data domain and current Gradebook be combined? Or do they represent a natural sequence of data from a more raw "assignment" to a formal grade?
- Are there changes needed to Gradebook to make it function well in API-based exchanges?
- Is new guidance for usage of Gradebook needed?
- Ideas for visualizations (see EDFI-899)
Notes
Much of the discussion centered on which LMS data elements to prioritize. It was generally agreed that assignment data is more normalized, clearer, and more stable as a concept across platforms, and is therefore a better starting point.
- When trying to compare systems to each other, you will only get internal consistency. For example, on some systems a login via the mobile app may count differently than a normal login. So assume internal consistency but not external consistency. This is another reason to start with assignment data: it is more externally consistent.
- Is configurability an answer to handling activity data? What is defined by local policy – can that be put into a layer of configurability?
- Lessons learned from Caliper and xAPI: both provide a framework for activity data, and both have struggled with defining what counts as activity.
- Activity is a leading indicator – you have this data before assignment data, but it varies a LOT. It also varies considerably by content and subject area. So it needs to be scoped carefully – the complexity is much higher.
Q: Why isn't the LMS Toolkit pushing vendor specs? Long term, isn't this the vendors' problem? Yes, but Ed-Fi is taking a different approach for now; it could evolve longer term into a specifications push. Why act differently?
- The LMS market is more centralized
- Major players already have significant and sophisticated investments in data out
- Drive specifications by usage and not "by committee" - let's get usage data going first
Q: What SIS data should this be combined with?
- See the Idaho RFP, shared as an example: https://drive.google.com/file/d/1NripEG8tLHAezhgytbraPdHWLPfT64Xc/view
- Start with demographics and evolve from there.
- Another example, drawn from continuous improvement in health care: sharing engagement data across teachers and courses might be important. Is the student struggling in just one class or in multiple classes? Is it just math? Looking at teams of teachers and seeing how they work together.
Q: Grade Book vs LMS - what's the overlap?
- These are a continuum, not the same thing – data moves from the LMS to the gradebook in a process of local record keeping. We should keep both domains for now.
Q: What is the primary audience?
- Discussion was inconclusive: teachers are obviously the most important audience, but they are hard to get to adopt new tools. Probably start there anyway.
- Digital data makes interventions beyond the individual teacher more possible. Focusing on the individual teacher meets high resistance, as teachers are already burdened.
Other
- SEAs are working on rostering and assessment data enablement for LEAs
- Performance and scalability are key factors too – maybe a reason to reconvene the SIG