TAG Meeting 2016-10-12 (at Ed-Fi Summit)

Participants

  • Mark Walls
  • Richard Charlesworth
  • Mark Reichert 
  • Britto Augustine (AZ)
  • Matt Warden 
  • Josh Klein 
  • John Raub (WI)
  • Dan Retzlaff 
  • Don Dailey 
  • Nancy Wilson (Collaboration Synergy)
  • Geoff McElhanon 
  • Neil (Skyward - via conference call)
  • Chris Moffatt
  • Eric Jansson

Notes

I. Ecosystem maturity (9:00 to 9:45)

A PowerPoint overview of the “Tools for building a quality ecosystem” summit session was presented.

Visioning Exercise with 5 questions:
1. What do product vendors, integrators, SEAs, LEAs, ESCs want Ed-Fi to be for them?
2. Where are the rough edges?
3. What brings people to the Ed-Fi Community?
4. Why don’t people commit to Ed-Fi?
5. 5 years in the future – what is Ed-Fi?

Over-arching themes: Simplicity and Base ROI

For full notes see the attached spreadsheet.

II. Performance and scalability (scheduled 10:00 to 10:45; actual 10:30 to 11:30)

Pain points from Michigan:

    1. Needing to benchmark all the different processes involved in the Ed-Fi solution.
    2. Inbound XML processing – started with 1.2, then moved to the next process when 2.0 arrived and found some districts were taking 6 hours to load a file. Because districts were sending a whole year of data every day, the load was switched to comparing against a hash file and processing only the changes, cutting the time down to 11 minutes (see the sketch after this list). That approach needs to be shared with other Ed-Fi users.
    3. API processing in general.
    4. Build process – what should that look like? Currently it runs from 2 to 6 hours; benchmarking will help determine whether the bottleneck is on our end or within Ed-Fi.
    5. Usage – what does it take to run; do we need load balancing, etc.?
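
As a rough illustration of the change-detection approach in item 2, the sketch below hashes each inbound extract and forwards only files whose contents changed since the last run. The file names, cache location, and file-level granularity are illustrative assumptions; the actual implementation may diff at a finer grain.

```python
"""Minimal sketch: skip inbound XML extracts that are unchanged since the last run."""
import hashlib
import json
from pathlib import Path

HASH_CACHE = Path("previous_hashes.json")  # hypothetical cache of the prior run's hashes


def file_hash(path: Path) -> str:
    """Return a SHA-256 digest of the file contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def changed_files(inbound_dir: Path) -> list[Path]:
    """Compare current hashes with the cached ones and return only changed files."""
    previous = json.loads(HASH_CACHE.read_text()) if HASH_CACHE.exists() else {}
    current = {str(p): file_hash(p) for p in sorted(inbound_dir.glob("*.xml"))}
    HASH_CACHE.write_text(json.dumps(current, indent=2))
    return [Path(p) for p, h in current.items() if previous.get(p) != h]


if __name__ == "__main__":
    for path in changed_files(Path("inbound")):  # "inbound" is a placeholder directory
        print(f"process {path}")  # only changed extracts go on to XML processing
```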

A couple of rounds of benchmarking have been done and the results published. A baseline is needed, and everyone should be encouraged to use the tools and document their build and performance times, with the results published back for the community to use.

The work WI has done is fantastic (11 rounds of trial and error, with a great narrative walking through the iterations).

A general area is needed to share those stories and each other’s contacts.

AZ: everything is on Azure (not SQL Azure). In production they have six websites, with the potential to auto-scale out to 9 based on load, and 2 SQL Servers running simultaneously. They started with a medium-size machine on Azure and are now on medium-high (112 GB of memory plus CPUs); they are looking into using premium disks. Monitoring is hooked up to show all response times – 95% are great response times (250 milliseconds). Deltas are periodically calculated mid-stream – need to work with vendors on that.
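
For reference, a minimal sketch of the kind of response-time check described above: compute the 95th-percentile latency from logged API call durations and compare it to the 250 ms figure. The log file name and its one-number-per-line format are assumptions for illustration.

```python
"""Minimal sketch: 95th-percentile latency from a log of per-request durations (ms)."""
import statistics

# Read per-request durations in milliseconds (assumed format: one number per line).
with open("response_times.log") as f:
    durations_ms = [float(line) for line in f if line.strip()]

# statistics.quantiles with n=100 yields the 1st..99th percentile cut points.
p95 = statistics.quantiles(durations_ms, n=100)[94]
print(f"p95 = {p95:.0f} ms ({'within' if p95 <= 250 else 'over'} the 250 ms target)")
```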

It is helpful to know the mix of transactions. Isn’t it time to consider platform as a service? On the case for the cloud: there are external factors keeping some SEAs and LEAs from using the cloud. When can it be cloud first and traditional second? Will there be more to lose for those who can’t move to the cloud?

WI has real-time reporting via Azure.

With Azure you pay for web servers 24/7, and there is an outbound transport charge except for web roles. The design vendors use to interoperate could be costly depending on how it is structured; traffic through I2 could be 100% free.

Some study or use cases are needed regarding scaling to the cloud, and the size and scope of where we are trying to get needs to be considered. There are a lot of variables, such as the number of GETs in the Michigan ODS vs. the number in AZ. Composite APIs put a totally different performance profile into the discussion. There has been a beginning discussion regarding some kind of subscription or change notification.

There is also an eventing model (send all the deltas since a particular point in time); the community hasn’t moved to a specific model yet (a rough sketch of the idea follows). The Exchange may not be the right place to document this. A template that can be filled in is needed – it can be detailed out, and members can provide data to build a database to be shared across members.
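
A minimal sketch of the delta idea, assuming a hypothetical /changes endpoint that returns resources modified since a given timestamp; neither the endpoint, its parameters, nor the base URL are part of any current Ed-Fi API.

```python
"""Minimal sketch: poll a hypothetical change feed for deltas since a point in time."""
from datetime import datetime, timedelta

import requests

API_BASE = "https://ods.example.org/api"  # placeholder ODS API base URL


def fetch_changes(since: datetime, token: str) -> list:
    """Ask a hypothetical /changes endpoint for resources modified after `since`."""
    response = requests.get(
        f"{API_BASE}/changes",                        # hypothetical delta endpoint
        params={"since": since.isoformat()},          # point in time to diff against
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# Example: pull everything that changed in the last 15 minutes.
events = fetch_changes(datetime.utcnow() - timedelta(minutes=15), token="<access token>")
```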

Not having performance benchmarks etc. may be preventing people from coming on board

Collect our configurations, transactions each day, response times, etc. – Arizona has something that can be shared

Another use is for those new to Ed-Fi and what they go through initially: start collecting what everybody’s deployment looks like as standard information (like an ODS usage profile), including demographics, so new people can see who is like them and what they are running – that would really set people off on a better foot. Standard information for new adopters would help agencies that don’t have the technical expertise and resources to go through what the current adopters have done. (An illustrative profile sketch follows.)
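
One possible shape for such a usage profile is sketched below; the field names are assumptions rather than an Alliance-defined schema, and the zeroed values are placeholders to be filled in by each implementer (the non-zero values echo figures mentioned in these notes, such as AZ’s six websites, two SQL Servers, and 250 ms 95th-percentile response time).

```python
"""Illustrative sketch: fields an ODS usage profile could collect from each implementer."""
usage_profile = {
    "organization": "Example SEA",                                   # placeholder
    "ods_api_version": "2.0",
    "hosting": {"platform": "Azure", "web_servers": 6, "sql_servers": 2},
    "scale": {"districts": 0, "students": 0},                        # placeholders
    "daily_traffic": {"posts": 0, "gets": 0},                        # placeholders
    "response_times_ms": {"p95": 250},
    "load_test_tools": ["JMeter"],
}
```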

Who is the audience for benchmarks? The lag time from existing implementations needs to be considered – results may be too stale for new implementers to use.

We need to have benchmarking when something is released: a standard set of benchmarks as a starting point with each release (or before it), so there is information to analyze regarding additional hardware, etc.

The Alliance could ask people to share this information and see what the response is – it sounds like people would find this information really valuable. Let’s do it once, see what happens, and figure out whether to keep it going – this will need to be revisited.

WI is using Ready API and Arizona is using JMeter; both pre-generate a data set and post it (see the sketch below).
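
A minimal sketch of that pre-generate-and-post pattern, independent of any particular tool; the endpoint URL, payload shape, and token are placeholders rather than a specific Ed-Fi ODS API contract.

```python
"""Minimal sketch: pre-generate a synthetic data set, POST it, and time each request."""
import time

import requests

API_URL = "https://ods.example.org/api/v2.0/students"  # placeholder endpoint


def generate_students(n: int) -> list:
    """Pre-generate a synthetic data set before the timed run starts."""
    return [
        {"studentUniqueId": f"LOADTEST{i:06d}", "firstName": "Load", "lastSurname": f"Test{i}"}
        for i in range(n)
    ]


def post_all(records: list, token: str) -> list:
    """POST each record and collect per-request durations in milliseconds."""
    headers = {"Authorization": f"Bearer {token}"}
    durations = []
    for record in records:
        start = time.perf_counter()
        requests.post(API_URL, json=record, headers=headers, timeout=30)
        durations.append((time.perf_counter() - start) * 1000)
    return durations


# Example: generate 1,000 students up front, then measure the POSTs.
timings = post_all(generate_students(1000), token="<access token>")
```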

It would be good to have a tool that the community can broadly use; this may be crowd-sourced, or best practices could be gathered around how to construct and beta test it. This will be done as feedback is gathered regarding utility and value; it can be integrated into the onboarding process too.

Scalability remains an issue – it may be connected to how an implementation is done; solutions need to be found together.

There is ecosystem anxiety related to API performance time. The data standards and API need to be examined regarding what a reasonable amount of time is, and whether there are design decisions to revisit – especially around aggregation of assessment data and transfer into grade books. Most still don’t want to use the API – they want a flat-file dump; their processes are in place and there is reluctance to change them, so performance remains a big issue. Architect the scale of good, better, best. It should be more advanced than CSV. Develop use cases with recommendations for various approaches.

Real time versus less than 15 minutes depends on the situation, but hours and days is another discussion. A transactional API may be worth the investment. XML has found its home as the alternative to the flat-file dump, but the architectural considerations about how to do this in an API world need discussion. It would be nice to have one format – XML vs. JSON? Should we look at other industries? MessagePack is another format, and there is one from Google – these may be harder for the general developer to work with in code. Consider not optimizing for constraints that are not likely to be constraints in the future, such as bandwidth. (A quick format comparison follows.)
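
To make the format question concrete, the snippet below serializes the same made-up record as JSON and as MessagePack (via the third-party msgpack package) and compares the sizes; it is an illustration of the trade-off, not a recommendation.

```python
"""Quick illustration: the same record serialized as JSON vs. MessagePack."""
import json

import msgpack  # third-party package: pip install msgpack

record = {"studentUniqueId": "604825", "firstName": "Ana", "birthDate": "2008-09-14"}

as_json = json.dumps(record).encode("utf-8")
as_msgpack = msgpack.packb(record)

print(len(as_json), "bytes as JSON")            # human-readable, universally supported
print(len(as_msgpack), "bytes as MessagePack")  # smaller, but harder to inspect and debug
assert msgpack.unpackb(as_msgpack, raw=False) == record  # round-trips losslessly
```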

How to best do periodic off-cycle batch work? This is the lighter path to broader adoption. Becomes a bigger discussion with expansion to assessment.

Going cloud-centric enables scaling up, such as for assessment – Arizona started with six servers and is down to two, depending on the situation. The value of what is needed has to be weighed against scalability with assessment – for example, not doing item-level analysis means millions of data points are not needed. It would be valuable to have some hard evidence on processing time with different use cases. We will need to see where the assessment work group goes.

Also, it would be good to know what people have learned with Azure.

III. TAG Processes (11:30 to 12:00)

A survey was done based on a community request to formalize reactions and advice.

Slides with the survey results were shared.

Thoughts on a plan for selecting and renewing members for TAG:

1. Get the word out to all licensees about who the TAG members are, and communicate that they can contact TAG members with input
2. Continue to recruit from members
3. Organizational memberships with a minimal level of commitment to Ed-Fi standards and usage
4. Suggestion of having an application process
5. A formal charter is needed – a document that defines the process, sets out why people are on the group, and may include term limits, etc. Use it as a mechanism to describe participation and set expectations and limitations.
6. There is currently good representation of SEAs, LEAs, vendors (product and integration), and at-large members – members may want to act as representatives for others in their category and hold meetings specifically to get input from their group, maybe via a quarterly conference call.
7. Norming and trust within the group is important
8. A way to funnel up ideas is needed, and this group has provided that.
9. Each stakeholder group needs a minimum of 2 representatives.
10. Consider Education/Regional Service Centers and higher education.
11. Need special interest groups and bring people in as needed with 1 or 2 TAG members included.
12. Keep in mind that the purpose is technical, but there are frequent strategic conversations.