Tonight I had the opportunity to speak at #IgniteTO, Toronto’s local gathering under the Ignite umbrella. This was the second Ignite, following a highly successful and entertaining evening back in September.
For my talk I decided to take my FITC presentation from earlier this year (a 90-minute talk) and distill it down to its key points so I could summarize it in just five short minutes.
Needless to say, it was an interesting experience trying to take a much longer presentation and crunch it down to its bare essential components. I highly encourage people to try it sometime; it certainly makes you look hard at a deck and ask, “Do I really need to say that?”
All of the Ignite talks are recorded, so when the video is up I’ll be sure to post it here too.
As a freelance facilitator and information designer, I can help your organization discover, define and develop your story so you can share it more effectively. If your organization could benefit from better explaining what it is you do, contact me today.
Yesterday Toronto’s Mayor, David Miller, officially launched the city’s Open Data project (#opendatato) as part of the Toronto Innovation Showcase taking place November 2 & 3. Originally announced at the mesh conference earlier this year, it’s great to see this initiative finally come to fruition.
As part of this launch, Mark Kuznicki (a.k.a. @remarkk) was asked to come in and facilitate an event called the “Open Data Lab,” where interested developers and citizens could get informed about what the initial datasets contained and how to access them, and could provide feedback on what else they wanted to see. Mark asked me, along with several other volunteers, to join him in facilitating this session.
Inspire & Learn
After a series of presentations intended to inspire the participants and get them thinking about innovative uses and applications for this data, we moved to the members’ lounge (attached to the City Council chambers) and began the ‘learn’ portion of the event. In this section the participants were divided into six separate groups (one for each dataset that had been released).
Taking cues from the speed-dating format, a subject matter expert (SME) associated with each dataset spent 10 minutes with each table, detailing what the data represented and answering any questions the table might have. At the end of 10 minutes each SME moved to the next table and the process repeated itself. After a little more than an hour, everyone in the room had had the opportunity to get a little face time with each of the datasets and their SME, have their questions answered and, most importantly, really get the gears grinding on ideas for what was now possible.
From here we moved into the third and final stage: Ideate. The entire group was brought back together and people were invited to share their ideas and inspirations. These ideas were captured and then each was assigned to a table in the room. Participants could choose which conversation they wanted to participate in and for about 20 minutes some intense and interesting conversations took place as ideas were vetted and expanded on.
Finally, as the 20 minutes concluded, everyone was brought back together and a spokesperson from each group was asked to answer three questions:
What is the idea?
What datasets does it require?
What do you need most to make it happen?
All in all, the afternoon was an interesting process to watch unfold. There were certainly a lot of unknowns: Would anyone show up (and if so, who would they be)? Would they stay around for the interactive portion? Would they engage and ideate, or would it devolve into a conversation about everything that is “wrong” with the initiative?
Thankfully people did show up, and it was a great mix, from hard-core coders to very non-technical people who simply had an interest in more access to information. The group I was with during the learn portion had a lot of great ideas and questions and really put the city’s SMEs through the wringer (in a friendly, positive way), and for the most part the SMEs had the answers.
What I took away from the event: First off, the city staff who are responsible for publishing this data are all over it and seem to be behind the idea 100%. That said, they admit that there are parties within the city’s bureaucracy that would rather not put a lot of data out there, especially anything that allows people to analyze and form opinions on how certain departments or elected officials are performing. The apolitical nature of the content that was released was brought up several times.
The three big themes that emerged for me: people want more data, in real time, in a standard format. I spoke with some of the co-facilitators after the event, and many of them noticed similar trends in the conversations they were a part of.
I think what was presented yesterday was a great start, and everyone around the table agrees that there’s still a lot of work ahead. To make this work, the city is going to need to learn to live a bit outside its comfort zone (as I think it’s already starting to do), and I have no doubt that the Toronto tech community is going to help drag it out there.
I recorded the final output presentations in which each of the ideas was presented. They are embedded below:
Table 1: Application to Facilitate Citizen Fact-checking
Table 2: Enhanced 3-1-1 – “What’s available in my neighbourhood”
Table 3: GPS-aware “NextBus” for mobile Devices
Table 4: Childcare Space Splitting/Sharing Application
Table 5: Well documented API to Standardize Data Sets
Table 6: Application to Facilitate Dataset Translation