Open Data and App Competitions: the case of NYC BigApps 3.0


Perceptions of crowds and their role in society have changed over time. Crowds were first seen as irrational protest groups looking to create riots; later they were seen as rational protesters challenging those in power; now they are seen as providers of solutions to problems (Wexler 2011).

The private sector has actively used the crowd to its benefit in industries as diverse as security software (Franke & von Hippel 2003), computer games (Prügl & Schreier 2006; Jeppesen & Molin 2003), integrated circuits (von Hippel 1998), athletic shoes (Piller & Walcher 2006) and construction, among many others (von Hippel 2005).

Even though the use of crowds for problem solving can be traced back decades or even centuries (Quill 1963; Wexler 2011), it is with Web 2.0 that it has become a popular method for those seeking solutions to problems. The collaborative aspects of Web 2.0 provide the tools needed to reach a targeted audience and foster communities that can develop innovative solutions.
The success the private sector has had with crowds has shifted the perception of some governments, which now also seek to collaborate with their citizens on problem solving. This shift has led governments to open their data to the public and thereby behave as platforms (O’Reilly, 2010), fostering entrepreneurship, innovation, value creation and economic growth (Lakhani et al 2010; Noveck 2009; Fournier-Tombs 2011).

By opening their data, governments create datasets that are accessible in open formats, with licenses that allow their use in different ways (Davies 2010). This means that an open government dataset can be mashed with datasets from any other source in countless ways. To exploit these mashup possibilities, governments can create ideation competitions that promote the use of the open data (Nam 2012). This strategy proved lucrative for Washington, DC: the city obtained 47 applications in 30 days, with an estimated worth of US$2.6 million, by offering just US$25,000 as the prize for the best application (Lakhani et al 2010). The success of this contest inspired other cities and governments to do the same.

Idea competitions have proven to be a successful tool for problem solving (Howe 2006, 2008) and new product development (Piller & Walcher 2006) in the private sector. The public sector, on the other hand, has mainly used them as a promotional tool to incentivize the use of its open data. However, little is known about what happens to the applications developed for a competition once the competition has finished. The aim of this study is to explore the status of these applications after the competitions are completed and to discuss their sustainability.

This study continues by first describing the study method and then presenting the results of the findings. The fourth and final section provides the implications of our findings, a brief discussion and the concluding remarks.

Study method

To explore the status of applications that use open data after a competition has finished, this study centers on the case of NYC BigApps 3.0, an application competition sponsored by the city of New York between October 2011 and April 2012 in which software developers created new applications to help those who live in and visit the city, as well as the businesses that operate in it. To participate, an application had to use at least one of the datasets available in the NYC open data catalog.
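The NYC open data catalog is served through the Socrata Open Data API (SODA), which exposes each dataset as a JSON endpoint. The sketch below shows how such a dataset could be queried; the dataset identifier and the filter field are hypothetical placeholders, not taken from the study.

```python
# Minimal sketch of querying the NYC open data catalog via the
# Socrata Open Data API (SODA). The dataset id "abcd-1234" and the
# "borough" filter field are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

BASE = "https://data.cityofnewyork.us/resource"

def soda_url(dataset_id: str, limit: int = 10, **filters) -> str:
    """Build a SODA query URL for one dataset in the catalog."""
    params = {"$limit": str(limit), **filters}
    return f"{BASE}/{dataset_id}.json?{urllib.parse.urlencode(params)}"

def fetch(dataset_id: str, limit: int = 10):
    """Fetch rows as a list of dicts (requires network access)."""
    with urllib.request.urlopen(soda_url(dataset_id, limit)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(soda_url("abcd-1234", limit=5, borough="Manhattan"))
```

A contestant's application would typically combine the rows returned by `fetch` with other sources, which is the "mashing" discussed below.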

The method used in this study involves a two-step approach. First, a content analysis of the rules, objectives and prizes of the contest was conducted. Second, a thorough analysis of all the applications submitted to the contest was carried out. A list of all submitted applications was kept on the official webpage of the contest; the list includes a text description, a link and a video of the functionalities of each application.

Surveying the developers that participated in the competition could provide insight into motivations that may affect the sustainability of the applications. However, content analysis alone can already provide a thorough understanding of the factors affecting the sustainability of those applications (Fournier-Tombs, 2011).

NYC BigApps 3.0

The contest

NYC BigApps 3.0 was the third edition of the annual NYC BigApps contest. This edition provided US$50,000 in prizes plus the opportunity for the winning app to be presented at NY Tech Meetup. Additionally, all the winners were introduced into the BigApps Founder Network, a network that provides support in the form of mentorship and networking to successfully build a startup.

The contest was jointly created by New York City’s Department of Information Technology & Telecommunications and the Economic Development Corporation with the purpose of fostering the development of applications that would give the citizens of New York access to information and bring transparency to the city government. These applications would also enhance the interactions that different actors have with the city. Second, the competition sought to encourage innovation and value creation by citizens, startups and small organizations in order to stimulate economic growth.

The competition accepted submissions from October 11, 2011 to January 25, 2012, obtaining a total of 96 applications in this period. After the submission period ended, NYC BigApps 3.0 held a public voting period that ran from February 9 to March 8, 2012. The public voting consisted of users registered on the NYC BigApps 3.0 website voting for their favorite applications submitted to the contest. More than 9,000 users registered and were allowed to submit one vote per day for each application they considered a favorite. The purpose of this public voting was to select the winners of the first and second prizes in the “Popular Choice Award” category.

To select the winners of the other 11 categories, judges were appointed by the creators of NYC BigApps 3.0. A total of 15 judges, all holding senior positions in different technology-related organizations, judged each application based on the quality of the idea, its implementation and the potential impact it could have on New York City. Each category had a theme, and each application was assigned to the category it fit. A list of the categories of the contest can be seen in Table 1.

Each submitted application had to be original work by the submitter and was free to mash any data as long as it used at least one dataset from the open data portal of NYC, which allowed for a vast range of possible applications. The only constraint was that the application had to be free to the public for the duration of the competition plus at least one year after its completion; any other dataset used by the developers had to be compatible with this requirement. Additionally, each contestant had to submit a link to the application along with a video and a photo of the application working, as well as a text description of it. The contest was open to any type of software application, such as mobile, web or SMS.

Table 1: NYC BigApps 3.0 categories

Best Overall Application (Grand & Second Prize): based on quality, implementation and potential impact
Investor’s Choice Application: based on potential for commercialization
Best Mobility Application: make it easy to move around NYC
Best Green Application: encourage environmental sustainability
Best Education Application: include, encourage or bring awareness of educational aspects
Best Health Application: include, encourage or bring awareness of health aspects
Best NYC Mashup: utilize APIs from participating companies together with the Open Data of NYC
Best Student Award: applications submitted by students
City Talent Award: for employees of the local government of NYC
Large Organization Recognition Award: for companies with more than 50 employees
Popular Choice Award (Grand & Second Prize): most valid votes

All submitted applications also had the opportunity to be selected for the TechStars program, which was going to choose two applications from the contest pool. This partnership clearly signals the intention of creating applications that would last longer than the contest itself.

The applications

One year after the competition, we found that of the eleven winning applications only two were no longer working. However, only five had received any update since the competition ended. Of all submitted applications, only 35% had been updated at least once after the end of the contest, and 28% were no longer available.

The contest was open to any type of software application, but participants mostly submitted applications for web or mobile platforms; only one SMS-based application was submitted. Mobile applications were the most common, accounting for 56% of all submissions, followed by web applications with 38%. Interestingly, 5% of the participants submitted a combination of applications for web and mobile platforms.

Of all the mobile applications submitted, only 72% were still working one year after the contest submission deadline, and only 28% of the total had been updated since that deadline. In the case of web applications, 69% were still working and 47% showed signs of an update after the submission deadline had passed. Those who submitted applications for both web and mobile platforms all had their applications working, but only two of those applications showed any sign of updates. Table 2 provides further details on working and non-working applications submitted to the contest.

Table 2: working and non-working applications based on type of application (percentages derived from the figures reported in the text; absolute totals not recoverable)

Type            Non-working    Working/not updated    Working/updated
Mobile          28%            44%                    28%
Web             31%            22%                    47%
Mobile & Web    0%             60%                    40%

From the analysis of all the applications, we could group them by the data sources they used. The first group comprised the applications that exclusively used data from the NYC open data portal. This group consisted of 41 applications, of which only 19.5% were still working and showed signs of an update after the competition had finished (see Table 3).

The second group was formed by applications that mashed available data from different sources; some used commonly available APIs, such as those of Google Maps, Bing Maps, Twitter or Foursquare, along with data from the NYC open data portal. This group was formed by 35 applications, of which 40% were still working one year after the competition and had been updated at least once since then.

The third and last group was formed by applications that, like the second group, used different available data sources but also incorporated data generated by their users. This group was formed by 20 applications, of which 60% were still working one year after the contest and had been updated at least once since then.

Table 3: working and non-working applications based on data source (figures as reported in the text)

Group                                                           Applications   Working/updated
First group: exclusive NYC open data                            41             19.5%
Second group: multiple sources of data                          35             40%
Third group: multiple sources of data plus user-generated data  20             60%
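The grouping above amounts to coding each application by its data sources and tallying how many in each group were still working and updated. A toy sketch of that tally, assuming hypothetical coded records rather than the study's actual data:

```python
# Toy sketch of the tally behind Table 3: code each application by its
# data sources and compute the share still working and updated.
# The records below are illustrative, not the study's actual data.
from collections import Counter

def classify(app: dict) -> str:
    """Assign an application to one of the three data-source groups."""
    if app["user_generated"]:
        return "multiple sources + user data"
    if app["external_sources"]:
        return "multiple sources"
    return "NYC open data only"

def updated_share(apps):
    """Per group, fraction of apps both still working and updated."""
    totals, updated = Counter(), Counter()
    for app in apps:
        group = classify(app)
        totals[group] += 1
        if app["working"] and app["updated"]:
            updated[group] += 1
    return {g: updated[g] / totals[g] for g in totals}

apps = [  # hypothetical coded records
    {"external_sources": False, "user_generated": False, "working": True,  "updated": False},
    {"external_sources": False, "user_generated": False, "working": False, "updated": False},
    {"external_sources": True,  "user_generated": False, "working": True,  "updated": True},
    {"external_sources": True,  "user_generated": True,  "working": True,  "updated": True},
    {"external_sources": True,  "user_generated": True,  "working": True,  "updated": False},
]

print(updated_share(apps))
```

With the study's 96 coded applications in place of the toy records, this tally would reproduce the percentages in Table 3.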

Discussion and conclusion

NYC BigApps 3.0 clearly showed an interest in the development of applications that bring value to the citizens, businesses and tourists of New York City. The organizers created partnerships with different startup accelerators and provided networking and business development support along with cash prizes. However, only 35% of all applications showed signs of still being maintained.

The applications that exclusively used data from the NYC open data portal were the most likely to be left outdated, suggesting that their developers had moved on to other projects. Those that used multiple sources of data and engaged with new user-generated data had the highest survival rate one year after the competition had finished.

The applications that mashed data from multiple sources along with that of the open data portal can prove helpful for different communities. Those that additionally mashed user-generated data into their multiple data sources proved to be more sustainable and provided greater value to those communities.

Value can be created in many different ways. Mashing different datasets, or providing visualization tools for them, can reveal information in ways that can move markets (Lakhani et al 2010). In the case of BigApps 3.0, the applications that worked exclusively with data from the NYC open data catalog did not mash different datasets in unique ways; they mostly used one or two datasets and displayed the information they contained. This can provide some value to those who do not know how to use the open datasets and help reduce the data divide (Gurstein 2011), but little economic value or innovation comes from that approach. Few applications in the contest combined different datasets to produce new data, and those that did proved to keep working on improving their applications.

The success of a contest has to include the impact of its outcome (Nam 2012). Applications that last longer than the contest can have a greater impact on their communities. Open data initiatives have to look further than idea competitions to support the use of their open data catalogs. Providing training to citizens on how to use the open data can raise awareness of it (Gurstein 2011) and help build more sustainable applications that last longer than the contest.

This study has some limitations. It focuses exclusively on one contest in New York City; many other cities around the world have implemented similar contests, and findings from those could reveal different sustainability characteristics depending on the available data sources or the partnerships of the contest sponsors. More research is needed to deepen our understanding of application sustainability after an open data contest. A next step for this research is to explore other cases and to survey the motivations of participants in order to gain a clearer view of the factors that affect the sustainability of these applications.


References

  • Chun, S., Shulman, S., Sandoval, R. and Hovy, E. (2010), Government 2.0: Making connections between citizens, data and government, Information Polity, 15(1), 1-9.
  • Davies, T. (2010), Open data, democracy and public sector reform: A look at open government data use from, Master’s thesis (unpublished), University of Oxford, accessed 9 January 2013.
  • Farias, C.F. (2010), Can People Help Legislators To Make Better Laws? The Brazilian Parliament’s e-Democracia, In proceedings of the 4th International Conference on Theory and Practice of Electronic Governance, p301-306, New York, USA.
  • Fournier-Tombs, E. (2011), Evaluating the Impact of Open Data Websites, Social Science Research Network (September), accessed 7 January 2013.
  • Franke, N. and von Hippel, E. (2003), Satisfying heterogeneous user needs via innovation toolkits: the case of Apache security software, Research Policy, 32(7), 1199-1215
  • Gurstein, M. (2011), Open data: Empowering the empowered or effective data use for everyone?, First Monday, 16(2), accessed 7 January 2013.
  • Hilgers, D., and Ihl, C. (2010), Citizensourcing: Applying the Concept of Open Innovation to the Public Sector, International Journal of Public Participation, 4(1), 67-88.
  • Howe, J. (2006), The rise of crowdsourcing, Wired, 6(6), 176-83.
  • Howe, J. (2008), Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business, Crown, New York, NY.
  • Jeppesen, L. and Molin, M. (2003), Consumers as Co-developers: Learning and Innovation Outside the Firm, Technology Analysis & Strategic Management, 15(3), 363-383
  • Lakhani, K., Austin, R., & Yi, Y. (2010), Harvard Business School, (May).
  • Lee, S.M., Hwang, T. and Choi, D. (2012), Open Innovation in the Public Sector of Leading Countries, Management Decision, 50(1), 147-162.
  • Nam, T. (2012), Suggesting frameworks of citizen-sourcing via Government 2.0, Government Information Quarterly, 29(1), 12-20.
  • Nam, T. and Sayogo, D.S. (2011), Government 2.0 Collects the Wisdom of Crowds, In Proceedings of the Third International Conference on Social Informatics, 51-58, Berlin, Heidelberg.
  • Nambisan, S. (2008), Transforming Government through Collaborative Innovation, IBM Centre for the Business of Government research report, May.
  • Nichols, R. (2010), Do Apps for Democracy and Other Contests Create Sustainable Applications?, Government Technology (July 11), accessed 9 January 2013.
  • Noveck, B.S. (2009), Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful. Brookings Institution Press: Washington, DC.
  • O’Reilly, T. (2010), Government as a Platform, Innovations, 6(1), 13-40.
  • Piller, F. and Walcher, D. (2006), Toolkits for idea competitions: a novel method to integrate users in new product development, R&D Management, 36(3), 307-318.
  • Prügl, R. and Schreier, M. (2006), Learning from leading-edge customers at The Sims: opening up the innovation process using toolkits. R&D Management, 36(3), 237–250.
  • Puron-Cid, G., Gil-Garcia, J.R., Luna-Reyes, L.F. (2012), IT-Enabled Policy Analysis: New Technologies, Sophisticated Analysis and Open Data for Better Government Decisions, in Proceedings of the 13th Annual International Conference on Digital Government Research, 97-106
  • Quill, H. (1963), John Harrison, Copley Medallist, and the £20,000 longitude prize, Notes and Records of the Royal Society of London, 18(2), 146-160.
  • Terwiesch, C. and Xu, Y. (2008), Innovation Contests, Open Innovation, and Multiagent Problem Solving, Management Science, 54(9), 1529-1543.
  • Torres, L.H. (2007), Citizen sourcing in the public interest, Knowledge Management for Development Journal, 3(1), 134-145.
  • von Hippel, E. (1998), Economics of Product Development by Users: The Impact of "Sticky" Local Information, Management Science, 44(5), 629-644
  • von Hippel, E. (2005), Democratizing Innovation. MIT Press: Cambridge
  • Wexler, M.N. (2011), Reconfiguring the sociology of the crowd: exploring crowdsourcing, International Journal of Sociology and Social Policy, 31(1), 6-20.
