Posts tagged developers

Maybe Asia-Pacific Developers Will Deliver The Internet Of Things

Demand, meet supply. The world is in dire need of millions of Internet of Things developers within the next few years. The good news? According to a new Evans Data survey, 17% of the world’s software developers are already working on IoT applications; those in the Asia-Pacific region are particularly active.

See also: The Internet Of Things Will Need Millions Of Developers By 2020

The bad news? This developer population doesn’t have a strong history of software and cloud services innovation.

Asia-Pacific: A Hotbed Of Activity

To reprise: Evans Data’s recent global survey of over 1,400 software developers found that 17% are working on applications for connected IoT devices, while an additional 23% expect to begin work on them in the next six months. Given that so much of the world’s electronics are produced in Asia-Pacific, it’s perhaps not surprising that it’s the region with the most aggressive IoT developers.

In fact, nearly 23% of APAC developers are currently developing software for the Internet of Things. Only 20% of APAC developers say that they have no such plans, compared to 36% in North America and 49% in EMEA.

But the real question for APAC developers is whether they’ll repeat their errors of the past few decades: building great hardware and neglecting to connect that hardware with software and services. Sensors, it turns out, are somewhat pointless without software and services to make sense of their data.

More Devices, More Connected

The number of ‘things’—30 billion devices connected to the Internet by 2020, according to Gartner, compared to 7.3 billion personal devices—is impressive but not the real story. Soon enough, as Gartner suggests, these devices “will … be able to procure services such as maintenance, cleaning, repair, and even replacement on their own.” They will be able to interact without human intervention, creating all sorts of possibilities, not to mention security vulnerabilities.

Developers are making this happen, developers who believe in and are helping to shape a connected future:

Due to the convergence of cloud, embedded systems, real-time event processing and even cognitive computing, developers are blessed with a perfect storm of low-cost devices and the ability to intelligently connect them. That in turn will yield revenue-generating services, which is where the real IoT money is.

Opening Up The Internet Of Things In APAC

While 31% of developers associate the Internet of Things with cloud computing, according to the Evans Data survey, the connections that bring device data to the cloud are much more important. As Intel Internet of Things business leader Ton Steenman complains, companies currently spend 90% of their IoT budgets “stitching things together,” when that number should be closer to 10%.

APAC hasn’t traditionally been good at such “stitching.”

That stitching is hard partly because developers haven’t trusted third-party networks to carry their device data, and partly because connectivity hasn’t been built into devices and sensors at the pace needed, as data from Berg Insight suggests:

• Wireless connectivity was incorporated into just one-third of point-of-sale terminals sold in 2013

• 27% of ATMs in North America are connected to cellular networks while only 5% to 10% are connected in Europe

• The number of “oil and gas” devices with cellular connectivity hovered at 93,000 in 2013 but will jump to 263,000 new units by 2018

There are signs that this is changing, particularly in APAC, which was an early pioneer in mobile communications. Just looking at the prevalence of connected smart electricity meters, APAC has the lead, despite lagging considerably in 2011.

Companies in APAC have struggled to build compelling software (e.g., Sony smartphone interfaces) or cloud services (e.g., Samsung cloud sync and back-up services). While this is changing, it’s an open question whether APAC will be able to take the lead in developing connected experiences across devices.

One place to start is by opening up APIs.

As Rob Wyatt argues, “It is the open, local API that is missing from the Internet of Things.” To make it work anywhere, and particularly in Asia-Pacific, it’s not enough for “vendors … to provide dumb ‘smart’ devices with a select handful of ‘strategic’ integrations within their pay-walled garden.”

For Internet of Things applications to work, device vendors need to provide open APIs so that other developers can hack services around and into them. 

If APAC developers do this, they’ll win the war. Again, the battle won’t be won by building nice devices. It will be won by creating compelling cloud-service experiences for developers that span a wide array of devices—all of which can start with open APIs on those devices so that developers, both within APAC and outside it, can hack the future.

Lead image by Flickr user Ed Coyle, CC 2.0

View full post on ReadWrite

Why JavaScript Developers Should Get Excited About Object.observe

Nobody is more excited about the most recent version of Google’s Chrome browser than JavaScript developers.

The latest version, Chrome 36, includes a long-awaited potential addition to the JavaScript language. Called Object.observe, it’s a low-level API (see our API explainer) that might solve one of the biggest problems in modern JavaScript development.

That problem: JavaScript developers have yet to find a satisfactory way to ensure that changes in a Web app’s underlying data—say, as the result of user input—are reflected properly in the browser display. (This basically reflects the fact that JavaScript developers often separate an app’s data structures and its user interface into separate program components. That keeps the coding simpler and cleaner, but also raises issues when the two components need to communicate.)
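A minimal sketch of that separation and the sync problem it creates (plain JavaScript; the names here are invented for illustration):

```javascript
// The app's data lives in one object, its display logic in another.
// Nothing connects them, so every mutation needs hand-written "glue"
// to keep the view current.
var model = { items: ['milk'] };

var view = {
  html: '',
  render: function () {
    this.html = '<ul><li>' + model.items.join('</li><li>') + '</li></ul>';
  }
};

view.render();               // view reflects the model...
model.items.push('eggs');    // ...until the model changes behind its back
// view.html still lists only 'milk' until someone remembers view.render()
```

Forgetting one of those `render()` calls is exactly the class of bug the rest of this article is about.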

Various JavaScript frameworks offer workarounds that developers can use to get a WYSIWYG (What You See Is What You Get) display based on exactly what their app is doing. But these workarounds add more code that can slow the app down, alter the flow of its execution and potentially introduce new bugs.

Object.observe would simplify the problem by creating a direct pipeline between an app’s data structures and its display. It can do this more easily because it’s an actual change baked into the structure of the JavaScript language itself, and not just a collection of bolted-on code.

Object.observe is still unofficial; so far it’s only incorporated into Chrome, which means developers who use it can’t count on their apps working in other browsers such as Firefox or Apple’s Safari. And it’s not clear when—or even whether—other browser makers will jump on the Object.observe bandwagon.

Still, the promise of Object.observe is such that if it clears these hurdles, it could change the way JavaScript is coded forever.

What Is Object.observe?

I asked Rafael Weinstein, a software engineer at Google who played a big role in Object.observe and Chrome integration, to explain what it is and what it does.

While some developers have guessed that Object.observe will replace or remove the need to use JavaScript development frameworks, Weinstein says that isn’t the case.

“Object.observe is a low-level primitive,” he said. “It is an enabling technology which should make some existing JavaScript libraries and frameworks faster, more robust or both.”

In other words, Object.observe is an additional function that will one day be built into JavaScript. Developers won’t work with it directly; instead, frameworks like Backbone, Angular, and Ember will add new libraries that rely on Object.observe to keep app data and the app display in sync.

In an introduction to Object.observe at JSConf EU, Google developer Addy Osmani demonstrated how you can use Object.observe to view and edit code.

The main advance Object.observe offers is a feature called two-way data binding. Translated from codespeak, that means it ensures that changes in a program’s underlying data are reflected in the browser display.

In order to view both the program’s data state and its display at the same time, JavaScript developers usually work inside a framework built around the Model View Controller (MVC) pattern, in which the raw code appears in one window and the user display appears in another. However, current solutions for displaying both at once tend to bulk up a program with extra code devoted to data binding. The very act of observing the display changes how it is coded.

Amjad Masad, a developer at Facebook, has called two-way data binding the “holy grail of JavaScript MVC frameworks” because it sidesteps such workarounds.

“In the [Model View Controller] pattern, you have the model [i.e., underlying app data] that describes the problem you want to solve and then you have this view that renders it,” Masad told me. “It turns out that translating the model logic [i.e., the app's data structures] to the user interface is a really hard problem to solve as well as the place in the code where most of the bugs occur.”

“When you have true data binding, it reflects automatically in the user interface without you putting in any ‘glue’ code in the middle,” Masad continued. “You simply describe your view and every time the model changes it reflects the view. This is why it’s desirable.”
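The effect Masad describes can be hand-rolled for a single property with a setter. A sketch of the idea—this is not Object.observe itself, and the `bind` helper is hypothetical:

```javascript
// "True" data binding: writes to the model propagate to the view
// automatically, with no glue code at the call site.
function bind(model, key, render) {
  var value = model[key];
  Object.defineProperty(model, key, {
    get: function () { return value; },
    set: function (v) { value = v; render(v); }  // view refresh rides along
  });
  render(value);  // initial render
}

var model = { greeting: 'hello' };
var view = { text: '' };

bind(model, 'greeting', function (v) { view.text = v.toUpperCase(); });
model.greeting = 'goodbye';   // plain assignment; view.text updates itself
// view.text is now 'GOODBYE'
```

Doing this for every property of every object is what frameworks automate—and what Object.observe would make unnecessary.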

It’s 2014. Where’s My Object.observe?

Of course, nothing is a be-all, end-all solution, not even an API that supposedly makes “the holy grail of JavaScript” possible. With Object.observe, the hurdle that remains is that it’s still not an official part of JavaScript.

The Object.observe proposal needs to be approved by TC39, the organization that oversees the development and maintenance of ECMAScript, a general scripting-language standard that encompasses JavaScript and several related languages. Currently, TC39 has approved Object.observe as a draft, the second stage of the lengthy approval process.

See also: How To Build A WinJS App In 10 Easy Steps

TC39 members include developers from Google, Mozilla, Microsoft, and Apple. Weinstein, who submitted the Object.observe proposal, said that since developers from these companies approved of adding it to ECMAScript, he’s optimistic that they’ll also want to add Object.observe functionality to their companies’ own browsers.

If Object.observe becomes a standard feature across all major browsers, it’s more likely to be adopted as an official JavaScript component. Still, the longer Object.observe development drags on, the greater the possibility that developers will cotton to some other solution to the data-binding problem.

For example, Weinstein also heads observe-js, a library from the Polymer project that uses Object.observe if it is available and falls back to alternate methods if it isn’t. That way, developers can harness Object.observe whenever possible, although they still have to be prepared in case their program runs somewhere it’s not supported.

Meanwhile, Facebook has come up with an alternative to Object.observe called React, which developers can use now. React, however, isn’t an addition to the JavaScript language itself—just a framework-level solution that performs a similar function.

Developers can try a number of solutions for fixing the data-binding problem. But since the absence of data binding is a general problem for JavaScript, you can see the attraction of adding something like Object.observe to the language in order to resolve the issue in a relatively clean and universal fashion.

This Is Your Code On Object.observe

JavaScript developers don’t usually code every part of their programs by hand. Instead, they use frameworks like Backbone, Angular, and Ember that incorporate libraries—chunks of code that handle frequently encountered programming hurdles.

For example, if you are a developer who builds a lot of contact forms for websites, you might have a library of software functions that shortcut the process of inserting a contact form in a website. So when the Object.observe API is released, libraries will be built to make two-way data binding something developers can do as easily as inserting a library in their code.

See also: Angular, Ember, And Backbone: Which JavaScript Framework Is Right For You?

In the absence of Object.observe, JavaScript frameworks have each come up with different ways of implementing two-way data binding.

The Angular framework uses something called “dirty checking.” On every processing cycle, Angular compares each watched value against its previous copy to see what changed—and it runs those checks even if nothing has changed at all. This continues once the app is deployed; every time the user inputs a change into the app while using it, the dirty-checking code checks whether it needs to refresh the display in response. This adds load time to every screen of your app.
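A stripped-down illustration of the dirty-checking idea—this is an invented sketch, not Angular’s actual implementation:

```javascript
// Dirty checking: on every digest cycle, compare each watched value
// against its last known copy and fire listeners on change.
function Scope() { this.watchers = []; }

Scope.prototype.watch = function (getter, listener) {
  this.watchers.push({ getter: getter, listener: listener, last: undefined });
};

Scope.prototype.digest = function () {
  this.watchers.forEach(function (w) {
    var value = w.getter();
    if (value !== w.last) {          // comparison runs whether or not
      w.listener(value, w.last);     // anything actually changed
      w.last = value;
    }
  });
};

var data = { count: 0 };
var scope = new Scope();
var rendered = [];
scope.watch(function () { return data.count; },
            function (v) { rendered.push(v); });

scope.digest();   // initial render: pushes 0
data.count = 1;
scope.digest();   // change detected: pushes 1
scope.digest();   // no change, but every watcher is still checked
```

The cost is visible in the last call: even a no-op digest walks the full watch list, which is the overhead the article describes.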

The Ember framework uses something called “wrapper objects.” Basically, the developer adds a class to the object that serves as a “listener.” When the object changes, the “listener” triggers an event so the developer knows something changed. Since these classes are not native to JavaScript, they add loading time as well. Wrapper objects also increase developer labor, which inevitably leads to more possibility for bugs and errors.
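A rough sketch of the wrapper-object pattern with invented names (not Ember’s actual API):

```javascript
// Wrapper objects: the model is wrapped in a class whose set() notifies
// listeners, so changes are pushed rather than polled.
function Observable(data) {
  this.data = data;
  this.listeners = [];
}

Observable.prototype.get = function (key) { return this.data[key]; };

Observable.prototype.set = function (key, value) {
  this.data[key] = value;
  this.listeners.forEach(function (fn) { fn(key, value); });
};

Observable.prototype.addListener = function (fn) { this.listeners.push(fn); };

var person = new Observable({ name: 'Ada' });
var log = [];
person.addListener(function (key, value) { log.push(key + '=' + value); });

person.set('name', 'Grace');   // listener fires
person.data.name = 'Lin';      // direct write bypasses the wrapper: no event
```

The last line shows the pattern’s fragility: the developer must remember to go through the wrapper every time, which is part of the extra labor the article mentions.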

Goodbye, Cruel Wrappers And Dirty Checking

Object.observe makes both of these workarounds obsolete. It allows plain JavaScript objects, without wrappers, to listen for changes. And the view updates only when the listener notices a change, rather than through indiscriminate dirty checking.
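A sketch of what using Object.observe looks like, with a manual fallback for environments that don’t ship it. The change-record field `name` follows the proposal; the `watch` helper and its names are hypothetical:

```javascript
// Use native Object.observe where available (Chrome 36 at the time);
// otherwise route writes through a setter function so we can notify manually.
function watch(model, onChange) {
  if (typeof Object.observe === 'function') {
    // Native path: the callback receives an array of change records,
    // delivered asynchronously after the mutations happen.
    Object.observe(model, function (changes) {
      changes.forEach(function (c) { onChange(c.name, model[c.name]); });
    });
    return function set(key, value) { model[key] = value; };
  }
  // Fallback path: synchronous manual notification.
  return function set(key, value) {
    model[key] = value;
    onChange(key, value);
  };
}

var model = { title: 'Hello' };
var view = {};
var set = watch(model, function (key, value) { view[key] = value; });

set('title', 'Object.observe!');   // view.title follows the model
```

Note the asymmetry: with native Object.observe, plain assignment to `model` is enough; the fallback still needs a `set()` entry point, which is exactly the glue code the API was designed to eliminate.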

Igor Minar, lead developer on the AngularJS framework, said developers who use Angular won’t have to work with Object.observe directly.

“Object.observe is a low-level API, which is kind of awkward or inconvenient to use directly. That’s good because by providing such a low-level API the platform enables us to build on top of the API and create higher layers with more opinions,” he said. Object.observe is already slated for addition as a feature in AngularJS 2.0.

Early versions of Object.observe have already given Angular a performance boost. In one test, dirty checking took 40 minutes compared to Object.observe’s 1 to 2 minutes. In other words, Angular became 20 to 40 times faster when using Object.observe.

Object.observe has the potential to change the way JavaScript operates. But developers themselves won’t notice much of a difference in their workflow.

“Object.observe is a low-level feature that framework authors will use,” said Masad. “Developers using those frameworks won’t notice a difference except for higher-performance programs.”

Screenshot of Google engineer Addy Osmani introducing Object.observe at JSConf EU


Among Mobile App Developers, The Middle Class Has Disappeared

The Platform is a regular column by mobile editor Dan Rowinski. Ubiquitous computing, ambient intelligence and pervasive networks are changing the way humans interact with everything.

The middle class of mobile app developers is completely non-existent.

According to a research survey from market research firm VisionMobile, there are 2.9 million app developers in the world who have built about two million apps. Most of those app developers are making next to nothing in revenue while the very top of the market make nearly all the profits. Essentially, the app economy has become a mirror of Wall Street.

According to the survey: “The revenue distribution is so heavily skewed towards the top that just 1.6% of developers make multiples of the other 98.4% combined.”

About 47% of app developers make next to nothing. Nearly a quarter (24%) of app developers who are interested in making money from their apps are making nothing at all. About another quarter (23%) make less than $100 a month from each of their apps. Android is more heavily affected by this trend, with 49% of app developers making $100 or less a month compared to 35% for iOS.

Only 6% of Android developers and 11% of iOS developers make more than $25,000 per month, numbers that make it extremely hard to build a real, sustainable business with mobile apps.

If we chop off the top and the bottom of the market, that leaves a “middle class,” which is extremely poor, struggling to make any kind of money. About 22% of developers earn between $100 and $1000 a month off their mobile apps. The higher end of that scale isn’t bad for hobby developers, but professional app makers can’t get by on that. VisionMobile draws an “app poverty line” at apps that make less than $500 a month, leaving 69% of all app developers in this category.

That leaves a very thin middle class that makes between $1,000 and $10,000 a month per app. To put that in perspective, the American middle class at large earns between $40,000 and $95,000 annually (with the “middle-middle” making between $35,000 and $53,000 per year).

So what happened to all the riches in the app economy? The fact is that the money dried up a long time ago, and only the top of the food chain makes any real money. The developer middle class is small and struggling, while two-thirds of developers trying to make money off their apps may simply look for other ways to employ their skills.

VisionMobile concludes:

More than 50% of app businesses are not sustainable at current revenue levels, even if we exclude the part-time developers that don’t need to make any money to continue. A massive 60-70% may not be sustainable long term, since developers with in-demand skills will move on to more promising opportunities.

The Balloon Effect

The death of the developer middle class should come as no surprise to industry watchers. The app economy has mirrored the rest of the mobile industry over the last several years.

The first comers to the industry carved out names for themselves and benefited from the unexpected popularity of the smartphone (led by Apple’s iPhone and the App Store). Copycats and entrepreneurs raced to get in on the riches, creating bloated app stores filled with poor and mediocre apps in just about every product category you could think of. This pushed out quality (but limited-market) apps, and revenue consolidated at the top of the market.

App store inventories continue to grow, one poor app after another. This will lead to an eventual realignment of the pool of developers building mobile apps, as they struggle to find revenue or venture money to grow their businesses. In the past, I have called this the balloon effect. We’ve seen it in smartphone manufacturing (where middle-tier players like HTC get pushed out as Samsung and Apple dominate) and in developer services, where companies struggle to compete against each other and industry heavyweights. Eventually, these companies are either bought or merge. (StackMob and Parse were acquired; PlayHaven and Kontagent merged to become Upsight.)

The app economy is one of the foundational elements of the mobile industry, so the balloon effect takes longer to manifest there, but its impact on the developer community is much broader.

The Sparrow In The Coal Mine

Developer David Barnard offers a cautionary tale about an app called Sparrow.

We’ve all read stories about and been enthralled by the idea of App Store millionaires. As the story goes… individual app developers are making money hand over fist in the App Store! And if you can just come up with a great app idea, you’ll be a millionaire in no time!

Sparrow was an app built by a three-person team that grew to five people after a venture capital seed round. It started as a paid app in the Mac App Store and then the iOS App Store, with plans for a Windows app on the way. Sparrow debuted well and had a couple of popularity spikes with new releases and media coverage. But Sparrow was not long for this world. It could not sustain the popularity needed to generate the revenue its team’s efforts may have deserved. Eventually Sparrow sold to Google—a quality outcome. But most developers will never see the same popularity spikes, venture capital investment or exit to a huge company that Sparrow experienced.

If a well-received, well-made and popular app like Sparrow could not hack it in the mobile app business, the average indie developer has little chance of making a dent without stumbling upon a mega hit, a la Flappy Bird (developed by a lone programmer in Vietnam). The kicker is that Sparrow’s tale … is from 2012.

Two years later, the opportunities for apps like Sparrow have more or less dried up as thousands of apps have filled its category, making it harder for app publishers to stand out from the crowd. For every Instagram success story, there are thousands of apps that make little to no money and have no prospect of success in the near future.

Barnard summed it up well, offering a prognosis for the app developer middle class back in 2012:

Given the incredible progress and innovation we’ve seen in mobile apps over the past few years, I’m not sure we’re any worse off at a macro-economic level, but things have definitely changed and Sparrow is the proverbial canary in the coal mine. The age of selling software to users at a fixed, one-time price is coming to an end. It’s just not sustainable at the absurdly low prices users have come to expect. Sure, independent developers may scrap it out one app at a time, and some may even do quite well and be the exception to the rule, but I don’t think Sparrow would have sold-out if the team—and their investors—believed they could build a substantially profitable company on their own. The gold rush is well and truly over.

Top image courtesy of Flickr user Bennet.


Developers Are Starting To Chase After Apple’s Swift

Apple really wants developers to switch to Swift. And it looks like the feeling is mutual.

Six weeks after Apple unveiled Swift, the new programming language for iPhone and Mac applications is attracting a noticeable level of interest from developers. Phil Johnson at IT World crunched the numbers, and at least on GitHub, developers are picking it up.

See also: Apple Wants Devs To Love Swift, Its Shiny New Language—But There’s A Catch

Swift is now the 15th most widely used language on GitHub, with more than 2,600 new Swift repositories created since June, according to Johnson’s study. More significantly, Johnson believes that interest in Swift is directly replacing interest in Objective-C:

“From the beginning of January through the end of May, developers created about 294 new Objective-C repositories per day on GitHub. Since Swift was released in early June, that average has dropped to about 246 repos per day. That drop of 48 repos per day is pretty close to the average number of new Swift repositories created per day since its release and initial spike in interest.”

Apple has shown a marked interest in getting developers to adopt Swift, even going so far as to launch a surprisingly open and friendly development blog.

See also: Why Apple’s Blogging About Swift, Its New Programming Language For iPhones And Macs

From Apple’s perspective, Swift is a simpler, safer, faster-to-run alternative to Objective-C, the somewhat clunky and error-prone language now used to write apps for iPhones, iPads and Macs. But even if Swift is the magic bullet Apple makes it out to be, the company will still have to rally developers to switch from the old way of doing things to an unproven new language.

The GitHub data shows that at least some developers are turning over a new leaf.


Android Wear’s First Custom ROM Shows Huge Potential For Android Developers

A beautiful aspect of Google’s Android operating system has always been that it allows developers and enthusiasts to strip away the platform’s core experience and replace it with homebuilt customized versions. Custom ROMs have been part of Android since nearly the beginning.

So it is natural that custom ROMs have now come to Android Wear, Google’s version of the operating system that runs on smartwatches and wearable devices.

Android developer Jake Day has released one of the first custom ROMs for the LG G Watch, one of the first two Android Wear watches to hit the market. Day posted the ROM on RootzWiki, an Android news and information site for developers and designers.

The ROM—nicknamed Gohma after a boss in the video game Zelda—is fairly simple. It improves battery life of the LG G Watch, speeds up overall performance, reduces lag time between notification cards and increases vibration intensity.

Gohma isn’t a full-blown Android Wear replacement. The ROM abides by the basic user interface design principles of Wear, and the LG G Watch will still take over-the-air updates to the operating system from Google and LG (which will wipe out the ROM installation). Day makes sure to note that Gohma is a small release intended to improve performance and to make sure that everything is working well before he releases a fuller version of the ROM at a later date.

Gohma is fairly easy to install. Knowledgeable developers just need to make sure the device’s bootloader is unlocked; the ROM script will then root the device and install itself, allowing the custom software to run.

Unleashing The Community: A Good Thing For Smartwatches

Android Wear generally leaves a lot to be desired. It is Google’s first go at smartwatch software and, initially, it is basically just a notifications device strapped to your wrist. For the time being, that’s perfectly fine as wrist-based notifications are a (surprisingly) pleasant way to receive messages. But Android Wear and smartwatches in general have much more potential than what is currently available.

Part of that is a hardware problem as engineers are naturally limited by the capabilities of currently available processors and sensors. But the hardware in the LG G Watch is almost the equivalent of a 2011 Android smartphone, so it should be able to do much more than the notification cards and voice interaction that is currently available through the initial release of Android Wear.

See also: What Not To (Android) Wear: One Woman’s Search For Smartwatch Bliss

This is where the large community of Android developers has an opportunity to build on top of Wear through custom skins and ROMs to make it a better-performing, more functional and attractive device. Day’s Gohma should be just the start, as the heavy hitters in the Android ROM community—like CyanogenMod—will surely get involved, pushing Android Wear development to further feats of utility and maturation.

The Android developer community doesn’t operate in a vacuum either. Google listens to developers and often implements features and requests that developers have built on their own to work around the limitations of stock Android. The Android development community is essentially one giant sandbox for Google to learn about what app builders and consumers want in the next version of the operating system. For the last six years, this process has worked well in helping to build ever better versions of Android for smartphones and tablets. Hopefully with the first custom ROM for Android Wear, Google can learn how to build better software for smartwatches as well.

Images: Gohma via HD Wallpaper. Android LG G Watch by Adriana Lee for ReadWrite.


iOS Developers Make More Money, But Android’s Volume Is Closing The Gap

Looking at smartphone and tablet sales, Google’s Android ecosystem should be printing money for developers. After all, not only are Android device sales outpacing Apple’s iPhone and iPad sales, but Google also shares more Android-related revenue with its ecosystem than Apple does with the iOS ecosystem. 

And yet iOS developers earn more than Android developers. What, or rather who, gives?

The answer is in efficiency. Apple is able to centralize its revenue stream while Google shares with a wide variety of partners. But Android, on pure volume, may soon outstrip the mighty iOS.

Android’s Larger Ecosystem

It’s no surprise that Android devices have been outselling iOS devices for some time. Given Apple’s insistence on charging a price premium, falling behind was a foregone conclusion. Analyst Mark Hibbens estimates Android’s widening lead over iOS in shipments.

Credit: Mark Hibbens

Which means, of course, that in the first quarter of 2013 the population of Android’s installed base surpassed that of iOS and will almost certainly never look back.

Credit: Mark Hibbens

And yet this hasn’t translated into more money for the Android app economy.

Who Does Android Pay?

According to a new VisionMobile study, Apple’s app economy, at $163 billion, is larger than Google’s Android economy:

Apple’s Ecosystem: All About Apple

Google’s is smaller at $149 billion:

But there’s a key difference between the two economies, from hardware to apps to accessories: Apple claims much of its ecosystem’s revenues, whereas Google shares them among manufacturers, developers, carriers and advertising partners. To highlight this point, both Apple and Google take a 30% cut from developers for paid app downloads and in-app purchases. Google used to keep hardly any of this money, passing it along to distribution partners (like cellular carriers and payment processors) and paying fees. As of Google I/O 2014, though, that policy has changed, and Google will keep nearly all of its cut of revenue from Google Play. Apple keeps nearly all of the 30% it takes from app developers.

Not that Google is necessarily running a charity here. Part of Google’s problem, as ABI Research notes, is fragmentation. While ABI says Android was used in 77% of smartphones shipped worldwide in the fourth quarter, 32% of those 221 million devices used forked versions (up from 20% of shipments the year before and up from 27% in Q3 2013).

So a fair amount of Android’s adoption does not generate revenue for Google, even if Google wanted it to. Google is trying to minimize the negative impact of fragmentation “by giving primacy to Google Play Services as the hub for new Android capabilities,” as Crittercism’s Michael Santa Cruz highlights, but it has a long way to go.

Even so, Google’s strategy inherently shares more with its ecosystem: by design, Google doesn’t care about capturing hardware or accessories revenue, and even in software it is less concerned with app revenue than ad revenue. Google’s goal has long been to get more people on the Internet, using the Web, searching for more items. Google’s view is that the more eyeballs there are on the Internet, the more potential it has to advertise to them through search.

Google announced that it paid app developers about $5 billion between Google I/O 2013 and I/O 2014, with the payout rate increasing 2.5x over that span.

And yet iOS developers make more: $500 to $1,000 per app per month, according to VisionMobile, compared to Android’s $101 to $200 per app per month.

At least, for now.

Go East, Young Man

While Hibbens suggests that Apple’s higher app spend per device accounts for the chasm between the Android and iOS economies, and that this gap will only widen over time, this feels like a short-term perspective. Yes, it’s true, as Andreessen Horowitz’s Benedict Evans posits, that Apple benefits from a “wealth gap” between its customer base and Google’s. 

Apple enjoys market share superiority in the comparatively rich North American and Western European markets, as VisionMobile illustrates:

This isn’t something to celebrate, however. As I’ve written before, emerging economies can’t afford Apple’s price premium. And when “emerging economies” include China, set to become the world’s largest economy in 2014, and India, another market serving over one billion people, the future for Android looks very bright indeed.

It will likely continue to be the case that Apple earns more app revenue per device than Google, but that’s just fine for Google. Android has always been a volume play. With few exceptions, Google’s business model has always been about skimming small amounts of money from vast numbers of transactions.

Which is not to say Apple is doomed. It’s simply to argue that developers should tune their monetization strategies differently for iOS and Android … just like Apple and Google do.

Article updated to correctly reflect Google’s cut of Play app earnings.

View full post on ReadWrite

The Internet Of Things Will Need Millions Of Developers By 2020

It’s standard to size a market by the number of widgets sold, but in the Internet of Things, which numbers sensors and devices in the billions, widget counts don’t really matter. In part this is because the real money in IoT is not in the “things,” but rather in the Internet-enabled services that stitch them together.

More to the point, it’s because the size of the IoT market fundamentally depends on the number of developers creating value in it. While today there are just 300,000 developers contributing to the IoT, a new report from VisionMobile projects a whopping 4.5 million developers by 2020, reflecting a 57% compound annual growth rate and a massive market opportunity.
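
The projection is internally consistent: compounding 300,000 developers at 57% a year from 2014 to 2020 lands almost exactly on VisionMobile’s 4.5 million figure. A quick check:

```python
# Compound annual growth: start * (1 + rate) ** years
start = 300_000   # IoT developers today (2014)
rate = 0.57       # 57% CAGR from the VisionMobile report
years = 6         # 2014 -> 2020

projected = start * (1 + rate) ** years
print(round(projected))  # ~4.49 million, matching the 4.5M projection
```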

Start Making Sense

We’ve been creating data for decades, but ninety percent of the world’s data was generated in the last two years alone, much of it by machines. Such machine-produced data dwarfs human-generated data. 

In such an IoT world, devices are not the problem. According to Gartner, we’ll have 26 billion of them by 2020. Connecting them isn’t, either. As VisionMobile’s report makes clear, however, “making sense of data” is the real challenge. 

It’s also the big opportunity:

Just homing in on the middle column: Google acquired Nest for $3.2 billion, and just six days ago Google’s Nest acquired Dropcam for $555 million. Dropcam’s cameras upload more data every day than users upload to YouTube. That’s a lot of data, and a lot of money.

It all comes down to developers, because it’s developers and the companies they work for that are pulling intelligence from the data.

More Data Requires More Developers

Fortunately, we’re about to get a huge crowd of developers actively contributing to IoT applications—4.5 million of them by 2020, according to VisionMobile.

As VisionMobile suggests, “the only way to make a profit in the Internet of Things is to build a network of entrepreneurs who create unique value on top of commodity hardware, connectivity and cloud services.” Here’s a more detailed explanation:

The key to being successful with developer-centric business models is to find a way to bundle your core product with the new demand generated by developers. Much like Apple bundles its devices with a million apps in the App Store, Google bundles its online services with Android devices. Through these services, Google collects user intelligence and creates opportunities to expand its ad inventory. Amazon, too, bundles its e-commerce services with subsidized Kindle tablets (and soon smartphones) to drive user traffic to its virtual store shelves. 

In other words, developers aren’t the buying audience: they create the ecosystem that makes other buyers interested in buying hardware, cloud services or some other value. 

What Will They Build?

As much as we may want to fantasize about refrigerators talking to coffee machines, the reality is that we have no clue what meaningful applications will emerge from the IoT opportunity. As the report authors state, “Demand for IoT technology will not come from a single killer app, but from thousands of unexpected new use cases.”

No single company will win in the IoT, nor will any one app. Such developer-driven demand “will create new Internet of Things markets that are several times bigger than the ones we could ever predict with a spreadsheet that extrapolates today’s market.” The only thing we know for sure is that developers are fundamental to making IoT a big, profitable market, even if they don’t pay a single dime for a single sensor in that market. 

Lead image by Flickr contributor Official GDC, CC 2.0


What Developers Need To Know About Android L

Google is taking a different tack with its newest version of Android. Instead of announcing and releasing an official version of the operating system, it has released a developer preview—dubbed “L”—giving developers and manufacturers time to get ready before its official release.

Dave Burke, Google’s head of Android engineering, says that L is the biggest release Android has ever had. Looking at the breadth of L, it’s hard to disagree. Google has long promised that Android would eventually be in everything, although that’s been a long time coming. But Google plans to make Android L a vehicle for smart televisions, automobiles and wristwear, finally giving developers, manufacturers and consumers a way to actually build for the next stage of mobile computing.

Why “L”?

Historically, Google has given each version of Android an alphabetical name taken from sweets. Android 2.2 was “Froyo”; Android 4.4 was “KitKat.” Google hasn’t officially named—or numbered—the next version of Android, but the next letter in the alphabet is “L.” Will it be Lollipop? Or Lemon Meringue Pie? Or perhaps Licorice? No one outside Google knows.

L changes Android’s design scheme, adds important projects to trim and analyze battery usage, introduces a new compiler, and pushes Android onto new categories of devices. If you’re an Android developer, here’s what you need to know about Android L.

Material Design And Graphics

Google has changed Android’s design scheme to give it a more universal look, one befitting an interface designed to show up across a broader array of devices. Its “material design” schema aims to provide a more intuitive look and feel that works on a variety of screen shapes and sizes while bringing more tactile response to Android navigation.

“In material design, surface and shadow establish a physical structure to explain what can be touched and what can move,” wrote Google designer Nicholas Jitkoff. “Content is front and center, using principles of modern print design. Motion is meaningful, clarifying relationships and teaching with delightful details.” 

Material design has some new features that developers and designers will want to figure out before the official release of Android L: 

  • Theme: It exposes new colors and represents all colors as greyscale that can then be tinted.
  • Widgets: It employs the new CardView and RecyclerView (ListView2) widgets, which greatly ease the burden of building list views in Android. There are new controller features in the MediaStyle and MediaSession functions, and playback widgets in the new Android Extension Pack.
  • Realtime soft shadows: These provide the ability to “lift” images to the top of the view hierarchy where they can cast subtle shadows that aim to convey how objects interact. 
  • Animations: A good portion of material design has to do with animations such as transitions within or between apps. Animations are baked into the platform and can be shared between activities in order to make transitions intuitive for the user.
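
One way to picture the greyscale-plus-tint approach in the new theme: treat each asset as a luminance value and scale a tint color by it, channel by channel. This is an illustration of the idea, not Android’s actual implementation:

```python
def tint(grey, color):
    """Apply a tint color to a greyscale luminance value.

    grey:  0.0 (black) to 1.0 (white)
    color: (r, g, b) tuple, each channel 0-255
    Returns the tinted (r, g, b) pixel.
    """
    return tuple(round(channel * grey) for channel in color)

# A mid-grey asset tinted with a brand teal
print(tint(0.5, (0, 150, 136)))  # (0, 75, 68)
```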


Google has also updated to OpenGL ES 3.1 in Android L, with backward compatibility to previous versions.

Network Functions

One of the biggest updates to Android will be in the “recent apps” drawer. Essentially, Google is broadening the notion of recent activity by including opened websites and documents as well as apps in a card-style user interface. 

Google updated the Android status bar in Android KitKat 4.4; L offers some more improvements such as the ability to change the transparency and color of the status bar to match the brand color of a developer’s app.

Project Volta, meanwhile, is Google’s effort to make Android L more energy efficient. It will show battery stats for individual apps, while a battery historian reveals how apps use power over time. Google says that Project Volta is “like traceview for power events.”

A new “JobScheduler” will let apps condition their activity on a variety of new criteria. Currently, for instance, if an app needs to update or check for background data, it just turns on the phone and its network connection and tries to run its job. With JobScheduler, the app can first check for a Wi-Fi or cellular connection and make sure the battery holds sufficient juice. The new JobScheduler is basically intelligent background processing for Android apps.
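
The pattern JobScheduler introduces can be sketched platform-neutrally: declare the conditions a job needs, and let a scheduler run it only when those conditions are met. This is illustrative Python, not the Android API:

```python
class Job:
    def __init__(self, name, task, needs_wifi=False, min_battery=0.0):
        self.name = name
        self.task = task
        self.needs_wifi = needs_wifi     # condition: unmetered network
        self.min_battery = min_battery   # condition: enough charge left

def run_due_jobs(jobs, on_wifi, battery_level):
    """Run only the jobs whose declared conditions are currently met."""
    ran = []
    for job in jobs:
        if job.needs_wifi and not on_wifi:
            continue  # defer until Wi-Fi is available
        if battery_level < job.min_battery:
            continue  # defer until the battery recovers
        job.task()
        ran.append(job.name)
    return ran

jobs = [
    Job("sync_photos", lambda: None, needs_wifi=True, min_battery=0.3),
    Job("send_ping", lambda: None),
]
print(run_due_jobs(jobs, on_wifi=False, battery_level=0.8))  # ['send_ping']
```

The point is the inversion: instead of the app waking the radio to try its luck, the job declares its constraints up front and the scheduler batches work for when conditions are favorable.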

Android L also employs a new multi-networking feature that will help apps switch seamlessly between networks without interrupting the user flow and interaction within an app. In theory, that means that if you move from Wi-Fi to a cellular network, the changeover won’t necessarily disrupt an app’s functions.

Bluetooth will feature more peripheral device support, which will be necessary for TV and Android Wear devices. Android L also promises to make NFC easier to develop with and for users to find—accomplishing the latter by including Android Beam in the Android “Share” menu.


Some of the biggest changes in L involve notifications. In accordance with material design concepts, notification backgrounds will be card shaped with shadow casting, while the foreground allows for dark text and actions with all icons treated as silhouettes. The design will feature new accent coloring and small icon badging. L builds upon—but doesn’t replace—Android notification features from previous versions of the operating system.

“Heads-Up” notifications are high-priority notifications involving people; they emit an audible alert and expand to a full-screen view when they arrive on a user’s device. They are designed to be easy to act on and easy to ignore.

Android L also features new lock screen notifications similar to those that manufacturers have introduced on specific devices, such as the Moto X from Motorola. Developers and users can set these notifications to adhere to specific privacy settings ranging from public to secret.

Notifications are also getting improved metadata to annotate what information is collected and how it is presented to the user.

L’s Odds And Ends

Google is replacing Android’s traditional Dalvik virtual machine in L, as I first reported almost eight months ago. The new Android compiler is called Android Runtime (ART); it features smaller garbage collection pauses, dedicated space for large objects and a moving collector for background app functions. Android Runtime can compile apps on the fly (i.e., what’s technically known as just-in-time, or JIT, compilation) or well in advance of use.
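
The tradeoff between the two compilation modes can be sketched abstractly (this is a toy illustration of JIT versus ahead-of-time compilation in general, not how ART works internally): a JIT pays a compile cost on first call and caches the result, while AOT compiles everything up front, at install time, so every call hits ready code.

```python
def make_jit(compile_fn):
    """Compile each function lazily, on its first call (JIT-style)."""
    cache = {}
    def run(name, source, *args):
        if name not in cache:            # compile on first use
            cache[name] = compile_fn(source)
        return cache[name](*args)
    return run

def make_aot(compile_fn, sources):
    """Compile everything up front, at 'install time' (AOT-style)."""
    compiled = {name: compile_fn(src) for name, src in sources.items()}
    def run(name, *args):
        return compiled[name](*args)     # every call hits ready code
    return run

compile_fn = lambda src: eval(src)       # toy 'compiler': build a callable
sources = {"double": "lambda x: x * 2"}

jit = make_jit(compile_fn)
aot = make_aot(compile_fn, sources)
print(jit("double", sources["double"], 21))  # 42
print(aot("double", 21))                     # 42
```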

L adds new enterprise-security capabilities, such as a device policy manager and new profiles for device owners or the companies that manage devices. 

For Android TV, L now offers a new leanback launcher as an intent category.

Google did not announce updates to Android’s newer tools, such as the Android Studio integrated development environment, but promised that the Google tools team would have an update Thursday morning. Google Play Services and the Developer Console also have some significant news, such as a Wear Data API and new analytics.

Developers interested in working on apps for Android L can visit Google’s preview site and download the latest version starting today, June 26.

Lead image by Owen Thomas for ReadWrite


Nest To Developers: Time To Hatch Your Ideas With Our API

Developers, you have a new smart-home platform to play with. Google’s Nest unit has formally unveiled an API (see our API explainer) that will let independent programmers create new applications for the company’s smart thermostats and smoke alarms. Nest’s press release is embedded below.

The main idea behind the program is to let a variety of other devices—everything from smartwatches to smart lighting to smart cars—connect with Nest’s products to share data and act together more intelligently. They’ll do so by way of their apps, which developers can modify to use Nest API functions that, say, read data from one of its smart smoke detectors or change the thermostat temperature. (Nest actually announced the developer program last September; today just marks its formal launch.)
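
From an app’s point of view, “read the smoke detector, change the temperature” looks roughly like the sketch below. This is hypothetical Python with made-up paths and field names, backed by a fake in-memory service so it runs without a network; the real Nest API is a cloud service and its actual names may differ:

```python
class FakeNestService:
    """Stand-in for the Nest cloud, so the sketch runs without a network."""
    def __init__(self):
        self.state = {
            "smoke_co_alarms/kitchen": {"smoke_alarm_state": "ok"},
            "thermostats/hallway": {"target_temperature_c": 20.0},
        }
    def get(self, path):
        return dict(self.state[path])
    def put(self, path, updates):
        self.state[path].update(updates)

def warm_house_if_safe(service, target_c):
    """Only raise the thermostat if the smoke detector reports all-clear."""
    alarm = service.get("smoke_co_alarms/kitchen")
    if alarm["smoke_alarm_state"] != "ok":
        return False
    service.put("thermostats/hallway", {"target_temperature_c": target_c})
    return True

nest = FakeNestService()
print(warm_house_if_safe(nest, 22.5))                       # True
print(nest.get("thermostats/hallway")["target_temperature_c"])  # 22.5
```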

That opens the door to a variety of new applications, some of which Nest is showcasing as part of today’s announcement. For instance:

  • Logitech’s Harmony Ultimate remote will let you set the temperature on a Nest thermostat without getting up from the couch;
  • The popular online service IFTTT—a way of programming new behaviors into your existing online services by combining them using the formulation “if this then that”—will now work with Nest, allowing new “recipes” such as “if my detector senses smoke, text my neighbors”;
  • Google’s voice-activated smartphone search will let you set the temperature by saying “OK Google” and issuing a voice command, while its Google Now personal assistant can tell Nest when you’re nearing home and have it start warming or cooling your home before you get there;
  • Smart LED bulbs from the Australian company Lifx will flash red if a linked Nest Protect detects smoke, helping you see through the haze and even alerting hearing-impaired people who might not hear the alarm;
  • The Mercedes-Benz SmartDrive app will 
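
The “if this then that” formulation is essentially a trigger bound to an action; a minimal sketch of such a recipe (illustrative Python, not IFTTT’s actual model):

```python
def make_recipe(trigger, action):
    """Bind a trigger predicate to an action: 'if this then that'."""
    def recipe(event):
        if trigger(event):
            return action(event)
        return None
    return recipe

# "if my detector senses smoke, text my neighbors"
recipe = make_recipe(
    trigger=lambda e: e.get("smoke") is True,
    action=lambda e: f"SMS to neighbors: smoke detected in {e['room']}",
)
print(recipe({"smoke": True, "room": "kitchen"}))
# SMS to neighbors: smoke detected in kitchen
print(recipe({"smoke": False, "room": "kitchen"}))  # None
```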

Not all of those applications may strike you as equally exciting at first glance. And while almost all of them are available immediately (a few, such as the Google services, won’t debut until the fall), it’s also worth noting that the products involved may not be in widespread use yet. It’s not clear, for instance, how many people own Whirlpool washers they can control with an app (and which can coordinate with the Nest thermostat to schedule cycles around peak energy-usage periods).

But these applications should give you a good sense of how Nest sees its future in the smart home—as a kind of traffic cop for the connected home, one that leverages the data it’s collecting about residents to inform and work with other connected devices.

It’s worth noting that Nest officials don’t embrace the notion of making their products into a “hub” that connects and coordinates other devices, except in specific and user-friendly ways. “We’re building this symbiotic experience” between Nest’s gadgets and third-party devices, says Greg Hu, director of Nest’s developer program. “It’s not about a single side becoming the hub and controlling the other.”

The data Nest gizmos collect on their households is central to making these new applications work. Its thermostat already “learns” from the behavior of residents as they turn it up and down, eventually figuring out how to program itself. It will even turn down the heat or air conditioning when residents are away, a conclusion it will reach after a certain period in which no one adjusts the temperature and the thermostat’s built-in infrared sensors detect no motion. Nest’s Protect smoke detectors likewise carry eight different sensors, including four that detect movement.
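
The away-detection logic described above amounts to a conjunction of two quiet signals: no manual adjustments and no motion for long enough. As an illustration of that heuristic (not Nest’s actual algorithm, and the threshold is an assumption):

```python
def probably_away(minutes_since_adjustment, minutes_since_motion,
                  threshold_minutes=120):
    """Infer 'away' when nobody has touched the thermostat AND the
    motion sensors have been quiet, both for longer than the threshold."""
    return (minutes_since_adjustment > threshold_minutes and
            minutes_since_motion > threshold_minutes)

def target_temperature(normal_c, away_c, away):
    """Fall back to an energy-saving setpoint when the house looks empty."""
    return away_c if away else normal_c

away = probably_away(minutes_since_adjustment=300, minutes_since_motion=240)
print(target_temperature(21.0, 16.0, away))  # 16.0
```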

And despite a recent setback for its Protect smoke detectors (including a product recall), Nest’s ambitions are clearly growing in this respect. On Friday, for instance, it acquired the home surveillance-camera maker Dropcam for a reported $555 million, providing it yet another platform for collecting data that can be mined and used in new ways. (The company says its privacy policy prohibits the sharing of that information without customers’ permission.)

Here’s the full Nest release:


How Many Languages Do Developers Need To Know?

At its Worldwide Developer Conference last week, Apple announced its new programming language Swift. It’s the latest in a rash of new languages developed by big tech companies, in some cases for specific use with their own platforms.

Apple has Swift for iOS developers; Facebook has Hack, a language for back-end development. Google, meanwhile, has its own entries—the would-be JavaScript replacement Dart and a new general-purpose programming language called Go.

This proliferation of new languages raises a number of issues for developers. Perhaps the most significant is one my colleague Adriana Lee raised after Apple’s Swift announcement:

A Computer-Language Babel

There are already hundreds of programming languages, and more pop into existence all the time. Many are designed for a relatively narrow range of applications, and large numbers never catch on beyond small groups of coders.

Similarly, big tech companies have been developing new languages for about as long as there have been big tech companies. The seminal general-purpose language C originated at AT&T Bell Labs in the early 1970s. Java, now the primary language for development of Android apps, was born at Sun Microsystems in the 1990s.

What’s different these days is the extent to which companies embrace new languages to further their specific business objectives—a process that also has the effect of creating a dedicated base of developers who are effectively “locked in” to a company’s particular platform. That sort of dual strategy dates back at least to Sun’s introduction of Java, which the company promoted as a way to challenge Microsoft’s dominance on the PC desktop. (Things didn’t work out the way Sun planned, although Java eventually found a home in enterprise middleware systems before Google adopted it for Android.)

It’s also clearly Apple’s goal with Swift. Should it live up to the company’s early hype, Swift seems likely to simplify iOS app development by filing the rough edges off Objective-C, the current lingua franca of iOS and Mac OS X developers. But it will also require those same developers to learn the ins and outs of a new language that they’re unlikely to use anywhere else.

Why Companies Roll Their Own

Which cuts against the ingrained “don’t reinvent the wheel” philosophy that animates most developers. So why don’t more companies just adopt already existing languages to new uses?

One answer is simply that companies build their own languages because they can. Designing a new language can be complex, but it’s not particularly resource-intensive. What’s hard is building support for it, both in terms of providing software resources (shared code libraries, APIs, compilers, documentation and so forth) and winning the hearts and minds of developers. Companies are uniquely positioned to do both.

There’s also the fact that existing languages are often difficult to shoehorn into today’s complex code frameworks. Take, for instance, Facebook’s decision to create Hack, a superset of the scripting language PHP that’s commonly used in Web development.

Facebook’s main goal with Hack—a common one these days—was to improve code reliability, in this case by enforcing data-type checking before a program is executed. Such checks ensure that a program won’t, say, try to interpret an integer as a string of characters, an error that could yield unpredictable results if not caught. In Hack, those checks take place in advance so that programmers can identify such errors long before their code goes live.
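
The integer-as-string error described above is easy to reproduce in any dynamic language; the point of Hack’s checker is to move the failure from run time to check time. A Python analogue, using function annotations as the declared types (purely an illustration, not Hack itself):

```python
import inspect

def check_call(func, *args):
    """Reject a call before executing it if an argument's type doesn't
    match the function's annotation -- a toy version of checking types
    ahead of execution, in the spirit of Hack's checker."""
    sig = inspect.signature(func)
    for name, value in zip(sig.parameters, args):
        expected = sig.parameters[name].annotation
        if expected is not inspect.Parameter.empty \
                and not isinstance(value, expected):
            return f"type error: {name} expects {expected.__name__}"
    return func(*args)

def shout(message: str) -> str:
    return message.upper() + "!"

print(check_call(shout, "fire"))  # FIRE!
print(check_call(shout, 42))      # type error: message expects str
```

In Hack, that second outcome is reported by the type checker before the code ever runs, instead of surfacing as an unpredictable failure in production.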

According to Julien Verlaguet, a core developer on Facebook’s Hack team, the company first looked for an existing language that might allow for more efficient programming. But much of Facebook was already built on PHP, and the company has built up a substantial software infrastructure to support PHP and its offshoots. While it’s possible to make PHP work with code written in a different language, it’s not easy—nor is it fast.

“Let’s say I try to rewrite our PHP codebase in Scala,” Verlaguet said. “It’s a well-designed, beautiful language, but it’s not at all compatible with PHP. Every time I need to call to PHP from the Scala part of the codebase, I’ll lose performance. We would have liked to use an existing language, but for us it just wasn’t an option.”

Instead, Facebook invented Hack, which has enough in common with PHP that it can share the company’s existing infrastructure. The vast majority of the Facebook codebase has been migrated from PHP to Hack, said Verlaguet, but the company has open sourced the language in hopes that independent developers will find uses for it outside of Facebook. 

“You can still use PHP,” he said. “But we’re hoping you’ll want to use Hack.”

Who Holds The Power

Therein lies the balance of power between companies and developers. Companies can make their languages as specific as they like. But if developers don’t want to use them, nobody is going to—outside, that is, of anyone who might harbor hopes of one day working at the company that invented the language.

It’s not unusual for companies to make it easiest to develop in one language over another. For example, you would use Objective-C to develop iOS apps, but Java to develop Android apps. This has never been a major sticking point with developers because both Objective-C and Java are general purpose object-oriented languages. They’re useful for a number of purposes. 

Hack, Dart, Go and Swift, however, have so far proven useful only for particular company-designated programming solutions, usually in tandem with that company’s programming environment of choice. Granted, it may be too soon to judge. Hack, for example, can be used in several back-end implementations; it’s just so new that Facebook doesn’t yet have any data showing that people want to use it that way.

It’s not that developers aren’t capable of learning multiple languages. Most already do. Think of them like the Romance languages—if you know Spanish, it’ll be easier to learn French and so on than if you didn’t already know one. Likewise, if you already know Java, it’ll be easier to learn Ruby or Perl. And if you know PHP, you basically already know Hack.

Rather, it’s a question of habit. If Java already solves your specific problems, you have no incentive to learn Ruby. And if you’re happy coding iOS apps in Objective-C, you won’t feel much temptation to pick up Swift.

To some developers, though, ecosystem-specific languages just make life harder for everybody. Freelance designer Jack Watson-Hamblin, for instance, told me that initiatives like Apple’s Swift risk overburdening programmers and fragmenting the developer community:

It’s important for programmers to know multiple languages, but forcing them to keep up with new languages all the time doesn’t make sense. If I’m making a simple cross-platform app, I don’t want to have to know four languages to do it. I only want to use the single-purpose language if I really need to.

Watson-Hamblin argues that when companies each build their own language for their own needs, it slows down overall progress both by dividing the attention of coders and by enforcing a monolithic perspective on development within that language. “When companies are in charge of a language vs. an open-source community, it’s like the difference between a corporation and a start-up,” he said. Communities are more flexible and adaptive by definition. 

Of course, Apple had a lot of very good reasons to start from scratch with Swift, just as Facebook did when it invented Hack. That doesn’t mean it’s not going to force change on developers—some of it doubtless unwelcome. 

“As new languages are invented, it gets more hegemonic,” said Verlaguet. “It can be frustrating to have to keep up. But on the other hand, you’re more likely to have a new language to fit your exact problem. Imagine the reverse—a world where programmers used the same language for everything. It’d be a language that could do everything poorly but nothing well.” 

Lead image by Flickr user Ruiwen Chua, CC 2.0

