Posts tagged developers
Convenience drives much of the world’s best technology, from Amazon Web Services to Web frameworks like AngularJS. But that convenience, which makes it easy to get started quickly, often comes with a hidden price tag: to become truly productive, you’re going to have to sweat.
Great technology is often deceptively simple, allowing newbies to intuitively “learn” the system without much effort. The problem comes when people assume they have mastered the technology when all they’ve really done is the equivalent of coding a “hello world” app. Before you blame the tool, you often need to invest time in learning to use it correctly.
“Mixed Feelings” About AngularJS
The problem, as Anand Mani Sankar suggests, is that while it’s simple to start with AngularJS, that simplicity belies the power of the framework:
[AngularJS] also simplifies the application development process by abstracting a lot of the internal complexity and exposing only what the application developer needs to know.
While this sounds like a great thing, it can also lead newbies to think that they’ve mastered the system upon completing their first “hello world” app:
The AngularJS journey can evoke mixed feelings. The learning curve is very different from other JS frameworks. The initial barrier to get started is very low. But once you start diving deep the learning curve suddenly becomes steep.
Sankar then points to Ben Nadel’s humorous depiction of an AngularJS journey, a graph of emotional peaks and troughs over time.
Some people, of course, get stuck in the troughs. George Butiri, for example, gets a lot of Google search love with his “The reason Angular JS will fail” post. Butiri argues that AngularJS is actually quite difficult, without giving much in the way of specific examples of why this is so, at least beyond “because I like jQuery more.”
It’s So Easy To Fail
Much of the best technology is like this. It’s deceptively simple to get started, but if you want to truly master it, you’re going to have to make a big investment of your time. Some people start strong, discover the complexity, and then complain that technology doesn’t remain mind-blowingly easy forever and ever.
Sorry, real technology doesn’t work that way. It always requires effort and will fail if not applied in the right way.
Take NoSQL databases, the world in which I spend most of my time.
Newbies to NoSQL, whether MongoDB, HBase or Cassandra, like to tout its schema-less nature. The old world of relational databases required a rigid schema but HURRAY! In this new world of NoSQL, gone are schemas that define your data’s structure, gone are DBAs, GONE ARE RULES! So easy!!
Which, of course, is complete nonsense. As my colleague Asya Kamsky likes to say, “NoSQL != NoDBA.” (That is, NoSQL is not the same as “no database administrator.”)
NoSQL does not mean “no DBA.” If anyone tries to convince you otherwise, they probably have something to sell you. This does not mean you need a team, or even a person, with the title “DBA.” But if you have a database, whether relational or non-relational, then someone has the role of DBA, and if they don’t know that they do, then a whole bunch of things aren’t being done or thought about before problems happen.
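To make the point concrete, here’s a minimal sketch (plain JavaScript, with hypothetical collection and field names) of what “schemaless” really means in practice: the shape check the database no longer enforces has to live somewhere, and that somewhere is your application code.

```javascript
// In a schemaless store, nothing stops documents of inconsistent shape
// from landing in the same collection. The schema doesn't disappear;
// it moves into application code that someone must own.
const users = []; // stand-in for a schemaless collection

// The "DBA role," in code: an explicit shape check the database
// itself will never perform for you.
function validateUser(doc) {
  const errors = [];
  if (typeof doc.email !== "string" || !doc.email.includes("@")) {
    errors.push("email must be a valid-looking string");
  }
  if (!Number.isInteger(doc.signupYear)) {
    errors.push("signupYear must be an integer");
  }
  return errors;
}

function insertUser(doc) {
  const errors = validateUser(doc);
  if (errors.length > 0) {
    throw new Error("rejected: " + errors.join("; "));
  }
  users.push(doc);
}

insertUser({ email: "ada@example.com", signupYear: 2014 }); // accepted

let rejected = false;
try {
  // A raw schemaless store would happily accept this malformed document.
  insertUser({ email: 42, signupYear: "nope" });
} catch (e) {
  rejected = true;
}
```

Without the check, the malformed document goes in silently and the "schema" problem resurfaces later, at read time, where it is far more expensive to fix.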
Go through the hater posts about NoSQL databases or AngularJS or most any technology you prefer and I guarantee many, if not most, of them are written by people who feel cheated that the technology didn’t fit how the user wanted it to work, often with minimal to no real investment. Sure, sometimes technology fails. At times, spectacularly.
But far too often we complain when technology doesn’t magically remove our need to work.
Fewer Levers, More Happiness?
One way to get the best of both worlds is through managed services like Amazon Web Services’ Redshift. Redshift is a fully-managed data warehouse that runs in the cloud. “Fully managed” means that it’s easier to use, but it also means that users lose some of the knobs and levers they might have in Teradata or another enterprise data warehouse.
That, however, is precisely the point.
As Matt Wood, general manager of data science at AWS, told me recently, Redshift and other AWS services aim to improve ease-of-use for users by removing complexity. Giving users fewer “levers” means that AWS also gives them fewer ways to fail. The trick, of course, is finding the balance between product simplicity and user control.
Airbnb, for example, was elated by how easy Redshift was to get started with, but then ran into some trade-offs (and required investments):
The first challenge we had was schema migration. Even though Redshift is based on Postgres 8.0, the “subtle” differences are big enough, forcing you into the Redshift way of doing things. We tried to automate the schema migration, but the problem was bigger than we originally expected and we decided it was beyond the scope of our experiment. Indexes, timestamp type, and arrays are not supported in Redshift, thus you need to either get rid of them in your schema or find a workaround.
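One of the workarounds the Airbnb team alludes to is standard practice: since Redshift has no array type, array-valued fields get flattened into a child table before load. A rough sketch, with hypothetical field names (a real migration would emit these rows for a COPY into a child table):

```javascript
// Redshift doesn't support array columns, so a record like
// { id, tags: [...] } must be flattened into one child row per element.
function flattenTags(records) {
  const rows = [];
  for (const r of records) {
    for (const tag of r.tags) {
      rows.push({ listingId: r.id, tag }); // one row per array element
    }
  }
  return rows;
}

const listings = [
  { id: 1, tags: ["beach", "wifi"] },
  { id: 2, tags: ["loft"] },
];

const tagRows = flattenTags(listings);
// tagRows now holds three rows: (1, "beach"), (1, "wifi"), (2, "loft"),
// ready to load into a child table keyed by listingId.
```

The array data survives intact; it just lives in a joinable child table instead of a single column, which is the "Redshift way of doing things" the quote describes.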
Having put in the effort, however, Airbnb saw a minimum of 5x performance improvements over other systems and dramatic cost savings. Easy to get started, but also worth continuing to invest.
And so it is with a lot of great software that is deceptively simple to use. To get beyond newbie status with any great technology, you’re going to have to use it as intended, and you’re going to have to spend the time and effort to master it.
There may be free software, but there’s no free lunch.
View full post on ReadWrite
Google announced a few changes to the search box you find within the Google search results. Often, when you search for a brand name, Google will add a search box within the search snippet result for that site. That box allows you to search within that specific site only. The results are similar to…
Please visit Search Engine Land for the full article.
Everyone wants to hire more engineers, including you, driving software salaries through the roof. Unfortunately, it’s very likely that you don’t have the slightest clue how to recruit well.
Take heart. While your ability to spot real talent in an interview may be weak, open source makes it relatively easy to see who can actually code, and who simply knows how to answer useless, abstruse questions.
Finding great technical talent is important. In fact, in a world increasingly run by developers, I’d argue that it’s the most important thing any company does, whether it’s a technology vendor or a manufacturer of cars or clothes. The better the engineering, the better the product, and the better the product, the less reliant your company needs to be on sales and marketing, at least, early on.
Or, as venture capitalist Fred Wilson puts it, “Marketing is for companies who have sucky products.”
The problem, of course, is that everyone is scouring the planet for the same engineers. Which, in turn, has driven the cost of developer salaries way up.
There are all sorts of gimmicks to finding great engineers. Google, for example, used to impose complex brainteasers on job applicants—only to discover they were utterly useless, as Laszlo Bock, senior vice president of people operations at Google, said:
We found that brainteasers are a complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.
Brainteasers, then, are out.
But, as Bock went on to highlight, so are brand-name schools, test scores and grades. “Worthless,” he declares. In fact, the whole hiring process is a “complete random mess.”
So how can you fix this?
Changing The Interview Process
One way is to change the way you interview. As Laurie Voss, the CTO of NPM, recently argued, “You are bad at giving technical interviews…. You’re looking for the wrong skills, hiring the wrong people, and actively screwing yourself and your company.”
Sadly, he’s probably right. And not just about you. We’re all pretty bad at technical interviews (or interviews, generally, for that matter).
The gist of his post is that too often we “over-valu[e] present skills and under-valu[e] future growth,” hiring people based on what they’ve done (or where they went to school) rather than what they can do. Or, as he summarizes:
1) Many interview techniques test skills that are at best irrelevant to real working life.
2) You want somebody who knows enough to do the job right now;
3) or somebody smart and motivated enough that they can learn the job quickly.
4) You want somebody who keeps getting better at what they do.
5) Your interview should be a collaborative conversation, not a combative interrogation.
6) You also want somebody who you will enjoy working with.
7) It’s important to separate “enjoy working with” from “enjoy hanging out with.”
8) Don’t hire [jerks], no matter how good they are.
9) If your team isn’t diverse, your team is worse than it needed to be.
10) Accept that hiring takes a really long time and is really, really hard.
Bock echoes this, indicating that Google’s experience has been that behavioral interviews work best. Rather than asking a candidate to remember some obscure computer science fact, Google now starts with a question like:
“Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.
This is a great approach, but there’s a way to take it one step further.
Open Source Your Interview
The best place to see how engineers solve problems in the real world is in the open-source projects to which they contribute. Open-source communities offer a clear view into an engineer’s interactions with others, the quality of her code and a history of how she tackles hard problems, both individually and in groups.
No guesswork. No leap of faith. Her work history is all there on GitHub and message boards.
But open source offers other benefits, too. As Netflix’s former head of cloud operations, Adrian Cockcroft, once detailed, open source helps to position Netflix as a technology leader and to “hire, retain and engage top engineers.” How? Well, the best engineers often want to work on open source. Providing that “perk” is essential to hiring great technical talent.
Interviews are important to ascertain cultural fit, among other things, but they shouldn’t be a substitute for the more informative work of analyzing a developer’s open source work.
And if they have none, well, that tells you something, too. A colleague at a former company told me that the best engineers were all on GitHub, not LinkedIn. While perhaps an overstatement, there’s a fair amount of truth to it, too.
In sum, you should be able to get to know your next engineering hire through open-source development far better than through any interview process, no matter how detailed.
Photo via Shutterstock
We knew StackOverflow was different. Turns out it’s, well, really different.
The technical Q&A site looks like your standard Web developer hangout. But according to new data from IEEE Spectrum, its community has some unusual technical tastes. For instance, its readers evince a serious interest in the niche-y area of embedded hardware development—that is, programmable systems that typically live inside other gadgets and don’t expose a user interface to the average person.
On the other hand, this data doesn’t necessarily mean researchers have uncovered unexpected pockets of embedded or enterprise popularity. Maybe StackOverflow’s community preferences are simply telling us how poorly documented these technologies are—and how the right online forum can self-organize to meet the needs of developers who have to work with them.
Correlating Online Technical Communities
In response to IEEE Spectrum’s new programming language popularity analysis tool, Redmonk analyst Donnie Berkholz set out to try to uncover “commonalities and communities across all of [different] sources” so as to glean “insight into what technologies developers care about and use, and which provide mainly reinforcement of others.”
So how does the popularity of programming languages as measured by jobs compare against which ones get discussed on social media and open-source code hubs? The top-10 lists line up closely for some sources and diverge sharply for others.
Some correlations he found make intuitive sense.
For example, there is an exceptionally strong correlation between Twitter conversation and Google Trends. As he puts it, “people talking about programming languages in real-time chat tend to also search for what they’re talking about.”
Berkholz also uncovered very strong correlations (above 0.85) between Google Trends and search; programming language interest across different job sites like Dice and CareerBuilder; Reddit and Google Trends (developers look for information about current topics on different sites); and GitHub created and StackOverflow questions (a correlation of open-source usage and broader conversation among forward-leaning communities).
Others correlate more weakly between sources—like HackerNews with most everything else.
But StackOverflow stands out.
StackOverflow Developers: A Breed Apart?
In fact, StackOverflow developers stand alone. Completely alone, it would seem from Berkholz’s analysis. As he notes:
The weakest correlations were between StackOverflow views and almost everything else. It’s shocking how different the visitors to StackOverflow seem from every other data source.
Consider the top-10 programming languages on StackOverflow in terms of what readers actually read.
These results differ markedly from all other sources. As Berkholz highlights:
Three of the top 5 are hardware (Arduino, VHDL, Verilog), supporting a strong audience of embedded developers. Outside of StackOverflow views, these languages are nonexistent in the top 10 with only two exceptions: Arduino is #7 on Reddit and VHDL is #8 in IEEE Xplore. That paints a very clear contrast between this group and everyone else, and perhaps a unique source of data about trends in embedded development. Enterprise stalwarts are also commonplace, such as Visual Basic, Cobol, Apex (Salesforce.com’s language), and ABAP (SAP’s language).
This could suggest that StackOverflow is a leading indicator of hot new technologies. For example, the hardware bent to its audience might point to rising interest in the Internet of Things, which is going to be built on top of a whole lot of, well, embedded hardware systems.
Or, frankly, it could just mean that StackOverflow does a particularly good job of providing a home to smaller communities of embedded and enterprise developers that can’t get good documentation from Salesforce.com.
I mean, really, who wants to hang out in IBM’s Cobol Café?
But Who Are These People?
While we don’t have data from 2013 or 2014, a December 2011 survey polled 2,532 StackOverflow users. A significant chunk came from the U.S., with the largest percentage (12%) in California and the second largest (8.4%) in New York. A majority (53%) were aged 25-34, and 68% had at least 6 years of IT/programming experience.
Not particularly surprising.
What is surprising, given the IEEE Spectrum data, is that a whopping 40% describe themselves as web application developers while only 4.3% are embedded application developers. Most are building enterprise applications (32%) or web platforms (33%), but the languages they indicate they know differ from the languages they view on StackOverflow.
This jibes with the enterprise developer finding in the IEEE Spectrum data. It’s still hard to see the embedded hardware developer in these numbers—though not so hard to uncover the enterprise developer.
This becomes more pronounced if we only look at StackOverflow users who answer questions (and not necessarily those who read the answers).
In short, there’s a difference between those who answer questions and those who merely lurk. For example, the top 20 most active StackOverflow participants have little to do with embedded engineering, as this data visualization shows. (Click through to see what each works on.)
StackOverflow Is Unique
Such technologies don’t have great documentation within their home communities (e.g., Salesforce.com’s Apex language), but StackOverflow has become the go-to home-away-from-home community for these embedded and enterprise technologies.
There are far more questions tagged “Java” (625,000+), for example, than for Arduino (12,000+), but according to the IEEE Spectrum data there’s way more reader interest in the latter than the former. The IEEE Spectrum approach measures both the number of questions posted mentioning each language in 2013 and the amount of attention paid to those questions. In StackOverflow’s world, people pay far more attention to embedded and enterprise topics than to general Web development, even though its user base has historically skewed toward web development.
A different breed, indeed. Or, quite possibly, an indication of mainstream enterprise and web developers looking beyond the “mainstream” to tap into Internet of Things applications and other modern applications?
Lead image by Flickr user Alexandre Dulaunoy, CC 2.0
Demand, meet supply. The world is in dire need of millions of Internet of Things developers within the next few years. The good news? According to a new Evans Data survey, 17% of the world’s software developers are already working on IoT applications; those in the Asia-Pacific region are particularly active.
The bad news? This developer population doesn’t have a strong history of software and cloud services innovation.
Asia-Pacific: A Hotbed Of Activity
To reprise: Evans Data’s recent global survey of over 1,400 software developers found that 17% are working on applications for connected IoT devices, while an additional 23% expect to begin work on them in the next 6 months. Given that so much of the world’s electronics are produced in Asia-Pacific, it’s perhaps not surprising that it’s the region with the most aggressive IoT developers.
In fact, nearly 23% of APAC developers are currently developing software for the Internet of Things. Only 20% of APAC developers say that they have no such plans, compared to 36% in North America and 49% in EMEA.
But the real question for APAC developers is whether they’ll repeat their errors of the past few decades: building great hardware and neglecting to connect that hardware with software and services. Sensors, it turns out, are somewhat pointless without software and services to make sense of their data.
More Devices, More Connected
The number of ‘things’—30 billion devices connected to the Internet by 2020, according to Gartner, compared to 7.3 billion personal devices—is impressive but not the real story. Soon enough, as Gartner suggests, these devices “will … be able to procure services such as maintenance, cleaning, repair, and even replacement on their own.” They will be able to interact without human intervention, creating all sorts of possibilities, not to mention security vulnerabilities.
Developers are making this happen, developers who believe in and are helping to shape a connected future:
Due to the convergence of cloud, embedded systems, real-time event processing and even cognitive computing, developers are blessed with a perfect storm of low-cost devices and the ability to intelligently connect them. That in turn will yield revenue-generating services, which is where the real IoT money is.
Opening Up The Internet Of Things In APAC
While 31% of developers associate the Internet of Things with cloud computing, according to the Evans Data survey, the connections that bring device data to the cloud are much more important. As Intel Internet of Things business leader Ton Steenman complains, companies currently spend 90% of their IoT budgets “stitching things together,” when that number should be closer to 10%.
APAC hasn’t traditionally been good at such “stitching.”
That stitching is hard partly because developers haven’t trusted third-party networks to carry their device data, and partly because connectivity hasn’t been built into devices and sensors at the pace needed, as data from Berg Insight suggests:
• Wireless connectivity was incorporated into just a third of point-of-sale terminals sold in 2013
• 27% of ATMs in North America are connected to cellular networks while only 5% to 10% are connected in Europe
• The number of “oil and gas” devices with cellular connectivity hovered at 93,000 in 2013 but will jump to 263,000 new units by 2018
There are signs that this is changing, particularly in APAC, which was an early pioneer in mobile communications. Just looking at the prevalence of connected smart electricity meters, APAC has the lead, despite lagging considerably in 2011.
Companies in APAC have struggled to build compelling software (e.g., Sony smartphone interfaces) or cloud services (e.g., Samsung cloud sync and back-up services). While this is changing, it’s an open question whether APAC will be able to take the lead in developing connected experiences across devices.
One place to start is by opening up APIs.
As Rob Wyatt argues, “It is the open, local API that is missing from the Internet of Things.” To make the Internet of Things work anywhere, and particularly in Asia-Pacific, it’s not enough for “vendors … to provide dumb ‘smart’ devices with a select handful of ‘strategic’ integrations within their pay-walled garden.”
For Internet of Things applications to work, device vendors need to provide open APIs so that other developers can hack services around and into them.
If APAC developers do this, they’ll win the war. Again, the battle won’t be won by building nice devices. It will be won by creating compelling developer cloud-services experiences that span a wide array of devices—all of which can start with open APIs on those devices so that developers, both within APAC and outside it, can hack the future.
Lead image by Flickr user Ed Coyle, CC 2.0
Object.observe is still unofficial; it’s so far only incorporated into Chrome, which means developers who use it can’t count on their apps working in other browsers such as Firefox or Apple’s Safari. And it’s not clear when—or even whether—other browser makers will jump on the Object.observe bandwagon.
What Is Object.observe?
I asked Rafael Weinstein, a software engineer at Google who played a big role in Object.observe and Chrome integration, to explain what it is and what it does.
The main advance Object.observe offers is a feature called two-way data binding. Translated from codespeak, that means it ensures that changes in a program’s underlying data are reflected in the browser display.
“In the [Model View Controller] pattern, you have the model [i.e., underlying app data] that describes the problem you want to solve and then you have this view that renders it,” Masad told me. “It turns out that translating the model logic [i.e., the app's data structures] to the user interface is a really hard problem to solve as well as the place in the code where most of the bugs occur.”
“When you have true data binding, it reflects automatically in the user interface without you putting in any ‘glue’ code in the middle,” Masad continued. “You simply describe your view and every time the model changes it reflects the view. This is why it’s desirable.”
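A minimal sketch of the idea Masad describes, using a hand-rolled observer rather than Object.observe itself (the model fields and render function are hypothetical): once the model notifies on every change, the view follows automatically, with no glue code at the call site.

```javascript
// A tiny observable model: setting a property automatically re-renders
// the view. (A hand-rolled sketch of the binding idea, not the
// Object.observe API itself.)
function createObservableModel(initial, onChange) {
  return new Proxy({ ...initial }, {
    set(target, key, value) {
      target[key] = value;
      onChange(target); // the "binding": every model change reaches the view
      return true;
    },
  });
}

// Stand-in for the browser display.
let view = "";
function render(model) {
  view = `Hello, ${model.name}! You have ${model.unread} unread messages.`;
}

const model = createObservableModel({ name: "Ada", unread: 0 }, render);

// No explicit render() call needed; updating the model updates the view.
model.unread = 3;
```

The "glue code in the middle" Masad mentions is exactly what the `set` trap replaces: without it, every place that touches `model` would also have to remember to call `render`.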
It’s 2014. Where’s My Object.observe?
TC39 members include developers from Google, Mozilla, Microsoft, and Apple. Weinstein, who submitted the Object.observe proposal, said that since developers from these companies approved of adding it to ECMAScript, he’s optimistic that they’ll also want to add Object.observe functionality to their companies’ own browsers.
For example, Weinstein also heads up observe-js, a library from the Polymer project that uses Object.observe if it is available and alternate methods if it isn’t. That way, developers can harness Object.observe whenever possible, although they still have to be prepared in case their program runs somewhere it’s not supported.
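That pattern, use the native API when it exists and fall back otherwise, can be sketched with a simple capability check. The polling-style fallback below is a deliberately crude stand-in for observe-js’s real strategies:

```javascript
// Feature-detect Object.observe and fall back when it's missing.
// (In current engines the native API is absent, so the fallback runs.)
const hasNativeObserve = typeof Object.observe === "function";

// Crude fallback: the caller explicitly asks "did anything change?",
// diffing against a snapshot, the way pre-observe libraries did.
function makeObserver(obj, callback) {
  if (hasNativeObserve) {
    Object.observe(obj, callback); // native path (Chrome-only, circa 2014)
    return () => {}; // nothing to check manually
  }
  let snapshot = { ...obj };
  return function checkNow() {
    const changed = Object.keys(obj).filter((k) => obj[k] !== snapshot[k]);
    if (changed.length > 0) {
      callback(changed);
      snapshot = { ...obj };
    }
  };
}

const state = { count: 0 };
let observedChanges = [];
const check = makeObserver(state, (keys) => { observedChanges = keys; });

state.count = 1;
check(); // fallback path: detect the change by diffing the snapshot
```

The library shields application code from the difference: callers just register a callback, and whether it fires natively or via diffing is an implementation detail.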
This Is Your Code On Object.observe
For example, if you build a lot of contact forms for websites, you might keep a library of software functions that shortcut the process of inserting a contact form into a site. In the same way, once the Object.observe API is widely available, libraries will be built to make two-way data binding something developers can add as easily as dropping a library into their code.
The Angular framework uses something called “dirty checking.” As your app runs, Angular repeatedly checks the model to see what changed, even if the view hasn’t changed at all. This continues once the app is deployed; every time the user inputs a change into the app, the dirty-checking code checks whether it needs to refresh the display in response. This adds load time to every screen of your app.
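Dirty checking itself can be sketched in a few lines: keep a snapshot of the model and, after every event, compare field by field, re-rendering only when something differs. The model fields here are hypothetical; the point is that the comparison runs on every cycle whether or not anything changed.

```javascript
// Dirty checking, sketched: after every "event", diff the model against
// a snapshot. The cost of the check is paid even when nothing changed.
let renderCount = 0;
let checkCount = 0;

function dirtyCheck(model, snapshot) {
  checkCount++; // runs on every cycle, changed or not
  const dirty = Object.keys(model).some((k) => model[k] !== snapshot[k]);
  if (dirty) {
    renderCount++; // re-render the view
    Object.assign(snapshot, model); // refresh the snapshot
  }
}

const model = { name: "Ada", unread: 0 };
const snapshot = { ...model };

dirtyCheck(model, snapshot); // event 1: nothing changed, check still runs
model.unread = 3;
dirtyCheck(model, snapshot); // event 2: change detected, view re-renders
dirtyCheck(model, snapshot); // event 3: nothing changed, check still runs
```

Three checks for one actual change: that wasted work on every cycle is the overhead that change-notification APIs like Object.observe are designed to eliminate.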
Goodbye, Cruel Wrappers And Dirty Checking
Igor Minar, lead developer on the AngularJS framework, said developers who use Angular won’t have to work with Object.observe directly.
“Object.observe is a low level API, which is kind of awkward or inconvenient to use directly. That’s good, because by providing such a low level API the platform enables us to build on top of the API and create higher layers with more opinions,” he said. Object.observe is already slated for addition as a feature in AngularJS 2.0.
Early versions of Object.observe have already given Angular a performance boost. In a test, dirty checking took 40 minutes compared to Object.observe’s 1 to 2 minutes. In other words, Angular became 20 to 40 times faster while using Object.observe.
“Object.observe is a low level feature that framework authors will use,” said Masad. “Developers using those frameworks won’t notice a difference except for higher performance programs.”
Screenshot of Google engineer Addy Osmani introducing Object.observe at JSConf EU
The Platform is a regular column by mobile editor Dan Rowinski. Ubiquitous computing, ambient intelligence and pervasive networks are changing the way humans interact with everything.
The middle class of mobile app developers is all but non-existent.
According to a survey from market research firm VisionMobile, there are 2.9 million app developers in the world who have built about two million apps. Most of those app developers are making next to nothing in revenue while the very top of the market makes nearly all the profits. Essentially, the app economy has become a mirror of Wall Street.
According to the survey: “The revenue distribution is so heavily skewed towards the top that just 1.6% of developers make multiples of the other 98.4% combined.”
About 47% of app developers make next to nothing. Nearly a quarter (24%) of app developers who are interested in making money from their apps are making nothing at all. About another quarter (23%) make less than $100 a month from each of their apps. Android is more heavily affected by this trend, with 49% of app developers making $100 or less a month compared to 35% for iOS.
Per the survey, only 6% of Android developers and 11% of iOS developers make more than $25,000 per month, numbers that make it extremely hard to build a real, sustainable business with mobile apps.
If we chop off the top and the bottom of the market, that leaves a “middle class” that is extremely poor, struggling to make any kind of money. About 22% of developers earn between $100 and $1,000 a month off their mobile apps. The higher end of that scale isn’t bad for hobby developers, but professional app makers can’t get by on that. VisionMobile draws an “app poverty line” at apps that make less than $500 a month, leaving 69% of all app developers in this category.
That leaves a very thin middle class that makes between $1,000 and $10,000 a month per app. To put that in perspective, the American middle class at large earns between $40,000 and $95,000 annually (with the “middle-middle” making between $35,000 and $53,000 per year).
So what happened to all the riches in the app economy? The fact is that the money dried up a long time ago and only the top of the food chain makes any real money. The developer middle class is small and struggling while two-thirds of developers trying to make money off their apps may just look towards other ways to employ their skills.
Vision Mobile concludes:
More than 50% of app businesses are not sustainable at current revenue levels, even if we exclude the part-time developers that don’t need to make any money to continue. A massive 60-70% may not be sustainable long term, since developers with in-demand skills will move on to more promising opportunities.
The Balloon Effect
The death of the developer middle class should come as no surprise to industry watchers. The app economy has mirrored the rest of the mobile industry of the last several years.
The first comers to the industry carved out names for themselves and benefited from the unexpected popularity of the smartphone (led by Apple’s iPhone and the App Store). Copycats and entrepreneurs raced to get in on the riches, creating bloated app stores filled with poor and mediocre apps in just about every product category you could think of. This pushed out quality apps serving limited markets, and revenue consolidated at the top of the market.
App store inventories continue to grow, one poor app after another. This will lead to an eventual realignment of the pool of developers building mobile apps, as they struggle to find revenue or venture money to grow their businesses. In the past, I have called this the balloon effect. We’ve seen it in smartphone manufacturing (where middle-tier players like HTC get pushed out as Samsung and Apple dominate) and in developer services, where companies struggle to compete against each other and against industry heavyweights. Eventually, these companies are either bought or merge. (StackMob and Parse were acquired; PlayHaven and Kontagent merged to become Upsight.)
The app economy is one of the foundational elements of the mobile industry, so the balloon effect takes longer to manifest there, but its impact on the developer community is much broader.
The Sparrow In The Coal Mine
Developer David Barnard offers a cautionary tale about an app called Sparrow.
We’ve all read stories about and been enthralled by the idea of App Store millionaires. As the story goes… individual app developers are making money hand over fist in the App Store! And if you can just come up with a great app idea, you’ll be a millionaire in no time!
Sparrow was an app built by a three-person team, which became five people after a venture capital seed round. It started as a paid app in the Mac App Store and then the iOS App Store, with plans for a Windows app on the way. Sparrow debuted well and had a couple of popularity spikes with new releases and media coverage. But Sparrow was not long for the world. It could not sustain the popularity needed to generate enough revenue for its team to earn the riches its efforts may have deserved. Eventually Sparrow sold to Google, a quality outcome. But most developers will never see the same type of popularity spikes, venture capital investment or exit to a huge company that Sparrow experienced.
If a well-received, well-made and popular app like Sparrow could not hack it in the mobile app business, the average indie developer has little chance of making a dent without stumbling upon a mega hit, a la Flappy Bird (developed by a lone programmer in Vietnam). The kicker is that Sparrow’s tale … is from 2012.
Two years later, the opportunities for apps like Sparrow have more or less dried up as thousands of apps have filled its category, making it harder for app publishers to stand out from the crowd. For every Instagram success story, there are thousands of apps that make little to no money and have no prospect of success in the near future.
Barnard summed it up well back in 2012, offering a prognosis for the app-developer middle class.
Given the incredible progress and innovation we’ve seen in mobile apps over the past few years, I’m not sure we’re any worse off at a macro-economic level, but things have definitely changed and Sparrow is the proverbial canary in the coal mine. The age of selling software to users at a fixed, one-time price is coming to an end. It’s just not sustainable at the absurdly low prices users have come to expect. Sure, independent developers may scrap it out one app at a time, and some may even do quite well and be the exception to the rule, but I don’t think Sparrow would have sold-out if the team—and their investors—believed they could build a substantially profitable company on their own. The gold rush is well and truly over.
Top image courtesy of Flickr user Bennet.
View full post on ReadWrite
Apple really wants developers to switch to Swift. And it looks like the feeling is mutual.
Six weeks after Apple unveiled Swift, the new programming language for iPhone and Mac applications is attracting a noticeable level of interest from developers. Phil Johnson at IT World crunched the numbers, and at least on GitHub, developers are picking it up.
Swift is now the 15th most widely used language on GitHub, with more than 2,600 new Swift repositories created since June, according to Johnson's analysis. More significantly, Johnson believes that interest in Swift is directly replacing interest in Objective-C:
“From the beginning of January through the end of May, developers created about 294 new Objective-C repositories per day on GitHub. Since Swift was released in early June, that average has dropped to about 246 repos per day. That drop of 48 repos per day is pretty close to the average number of new Swift repositories created per day since its release and initial spike in interest.”
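Johnson's back-of-the-envelope reasoning is easy to check: the drop in daily Objective-C repository creation roughly matches the daily rate of new Swift repositories. A quick sketch using the figures quoted above:

```python
# Figures quoted from Johnson's GitHub analysis
objc_repos_per_day_before = 294  # January through May, pre-Swift
objc_repos_per_day_after = 246   # since Swift's release in early June

drop = objc_repos_per_day_before - objc_repos_per_day_after
print(drop)  # 48 -- roughly the average number of new Swift repos per day
```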
Apple has shown a marked interest in getting developers to adopt Swift, even going so far as to launch a surprisingly open and friendly development blog.
From Apple’s perspective, Swift is a simpler, safer, faster-to-run alternative to the somewhat clunky and error-prone Objective-C language now used to write apps for iPhones, iPads and Macs. But even if Swift is the magic bullet Apple says it is, the company still has to rally developers to switch from the old way of doing things to an unproven new language.
The GitHub data shows that at least some developers are turning over a new leaf.
A beautiful aspect of Google’s Android operating system has always been that it allows developers and enthusiasts to strip away the platform’s core experience and replace it with homebuilt, customized versions. Custom ROMs have been part of Android since nearly the beginning.
So it is natural that custom ROMs have now come to Android Wear, Google’s version of the operating system that runs on smartwatches and wearable devices.
Android developer Jake Day has released one of the first custom ROMs for the LG G Watch, one of the first two Android Wear watches to hit the market. Day posted the ROM on RootzWiki, an Android news and information site for developers and designers.
The ROM—nicknamed Gohma after a boss in the video game Zelda—is fairly simple. It improves battery life of the LG G Watch, speeds up overall performance, reduces lag time between notification cards and increases vibration intensity.
Gohma isn’t a full-blown Android Wear replacement. The ROM abides by the basic user interface design principles of Wear, and the LG G Watch will still take over-the-air updates to the operating system from Google and LG (which will wipe out the ROM installation). Day makes sure to note that Gohma is a small release intended to improve performance and to make sure that everything is working well before he releases a fuller version of the ROM at a later date.
Gohma is fairly easy to install. Knowledgeable developers will just need to make sure the device’s bootloader is unlocked; the ROM script will then root the device and install itself, allowing the custom software to run.
Unleashing The Community: A Good Thing For Smartwatches
Android Wear generally leaves a lot to be desired. It is Google’s first go at smartwatch software and, initially, it is basically just a notifications device strapped to your wrist. For the time being, that’s perfectly fine as wrist-based notifications are a (surprisingly) pleasant way to receive messages. But Android Wear and smartwatches in general have much more potential than what is currently available.
Part of that is a hardware problem as engineers are naturally limited by the capabilities of currently available processors and sensors. But the hardware in the LG G Watch is almost the equivalent of a 2011 Android smartphone, so it should be able to do much more than the notification cards and voice interaction that is currently available through the initial release of Android Wear.
This is where the large community of Android developers has an opportunity to build on top of Wear through custom skins and ROMs, making it a better-performing, more functional and attractive device. Day’s Gohma should be just the start, as the heavy hitters in the Android ROM community—like CyanogenMod—will surely get involved, pushing Android Wear development to further feats of utility and maturation.
The Android developer community doesn’t operate in a vacuum either. Google listens to developers and often implements features and requests that developers have built on their own to work around the limitations of stock Android. The Android development community is essentially one giant sandbox for Google to learn about what app builders and consumers want in the next version of the operating system. For the last six years, this process has worked well in helping to build ever better versions of Android for smartphones and tablets. Hopefully with the first custom ROM for Android Wear, Google can learn how to build better software for smartwatches as well.
Images: Gohma via HD Wallpaper. Android LG G Watch by Adriana Lee for ReadWrite.
Looking at smartphone and tablet sales, Google’s Android ecosystem should be printing money for developers. After all, not only are Android device sales outpacing Apple’s iPhone and iPad sales, but Google also shares more Android-related revenue with its ecosystem than Apple does with the iOS ecosystem.
And yet iOS developers earn more than Android developers. What, or rather who, gives?
The answer lies in efficiency. Apple centralizes its revenue stream, while Google shares revenue with a wide variety of partners. But Android, on pure volume, may soon outstrip the mighty iOS.
Android’s Larger Ecosystem
It’s no surprise that Android devices have been outselling iOS devices for some time. Given Apple’s insistence on charging a price premium, falling behind was a foregone conclusion. Analyst Mark Hibbens estimates Android’s widening lead over iOS in shipments.
Which means, of course, that in the first quarter of 2013 the population of Android’s installed base surpassed that of iOS and will almost certainly never look back.
And yet this hasn’t translated into more money for the Android app economy.
Who Does Android Pay?
According to a new VisionMobile study, Apple’s app economy is considerably larger than Google’s Android economy: $163 billion versus $149 billion.
But there’s a key difference between the two economies, from hardware to apps to accessories: Apple claims much of its ecosystem’s revenues, whereas Google shares among manufacturers, developers, carriers and advertising partners. To highlight this point, both Apple and Google take a 30% cut from developers for paid app downloads and in-app purchases. Google used to keep hardly any of this money, passing it along to distribution partners (like cellular carriers and payment processors) and paying fees. As of Google I/O 2014, though, that policy has changed and Google will keep nearly all of the revenue from Google Play. Apple keeps nearly all of the 30% it takes from app developers.
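To make the 30% cut concrete, here is an illustrative split of a single paid-app sale. The $0.99 price point is an assumption for illustration, not a figure from the study:

```python
# Illustrative split of one paid-app sale under the standard 30% store cut.
# The $0.99 price is a hypothetical example, not a figure from the article.
price = 0.99
store_cut = price * 0.30       # kept by Apple, or (post-I/O 2014) by Google
developer_take = price * 0.70  # remitted to the developer

print(f"store: ${store_cut:.2f}, developer: ${developer_take:.2f}")
```

Before the I/O 2014 policy change, most of that store cut on Google Play flowed onward to carriers and payment processors rather than staying with Google.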
Not that Google is necessarily playing a charity here. Part of Google’s problem, as ABI Research notes, is fragmentation. While ABI says Android was used in 77% of smartphones shipped worldwide in the fourth quarter, 32% of those 221 million devices used forked versions (up from 20% of shipments the year before and up from 27% in Q3 2013).
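ABI’s percentages imply a sizable absolute number of forked devices. A rough calculation from the figures above:

```python
# Rough math from ABI Research's Q4 figures quoted above
android_shipped = 221_000_000  # Android smartphones shipped in Q4
forked_share = 0.32            # fraction running forked Android builds

forked_devices = android_shipped * forked_share
print(f"{forked_devices / 1e6:.0f} million")  # ~71 million devices in one quarter
```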
So a fair amount of Android’s adoption does not generate revenue for Google, even if Google wanted it to. Google is trying to minimize the negative impact of fragmentation “by giving primacy to Google Play Services as the hub for new Android capabilities,” as Crittercism’s Michael Santa Cruz highlights, but it has a long way to go.
Even so, Google’s strategy inherently shares more with its ecosystem: by design, Google doesn’t care about capturing hardware or accessories revenue, and even in software it is less concerned with app revenue than ad revenue. Google’s goal has long been to get more people on the Internet, using the Web, searching for more items. Google’s view is that the more eyeballs there are on the Internet, the more potential it has to advertise to them through search.
Google announced that it paid app developers about $5 billion between Google I/O 2013 and I/O 2014, with the payout rate increasing 2.5x over that span.
And yet iOS developers make more: $500–$1,000 per app per month, according to VisionMobile, compared to Android’s $101–$200 per app per month.
At least, for now.
Go East, Young Man
While Hibbens suggests that Apple’s higher app spend per device accounts for the chasm between the Android and iOS economies, and that this gap will only widen over time, this feels like a short-term perspective. Yes, it’s true, as Andreessen Horowitz’s Benedict Evans posits, that Apple benefits from a “wealth gap” between its customer base and Google’s.
Apple enjoys market share superiority in the comparatively rich North American and Western European markets, as VisionMobile illustrates.
This isn’t something to celebrate, however. As I’ve written before, emerging economies can’t afford Apple’s price premium. And when “emerging economies” include China, set to become the world’s largest economy in 2014, and India, another market serving over one billion people, the future for Android looks very bright indeed.
It will likely continue to be the case that Apple will earn more app revenue per device than Google, but that’s just fine for Google. Android has always been a volume play. With few exceptions, Google’s business model has always been about skimming small amounts of money from vast numbers of transactions.
Which is not to say Apple is doomed. It’s simply to argue that developers should tune their monetization strategies differently for iOS and Android … just like Apple and Google do.
Article updated to correctly reflect Google’s cut of Play app earnings.