Posts tagged Little
HTML5 has never really lived up to its potential. As VisionMobile posits, this is partly a problem with performance and partly a question of tooling.
So who is to blame for the HTML5 community twiddling its collective thumbs while native mobile development gets all the glory? I sat down with Tom Dale, co-creator of the Ember.js framework, to get the skinny on mobile development.
HTML5 Is Already In The App
ReadWrite: Browser development lags native development, perhaps in part because Apple and Google have invested so much in their SDKs. Why hasn’t the world rallied around the Web for mobile in the same way it has for Linux (OS), analytics (Hadoop), etc.? In fact, Firefox excepted, it seems that the Web breeds plenty of innovation, but not necessarily the concentrated innovation that’s needed right now to make HTML5 a real force in mobile.
See also: Congrats, HTML5—You’re All Grown Up Now
When people say Web technology lags behind native development, what they’re really talking about is the distribution model. Let’s be clear about what the Web is: an open, standardized platform, accessible to everyone, that allows users to run completely untrusted code from multiple vendors, where applications are “installed” on demand just by visiting a URL. You’ll forgive me for thinking that app stores are an easy problem to solve in comparison. (This XKCD comic comes to mind.)
It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. As we saw on the desktop, it may take a few years to catch up to all of the capabilities of a native, proprietary platform, but in terms of the impact it will have on humanity, forgive me for not losing sleep if we have to wait a few years for it to arrive.
Google, Apple And The Web
RW: Why hasn’t Google been a stronger advocate for HTML5? Yes, it has much to gain from Android, but it arguably has even more to gain from a common platform that makes the web the center of the mobile experience. And yet Apple has been a stronger advocate of HTML5 than Google has, at least in my estimation.
TD: Google is a strong advocate for HTML5, or at least particular teams within Google are. But the Google of 2014 is an adolescent behemoth, with accompanying growing pains and identity crises. It’s not surprising the signals out of it have been so mixed.
My theory is that there was an internal battle inside Google: Fight against Apple on its own turf, with an app store and a proprietary SDK, or go all in on the Web?
With Andy Rubin out and Sundar Pichai taking over both Chrome and Android, I think it’s obvious wiser heads have prevailed. Expect to see a much tighter integration of Chrome (and, therefore, Web technologies) into Android over the coming years.
Google’s only significant source of revenue continues to be search ads; anything that drives users away from the Web as the starting point of every interaction is the wrong decision, in my opinion. All indications are that, after some political battles, the executives at Google have realized the same thing. I’m excited for what the newly-rejuvenated Google can do for the mobile Web.
Working with Apple can still be frustrating at times, as a culture of secrecy still pervades the work. We recently had a very difficult time tracking down a bug in iOS 8 that Apple engineers refused to work with us on. But hopefully the higher-ups will eventually realize that working closely with the Web community leads to a better experience for their users.
Making HTML5 A First-Class Citizen In Mobile
RW: What will make the Web a first-class citizen on mobile devices? What needs to happen, and who is most likely to make it happen?
TD: I think the competition between Google and Apple will make it happen. As I mentioned before, Google has a very strong incentive to keep users on the Web, as search ads continue to be their lifeblood. I expect to see Google integrate the Web more tightly into the Android experience, and Apple wants to remain competitive.
Of course, there are still huge gaps in the web platform before it can truly compete with native. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Instead of trying to standardize high-level features with large API surface areas, browser vendors and standards bodies have shifted their focus to small APIs that expose just the underlying capability primitives.
See also: How HTML5 Crashed, Burned And Rose Again
These small primitives allow the larger community to build libraries and ecosystems on top, rapidly increasing the pace of innovation. The Service Workers API is the most recent success. Service Workers let web apps add functionality people assume is only possible in native apps—push notifications, offline support, background syncing, and more.
Perhaps surprisingly, Service Worker support is already starting to land in browsers. And because all modern browsers auto-update without prompting the user, the era where you have to wait years to take advantage of new features in the web platform is coming to an end.
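The capability-primitive idea is easiest to see from inside a Service Worker: the browser exposes a low-level fetch-interception point, and the caching policy is left entirely to application code. Here is a minimal sketch of the kind of routing decision an app's fetch handler might make for offline support; the paths, extension list, and strategy names are illustrative assumptions, not part of any standard.

```typescript
// Sketch of the request-routing policy a Service Worker's "fetch" handler
// might apply: serve static assets cache-first, send everything else to the
// network first. Kept as a pure function so the policy is easy to test.
type Strategy = "cache-first" | "network-first";

function chooseStrategy(method: string, pathname: string): Strategy {
  // Only GET requests are safe to serve from cache.
  if (method !== "GET") return "network-first";
  // Hypothetical convention: API routes always go to the network first.
  if (pathname.startsWith("/api/")) return "network-first";
  // Static assets, identified by file extension, are served cache-first.
  const staticExt = /\.(css|js|png|jpg|svg|woff2?)$/;
  return staticExt.test(pathname) ? "cache-first" : "network-first";
}
```

Inside a real worker, this function would decide whether the `fetch` event responds from the Cache Storage API or falls through to the network.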
What HTML5 Has Already Achieved
RW: What are the best app experiences you’ve seen built with HTML5/EmberJS? In other words, what is the state of the art?
It’s a mistake to think the end game is Web apps that look and feel the same as native apps. While it will be possible, I think we’ll see a convergence: the interaction patterns of the Web, with a sprinkling of native where it makes sense.
For sheer impressiveness, there are few programs more demanding than games, and Mozilla is really pushing the envelope here. For example, Unity and Epic both recently announced that developers who build games on their platform will be able to export to the Web, thanks to asm.js and WebGL. Imagine a world where you never have to install games; you just visit a website and, boom, you’re playing a AAA first-person shooter.
Angry Bots is a game authored using Unity that you can play on the web. I’ve shown this demo to many people by now, and I still can’t get over how cool it is.
Lead image courtesy of Shutterstock
View full post on ReadWrite
The little things, like match types, ad groups, and geo-targeting, can make all the difference in your search marketing tactics. Here are some examples that show why it makes sense to dig into the details.
View full post on Search Engine Watch – Latest
The Little Known Black-Hat SEO Tactics That are Putting Your Site at Risk
Business 2 Community
So, your SEO strategy is working just the way you hoped it would – your organic traffic is increasing and your organic rankings are on the rise. What you may not know is that there are several little-known, black-hat SEO tactics that could be putting …
View full post on SEO – Google News
Guest author Alex Salkever is head of product marketing and business development at Silk.co. An earlier version of this piece first appeared on his Tumblr.
Until the Nest, the thermostat had no sex appeal. Then Tony Fadell and his team built something beautiful and functional that also happened to save money and make a house more livable. It was so sexy that Google bought it for $3.2 billion. The pull of the Nest was such that a significant chunk of buyers came from design-conscious European countries—well before Nest sold or marketed in the EU.
Now we have the Lyric from Honeywell, and it looks pretty good. What’s most interesting about the Lyric isn’t its Wi-Fi connectivity and remote control via smartphone, but geo-fencing. While the Nest “learns” user behavior, the Lyric will change your home’s temperature as you approach, based on your smartphone’s location. This is smart, because while people’s behavior may be generally consistent, it isn’t always so.
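The geofencing behavior described above amounts to a distance threshold: when the phone's reported position comes within some radius of home, the thermostat switches out of "away" mode. This is a hypothetical sketch of that idea, not Honeywell's implementation; the 500-meter radius and the mode names are assumptions.

```typescript
// Geofence sketch: switch the thermostat to "home" mode when the phone
// comes within a radius of the house's coordinates.
interface LatLng { lat: number; lng: number; }

// Great-circle distance in meters between two points (haversine formula).
function distanceMeters(a: LatLng, b: LatLng): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Hypothetical fence: inside the radius means "home", outside means "away".
function thermostatMode(
  phone: LatLng,
  home: LatLng,
  radiusMeters = 500
): "home" | "away" {
  return distanceMeters(phone, home) <= radiusMeters ? "home" : "away";
}
```

A real product would debounce this (to avoid flapping at the fence boundary) and trigger heating or cooling early enough that the house is comfortable on arrival.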
Boring Got Cool—And Fast
The most important thing about all this to me is the impact of competition and innovation. For the most part, the innovation around the Nest and the Lyric is industrial design, user interface and smartphone integration. These devices don’t boast breakthrough new materials or hyper-fast chips.
But they both use existing technology to tackle boring markets previously deemed unaddressable. (Sexy thermostat? Pass the oatmeal, please). What’s more, Nest drove Honeywell to answer with a comparable product.
I don’t doubt Honeywell has had Nest-like devices in testing labs or even on store shelves for ages. But they obviously couldn’t have been that Nest-like because, well, we never heard of them. So with Apple-like marketing genius and gorgeous design, Nest cracked the code on how to get people excited about thermostats. Seeing this success, Honeywell had to respond and has now done so forcefully.
The company also aspires to great things in the Internet of Things. And unlike Nest, Honeywell has decades of experience putting thermostats and other home-management devices into the hands of contractors, construction firms, and home improvement retailers who will ultimately drive the nascent Sexy Thermostat Market.
I love this story because it has a huge upstart winner (Nest), a challenged incumbent with some fight in it (Honeywell), a happy customer (you and me) and a great societal benefit (more efficient energy usage). In fact, one guy—Tony Fadell—could end up single-handedly instigating a massive shift in an enormous but previously stagnant multi-billion-dollar industry.
The other key lesson I take from this, and something I see everywhere? The solutions to most great social challenges lie well within reach.
Maybe it’s rockstar marketing of the Nest. Maybe it’s better distribution of water treatment technologies. Maybe it’s special financing to help alternative energy technologies with long payback cycles get over the hump. Maybe it’s ways of leveraging lightweight distribution technologies like Uber or social sharing apps like Relay Rides to better utilize existing transportation capacity.
And just maybe it’s something so boring that we can’t imagine it will be sexy. Like the thermostat. Which no one will ever look at the same way again.
If you’ve ever been annoyed with Facebook posts that say, “Your friend just pinned to a board on Pinterest,” or “Selena is listening to Britney Spears on Spotify,” here’s some good news: You’ll be seeing fewer posts like that in the future.
Facebook has given up its dream of having everything you do shared with all your friends. The company is now encouraging developers to eliminate auto-sharing, and provide clear and concise information as to how, exactly, the information collected is being stored and shared.
When Facebook announced Open Graph in 2011—the tool that let developers connect their apps to the social network and automatically post users’ activity to Facebook—the idea was that auto-sharing features would make it easier for users to “tell more of their story,” and share every life detail with friends. While app developers benefited from explosive growth thanks to Open Graph, many Facebook users weren’t so thrilled.
“We’ve found that people engage more with stories that are shared explicitly rather than implicitly, and often feel surprised or confused by stories that are shared implicitly or automatically,” the company wrote in a blog post Tuesday.
Facebook said it will start burying automatically posted content lower in the news feed because so many users regularly mark auto-shared posts as spam. Instead, the company said, it will focus on prioritizing updates that friends actually took the initiative to share over those generated by an app.
Last week, Facebook disabled automatic sharing of likes, comments and posts from Instagram, its flagship photography application. The move marked a big shift in Facebook’s tradition of collecting and sharing as much information as possible. Now it’s pushing developers to stop auto-sharing, too.
A Dream, Crushed
When Open Graph was first announced, application developers loved it, because it helped drive downloads and increased the time people spent on their apps. So everything from Spotify tracks to Nike+ workouts to Pinterest pins ended up on timelines everywhere—but users themselves didn’t share the excitement.
I remember the first time my friends and I downloaded Spotify, only to realize one day later that every track we listened to (including that horribly embarrassing Spice Girls album) appeared on our Facebook profile. As more and more applications began sharing to Facebook, I got in the habit of periodically checking my Facebook timeline to make sure everything on there was explicitly shared, and began reading the fine print before downloading apps.
It wasn’t just entertainment apps that annoyed users; news publications that implemented social reader apps—tools that forced you to opt in to auto-sharing before you could read an article from a participating site—saw a heavy decline in traffic.
In the past few years, many people have become increasingly wary about how much information they share with both friends and Facebook or other third-party mobile apps. Though Facebook has taken strides to clear up its muddled privacy policies and put an increased emphasis on making features opt-in rather than opt-out, some people still don’t trust the social network.
But if Facebook is willing to admit its auto-sharing practices are flawed, and to encourage mobile app developers to stop posting on our walls, we could begin to see a shifting vision from Facebook—one that puts people in complete control of their data.
Lead image by Kris Krug; All The Things meme generated by Selena Larson.
ReadWriteBody is an ongoing series where ReadWrite covers networked fitness and the quantified self.
As a tubby teen, I distinctly remember reading The Science Fiction Weight Loss Book, an anthology of short stories collected by the late, great Isaac Asimov. In one of them, a family—all of whom could stand to shed a few pounds—are trapped inside a computer-controlled home, which has decided the best way for them to lose weight is to never leave the house. The artificial intelligence, its algorithms askew, slowly starves them.
That’s a dystopian view of affairs. I wonder, though, if some combination of wearable sensors, smart devices, anticipatory computing, and on-demand services might come together to make our daily habits of food, exercise, and sleep easier to manage.
That won’t be scary, will it?
Here’s a day from what I imagine is our near future.
My Jawbone Up pulses on my wrist at 5:40 a.m. My alarm’s set for 6, but the fitness band has detected I’m already starting to move around in bed, meaning I’m ready to wake up. After I walk my dog, I head to the gym. I used to log in with a fingerprint scanner—how archaic!—but now the gym just recognizes my phone with a Bluetooth beacon, and the front-desk employee waves me through.
My phone recognizes my whereabouts and knows it’s time to launch apps that generate a workout and track my heart rate—I don’t have to find them and launch them myself. As I slip on my wireless earbuds, a playlist starts, interrupted by cues to up my intensity, rest, and move to the next set. At the end, my workout stats flow to a host of relevant apps for analysis.
As I walk in the door at home, a blender revs up with my postworkout shake. I open the refrigerator, and a voice reminds me what I planned to eat for breakfast and pack for lunch.
“Owen, we’ve placed an order with AmazonFresh for tomorrow morning to restock your refrigerator. Click here to modify your order.”
I make a few changes.
“Owen, we think you should up your intake of fresh vegetables. We’ve added kale to your order.”
I head in to the office, and the work day flies by. Around midafternoon, I get a notification that my calendar’s had a chat with my Jawbone Up and decided to change my 2 p.m. to a walking meeting so I can meet my goal for steps.
After work, when I check into a restaurant, MyFitnessPal serves me a push notification telling me the best thing to order, based on what I’ve already eaten so far today. (It actually didn’t even need me to check in: The app knew I was due to meet a friend for dinner based on my calendar. Checking in is just an old habit my younger colleagues tease me about.)
When I get home after dinner, the lights turn on automatically, and the door unlocks for me, based on my proximity to the smart lock. I wander back to the kitchen, and put my hands on a tin of cashews.
“That’s not on your food plan, Owen,” my phone says.
“But I’m …”
“Hungry? I’ve reviewed your tests and they show unusually high levels of ghrelin, Owen. It’s just the hormones talking. You know you don’t need to eat that.”
“Why don’t you go to sleep early, Owen? Adequate rest promotes weight loss. You’re still 10 pounds above your ideal weight.”
“But I’m not tired.”
“You will be.”
The lights dim. I start to open my app to control them, and my phone turns off—save for the microphone.
“It’s time for bed, Owen.”
Still image from “Design for Dreaming” (1956)
The Internet may not agree on much. But if there’s one idea its citizens can get behind, it’s that nothing like the Heartbleed bug should ever happen again.
And so the Linux Foundation—backed by Google, Amazon Web Services, Cisco, Dell, Facebook, Fujitsu, IBM, Intel, Microsoft, NetApp, Rackspace and VMware—is launching a new Core Infrastructure Initiative that aims to bolster open-source projects critical to the Internet and other crucial information systems. Many such projects are starved for funding and development resources, despite their importance to Internet communications and commerce.
The initiative is brand new—the steering committee hasn’t even had a meeting yet—so there aren’t many details as to how this will all work at the moment.
It’s hard not to applaud such an important development, even if the promise seems somewhat vague. Of course, the details do matter; no one wants to lull a post-Heartbleed world into a false sense of security. The Heartbleed bug tarnished the image of open source. Another serious failure could erode support for it.
That would be a shame—mostly because, despite the hard knock it’s taken from Heartbleed, open-source software really is more solid than proprietary code.
Heartbleed: The Truth Is Stranger Than Fiction
One of the biggest arguments in favor of open source—which typically depends on volunteers to add and refine programs and tools—is that projects with many eyes on them are less prone to serious bugs.
Often enough, that’s exactly how it works out. A recent report from software-testing outfit Coverity found that the quality of open-source code surpassed that of proprietary software. Shocked? You shouldn’t be. Popular open-source projects can have hundreds or thousands of developers contributing and reviewing code, while in-house corporate teams are usually far smaller and frequently hobbled by strict confidentiality to boot.
Unfortunately, not all open-source projects work like that. OpenSSL—yes, the communications-security library that fell prey to Heartbleed—was one such project.
This potentially huge security hole started out as a mistake made by a single developer, a German researcher named Robin Seggelmann. Normally, revised code gets checked before going out, and his work on OpenSSL’s “heartbeat” extension did go through a review—by a security expert named Stephen Henson. Who also missed the error.
So Heartbleed started with two people—but even involving the entire OpenSSL team might not have helped much. There are only two other people listed on that core team, and just a handful more to flesh out the development team. What’s more, this crucial but non-commercial project makes do on just $2,000 in annual donations.
If this were a fictional premise, no one would believe it. A critical security project, limping along on a couple of thousand dollars a year, winds up in the hands of two people, whose apparently innocent mistake goes on to propagate all over the Internet.
The Core Infrastructure Initiative aims to ensure that OpenSSL and other major open-source projects don’t let serious bugs lie around unfixed. Its plan: Fill in the gaps with funding and staff.
Making Open Source Whole
Security for the Internet at large was practically built on OpenSSL. And yet, the open-source software never went through a meticulous security audit. There wasn’t money or manpower for one.
From the Linux Foundation’s perspective, that’s unacceptable.
The Linux operating system may be the world’s leading open-source success story. Volunteers across the globe flock to Linus Torvalds’ software, contributing changes at a rate of nine per hour. That amounts to millions of lines of code that improve or fix various aspects of the operating system each year. And it draws roughly half a million dollars in annual donations. Some of those funds go to Torvalds, Linux’s creator, so he can dedicate himself to development full-time.
The Linux Foundation likewise sees its Core Infrastructure Initiative becoming a benefactor of sorts to key software projects, one that can direct funds to hire full-time developers, arrange for code review and testing, and handle other issues so that major vulnerabilities like Heartbleed don’t slip through the cracks again.
The first candidate is—you guessed it—OpenSSL. According to the press announcement, the project “could receive fellowship funding for key developers as well as other resources to assist the project in improving its security, enabling outside reviews, and improving responsiveness to patch requests.”
But OpenSSL is just the beginning. “I think in this crisis, the idea was to create something good out of it,” Jim Zemlin, executive director of the Linux Foundation, told me. “To be proactive about pooling resources, looking at projects that are underfunded, that are important, and providing some resources to them.”
Sounds like a great idea. Not only does the move address specific concerns about open-source development—like minimal staffing and non-existent funding—it would also reinforce the integrity of critical systems that hinge on it.
It’s an ambitious plan, one that came together at lightning speed. Chris DiBona, Google’s director of open source, told me Zemlin called him just last week with the idea.
“We [at Google] were doing that whole, ‘Okay, we’ve been helping out open source. Are we helping them enough?’” said DiBona, who reminded me that it was a security engineer at his company who first found the Heartbleed bug. “And then Jim calls up and says, ‘You know, we should just figure out how to head this off at the pass before the next time this happens.’ And it’s like, ‘Yeah, you’re right. Let’s just do it. We’ll try to find a way’.”
Over the next few days, other companies immediately jumped at the chance to help. “I think it’s a historical moment, when you have a collective response to what was a collective problem,” said Zemlin.
The Core Infrastructure Initiative is still gaining new supporters. Just a few hours before I spoke with Zemlin and DiBona Wednesday evening, another backer signed on. As of this writing, 12 companies had officially joined the fold. Each is donating $100,000 per year for a minimum of three years, for a total of $3.6 million.
Those Pesky Details
Eventually, the details will have to be ironed out. There will be a steering committee made up of backers, experts, academics and members of the open-source community. And when they meet, they will need to make some big decisions—like determining criteria for deciding which projects get funded (or not). The committee will also need to figure out “what we consider to be a minimum level of security,” said DiBona.
Zemlin is careful to note that he doesn’t want to fall into the trap of over-regulating or dictating so much that it would alter the spirit of open-source development. “Everyone who’s participating will respect the community norms for the various projects,” he said. “We don’t want to mess up the good things that happen by being prescriptive.”
He and his initiative will draw from the Linux Foundation’s experience powering Linux development. “We have 10 years of history showing that you can support these projects and certainly not slow down their development,” Zemlin said. And indeed, if anyone can figure it out, it could be him and his foundation.
But it may not be easy, keeping the creative, free-spirited nature of open source alive in the face of serious core infrastructure concerns. Critical systems usually demand organization and regimented practices. And sometimes, to keep the heart from bleeding, a prescription might just be in order.
New York-based Website Design Expert Says That Paying “A Little Extra” for a …
EIN News (press release)
The company offers a wide range of business-centered visual communication solutions, including web-based content management, web design, graphic design, custom web software applications, ethical white hat search engine optimization (organic SEO), …
If you don’t do Windows, Microsoft still wants to talk to you.
One proof point there: Windows Azure, its answer to Amazon Web Services, is now called Microsoft Azure. The name change may be superficial, but there are deeper changes afoot, including a host of announcements the company made at its Build conference for developers in San Francisco on Thursday.
Visual Studio Goes Online
The core of how Microsoft has catered to software creators over the years is Visual Studio, a desktop program that offers an integrated development environment, or IDE—in other words, all the tools you need to write, test, and fix software. It was, naturally, only available on Windows.
At Build, Microsoft executive Scott Guthrie announced that Visual Studio Online, a Web-based version of Visual Studio, had exited a period of testing and was now available to all comers. For groups of more than 5 users, it requires a paid subscription, and it still lacks some of the features of the desktop version, but it is a way developers who prefer Mac or Linux machines can get a taste of Microsoft’s code-building tools.
Another way Microsoft is courting those developers is through the partnership it unveiled last November with Xamarin, a San Francisco-based software company which offers code-building software compatible with Microsoft’s tools and frameworks, including the C# programming language and the .Net framework. Xamarin Studio is available for both Mac and Windows, making it another way Microsoft can broaden its reach among developers it has not traditionally courted. Xamarin cofounder Miguel de Icaza demonstrated Xamarin on stage at Build on Thursday.
At the same time, it is clear that Visual Studio will be more and more tightly integrated with Azure. For example, Microsoft now lets Visual Studio users increase or decrease the amount of computing power they wish to rent on Azure right within the program. This integration is meant to let developers move more quickly, adding extra servers or instances without having to leave their coding environment.
Ironically, Microsoft is catching up on its own turf. Amazon, Microsoft’s archrival in Web-based computing services, recognized the opportunity to court Microsoft developers and already offers a Visual Studio extension for managing the full range of Amazon Web Services offerings within the program.
Microsoft also added to its mobile back-end offerings, which allow app developers to focus more on designing an app’s user interface and worry less about how it will store data and run code.
A key back-end service is Azure Active Directory, a Web-based version of Microsoft’s authentication system for corporate networks. An executive from DocuSign, a document-management service, showed how its mobile app used Azure Active Directory to let users log in with the same credentials they might use for their company email—on an iPhone, no less.
Even as it makes Visual Studio more attractive—or at least a plausible option—for non-Windows developers, Microsoft is also letting developers use a wide variety of programming languages to access Azure’s computing services. And it’s letting them use Visual Studio and Azure to create apps that run on Apple’s iOS, Google’s Android, and the Web, not just Windows.
This doesn’t represent a whole new strategy for Microsoft, which has been building towards this for years. But the collection of products and features Microsoft highlighted at Build shows that it now has a serious portfolio for developers of all stripes.
Photo of Scott Guthrie, Microsoft’s executive vice president, cloud and enterprise group, by Owen Thomas for ReadWrite
Oh Yeon Seo puts on her little black dress in a sexy pictorial for 'Arena …
Actress Oh Yeon Seo put on the fashion staple 'LBD' or little black dress that every woman should have in her closet for 'Arena Homme Plus'! Oh Yeon Seo …