Posts tagged developer
Pornographic content is forbidden in the Apple App Store, but Apple seems to be OK with sending porn to developers who submit their apps for review, according to one who received an inappropriate pic.
“It turns out Apple thought the best way to tell us our app could be used to surf porn was to surf for porn using our app,” Carl Smith, a Florida developer for nGen Works, wrote in a blog post on Medium (NSFW link).
The email, which Smith shared with ReadWrite, appears to be from the Apple app review team and includes an attached photo of a man’s genitalia, but no warning of the enclosed content. This is the kind of thing that can create a hostile work environment for nGen employees whose jobs necessitate reading emails from Apple.
Smith suggested a number of alternatives Apple could have used to flag a concern about explicit content. The team could have sent nGen Works a search term to try, or at least warned in advance what the emailed photo showed. Instead, Smith said, the developers who opened the email had no warning that it would be graphic.
“What I want from Apple is for them to address the issue and put a policy in place that prevents an App store reviewer from sending pornographic images as an example of a issue,” he said. “They could have easily masked out the bad part of the photo or told us a phrase to search. At the very least warn someone before they open the attachments that they aren’t safe for work.”
“Specifically, we noticed your app contains objectionable content at time of review. Please see the attached screenshot/s for more information,” the Apple review team email reads, before offering a downloadable file that turned out to be the genitalia photo in question.
Smith said Apple's approach is hypocritical. Of course nGen's app, which lets users enlarge, save, and search for Instagram photos, is capable of browsing any photo that already exists on Instagram.
“This is a double standard,” Smith told ReadWrite. “If I type bad words into Safari I am going to see bad things. So I think Apple needs to address that.”
Smith said he doubted Apple’s “upper echelons” would approve of this action, and encouraged readers to spread the word.
We’ve contacted Apple for a comment on this allegation.
Photo via Shutterstock
View full post on ReadWrite
Twitter is making its play for developers.
At its first-ever Flight developer conference, Twitter CEO Dick Costolo announced a new developer toolkit aimed at helping developers build and make money off applications on the Twitter platform. Called Twitter Fabric, the bundle of services includes Crashlytics, Twitter's application crash-detection service, and MoPub, its ad exchange network.
With the new tools, Twitter officially throws its hat in the ring to compete with Facebook and Google for developers’ time and attention. Its tools are designed to work with Apple’s Xcode and “all major Android IDEs,” meaning that developers can presumably use the Twitter tools within the development environments they’re already used to.
Costolo also lobbed some direct criticism at competitors during his keynote address. “The mobile SDK landscape has been inhabited by parties that optimize for self-interest first, and your interest second,” Costolo said.
He was presumably poking at Facebook, which offers developers the backend-as-a-service Parse, Facebook Login, and the new Facebook Audience Network that displays Facebook ads across different applications.
Google, meanwhile, also just acquired Firebase, a backend service for building realtime apps, as part of its cloud services.
Twitter also debuted a new Twitter login feature that will let people log into applications and services with their Twitter credentials instead of creating new username/password IDs for each one. That service essentially matches similar login services from Facebook and Google.
Lead image by Selena Larson for ReadWrite
View full post on ReadWrite
For the last few months, any Android app maker who wanted to check out the latest version of Google’s mobile operating system had the Android L Developer Preview to play with. Now, “L” has taken on a few more letters to become “Lollipop,” and on Friday, the full release of its software developer kit became available for download.
According to Google, Lollipop (also known as Android 5.0) will be heading to Nexus 4, 5, 7 (2012, 2013) and 10 in early November, right around the time when the latest Nexus 6, 9 and Nexus Player will hit the market. That’s just a couple of weeks away.
In other words, if you’re an Android developer, don’t wait to roll up those sleeves and dive in. There’s not much time left if you want to get your apps ready for the launch.
Images courtesy of Google
View full post on ReadWrite
Editor’s Note: This piece was originally published by our partners at xoJane.
They threatened the wrong woman this time. I am the Godzilla of bitches. I have a backbone of pure adamantium, and I’m sick of seeing them abuse my friends.
The misogynists and the bullies and the sadist trolls of patriarchal gaming culture threatened to murder me and rape my corpse, and I did not back down. They tried to target my company’s financial assets and I did not back down. They tried to impersonate me on Twitter in an attempt to professionally discredit me and I did not back down.
The BBC called me “Defiant” in a caption. I plan to frame it and put it on my wall.
What Is “Gamergate”?
My name is Brianna Wu. Ordinarily, I develop videogames with female characters that aren’t girlfriends, bimbos and sidekicks. I am a software engineer, a popular public speaker and an expert in the Unreal engine.
Today, I’m being targeted by a delusional mob called “Gamergate.”
If you don’t know what Gamergate is, my God do I envy you. Gamergate is basically a group of boys that don’t want girls in their videogame clubhouse. Only, instead of throwing rocks, they threaten to rape you. And, if that doesn’t work, they’ll secretly record your conversations and release the lurid details of your sex life in a public circus. From seeing the Gamergate mobs plan this on 8chan.co, it seems like they’re having a lot of fun.
It started two months ago, when my friend Zoe Quinn dated Eron Gjoni. Their relationship ended, as relationships sometimes do. Only, rather than get drunk and play Madden, Eron decided to secretly record everything Zoe said, and released it on a blog he titled “The Zoe Report” in an attempt to destroy her.
If Zoe had been a man, the blog would have been laughed off as the work of a jilted lover.
But, no. Instead, a mob formed to destroy her. Ostensibly concerned about ethics, Gamergate was very worried about Gjoni’s accusations that Zoe might have had a relationship with a journalist to get favorable reviews of her universally celebrated title Depression Quest, which has been downloaded more than a million times and has helped countless people better understand their depression.
It tells you everything you need to know about Gamergate that this mob went after Zoe and not the journalist.
How Gamergate Happened To Me
The Gamergate mob isn’t a new thing, though it’s only recently been named. They targeted my friend Samantha Allen back in July, when she dared criticize Giant Bomb’s decision to remain the only major site in videogames with a 100 percent white, straight and male employee pool.
They ran through their playbook. They targeted her on Twitter, they harassed her. They researched her past. They questioned her personal relationships. They threatened her. And they have done everything possible to try to quash one of the videogame industry’s most insightful and powerful voices.
It’s a playbook that works. They used it against Jenn Frank until she quit. They used it against Mattie Brice until she quit. They used it against Leigh Alexander. They used it against Zoe Quinn. And they used it against Anita Sarkeesian, who had to cancel a speaking engagement gig this week after a school shooting threat—and then they used it against me.
What was my crime?
A fan of my show on 5by5, Isometric, made a meme of some of my Tweets about Gamergaters.
I loled. I tweeted. And, by Friday I was receiving death threats.
I have to be honest. A mob telling you they will castrate your husband, make you choke to death on the parts, murder any children you might have and then rape your ass until it bleeds has a way of scaring the hell out of you.
But, you know, because I am the Godzilla of bitches, by Saturday morning I was pissed off. I’m talking Jack Bauer pissed off. So, I decided I was going to do everything in my power to stop these fuckers.
Thanks in part to Wil Wheaton, one of my tweets about the death threats went mega viral. The press started calling. I wanted to crawl into a hole, but I pushed through and talked to them. Kotaku ran a story. Recode ran a story. Polygon ran a story. I was barely sleeping or eating, but I pulled myself together for MSNBC and CNN. The anti-Gamergate movement started to catch fire. Over 100 stories have been written all over the world because I’m sick of these asshats taking out my friends and I’m calling them on their shit.
What It’s Like To Be A Target
There’s no easy way to say this. I am a massive target for Gamergate/8chan.co right now and it is having horrible consequences for my life.
They tried to hack my company financially on Saturday, trying to take out our assets. They’ve tried to impersonate me on Twitter in an effort to discredit me. They are making burner accounts to send lies about my private life to prominent journalists. They’ve devastated the Metacritic user score of my game, Revolution 60, lowering it to 0.3 out of 100.
With all of this, my only hope is that my colleagues in the industry will stand by me—and recognize the massive target I made myself standing up to these lunatics.
I woke up twice last night to noises in the room, gasping with fear that someone was there to murder me. I can barely function without fear or jumpiness or hesitation. I’ve been driven from my home. My husband says he feels like he’s been shot.
But I have to be honest: I don’t give a fuck.
I am mad as hell at these people, and I’m not going to let them keep destroying the women I love and respect.
In part because of the press campaign I’ve waged against Gamergate over the last five days, the jig is up. The Entertainment Software Association, the largest trade group in our industry, denounced the movement. Vox ran an editorial about the pattern established with the threats against me, “Angry misogyny is now the primary face of #GamerGate.” And journalistic enterprises like Giant Bomb, which had sat on the sidelines, are finally discussing the issue.
Gamergate, I have one message for you so listen up.
When you take your last dying breath, I want you to know this. It was an absolute pleasure knocking you on your ass for the fine women in this field.
View full post on ReadWrite
Hacking new technologies can be time-consuming … and expensive. So to help students create technical projects or learn how to use new tools, social coding site GitHub and a handful of technology partners have created the GitHub Student Developer Pack that provides access to 14 developer tools for free.
The project has been in the works for over a year, said John Britton, education evangelist at GitHub. The company already provides a free “micro account” to students, which provides them with five free private code repositories; this plan normally costs $7 a month. (GitHub’s normal free plan requires all such “repos” to be public). Now it’s expanding on that offer with limited free access to tools like Stripe for payment processing and DigitalOcean for cloud hosting.
Many companies offer free services to students who aren’t shy about asking for them. But Britton says most companies make these offers on an individual basis, because it takes time and effort to manage an entire student services database.
“Students would write and ask GitHub for tools—a lot of companies are happy to do it, but it’s ad-hoc,” Britton said. “It’s an administrative burden. We thought, ‘If we’re going to do the administrative work anyway, why not offer other tools as well and take the admin responsibility?’”
Over 100,000 students have already used a free GitHub account.
While it’s a charitable move on GitHub’s part, it won’t just benefit students. Once aspiring coders and engineers have grown accustomed to certain services, they’ll likely stick with the ecosystems they know when the free trial expires. That means more customers for companies like Stripe, which is waiving fees for students on the first $1,000 in revenue processed.
It will also benefit teachers who want to teach a class in something like game development. If they want to use the Unreal game engine, for instance, teachers can tell students to sign up for a GitHub Student Developer Pack, which will save each student almost $20 per month.
See also: GitHub Gets Its Science On
Students must sign up through GitHub and show proof of student status such as a university dot-edu email address or a student ID card. If neither is available, GitHub says an enrollment letter or transcript will work as well. Any student aged 13 or older can sign up for an account.
Participating companies will rely on GitHub’s student verification. So once students sign up through the company, they’ll get coupon codes or unique access links and can begin to use the full suite of services.
The offerings are as follows:
- Atom: A free text editor from GitHub
- Bitnami: Business 3 plan ($49/month for non-students) for one year
- Crowdflower: Access to the Crowdflower platform (normally $2,500/month) and $50 in worker credit
- DigitalOcean: $100 in platform credit
- DNSimple: Bronze hosted DNS plan ($3/month for non-students) for two years
- GitHub: Micro account (usually $7/month) with five private repositories while you’re a student
- HackHands: $25 in credit for live programming help
- Namecheap: Free domain name registration on the .me TLD and one free SSL certificate for one year
- Orchestrate: Free developer accounts for students (normally $149/month)
- Screenhero: Free individual account while you’re a student (saves students $10/month)
- SendGrid: Free student plan for one year (saves students $5/month)
- Stripe: No fees on first $1000 in revenue processed
- Travis CI: Free private builds (normally $69/month)
- Unreal Engine: Free access to the service (usually $19/month)
Lead image by HackNY
View full post on ReadWrite
Pity the child star. Like a Macaulay Culkin peaking too early, Docker, the hotter-than-hot Linux “container” technology, is already coming under withering criticism for not living up to its hype as the reincarnation of Gandhi … or the cure for Ebola.
Which is obviously really, really stupid.
Let’s be clear: there are lots of reasons to hype Docker. In a world of increasingly distributed applications, Docker’s Linux container technology is rightly celebrated for its ability to streamline and accelerate application development. But some advocates may be going too far in their adulation of Docker, making Docker hate feel like a public service.
A Happy Life With Docker
The cloud has made application development much easier in some ways, as developers no longer need to wait on the IT department to spin up servers for them. But it has also complicated things. Docker’s beauty lies in bringing simplicity to modern application development, as detailed by Jodi Mardesich for ReadWrite:
Docker is creating a massive buzz because it simplifies life for developers. Instead of cobbling together tools and writing apps for specific databases and other software components and operating systems, with Docker, developers can “package” an application in standard containers that can be transferred to virtually any server anywhere, whether it’s a virtual server on the developer’s laptop, a physical server in a company’s data center, or a virtual machine on Amazon’s Elastic Cloud.
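Mardesich's description of "packaging" can be made concrete with a minimal sketch. The Dockerfile below is purely illustrative — the app, its `app.py` entry point, and its `requirements.txt` are hypothetical — but it shows the idea: declare a base image, bake in your code and dependencies, and the resulting container runs the same on a laptop, a data-center server, or a cloud VM.

```dockerfile
# Hypothetical Dockerfile for a small Python web app; app.py and
# requirements.txt are assumed to exist in the build directory.
FROM python:3.10
WORKDIR /app
# Bake the dependencies into the image so it runs the same anywhere.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Once built with `docker build -t myapp .`, the same image can be started with `docker run myapp` on any host with a Docker daemon — which is exactly the portability the quote describes.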
This is a very big deal. So much so, in fact, that it has led people like Web programmer Barry Jones to gush about its potential:
[Docker] is going to be the most disruptive server technology that we’ve seen in the last few years. It fills a much needed hole that’s currently managed by very expensive solutions and it’s being actively funded by some of the biggest players in the market…. Docker is actively working to replace the need for hypervisors, virtual machines (VMs) and configuration management tools like Puppet / Chef /CFEngine in MOST cases.
In other words, abandon hope, all ye that enter here to compete with the Docker juggernaut.
Not surprisingly, such thinking drives technology pragmatists crazy.
Piercing The Reality Distortion Field
Some, like the authors of the Neutron Drive blog, complain that some “use these powerful tools [like Docker] to just cover up our crappy code.” Others, like Satory Global architect Neil Mackenzie, suggest that it’s not at all clear that Docker maps well to business realities, holding that it’s “not obvious that Docker fits well with the economic model of the public cloud where isolated VMs allows high-density utilization.”
Still others, like 451 Research’s Michael Coté, have taken a more sardonic tone. CSC’s Simon Wardley gleefully heckled that “Docker turned my old ZX81 into a teleportation device and perpetual energy machine.”
None of these apparent critics is really being critical of Docker itself, though. They’re swatting at the hubris around Docker. This is one of the hardest tasks of any promising technology: reining in advocates, rather than answering critics. The haters will always be there and, if anything, simply serve as a leading indicator of success.
But some hate is an unnecessary byproduct of over-the-top adulation. The trick is to help advocates champion the technology in a responsible fashion.
Consider: it’s possible—even likely—that Docker will threaten virtual machine technology in the long run. After all, as Dell’s Joseph Jacks suggests, “Docker promises to replace heavy VMs w[ith] Linux containers” as its superior isolation granularity means it can deliver “10X+ better consolidation & utilization” of system resources.
But in the short- to medium-term the two complement each other, Gartner analyst Kyle Hilgendorf notes:
[T]here is room for containers and VMs to live together for the next several years. I see value in two layers of encapsulation, one at the OS (VM) and one at the app (container) and we cannot ignore the enterprise readiness of VM security and VM management tools. Container management and security still needs improvement so why not combine the two worlds?
The best course for the Docker team is to embrace its market-changing characteristics without over-promising its current capabilities and uses. And, to the extent possible, to coach its biggest advocates on present-day limits even as they laud future-day possibilities.
So long as Docker engineers remain confident but humble, acolytes and critics alike will take a more measured tone and allow the project to grow into its potential to disrupt application development.
Lead image by wirralwater
View full post on ReadWrite
The year 1999 may have been the apex of the dot-com bubble euphoria, but it wasn’t the heyday of Web developers. At least, not according to U.S. state and federal Occupational Employment Statistics, which didn’t even register that “Web developer” was a real job.
Since then, Web development has become so popular that it has made it into our labor statistics even as it has faded as a marketable job skill. Today, it’s not enough to be a generic Web developer: the best developers have specialized.
Catching Up With The Zeitgeist
Government has never been known as an innovator. Nowhere is this more true than in the data it captures on its workforce. As a new study from Pew Research finds, government jobs data tends to be a lagging indicator of the economy:
In 2013, an estimated 165,100 Americans worked as computer network support specialists, 141,270 as computer network architects, and 78,020 as information security analysts. None of those occupations existed on their own in 1999, though some workers in those fields likely were included in broader job classifications such as “computer programmers” or “network systems and data communications analysts.” But listing them separately speaks to the importance of networked computing in today’s economy.
In other words, the government eventually recognized what those in the industry already knew: The network had become a big deal. Somewhat ironically, the government can sometimes be so late to the party that the party is over by the time the government recognizes it ever existed.
Web development is like that.
Web Developers: On The Out?
As Pew Research highlights, “Web developer” wasn’t even reported as part of the OES classification system until 2012. The report notes this fact with the obvious statement that “[government] data often lags the evolution of the actual economy.”
By the time the OES got around to recognizing web development, the industry seems to have moved on. “Web developer” jobs peaked in 2009, according to Indeed.com data, even though Web development was becoming even more important.
This importance is expressed by a shift away from generic “Web development” toward the specific technologies Web developers now need, like jQuery and Node.js.
In other words, “Web development” is simply how apps are built now, making government’s and employers’ distinction of “Web developer” far less meaningful. The same is true for mobile. As such, saying “I need a Web developer” or “I need a mobile developer” is increasingly unhelpful as what matters are the technologies these developers know.
The Web You Need To Know Now
So what technologies does a developer need to know in order to escape the anonymous “Web developer” label and stand out? Oddly enough, some of the same technologies you needed to know way back in 1999, as IEEE Spectrum’s data on trending Web programming languages shows.
As for mobile, it’s likely that the government will eventually get around to recognizing “mobile developer” as a separate job classification (today it doesn’t). That will come long after the industry has figured out that “mobile” is simply how applications are developed and deployed, and that technologies like Node.js and PhoneGap are what “mobile developers” really need to know.
Give it time.
Lead image by Jim Sangwine
View full post on ReadWrite
If only Ferris knew what was ahead. That quote comes from much simpler, and slower, times. With the web and all its related technologies, we have seen life change faster than ever. If life was fast during Ferris’ day, it’s at lightning speed now. But the faster life gets, the more we have to slow down to take stock of things. This is no less true in the ever-changing web. But let’s start at the beginning: web development. This is usually the starting point of a business’ web presence outside of social media. Sometimes, in haste to “get going” […]
The post 21 Things Every Web Developer Should Be Doing by @stoneyd appeared first on Search Engine Journal.
View full post on Search Engine Journal
Six years ago, Swedish programmer Jonas Bonér set about trying to crack some of the most challenging problems in distributed computing. These included scalability, so that a system as large as the Internet of Things won’t fail no matter how large it gets; elasticity, a way of making sure that its computing problems are matched with the right hardware and software at the right time; and fault-tolerance. And he wanted to make sure his system would work in a “concurrent” world in which zillions of calculations are happening at once—and often interacting with one another.
He may or may not have been listening to ABBA while doing so.
Bonér had built compilers, runtimes and open-source frameworks for distributed applications at vendors like BEA and Terracotta. He’d experienced the scale and resilience limitations of existing technologies—CORBA, RPC, XA, EJBs, SOA, and the various Web Services standards and abstraction techniques that Java developers have used to deal with these problems over the last 20 years.
He’d lost faith in those ways of doing things.
This time he looked outside of Java and classical enterprise computing for answers. He spent some time with concurrency-oriented programming languages like Oz and Erlang. Bonér liked how Erlang managed failure for services that simply could not go down—i.e., things like telecom switches for emergency calls—and how principles from Erlang and Oz might also be helpful in solving concurrency and distributed computing problems for mainstream enterprises.
In particular he saw a software concept called the actor model—which emphasizes loose coupling and embracing failure in software systems and dataflow concurrency—as a bridge to the future.
See also: What’s Holding Up The Internet Of Things
After about three to four months of intense thinking and hacking, Bonér shared his vision for the Akka Actor Kernel (now simply “Akka”) on the Scala mailing list, and about a month later shared the first public release of Akka 0.5 on GitHub.
Today Akka—celebrating the fifth anniversary of its first public release on July 12—is the open source middleware that major financial institutions use to handle billions of transactions, and that massively trafficked sites like Walmart and Gilt use to scale their services for peak usage.
I recently caught up with Bonér—now CTO and co-founder of Typesafe—to get his take on where Akka has seen traction, how it has evolved through the years and why its community views it as the open-source platform best poised to handle the back-end challenges of the Internet of Things, which is introducing a new order of complexity for distributed applications.
How To Manage Failure When Everything Happens At Once
ReadWrite: What is the problem set that initially leads developers to Akka?
Jonas Bonér: Akka abstracts concurrency, elasticity/scale-on-demand and resilience into a single unified programming model, by embracing share-nothing design and asynchronous message passing. This gives developers one thing to learn, use and maintain regardless of deployment model and runtime topology.
The typical problem set is people want the ability to scale applications both up and out; i.e., utilize both multicore and cloud computing architectures. The way I see it, these scenarios are essentially the same thing: it is all scale-out. Either you scale-out on multicore, where you have multiple isolated CPUs communicating over a QPI link, or you scale-out on multiple nodes, where you have multiple isolated nodes communicating over the network.
Understanding and accepting this fact by embracing share-nothing and message-driven architectures makes things so much simpler.
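Akka itself is a Scala/Java toolkit, but the share-nothing, message-driven design Bonér describes can be sketched in a few lines of plain Python. This is an illustration of the model, not Akka's API: the actor's state is private, and the only way to read or change it is to send a message to the actor's mailbox.

```python
import queue
import threading

class CounterActor:
    """A minimal share-nothing actor: private state, a mailbox, no locks."""
    def __init__(self):
        self._mailbox = queue.Queue()   # all communication goes through here
        self._count = 0                 # state is never shared directly
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # The actor processes its mailbox one message at a time,
        # so its state needs no locking.
        while True:
            msg, reply = self._mailbox.get()
            if msg == "increment":
                self._count += 1
            elif msg == "get":
                reply.put(self._count)  # reply via a message, not shared memory

    def tell(self, msg):
        """Fire-and-forget send."""
        self._mailbox.put((msg, None))

    def ask(self, msg):
        """Send a message and wait for the reply message."""
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((msg, reply))
        return reply.get()

counter = CounterActor()
for _ in range(1000):
    counter.tell("increment")
print(counter.ask("get"))  # 1000 -- the FIFO mailbox serializes all access
```

Because every sender goes through the mailbox, the same code works whether the actor lives on another core or (in a real toolkit like Akka) on another node — which is the "it is all scale-out" point above.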
The other main reason people turn towards Akka is that managing failure in an application is really hard. Unfortunately, to a large extent, failure management is something that historically has been either ignored or handled incorrectly.
Failing At Failure Management
The first problem is that the strong coupling (between components) of long, synchronous request chains raises the risk of cascading failures throughout the application. The second major problem is that the traditional way to represent failure in the programming model is through exceptions thrown in the user’s thread, which leads to defensive programming with the error handling (using try-catch) tangled with the business logic and scattered across the whole application.
Asynchronous message passing decouples components by adding an asynchronous communication boundary—allowing fine-grained and isolated error handling and recovery through compartmentalization. It also allows you to reify errors as messages to be sent through a dedicated error channel for management outside of the user call chain and not just throw it in the caller’s face.
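The "reify errors as messages" idea can likewise be sketched without any framework — again plain Python for illustration, not Akka's actual supervision API. The worker never throws into its caller's thread; failures travel on a dedicated error channel that a supervisor can drain out of band.

```python
import queue
import threading

work = queue.Queue()
errors = queue.Queue()   # dedicated error channel, handled out-of-band

def worker():
    while True:
        item = work.get()
        if item is None:   # sentinel: no more work
            break
        try:
            result = 100 / item            # the "business logic"
        except ZeroDivisionError as exc:
            errors.put((item, repr(exc)))  # the error becomes a message
        # The sender is never blocked or interrupted by the failure.

t = threading.Thread(target=worker)
t.start()
for item in [4, 0, 5]:
    work.put(item)       # fire-and-forget; no try/except at the call site
work.put(None)
t.join()

failed_item, reason = errors.get_nowait()
print(failed_item)  # 0 -- only the bad input ends up on the error channel
```

Note how the error handling lives in one place instead of being scattered through every caller as try-catch blocks, which is the tangling problem described above.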
The broad scenarios where Akka gets a lot of traction are those where there are a lot of users and unexpected peaks in visitors, environments where there are a lot of concurrently connected devices and use cases where there is just a ton of raw data or analytics that need to be crunched. Those are all domains where managing scale and failure are of critical importance, and those are where Akka quickly got a lot of traction.
In The Actor’s Studio
RW: What is an “actor,” and why is the actor model that’s been around for more than 40 years seeing a renaissance?
JB: Actors are very lightweight components—you can easily run millions of live actors on commodity hardware—that help developers focus on communications and functions between services. An actor encapsulates state and behaviour and communicates through its own dedicated message queue, called its “mail box.” All communication between actors is message-driven, asynchronous and fire-and-forget.
Actors decouple the reference to the actor from the runtime actor instance by adding a level of indirection—the so-called ActorRef—through which all communication needs to take place. This enables the loose coupling that forms the basis for both location transparency—enabling true elasticity through an explicit model for distributed computing—and the failure model that I mentioned.
The actor model provides a higher level of abstraction for writing concurrent and distributed systems—it frees the developer from having to deal with explicit locking and thread management, and makes it easier to write correct concurrent and parallel systems. Working with actors also gives you a very dynamic and flexible programming model that allows you to upgrade actors independently of each other and to shift them around nodes without changing the code—all driven through configuration or adaptively by the runtime behavior of the system.
Like you say, actors are really nothing new. They were defined in a 1973 paper by Carl Hewitt, were discussed for inclusion in the original version of Smalltalk in 1976 and have been popularized by the Erlang programming language, which emerged in the 1980s. They have been used by Ericsson, for example, with great success to build highly concurrent and extremely reliable (99.9999999% availability—equal to about 31 milliseconds of downtime per year) telecom systems.
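That availability figure checks out with quick arithmetic: nine nines of availability leaves an unavailability budget of one part in a billion, or roughly 31 milliseconds per year.

```python
# Sanity check of the nine-nines availability figure quoted above.
availability = 0.999999999                 # "nine nines"
ms_per_year = 365 * 24 * 60 * 60 * 1000    # milliseconds in a (non-leap) year
downtime_ms = (1 - availability) * ms_per_year
print(round(downtime_ms, 1))               # roughly 31.5 ms of downtime per year
```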
The main reason that the actor model is growing in popularity is because it is a great way to implement reactive applications, making it easier to write systems [that] are highly concurrent, scalable, elastic, resilient and responsive. It was, like a lot of great technology, ahead of its time, but now the world has caught up and it can start delivering on its promises.
Scaling The Internet Of Things
RW: There is a lot of interest about Akka in the context of the Internet of Things (IoT). What’s your view of the scale challenges that are unique to IoT?
JB: The Internet of Things—with the explosion of sensors—adds a lot of challenges in how to deal with all of these simultaneously connected devices producing lots of data to be retrieved, aggregated, analyzed and pushed back out to the devices while maintaining responsiveness. Challenges include managing huge bursts in traffic in receiving sensor data at peak times, processing these large amounts of data both in batch and in real time, and running massive simulations of real-world usage patterns. Some IoT deployments also require the back-end services to manage the devices, not just absorb the data sent from the devices.
The back-end systems managing all this need to be able to scale on demand and be fully resilient. This is a perfect fit for reactive architectures in general and Akka in particular.
When you are building services to be used by potentially millions of connected devices, you need a model for coping with information flow. You need abstractions for what happens when devices fail, when information is lost and when services fail. Actors have delivery guarantees and isolation properties that are perfect for the IoT world, making it a great tool for simulating millions of concurrently connected sensors producing real-time data.
RW: Typesafe recently collaborated with a number of other vendors on the reactive streams specification, as well as introducing its own Akka Streams. What do the challenges look like for data streaming in an IoT world?
JB: If you have millions of sensors generating data, and you can’t deal with the rate that this data arrives—that’s one early problem set that we’re seeing for the back-end of IoT—you need a means to back-pressure devices and sensors that may not be ready or have the capacity to accept more data. If you look at the end-to-end IoT system—with millions of devices, the need to store data, cleanse it, process it, run analytics, without any service interruption—the requirement for asynchronous, non-blocking, fully back-pressured streams is critical.
We see Akka Streams playing a really important role in keeping up with inbound rates and managing overflow, so that there are proper data bulkheads in IoT systems.
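Akka Streams implements back-pressure as a full asynchronous protocol (the Reactive Streams specification), but the core idea can be illustrated with nothing more than a bounded queue in plain Python: when the buffer fills, the producer blocks instead of flooding a slower consumer, so no readings are dropped.

```python
import queue
import threading
import time

# A bounded buffer: when it is full, producers block instead of
# overwhelming the consumer -- the essence of back-pressure.
buffer = queue.Queue(maxsize=10)
received = []

def sensor(n):
    """A fast producer standing in for a device emitting readings."""
    for i in range(n):
        buffer.put(i)        # blocks whenever the buffer is full
    buffer.put(None)         # sentinel: stream is done

def backend():
    """A deliberately slow consumer standing in for the back end."""
    while True:
        reading = buffer.get()
        if reading is None:
            break
        received.append(reading)
        time.sleep(0.001)    # simulate slow processing

producer = threading.Thread(target=sensor, args=(100,))
consumer = threading.Thread(target=backend)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(received))  # 100 -- every reading arrives, in order, none dropped
```

In a real Akka Streams pipeline the "blocking" is replaced by an asynchronous demand signal flowing upstream, but the effect is the same: the producer's rate is bounded by what the consumer can absorb.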
Lead image courtesy of Shutterstock; image of Bonér courtesy of Jonas Bonér
View full post on ReadWrite