Posts tagged Gets

Windows XP Gets A Surprise Patch From Microsoft

Oh, how Microsoft must wish it could quit Windows XP.

The software giant officially stopped supporting its 13-year-old operating system in April, technically leaving its legions of hangers-on stranded with no hope of future security updates. But even Microsoft couldn’t leave users in the lurch in the face of an Internet Explorer vulnerability considered so harmful that the U.S. government warned users to avoid the browser until it was fixed.

So Microsoft has issued an emergency patch for Windows XP. Both IE and Windows XP received what Microsoft calls critical out-of-band security updates today, May 1. Users who still have automatic updates enabled will receive the fix right away and don’t need to take any action.

Windows XP and IE users who are updating manually should do so immediately, a Microsoft Security Response Center blog post urges. Windows Update will download the patch automatically; you can also find manual updates on the Microsoft Update website.

Windows XP users shouldn’t get their hopes up that Microsoft might actually reconsider extending XP support. In a post on Microsoft’s official blog, Adrienne Hall, Microsoft’s general manager of Trustworthy Computing, made a point of noting that XP users should upgrade to Windows 7 or 8, and IE users should update to version 11.

Image of Terry Myerson, executive VP of the Microsoft operating-systems group, by Owen Thomas for ReadWrite

View full post on ReadWrite

Bing Ads Gets Friendlier To Big Accounts: Now See Up To 50,000 Keywords In Web UI

Managing big accounts in the Bing Ads web UI has long had its challenges. The team is starting to change that. Now, at the account, campaign and ad group level, paid search managers can see performance data on as many as 50,000 keywords in the web interface of Bing Ads. The results used to…

Please visit Search Engine Land for the full article.

View full post on Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Facebook Finally Gets Serious About Privacy

Facebook is finally getting serious about privacy. At its F8 developer conference in San Francisco Wednesday, CEO Mark Zuckerberg announced two new updates to Facebook Login that center around user privacy.

You’ll now be able to use Facebook Login anonymously, meaning you can log into a mobile application using your Facebook account—but the application won’t know any personal information about you. 

“It’s an experience that’s synced without an app knowing who you are,” Zuckerberg said. “If you want, you can always sign in with your real identity once you’re comfortable sharing your information.”

Facebook has grappled with privacy issues in the past. And Zuckerberg has notoriously been chilly about the idea of anonymity online. The Facebook experience is, after all, built around your real identity. But with anonymous login, Facebook is embracing the concerns of users and finally offering an option for people who aren’t comfortable sharing their true identities with applications that don’t need to know their real names, locations, or Facebook Likes in order to operate. 

For users who are privacy-conscious but still want to use their real identities to log into third-party applications, Facebook also rolled out editable permissions at F8. Now, when you log into an application, you can choose, line by line, which personal information to share. 

“We know people are scared of pressing this blue [Facebook Login] button,” Zuckerberg said, admitting users are nervous about sharing all their personal information with third-party apps. “We don’t want anyone to be surprised how they’re sharing on Facebook.”

Some applications, like the ridesharing application Lyft, will likely require people to provide identifying personal information like their name and picture, but now you’ll be able to give apps only the information they require, and nothing else. 

While the new Facebook features are a great move for protecting user privacy, it’s still unclear whether Facebook itself knows which apps you’re using even when you log in anonymously. For now, though, Facebook will settle for winning the favor of privacy-conscious users who are nervous about how applications access and use their personal information. 

View full post on ReadWrite

Bing: New York Sleeps Late; SF Gets Up The Earliest

Microsoft Bing posted data on the earliest risers by city, based on Bing usage data. Bing said it “thought it would be interesting to look at Bing usage as a proxy for when people get online, start work or otherwise wake up.” The earliest city to rise is San Francisco, reaching 50% of…



Please visit Search Engine Land for the full article.

View full post on Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Post-Heartbleed, Open Source Gets A New Security Attitude

The Internet may not agree on much. But if there’s one idea its citizens can get behind, it’s that nothing like the Heartbleed bug should ever happen again.

And so the Linux Foundation—backed by Google, Amazon Web Services, Cisco, Dell, Facebook, Fujitsu, IBM, Intel, Microsoft, NetApp, Rackspace and VMware—is launching a new Core Infrastructure Initiative that aims to bolster open-source projects critical to the Internet and other crucial information systems. Many such projects are starved for funding and development resources, despite their importance to Internet communications and commerce. 

The initiative is brand new—the steering committee hasn’t even had a meeting yet—so there aren’t many details as to how this will all work at the moment. 

It’s hard not to applaud such an important development, even if the promise seems somewhat vague. Of course, the details do matter; no one wants to lull a post-Heartbleed world into a false sense of security. The Heartbleed bug tarnished the image of open source. Another serious failure could erode support for it.

That would be a shame—mostly because, despite the hard knock it’s taken from Heartbleed, open-source software really is more solid than proprietary code.

Heartbleed: The Truth Is Stranger Than Fiction

One of the biggest arguments in favor of open source—which typically depends on volunteers to add and refine programs and tools—is that projects with many eyes on them are less prone to serious bugs.

Often enough, that’s exactly how it works out. A recent report from software-testing outfit Coverity found that the quality of open-source code surpassed that of proprietary software. Shocked? You shouldn’t be. Popular open-source projects can have hundreds or thousands of developers contributing and reviewing code, while in-house corporate teams are usually far smaller and frequently hobbled by strict confidentiality to boot.

Unfortunately, not all open-source projects work like that. OpenSSL—yes, the communications-security library that fell prey to Heartbleed—was one such project. 

This potentially huge security hole started out as a mistake made by a single developer, a German researcher named Robin Seggelmann. Normally, revised code gets checked before going out, and his work on OpenSSL’s “heartbeat” extension did go through a review—by a security expert named Stephen Henson. Who also missed the error.

So Heartbleed started with two people—but even involving the entire OpenSSL team might not have helped much. There are only two other people listed on that core team, and just a handful more to flesh out the development team. What’s more, this crucial but non-commercial project makes do on just $2,000 in annual donations.

If this were a fictional premise, no one would believe it. A critical security project, limping along on a couple of thousand dollars a year, winds up in the hands of two people, whose apparently innocent mistake goes on to propagate all over the Internet.

The Core Infrastructure Initiative aims to ensure that OpenSSL and other major open-source projects don’t let serious bugs lie around unfixed. Its plan: Fill in the gaps with funding and staff.

Making Open Source Whole

Security for the Internet at large was practically built on OpenSSL. And yet, the open-source software never went through a meticulous security audit. There wasn’t money or manpower for one.

From the Linux Foundation’s perspective, that’s unacceptable. 

The Linux operating system may be the world’s leading open-source success story. Volunteers across the globe flock to Linus Torvalds’ software, contributing changes at a rate of nine per hour. That amounts to millions of lines of code that improve or fix various aspects of the operating system each year. And it draws roughly half a million dollars in annual donations. Some of those funds go to Torvalds, Linux’s creator, so he can dedicate himself to development full-time. 

The Linux Foundation likewise sees its Core Infrastructure Initiative becoming a benefactor of sorts to key software projects, one that can direct funds to hire full-time developers, arrange for code review and testing, and handle other issues so that major vulnerabilities like Heartbleed don’t slip through the cracks again. 

The first candidate is—you guessed it—OpenSSL. According to the press announcement, the project “could receive fellowship funding for key developers as well as other resources to assist the project in improving its security, enabling outside reviews, and improving responsiveness to patch requests.”

But OpenSSL is just the beginning. “I think in this crisis, the idea was to create something good out of it,” Jim Zemlin, executive director of the Linux Foundation, told me. “To be proactive about pooling resources, looking at projects that are underfunded, that are important, and providing some resources to them.”

Sounds like a great idea. Not only does the move address specific concerns about open-source development—like minimal staffing and non-existent funding—it would also reinforce the integrity of critical systems that hinge on it. 

It’s an ambitious plan, one that came together at lightning speed. Chris DiBona, Google’s director of open source, told me Zemlin called him just last week with the idea.

“We [at Google] were doing that whole, ‘Okay, we’ve been helping out open source. Are we helping them enough?’” said DiBona, who reminded me that it was a security engineer at his company who first found the Heartbleed bug. “And then Jim calls up and says, ‘You know, we should just figure out how to head this off at the pass before the next time this happens.’ And it’s like, ‘Yeah, you’re right. Let’s just do it. We’ll try to find a way’.” 

Over the next few days, other companies immediately jumped at the chance to help. “I think it’s a historical moment, when you have a collective response to what was a collective problem,” said Zemlin.

The Core Infrastructure Initiative is still gaining new supporters. Just a few hours before I spoke with Zemlin and DiBona Wednesday evening, another backer signed on. As of this writing, 12 companies had officially joined the fold. Each is donating $100,000 per year for a minimum of three years, for a total of $3.6 million.

Those Pesky Details

Eventually, the details will have to be ironed out. There will be a steering committee made up of backers, experts, academics and members of the open-source community. And when they meet, they will need to make some big decisions—like determining criteria for deciding which projects get funded (or not). The committee will also need to figure out “what we consider to be a minimum level of security,” said DiBona. 

Zemlin is careful to note that he doesn’t want to fall into the trap of over-regulating or dictating so much that it would alter the spirit of open-source development. “Everyone who’s participating will respect the community norms for the various projects,” he said. “We don’t want to mess up the good things that happen by being prescriptive.”

He and his initiative will draw from the Linux Foundation’s experience powering Linux development. “We have 10 years of history showing that you can support these projects and certainly not slow down their development,” Zemlin said. And indeed, if anyone can figure it out, it could be him and his foundation. 

But it may not be easy, keeping the creative, free-spirited nature of open source alive in the face of serious core infrastructure concerns. Critical systems usually demand organization and regimented practices. And sometimes, to keep the heart from bleeding, a prescription might just be in order. 

Images courtesy of Flickr users John (feature image), Bennett (lonely developer), Chris Potter (money life preserver), Alex Gorzen (Linux Easter Egg).

View full post on ReadWrite

Open Source Gets A Security Patch, With A Little Help From Its Friends


View full post on ReadWrite

Girls’ Generation Seo Hyun, “I Hope No One Gets Hurt Anymore” – Yahoo Philippines News

Girls' Generation Seo Hyun, “I Hope No One Gets Hurt Anymore”
Yahoo Philippines News
Besides Seo Hyun, many Korean celebrities including singer Lee Jung, Chansung of 2PM, and Jo Kwon of 2AM have expressed their honest opinions on the ferry disaster and current rescue operation. (photo by bntnews DB). For more: bntnews.co.uk.

View full post on SEO – Google News

Real-Time Data Streaming Gets Standardized

One of the advantages of open source is that it can accelerate standards adoption on a level playing field. If there is a big enough problem to solve, smart people can attract the best minds to work together, investigate and share the solution.

That said, standards bodies often become little more than a parlor game for incumbent vendors seeking to position the standard to their market advantage.

In other words, there’s lots of talk, but not much code.

In such a scenario, it’s easy to end up with implementations of a standard that each work differently due to unclear or ambiguous specifications. I recently sat down with Viktor Klang, Chief Architect at Typesafe, one of the lead organizers of reactivestreams.org, an open-source attempt to standardize asynchronous stream-based processing on the Java Virtual Machine (JVM). 

Klang and his group—along with developers from Twitter, Oracle, Pivotal, Red Hat, Applied Duality, Netflix, the spray.io team and Doug Lea—saw that the future of computing was increasingly about stream-based processing for real-time, data-intensive applications: those that stream video, handle transactions for millions of concurrent users, or serve a range of other scenarios with large-scale usage and low-latency requirements.

The problem? Without backpressure, a stage that produces streaming data faster than the next stage can consume it will eventually bring the entire system down.

ReadWrite: What is driving this shift in computing to reactive streams today?

Viktor Klang: It’s not a new thing. Rather, it’s more like it was becoming a critical mass as more people started using Hadoop and other batch-based frameworks. They needed real-time online streaming. Once you need that, then you don’t know up front how big your input is because it’s continuous. With batch, you know up front how big your batch is.

Once you have potentially infinite streams of data flowing through your systems, you need a means to control the rate at which you consume that data. You need backpressure in your system to make sure the producer of data doesn’t overwhelm the consumer of data. It’s a problem that becomes visible once you move from batch processing to real-time streaming.

Users have been asking for more “reactive” streams for a long time, for building their own network protocols or for their specific application needs. Any time you need to talk to a network device, you want to use this abstraction. Anything that has an IP address.

With reactivestreams.org, we’re trying to address a fundamental issue in a compatible way to hook all these different things together to work while being inclusive. Long-term, the plan for this is to build an ecosystem to build implementations that can be connected to other implementations and then have developers building more things on top of it. For example, connect Twitter’s streaming libraries with RxJava streaming libraries, and pipe into Reactor, Akka Streams, or other implementations on the JVM.
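Reactive Streams itself standardizes only a small set of interfaces built around demand signaling. As a rough illustration of the request-based backpressure Klang describes—not code from the spec or from any of the implementations named above—here is a sketch using the JDK’s java.util.concurrent.Flow API (Java 9+), which adopted the same interfaces. The subscriber requests one item at a time, so the publisher can never run ahead of it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        // SubmissionPublisher honors subscriber demand: it delivers only
        // items that have been explicitly requested.
        SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>();
        List<Integer> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        publisher.subscribe(new Flow.Subscriber<Integer>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // demand exactly one item: this signal is the backpressure
            }
            @Override public void onNext(Integer item) {
                received.add(item);      // consume at our own pace...
                subscription.request(1); // ...then ask for the next one
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        });

        for (int i = 1; i <= 5; i++) {
            publisher.submit(i); // blocks if the subscriber's buffer is full
        }
        publisher.close();
        done.await();
        System.out.println(received); // prints [1, 2, 3, 4, 5]
    }
}
```

Because demand is signaled explicitly with request(n), a slow consumer simply requests less often; the producer’s submit call blocks rather than queueing items without bound.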

RW: Who are key members today?

VK: Certainly Typesafe jumped in early, since we have an open-source software platform that deals with a lot of what the industry calls “reactive application challenges.” We were thrilled to have Twitter join, along with the Reactor guys from Pivotal, Erik Meijer from Applied Duality, and Ben Christensen and George Campbell from Netflix. Red Hat is in there with Oracle, and we also have some critical individuals like Doug Lea, creator of java.util.concurrent, who drives the concurrency work in the JVM. One of the goals of the project is to create a JSR for a future Java version.

Everyone pulls their weight. It’s just really hard to get engineering time from people at this level.

RW: Standards don’t tend to be very popular with developers. How are you trying to approach this to attract more key people?

VK: You’re right, the average developer is about as interested in standards as cats are in water. Jokes aside, however, we start with open source. I think of this project as a non-standard standards effort; we are inverting the usual process. We have created a spec, a test suite that verifies the spec, and a description of why the spec is what it is and why it isn’t what it isn’t. We’re really creating solutions, picking them apart, confirming they do what they say they do, and using that process to arrive at the best specification.

RW: It sounds like developers in this case are also addressing an ops or a dev ops problem?

VK: As a developer, you can make life really difficult for your ops guys. This is about getting it right so your ops guys don’t come over and mess you up. Previously they’d have to make sure you don’t feed the system more information than it can process, so you’re not blowing up resources, making sure the processing is always faster than the input. It’s really tricky to do that for variable loads.

RW: What are some examples that might inspire your core audience of Java developers?

VK: What’s a hard case for an enterprise Java developer? Say you have a TCP connection with orders coming in, and you need to perform some processing on them before passing them on to another connection. You need to make sure you aren’t pulling things off the inbound connection faster than you are able to send them to the outbound connection. If you don’t, you risk blowing up the JVM with an OutOfMemoryError.

For web developers, it could be streaming some input from a user and storing it on Amazon S3 without overloading the server, and without having to be aware of how many concurrent users you can have. That’s a challenging problem to solve now.
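The inbound-faster-than-outbound failure Klang describes can be avoided even without a streaming library by putting a bounded buffer between the two sides. This is a minimal sketch, not code from the interview, and the names and sizes are illustrative: ArrayBlockingQueue’s put() blocks when the buffer is full, so a fast producer is throttled to the consumer’s pace instead of exhausting the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedPipeline {
    public static void main(String[] args) throws InterruptedException {
        // A small bounded buffer between the inbound and outbound sides.
        // put() blocks when it is full, so there can never be more than
        // 4 unprocessed orders in memory at once.
        BlockingQueue<String> buffer = new ArrayBlockingQueue<>(4);

        Thread inbound = new Thread(() -> {
            try {
                for (int i = 1; i <= 100; i++) {
                    buffer.put("order-" + i); // throttled to the consumer's pace
                }
                buffer.put("EOF"); // sentinel marking the end of the stream
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        inbound.start();

        int processed = 0;
        String msg;
        while (!(msg = buffer.take()).equals("EOF")) {
            processed++; // stand-in for the slower outbound processing
        }
        inbound.join();
        System.out.println(processed); // prints 100
    }
}
```

Blocking a thread is the crude version of flow control; the request(n) demand signaling that Reactive Streams standardizes achieves the same bound without tying up a thread per connection.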

Image courtesy of Shutterstock

View full post on ReadWrite

Google Gets Another Street View Privacy Fine — In Italy

It seems like each European country is taking its turn fining Google for some privacy infraction. This time it’s Italy, and it involves Street View. Google has reportedly paid a roughly $1.4 million (1 million EUR) fine. According to a story in Reuters, the issue this time was the failure to clearly…

Please visit Search Engine Land for the full article.

View full post on Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing

Copyright © 1992-2014, DC2NET All rights reserved