Posts tagged Framework
Node.js, a widely used open-source platform for building Web applications, has split into two separate projects as of late Tuesday.
A group headed by some of Node’s most important contributors has “forked” the project, creating a new version it’s calling io.js. It’s a version of Node “where contributions, releases, and contributorship are under an open governance model,” its Readme file states.
Tensions have been heating up in the Node.js community for some time, as Node contributors aired their grievances about Joyent—Node’s corporate lead—and its oversight. Dissidents, including five of Node’s top seven contributors (numbers 2, 3, 4, 5 and 7), expressed frustration that Joyent’s stewardship was slowing down or complicating the project.
“We don’t want to have just one person who’s appointed by a company making decisions,” Mikeal Rogers, a Node community organizer (and contributor number 27) told Wired. “We want contributors to have more control, to seek consensus.”
Joyent was aware of the contributor unrest. In one attempt to address those concerns, it created a community advisory board for Node and offered seats to several dissident contributors, including former project lead Isaac Schlueter.
When ReadWrite spoke to him in November, Joyent CEO Scott Hammond said that while a Node fork was possible, it “would certainly surprise me” given Joyent’s latest efforts to bring the community into the Node decision process. Now, the atmosphere at Joyent is more one of frustration, Joyent CTO Bryan Cantrill told InfoWorld. “We really believe in the stability of Node,” he said, stating that Joyent was still trying to reach out to the leaders behind io.js.
Photo by Mike Carbonaro
View full post on ReadWrite
A Framework for Goal-Driven Video SEO [Explainer Video]
Business 2 Community
The most important part of video SEO is building and defining a goal-driven strategy. Implementing a video strategy starts by identifying what you are trying to accomplish and working backwards to figure out the necessary technical and creative …
View full post on SEO – Google News
The Perennial SEO Audit – Creating an Effective Framework for Keeping Your Campaign Running at Peak Performance
It’s hugely beneficial for us SEO types to periodically helicopter up from the daily grind and survey our campaigns from a top level.
View full post on Search Engine Watch – Latest
Silicon Valley Street Style is an occasional feature that looks at the intersection of fashion and technology culture.
Along with the slew of keynotes, fireside chats, and startup pitches that infiltrated TechCrunch Disrupt 2014, one of the hottest topics this year was attendee fashion.
Silicon Valley, not exactly known for being the most stylish place in the world, is reclaiming its name in the fashion space. Change is happening, slowly but surely—fashion and tech are meeting in the middle to produce functional and beautiful wearables, and apparel and e-commerce startups are aiming to bring professional designs to even the busiest of San Franciscans.
The best of the Bay’s clean yet laid-back fashion trends made an appearance at TechCrunch Disrupt. Here are some notable mentions of those who hit the stage with their best shoe forward.
Colorful socks and brightly colored shoes seemed to be the “it” fashion statement this year. The addition of fancy footwear really speaks to a choice that Clover’s Ryan Reid calls “chic laziness”.
By adding just an element of bright color to an outfit, someone can convey that he or she is fun, quirky, and thinks (but not too much!) about outfit choices.
“You want to look good, but not like you tried too hard,” says Reid. Sounds like the Silicon Valley fashion mantra in a nutshell.
Speakers and moderators weren’t the only ones bringing their A game. I spotted these perfect street style contenders through the bustling crowds at TCD.
It wouldn’t be TechCrunch Disrupt without Startup Alley, and if HBO’s Silicon Valley taught us anything, it would be that we should expect startup tees. Lots and lots of startup tees.
So did Silicon Valley portray the real face of TCD? While many people in the convention center were indeed rocking a startup tee, it wasn’t nearly as obnoxious and overwhelming as the show made it out to be. See for yourself—here are a few of the infamous t-shirts at Startup Alley.
Images by Stephanie Chan. Silicon Valley image by HBO.
View full post on ReadWrite
Posted by Paddy_Moogan
There is a problem with conversion rate optimization:
It looks easy. Most of us with some experience working online can take a look at a website and quickly find problems that may prevent someone from converting into a customer. There are a few such problems that are quite common:
- A lack of customer reviews
- A lack of trust / security signals
- Bad communication of product selling points
The thing is, how do we know for sure that these are problems?
The fact is, we don’t. The only way to find out is to test these things and see. Even with this in mind, though, how do you decide which things to test when the choices are based mainly on gut feeling?
For me, this is where doing a high level of research and discovery is worth the time and effort. It can be far too easy to make assumptions about what to test and then dive straight in and start testing them. Wouldn’t it be better to run conversion rate tests based on actual data from your target audience?
I’m going to go into detail on the process we use at Distilled for conversion rate optimization. With the context above, it shouldn’t be any surprise that I spend a lot of time talking about the discovery phase of the process as opposed to testing and reviewing results.
For those of you who want the answer straight away and an easy takeaway, here is a graphic of the process:
Before I move on, I wanted to give you a few links that have certainly helped me over the last few years when learning about conversion rate optimization.
- The Definitive How-to for Conversion Rate Optimization by Stephen Pavlovich
- Holy Grail of eCommerce Conversion Rate Optimization by Pancham Prashar
- SEOgadget Guide to Conversion Rate Optimization
Right, let’s get into the process.
This entire stage is all about one thing: gathering the data you need to inform your testing. This can take time, and if you’re working with clients, you need to set expectations around this. The fact is that this is a very important stage and, if done correctly, it can save you a lot of heartache further down the process.
Step 1: Data gathering
There are three broad areas from which you can gather data. Let’s look at each of them in turn.
The company
This is the company / website that you’re working for. There is a bunch of information you can gather from them which will help inform your tests.
Why does the company exist?
I always believe in starting with why, and I’ve talked about this before in the context of link building. It is at this point that you can dive right into the heart of the company and find out what makes it different to others. This isn’t just about finding USPs; it goes far deeper than that, into the culture and DNA of the company. The reason here is that customers buy the company and the message it portrays just as much as the product itself. We all have affinities with certain companies who probably do produce a great product and service, but it’s a love for the company itself which keeps us interested and buying from them.
What are the goals of the company?
This is a pretty crucial one and the reasons should be obvious. You need to focus your data gathering and testing around hitting these goals. There are times when some goals may be less obvious than others. These are sometimes called micro-conversions and can include things that contribute to the bigger goal. For example, you may find that customers who sign up to your email newsletter are more likely to become repeat customers than those who don’t. Therefore, a micro-conversion would be to get people signed up to your email list.
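As a rough sketch, you could sift customer records to check whether a correlation like that newsletter example actually holds. The data and field names below are invented purely for illustration; in practice this would come from your analytics or CRM export:

```javascript
// Hypothetical customer records (made-up data for illustration).
const customers = [
  { id: 1, subscribed: true,  orders: 3 },
  { id: 2, subscribed: true,  orders: 1 },
  { id: 3, subscribed: true,  orders: 2 },
  { id: 4, subscribed: false, orders: 1 },
  { id: 5, subscribed: false, orders: 1 },
  { id: 6, subscribed: false, orders: 2 },
];

// Share of a group that became repeat customers (2+ orders).
function repeatRate(group) {
  const repeats = group.filter((c) => c.orders >= 2).length;
  return group.length ? repeats / group.length : 0;
}

const subs = customers.filter((c) => c.subscribed);
const nonSubs = customers.filter((c) => !c.subscribed);

console.log(repeatRate(subs));    // 2 of 3 subscribers reordered
console.log(repeatRate(nonSubs)); // 1 of 3 non-subscribers did
```

If the gap is consistent on real data, newsletter signups are a plausible micro-conversion worth optimising for.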
What are the unique selling propositions (USPs) of the company?
What makes the company different in comparison to competitors who sell the same or similar products? Bonus points here if the USP is something that a competitor can’t emulate. For example, offering free delivery is something that may help improve conversions, but chances are that your competitors can also offer this.
What are the common objections?
This is where you should be speaking to people within the organisation who are outside the marketing team. One example is to talk to sales staff and ask them how they sell the products, what they feel the USPs are and what the typical objections are to the product. Another example is to talk to customer support staff and see what problems they tend to deal with. These guys will also have input on what customers tend to like the most and what positive feedback / product improvements get suggested.
Another team to speak to is whoever manages live chat for a website if it exists. At Distilled, we’ve sometimes been able to get access to live chat transcripts and have been able to run analysis to find trends and common problems.
The website
Here, we are focusing specifically on the website itself and seeing what data we can gather to inform our experiments.
What does the sales process look like?
At this point, I’d recommend sitting down with the client and a big whiteboard to map out the sales process from start to finish, including each touch-point between the customer and the website or marketing materials such as email. From here, you can go pretty granular into each part of the process to find where problems can occur.
It is also at this point that you should review funnels in analytics, or set them up if they don’t currently exist. Try to find where the most common drop-off points are and take a deeper dive into why. Sometimes a technical problem may be to blame for the drop-off in conversions, so make sure you are at the very least segmenting data by browser to try and find problems.
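As a rough illustration of that funnel review, here is a sketch that computes the percentage of visitors lost between each step. The step names and visit counts are made up; real numbers would come from your analytics funnel report:

```javascript
// Visits reaching each step of a checkout funnel (illustrative numbers).
const funnel = [
  { step: "Product page", visits: 10000 },
  { step: "Add to cart",  visits: 3200 },
  { step: "Checkout",     visits: 1400 },
  { step: "Payment",      visits: 1250 },
  { step: "Confirmation", visits: 900 },
];

// Percentage of visitors lost between each consecutive pair of steps;
// the biggest drop is usually the first place to investigate.
function dropOffs(steps) {
  return steps.slice(1).map((curr, i) => ({
    from: steps[i].step,
    to: curr.step,
    lostPct: +(100 * (1 - curr.visits / steps[i].visits)).toFixed(1),
  }));
}

const worst = dropOffs(funnel).reduce((a, b) => (b.lostPct > a.lostPct ? b : a));
console.log(worst); // the transition shedding the most visitors
```

Segmenting the same calculation by browser or device is how you spot a technical culprit rather than a persuasion problem.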
What is the current traffic breakdown?
This involves you taking a deep dive into the existing analytics data that you have from the website. At this point you’re just trying to get a better understanding of a few core things:
- How much traffic the website receives: This can impact your testing in that you may discover low traffic numbers which can influence how long it takes a test to complete.
- What demographics the website typically attracts: This may require you to enable extra tracking if you’re using Google Analytics.
- What technology users typically use: As mentioned above, looking at browser usage is important. But on top of this, what devices do users tend to use? If you’re seeing high numbers of users using mobile devices, you should check how the website renders on a mobile device. If you’re seeing very low numbers of visits from mobile devices, that is probably worth investigating too given the growth of traffic from mobile in recent years.
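A quick sketch of the kind of breakdown worth pulling here, computing each device category’s share of sessions. The session records are invented; real data would come from your analytics export or API:

```javascript
// Hypothetical session records (made-up data for illustration).
const sessions = [
  { device: "desktop" }, { device: "desktop" }, { device: "mobile" },
  { device: "desktop" }, { device: "mobile" }, { device: "tablet" },
  { device: "desktop" }, { device: "mobile" }, { device: "desktop" },
  { device: "desktop" },
];

// Share of sessions per device category.
function deviceShare(rows) {
  const counts = {};
  for (const { device } of rows) counts[device] = (counts[device] || 0) + 1;
  const shares = {};
  for (const [device, n] of Object.entries(counts)) {
    shares[device] = n / rows.length;
  }
  return shares;
}

console.log(deviceShare(sessions)); // e.g. desktop 0.6, mobile 0.3, tablet 0.1
```

An unusually low mobile share on real data would be the prompt to check how the site renders on a phone.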
Where do conversions currently come from?
Hopefully, the website will already have some goals or eCommerce tracking enabled, which makes this bit a lot easier! If not, then you will need to get them set up as soon as possible so that you can start gathering the data you need. This work needs to be done no matter what, because you’re not going to be able to measure the results of your CRO tests if you can’t measure the conversions!
If you don’t have goals set up already, you can use Paditrack, which syncs with your Google Analytics account and allows you to apply goals to old data. It also allows you to segment your funnels which, annoyingly, Google Analytics doesn’t allow you to do as of writing.
If you do have this data, then you need to try and find patterns in the type of people who convert, as well as where they come from. The latter can be a bit tricky because quite often customers will find you via different channels, so you need to make sure that you’re looking at multi-channel reports and seeing which ones are most common.
Is there any back-end data you can access?
Although things are changing, many analytics platforms do not integrate offline or back-end data by default, so you may need to go digging for it. One thing that many companies have is data on cancellation or refund rates. Typically this is not included in standard analytics views because it takes place offline; however, it can provide you with a wealth of information about products and customers. You can find out what causes customers to cancel a service or what made them ask for a refund.
The customers
This can potentially be the most interesting area to gather data from, and the one with the most impact. Here we are gathering information directly from your customers via a number of methods.
What are the biggest objections that customers have?
For me, this is one of the most insightful things to ask because it drills straight into the one core thing that we care about in this process – what is stopping the customer from buying?
There are a number of ways to do this, which I’ll give some detail on here.
Google Consumer Surveys
We have used these surveys a few times at Distilled now and they have usually given us pretty good insights. The results can be quite broad and frankly, some responses can be pretty useless! But if you cut out the noise and look for the trends, you can get some good information on what concerns and considerations people have when buying products like yours.
Qualaroo is a cool little survey tool which you’ve probably seen on numerous websites across the web. It looks something like this:
What I like about Qualaroo is that it doesn’t intrude on the user experience, and you can use some cool customization settings to make it appear exactly when you want. For example, you can set it to only appear on certain pages or based on user behavior like time on page. You can also set it to appear when it looks like someone is about to close the window.
One neat little tip here is to place the survey on your order confirmation page and ask the question “What nearly stopped you from buying from us today?” This can give you some low-risk feedback because the user has already purchased from you.
It’s also worth mentioning that Qualaroo can now be used on mobile devices, too, so you can tailor your questions to mobile users really well:
Other survey services
If you have a good email list which is reasonably active and engaged, you can run email surveys using something like Survey Monkey. This can be a little more tricky because chances are that the people on your email list are existing customers, whose mindset is a bit different from someone who has never bought from you before. We’ve also used AYTM in the past for running surveys; it offers a few more options in its free version than Survey Monkey.
Usertesting.com
Again, this is a tool that we often use at Distilled, and we have gotten some good results from it. There have been a few misses too in terms of how useful the user has been, but that happens from time to time. Usertesting.com allows you to recruit users based on certain characteristics (age, gender, interests, etc.) and then ask them to complete tasks for you. These tasks are usually focused around your website or a competitor’s and may involve researching and buying a product. As the user works through the tasks, they record a screencast and talk as they are working.
If you want to dive more into this, I really liked this webinar from Conversion Rate Experts which focuses on how they use the service.
Step 2: List hypotheses
Now we need to make the step from information gathering to outlining what we may want to test. Without realising it, many people will jump straight to this step of the process and just start testing what feels right. By doing all the work we outlined in step 1, the rest of the process should be much more informed. Asking yourself the following questions should help you end up with a list of things to test that are backed up by real data and insight.
What are we testing?
Based on all of the information you gathered from the website, customers and the company in step 1, what would you like to test? Go back to the information and look for the common trends. I prefer to start with the most common customer objections and see what is common amongst them. For example, if a common theme of customer feedback was that they place a lot of value in knowing their personal payment details are safe, you could hypothesise that adding more trust signals to the checkout process will increase the number of people who complete the process.
Another example may be if you found that the sales team always get feedback that customers love the money-back guarantee that you offer. So you may hypothesise that making this selling point more obvious on your product pages may increase the number of people who start the checkout process.
Once you have a hypothesis, it is important to know what success looks like and, therefore, how to tell if the test result is a positive one. This sounds like common sense, but it’s very important to get this clear right from the start so that when you reach the end of the test, you stand a high chance of having a clear answer.
Who are we testing?
It is important to understand the differences in the types of people who visit your website, not just demographically, but also in terms of where they are in the buying cycle. An important example to keep in mind is new vs. returning customers. Putting both of these types of customers into the same test could lead to unreliable results because their mindsets are very different.
Returning customers (assuming you did a good job!) will already be bought into your company and brand; they will have already experienced the checkout process; they may even have their credit card details registered with you. All of these things make them more likely to convert compared to a brand new customer. One thing to mention here is that you’re never going to be able to segment everyone perfectly, because analytics data quality is never 100% perfect. There isn’t much we can do about this beyond ensuring we’re tracking correctly and using best practice when segmenting users.
When you run your test, most pieces of software will allow you to direct traffic to your test pages based on various attributes. Here is an example:
Another useful option, as you can see above, is segmentation by browser. This can be particularly useful if you have any bugs with certain browsers and your testing page. For example, if something you want to test doesn’t load correctly in Firefox, you can choose to exclude Firefox users from the test. Obviously, if the test is successful, the final roll-out will need to work in all browsers, but this setting can be useful as a short-term fix.
Where are we testing?
This is a pretty straightforward one. You just need to specify which page or set of pages you’re testing. You may choose to test just one product page or a set of similar products at once. One thing to mention here is that if you’re testing multiple pages at once, you should be aware of how the buying cycles for those products may differ. If you’re testing two product pages with a single test, and one of those products is a $500 garden shed while the other is a $10 garden ornament, then the results of the test may be a bit skewed.
When you list the pages that you’re testing, it is also a good time to run through a simple checklist to make sure that tracking code has been added to those pages correctly. Again, this is pretty basic but can be easily forgotten.
Goals of the discovery phase:
- You’ve gathered data from customers, the website, and the company
- You’ve used this data to form a hypothesis on what to test
- You’ve identified who you’re targeting with this test and what pages it applies to
- You’ve checked that tracking code is set up correctly on those pages
This stage is where we start testing! Again, this is a step that people can jump to straight away without data to back up their tests. Make sure that isn’t you!
Step 3: Wireframe test designs
This step is likely to vary depending on your specific circumstances. It may not even be necessary for you to do wireframing! If you’re in a position where you don’t need to get sign-off on new test designs, then you can make changes to your website directly using a tool like Optimizely or Visual Website Optimizer.
Having said that, there are benefits to taking some time to plan the changes that you’re going to make so that you can double check that they are in line with steps 1 and 2 above. Here are a few questions to ask yourself as you’re going through this step.
Are the changes directly testing my hypothesis?
This sounds basic; of course they should! However it can be easy to get off-track when doing this kind of work. So it’s good to take a step back and ask yourself this question because you can easily do too much and end up testing more than you expected to.
Are the changes keeping the design on-brand?
This is likely to be more of an issue if you’re working on a very large website where there are multiple stakeholders in the website such as UX teams, design teams, marketing teams etc. This can cause problems in getting things signed off but there are often good reasons for this. If you suggest a design that involves fundamental changes to page layout and design, it’s less likely to get sign-off unless you’ve already built up a serious amount of trust.
Are the changes technically doable?
At Distilled, we’ve sometimes run into issues where our changes have been a bit tricky to implement and have required a bit of development time to get working. This is fine if you have the development time available, but if you don’t, this could limit the complexity of the tests that you run. So you need to bear this in mind when designing tests and choosing which hypotheses to test.
Step 4: Implement design
As mentioned above, the more complex your design, the more work you may need to do to put it live. It is really important at this point to make sure you’re testing the design across different browsers before putting it live. Visual elements can change quite dramatically, and the last thing you want is to skew your results because a certain browser isn’t rendering the design properly.
It is also at this stage that you can choose a few options in terms of who should see the test. This is how this looks in Optimizely:
You can also choose what proportion of your traffic will be sent to the testing pages. If you have high traffic numbers, then this can help offset the risk of a test resulting in conversion rates dropping – it does happen! Sending only 10% of your traffic to the test means that the remaining 90% will carry on as normal.
This is what this setting looks like if you’re using Optimizely:
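Under the hood, an allocation setting like this usually comes down to deterministic bucketing, so a visitor sees the same variation on every visit. Here is a rough, hand-rolled sketch of the idea; the hash function and cookie-style visitor IDs are illustrative, and in practice your testing tool handles all of this for you:

```javascript
// Map a stable visitor ID (e.g. a first-party cookie value) onto [0, 1).
// FNV-1a style string hash: not cryptographic, but spreads IDs evenly.
function hashToUnit(id) {
  let h = 2166136261;
  for (const ch of id) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 16777619) >>> 0;
  }
  return h / 4294967296;
}

function assign(visitorId, allocation = 0.1) {
  const u = hashToUnit(visitorId);
  if (u >= allocation) return "not-in-test"; // the other 90% carry on as normal
  // Within the test, split evenly between control and variation.
  return u < allocation / 2 ? "control" : "variation";
}

// The same visitor always lands in the same group.
console.log(assign("visitor-42") === assign("visitor-42")); // true
```

Hashing rather than random assignment is what keeps the experience consistent across visits without storing the assignment server-side.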
You should also connect Optimizely to your Google Analytics account so that you’re able to determine the average order value for each group of visitors you send to your conversion tests. Sometimes the raw conversion rate for a test may not increase, but the average order value may, which is obviously a win that you don’t want to overlook.
Goals of the experiments phase:
- Test variations are live and getting traffic
- Cross-browser testing is complete
- Design has been signed off by client / stakeholders if applicable
- Correct customer segments / traffic allocation has been set
Now it’s time to see if our work has paid off!
Step 5: Was the hypothesis correct?
Was statistical significance reached?
Before diving in and assessing whether your hypothesis was correct, you need to make sure that statistical significance has been reached. I like this short definition by Chris Goward, which helps explain what it is and why it’s important. If you want to go a bit deeper and see some examples, this post by Will on the Distilled blog is a great read.
Many split testing tools will actually tell you if significance has been reached or not so this takes some of the hard work out of the process. Having said that, it’s still a good idea to understand the theories behind it so you can spot problems if they occur.
In terms of how long it could take to reach statistical significance, it can be hard to predict, but this is a cool tool which helps you with this. Evan has another tool related to this which allows you to determine how order value differs across two different test groups. This is one of the key reasons to connect Optimizely to Google Analytics, as mentioned above.
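For the curious, the arithmetic behind a significance check on two conversion rates is commonly a two-proportion z-test, which can be sketched in a few lines. The visitor and conversion numbers below are invented, and real testing tools run this (or a similar) calculation for you:

```javascript
// Two-proportion z-test: how many standard errors apart are the
// control and variation conversion rates?
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se;
}

// |z| above ~1.96 corresponds to 95% confidence (two-tailed).
const z = zTest(200, 5000, 260, 5000); // control 4.0% vs variation 5.2%
console.log(z > 1.96); // difference unlikely to be random chance
```

The same numbers also show why low-traffic sites wait longer: shrink both samples to 500 visitors each and the identical rates no longer clear the 1.96 bar.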
Was the hypothesis correct?
Yes? Great! If your test was a success and increased conversions, then that’s great, but what’s next? First, you need to look at how to roll out the successful design to the website properly, i.e. not relying on Optimizely or Visual Website Optimizer to display the design to visitors. In the short term, you can send 100% of your traffic to the successful design (if you haven’t already) and keep an eye on the numbers. But at some point, you’ll probably need help from developers to deploy the changes on the website directly.
When the hypothesis isn’t correct
This is going to happen; most conversion rate experts don’t talk about their failed tests, but they do happen. One person who did talk about this was Peep Laja, in this article, and he went into even more detail in this case study, where he said that it took six tests before a positive result was reached.
The important thing here is to not give up and to make sure you’ve learned something from the process. There are always things to learn from failed tests, and you can iterate on them and feed the learnings into future tests. Alongside this, make sure you’re keeping track of all the data you’ve gathered from failed tests so that you have a log of all tests which you can refer back to in the future.
Goals of the review stage:
- Know whether a hypothesis was correct or not
- If it was correct, roll out widely
- If it wasn’t correct, what did we learn?
- On to the next test!
That’s about it! Conversion rate optimization should be an ongoing process because there are always things that can be improved across your business. Look for the opportunities to test everything, follow a good process and you can make a big difference to the bottom line.
A few resources to leave you with which I’d highly recommend:
- Peep Laja’s blog
- Conversion Rate Experts articles
- Wider Funnel blog
- Michael Aagaard’s blog
- PRWD list of CRO resources
- Unbounce blog
If you have any feedback or comments, feel free to leave them below!
View full post on Moz Blog
Integrated campaigns are stronger. They are more effective, give us clearer data points, allow for more creativity, and bring greater ROI for client and agency. Here’s a framework to help make your agency or work more integrated.
View full post on Search Engine Watch – Latest
If you’re reading this, I’m assuming that you’re a beginner or novice developer, or else you’re hiring somebody else to develop your idea for you. But while you’ve probably heard the words Angular, Ember, and Backbone, you might not know what they are, or why they help web development.
Here is a rundown of each of the three hottest frameworks, and what they’re best for:
Initially released in 2009, AngularJS is the oldest of the three frameworks. Probably as a result, it also has the largest community.
In 2013, Angular had the fourth-largest number of contributors and third-largest number of stars (kind of like Facebook “Likes”) on GitHub. On Built With AngularJS, you can check out all of the applications currently being developed with Angular.
According to Igor Minar, lead developer on AngularJS at Google, that’s more about Angular’s adaptability than anything specific to news sites.
“I don’t think that Angular is more suitable for news sites than for other sites and apps. But there definitely is a bunch of them,” Minar said. “I think it’s just that these are high-visibility sites maintained by companies in highly-competitive market which means they keep their technology stack fresh in order to be efficient at making changes and providing great user experience.”
Why are sites that use Angular good at making quick changes? Probably because Angular, more aggressively than any other framework, nudges developers to create easily testable code and test it often. Though some developers might find this guidance annoying, it pays off in catching little coding errors before they have a chance to become big ones.
“Some terms we use commonly are specific to Angular and might come across as jargon or strange,” he said. “The good news is that the web standards are catching up and giving ‘official’ names to some of these concepts.”
If you code with Angular, you’re coding on Angular’s rigid terms, but Google Trends points to that not being such a bad thing. You’ll have to use Angular’s jargon and it might take time to make your code more testable, but the result is adaptability later on.
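Angular’s nudge toward testable code boils down to a habit that can be sketched without the framework itself: keep logic in small units and inject their dependencies, so tests can swap in fakes. The sketch below is framework-free plain JavaScript, not Angular’s actual API, and the checkout and pricing names are invented:

```javascript
// Factory with an injected dependency, in the spirit of Angular's DI:
// the unit under test never constructs its own collaborators.
function makeCheckout(priceService) {
  return {
    total(cart) {
      // Pure calculation: trivial to verify with a fake priceService.
      return cart.reduce(
        (sum, item) => sum + priceService.priceOf(item.sku) * item.qty,
        0
      );
    },
  };
}

// In a test, inject a fake instead of a real HTTP-backed service:
const fakePrices = { priceOf: (sku) => ({ tee: 15, mug: 8 }[sku]) };
const checkout = makeCheckout(fakePrices);
console.log(checkout.total([{ sku: "tee", qty: 2 }, { sku: "mug", qty: 1 }])); // 38
```

Catching a pricing bug in a millisecond-fast unit test, rather than in production, is the payoff the article describes.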
Backbone came out in June 2010, and its community is nearly as large as Angular’s.
Many popular applications use the Backbone framework, including Twitter, Foursquare, and LinkedIn Mobile. Also worth noting is that a number of music apps were built with Backbone, including Soundcloud, Pitchfork, and Pandora.
However, there’s something about Backbone that’s very, very small compared to other frameworks—and that’s its download size. Compressed and minified, AngularJS is about 36K; the Ember starter kit is even bigger, at 69K. But Backbone, compared to its contemporaries, is downright puny, at just 6.4K.
According to Backbone creator Jeremy Ashkenas, concerns about needless boilerplate coding are “a silly marketing campaign.”
“If you’re writing a lot of ‘boilerplate’ code in Backbone, then you don’t know how to use it,” Ashkenas said. “In general, in programming, if you’re writing the same thing over and over again—you write a function to do it automatically for you.”
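Ashkenas’s point can be illustrated in plain JavaScript. The tiny stand-in “model” below is not Backbone’s API, just a minimal event emitter used so the example runs on its own; in real Backbone code you would use Model, View, and the "change" event:

```javascript
// Minimal stand-in for an observable model (not Backbone itself).
function makeModel(attrs) {
  const listeners = [];
  return {
    attrs,
    on: (fn) => listeners.push(fn),
    set(key, value) {
      this.attrs[key] = value;
      listeners.forEach((fn) => fn()); // notify, like Backbone's "change"
    },
  };
}

// Instead of repeating render-on-change wiring in every view by hand,
// write it once as a helper:
function autoRender(model, render) {
  model.on(() => render(model.attrs));
  render(model.attrs); // initial render
}

const out = [];
const user = makeModel({ name: "Ada" });
autoRender(user, (attrs) => out.push(`Hello, ${attrs.name}`));
user.set("name", "Grace");
console.log(out); // two renders: "Hello, Ada" then "Hello, Grace"
```

That one helper is the "function to do it automatically for you" in the quote: the repetition lives in one place instead of every view.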
If you’re having trouble, however, Backbone has an especially active community rife with free tutorials for getting started with the framework. Plenty of developers have taken to GitHub to upload useful examples and how-tos that take the place of other frameworks’ hand-holding.
If you’re working on a single-page application or widget—and you’re comfortable with being a self-starter—Backbone is likely the lightweight framework for you.
Ember is the newest kid on the block, but it’s already making waves. Initially released in 2011, Ember just hit version 1.0 last year. It also recently became Code School’s latest course, and given that Code School already offers courses for Angular and Backbone, it’s likely the newest course will grow to become equally popular.
LivingSocial, Groupon, Zendesk, Discourse, and Square are some of the most well-known applications that have adopted Ember. Ember creators Tom Dale and Yehuda Katz say it’s easy to see when a site is using Ember because of its loading speed.
“They feel like normal websites, they’re just far faster than what you’re used to,” Dale said. “It’s because all the rendering happens in the browser. It may look like a regular website, but under the hood, it’s architected like an iOS or Android app that isn’t being rendered by the server.”
At 69K minified and zipped, Ember is the largest framework of the three, but Katz points out that a medium-sized JPEG is often just as large.
“The reason I feel confident that the features we’re baking in are things you need anyway is because I frequently look at the compiled size of Ember apps alongside other apps in the wild, and they’re all roughly the same size,” said Katz, implying that developers who use other frameworks often download additional libraries and tools during the building process.
Ember’s library size and support network are its two greatest strengths, but if you’re only trying to create a small widget or single-page app, it might be overkill for you. If you’re working on a multipage, navigational, long-term project, Ember might be your pick.
When developers discuss these three frameworks online among their contemporaries, the discussion often devolves into one of personal preference. But from a non-developer perspective, it’s clear that different applications—and different needs—make each framework shine its brightest.
Images by Madeleine Weiss for ReadWrite
View full post on ReadWrite