Archive for the ‘Opinions on Email Marketing’ Category
Posted by Deanna Cruzan on May 1st, 2013
As a trusted advisor to our clients, we are often asked to help address specific areas of need within their digital marketing programs. One of the most common inquiries I receive is about building a permission-based list of email addresses. The reasoning is fairly simple: communicating with customers via email has proven to drive results.
For example, when one of our clients started with us, they had absolutely no email addresses. They explained their objectives and shared with us what resources they had to get there. We proposed a strategic opt-in program that required a commitment on the part of their employees to turn customer touch-points into opt-in opportunities. Their employees became engaged in requesting customer email addresses and they displayed signage at brick-and-mortar locations to encourage customers to subscribe to the email program. This strategic program not only increased their email subscriber list to over 1 million addresses within 6 years, but also increased their sales and created loyal customers.
Building an email list “the right way” doesn’t just happen overnight. This program is a perfect example of the importance of strategy and commitment to achieving organic list growth. By engaging in strategic discussions early in the process, we were able to work with this client to identify targets and suggest tactics to meet their objectives. Several years later, the results speak for themselves — proper list building practices lay the foundation for sustained email marketing success.
Posted by admin on June 8th, 2012
Independence Day is right around the corner. It is always a happy reminder that it is summertime. Here are a few things you can do this July 4th to help your business.
1) Host an event.
Everyone likes to party! Maybe it is time to plan for a summer sales event, a customer appreciation BBQ, or a company party. Have fun and enjoy what you do!
2) Keep your customers up-to-date.
Notify your customers if you will be closed in observance of the holiday.
Refresh their memory about all the great things you have done so far this year.
Talk about all the great plans you have for the rest of 2012.
3) Reflect on your mid year performance.
Are your email campaigns getting the ROI they deserve? It’s time to roll up your sleeves and look at your performance for the year.
4) Plan ahead.
What are your goals for 2012? July marks the midpoint of the year. You have six calendar months left to keep up your endurance and finish the year strong.
Posted by Rob Ropars on August 26th, 2011
We’ve all heard that if you’re in marketing, in particular email marketing, you should constantly be testing to maximize results. The most common test mentioned is the ubiquitous “A/B” split test, meaning a 50/50 list split to test one variable against another (graphics, copy, offer, layout, list, time of day, day of week, etc.).
But is an A/B test all you can or should do? If you have only a few thousand emails to work with, an A/B test may be all you can do while still getting statistically reliable results. And if your list is smaller still, even an A/B test might not make sense. For example, if you only have a few hundred email addresses, splitting the list and running a single test will tell you nothing statistically conclusive, only directional information. Instead, you may need to replicate the test over time, aggregating the results and analyzing your collective data over a longer period.
The first consideration is to quantify how many email addresses you need in your test to ensure you have a representative sample and, more importantly, reliable results. There is a lot of math and science behind this topic, and fortunately many math/science/statistics sites offer free online tools such as this one.
You must set up the test(s) correctly on the front end (with sufficient sample sizes and assumed response rates) to ensure that the results on the back end are reliable, meaning they meet a confidence level you’re comfortable with (we recommend 95% where possible). Again, there are online resources to assist, such as this one. The key is to avoid the common mistake of merely eyeballing results and declaring winners/losers based on seemingly different response rates.
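To give a sense of what those sample-size tools compute, here is a minimal sketch of the standard formula for comparing two proportions. The function name, example rates, and the 95% confidence / 80% power settings are illustrative assumptions, not any specific tool’s method:

```python
import math

def samples_per_cell(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed in EACH cell of an A/B split
    to reliably detect a change in response rate from p1 to p2.

    z_alpha=1.96 corresponds to a 95% confidence level;
    z_beta=0.84 corresponds to 80% statistical power.
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Example: to detect a lift from a 2.7% to a 3.5% click rate:
print(samples_per_cell(0.027, 0.035))  # → 7356 addresses per cell (about 14,700 total)
```

Note how quickly the requirement grows as the difference you want to detect shrinks; detecting 2.7% vs. 3.0% would take tens of thousands of addresses per cell.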
Before testing, you have to identify the goal or the question you’re trying to answer. We recommend that you actually write it down and then, as briefly and concisely as possible, describe the yardsticks you will use to determine your winner. As form follows function, the goals/objectives of the test, coupled with the means to measure results, should drive the copy, graphics, and/or layout, ensuring the messages are properly structured and focused on the question you’re trying to answer.
Let’s say your goal is a higher click rate, and after an A/B test you find “A” has a 2.7% CTR and “B” has 2.85%. A common mistake is to use simple subtraction and declare that “B” beat “A” by only 0.15 percentage points, which could lead you down the path of dismissing the result as insignificant (a virtual “tie”). Or maybe you routinely just pick the higher percentage as the winner and run with it. Using a proper percent-change calculation, we find that “B” is actually a 5.56% relative increase over “A.”
That increase may or may not be statistically significant, but as you can see it is much larger than subtraction suggested. To determine whether the results are statistically significant, use one of the calculators: plug in each version’s list size and the click percentage (or open percentage, conversion rate, etc., depending on the key metric you’re analyzing), and it will instantly tell you whether the difference is large enough to be reliable at a 95% confidence level.
In this example, let’s pretend I sent “A” and “B” to a random 2,000 people each. The calculations indicate that this would not be enough of a difference to be statistically reliable. In fact, the “B” cell’s click rate would have to have been at least 3.81% in order for the difference to be reliably significant. However, if you didn’t analyze the results properly you wouldn’t know this.
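As a sketch of what those calculators do under the hood, here is the standard two-proportion z-test applied to the hypothetical numbers above (54 and 57 clicks out of 2,000 sends each). The function and figures are illustrative, not a specific vendor’s tool:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: is the difference in click rates
    between cells A and B statistically significant?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 2.7% vs. 2.85% on 2,000 sends each:
z = two_proportion_z(54, 2000, 57, 2000)
print(round(z, 2), "significant" if abs(z) > 1.96 else "not significant at 95%")
# prints: 0.29 not significant at 95%
```

Consistent with the example, the same function shows that roughly 76 clicks (a 3.8% click rate) would be needed in cell “B” before the difference clears the 1.96 threshold for 95% confidence.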
The other way to maximize your results is to avoid a full-scale A/B test altogether. If the database for your email marketing campaign is large enough (again, calculate the minimum sample size), you can do a different kind of split test. First, split your list 10%/90% (making sure the split is random). Then split the 10% group in half, so you have two small test cells and the remaining 90%.
Deploy your test to the 10% splits, give as much time as possible for activity to occur (twenty-four hours if possible), analyze the results and then deploy the winner to the remaining 90%. That way you’ve done your best to maximize the campaign’s results without going “all in” on a typical full file A/B split.
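The split itself is easy to script. Here is an illustrative sketch; the subscriber list, function name, and fixed seed are made up for the example:

```python
import random

def split_for_test(addresses, test_fraction=0.10, seed=2011):
    """Randomly carve a mailing list into two equal test cells
    (each half of test_fraction) plus a holdout that later
    receives the winning version."""
    pool = list(addresses)
    random.Random(seed).shuffle(pool)   # randomize before splitting
    n_test = int(len(pool) * test_fraction)
    half = n_test // 2
    return pool[:half], pool[half:n_test], pool[n_test:]

subscribers = [f"user{i}@example.com" for i in range(100_000)]
cell_a, cell_b, holdout = split_for_test(subscribers)
print(len(cell_a), len(cell_b), len(holdout))   # prints: 5000 5000 90000
```

Shuffling before slicing is what keeps the cells random; slicing an alphabetized or signup-ordered list directly would bias the test.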
As with gambling, learn the rules, do the math, analyze the data and place your bets. Do it right, and the odds will swing in your favor.
Posted by Bill Leming on September 9th, 2010
In email marketing, there are always a lot of questions about how to judge the performance of an email campaign and what will make it the most successful. These questions have been answered in any number of ways across the industry, but we have seen the same three questions come up again and again since the birth of email marketing.
We’ve come to call them The Big Three:
1. What kind of a response rate should we expect from our list?
2. How often do you think we should send emails to our list?
3. What’s the best time of day/day of the week to send our emails?
This month we’ll tackle question one. Let’s begin by emphatically stating that there are no hard and fast rules regarding the answer to any one of these questions. The answer to question one depends upon how you define “response rate,” how the list was compiled, how it has been used/abused, how relevant the messages have been to the recipients, what performance baseline measurements exist, how many times a day/week/month/year the list has been mailed, what the policy/practice has been regarding subject lines and From addresses, and about 100 other issues too numerous to list.
Currently there are no meaningful benchmarks that can be provided because there are simply too many variables at play. So unless you can definitively and accurately answer the question, “How long is a piece of string?” don’t expect anyone with any amount of integrity to answer what email response rate you should expect beyond, “It depends.” It’s simply not an answer that can be provided on the spot without performing due diligence and running a series of diagnostics.
In the future, measuring response rate will become a bit easier for those using an email service provider that has adopted the eec’s set of standardized metrics, known as the S.A.M.E. Project. It establishes a common language and definition for metrics like response rate, making it easier to benchmark results. SubscriberMail will have fully adopted the standardized metrics by December 2010. Stay tuned for part two!
Posted by Dave McCue on August 25th, 2009
In my quest to max out the storage space of my Hotmail account, I have hundreds of targeted email marketing messages saved from a variety of different sources. With so much email coming in all the time, I thought it might make for a fun exercise to look back over the last few days’ worth of email and point out some highs and lows…
Witty vs Effective:
Nike probably doesn’t need my help when it comes to marketing themselves, but a recent email had the following subject line: “Actually, It Is Rocket Science.”
When it comes to writing subject lines, the temptation to be fun or witty can lead to trouble. In this case, Nike was promoting a new running shoe called the LunarGlide+, but I never would have known it from the subject line. As always, remember that you are writing subject lines for the recipients, not for yourself or others within your company. Of course you’d open that email; that doesn’t mean your subscribers would.
On a Thursday, Sirius | XM emailed me to let me know there would be a special, three-day channel dedicated to Woodstock over that weekend. Why is this timely? Because it was close enough to the weekend that it would still be fresh in recipients’ minds when they hit the road. In November this wouldn’t be nearly as effective, but during summer road trip season, I really like the timing on this one.
You shouldn’t have:
My wife and I bought furniture from The RoomPlace last year, and for some reason they personalize messages by recipients’ last names rather than first names. My friends and my old football coach can call me by my last name, but it seems odd coming from a marketing message. Oh yeah, the last name they use is my wife’s maiden name—just to make it clear that I’m a valued subscriber.
Apple sent me an email promoting a Grand Opening of a new Apple Store in my area. Complete with directions and a t-shirt giveaway, this was a great example of targeting subscribers based on geography to ensure relevance as well as sparing non-local subscribers news that wouldn’t hold much value for them.
If Barnes & Noble has a preference center, there is no way to get to it from their emails. One would really come in handy: just this week I was sent a promotion about lower prices on textbooks and 10% off their selection of children’s books. Considering I’m neither a student nor a father, I wish there were a way to choose which promotions I receive.
Notice that I didn’t entirely discount the merits of any of these messages? Even those with flaws contained elements that the consumer in me could appreciate (i.e. Nike’s emails just look cool). In fact, it’s not often I come across a message that doesn’t have any redeeming qualities. The challenge email marketers face is typically not a full-scale overhaul of their messages, but the more difficult fine-tuning that will address deficiencies. As they say, the devil is in the details.