Archive for the ‘SubscriberMail Tips’ Category
Posted by Nic Winters on April 29th, 2013
Nearly every email marketing expert regularly proclaims the benefits of segmenting your list and deploying relevant content to the resulting segments. However, creating, testing, deploying, and tracking all of these versions of your email campaigns can become frustrating because of the time each segmented message demands.
Harland Clarke Digital clients can bypass many of these bumps in the road by utilizing the dynamic content feature within their SubscriberMail accounts. This feature allows you to create, test, and deploy one message containing dynamic content items that include information relevant to each individual segment. At the time of deployment, the dynamic list filter (e.g., subscriber state = Illinois) you assign to each piece of dynamic content automatically assigns the appropriate version of the message to deploy to each individual segment.
The tracking and reporting related to this dynamic message will be simplified. It will include a consolidated roll-up report, giving you an overall summary of all messages and then a breakout of the performance for each individual version.
Contact the Harland Clarke Digital Client Support team at firstname.lastname@example.org for more information regarding how you can utilize dynamic content within your SubscriberMail account.
Posted by Bill Leming on April 18th, 2013
We’re in the final stages of wrapping up a 45-second video testimonial for a large financial institution and thought it might be helpful to compile a list of key considerations if you’re thinking about adding video to your website, your emails, and your MMS mobile messages.
- Map out as precisely as you can what you want to achieve (length, content, authenticity, feel, tone, key takeaways, music, disclaimer requirements, scripted or non-scripted, who is hosting the video, etc.) and define as narrowly as possible your intended audience BEFORE you take another step. Preparing a detailed Creative Brief will define the scope, which will allow you to accurately estimate costs and help keep everyone on target.
- Hire an experienced director; they’re well worth the added cost and will quickly turn what might otherwise be an amateurish endeavor into a professional video. The same is also true for your editor.
- Have a skilled make-up artist on site. We’re all such avid consumers of various professional video formats that we almost take these people for granted. Don’t—they too are worth the additional investment required.
- Speaking of the production crew, review samples of both the camera and sound crew before you hire them. If you’re considering interviews, promotional announcements or testimonials, ask to see some recent examples. Like all things, some are better than others and some have particular fortes.
- Carefully interview your talent pool before you ask anyone to participate. Beyond the obvious qualities of being photogenic and having a non-abrasive speaking voice, a face-to-face interview may reveal traits and/or mannerisms that do not translate well to the screen and might not be apparent via a phone conversation.
- Send the editor any and all client branding guidelines (documents or online resources) as well as any client-specific font requirements before production ever begins. It will save you both time and money.
- Select an appropriate location that reflects the nature of the individual you’re interviewing or a site that seems natural for the individual providing the testimonial.
- Unless you’re particularly skilled and adept in the video world, seriously consider using an experienced production management team such as Harland Clarke Digital.
Like so many things that on the surface seem relatively easy, there’s a whole host of production details that are critically important to a successful video. This list includes talent waivers, disclaimer copy, honorariums, transcription of the final cut for legal review, and many others, all of which need to be managed if you’re going to be successful and stay on budget. Call us today to discuss how we might best work with you on your next video.
Posted by Nic Winters on January 28th, 2013
Harland Clarke Digital provides all SubscriberMail users with a standard page where recipients can update their email address or opt-out from future mailings. Clients can work with us to develop custom pages for this purpose, but here is one quick update you can make on your own that will allow you to brand your default page.
Within your SubscriberMail account, you have the ability to use our Add a Logo feature – a simple file selection and import screen that allows you to identify an image you would like to be featured above our standard unsubscribe verbiage. You can import your logo or a banner image that matches the heading on your homepage.
Not only will this image be displayed on the unsubscribe/edit page, it will also automatically appear above content on other pages we host for you. This could include “read more” pages (if you have read more links coded into your SubscriberMail email template) and subscription confirmation pages (if you are utilizing SubscriberMail opt-in code on your website).
Contact the Harland Clarke Digital Client Support team at email@example.com for more information.
Posted by admin on January 23rd, 2012
Reaching your Gmail subscriber’s inbox is critical. Even more important is that your message renders the way you want it to. All of it! If your HTML is more than 102 kilobytes, your email may be cut off by Gmail in mid-sentence. As an email marketer you may focus on the top half of your message, but at the bottom of your message are the tracking image used to record Opens/Renders and the unsubscribe link you need to be CAN-SPAM compliant.
Gmail will automatically clip a message if the total size exceeds 102 kilobytes. Users will see a [Message Clipped] View Entire Message link in order to download the rest of your message (see screenshot below). In Gmail’s smart phone and tablet apps, the same rules generally apply.
To avoid this, keep your HTML code short by removing extra returns, comments, and unnecessary attributes and styles. Applications like Outlook and Apple Mail will show you the size of a message if you’re looking for ways to test. You can also check the file size of your original HTML file directly.
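If you want to automate the check described above, here is a minimal sketch in Python. It assumes your message HTML is available as a string; the 102-kilobyte threshold is the Gmail clipping figure cited earlier.

```python
GMAIL_CLIP_LIMIT = 102 * 1024  # Gmail clips messages whose HTML exceeds ~102 KB


def check_gmail_size(html):
    """Return (byte size of the HTML, whether it fits under Gmail's clip limit)."""
    size = len(html.encode("utf-8"))  # measure encoded bytes, not characters
    return size, size <= GMAIL_CLIP_LIMIT


# Example: a short message easily fits; a bloated one would be clipped
size, fits = check_gmail_size("<p>Hello, subscriber!</p>")
```

Running a check like this as part of your pre-deployment testing catches an oversized message before Gmail’s clipping hides your unsubscribe link.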
Aside from the HTML code, it is also recommended that you save your images in an optimized format. Recipients should not have to wait for the images to render on their desktop or smart phone.
Continue to test how your messages render. It is critical that your message renders properly in Gmail to avoid losing the unsubscribe link, the tracking image for Opens/Renders, and any content that falls beyond the 102-kilobyte cutoff.
Posted by Rob Ropars on August 26th, 2011
We’ve all heard that if you’re in marketing, in particular email marketing, you should constantly be testing to maximize results. The most common test mentioned is the ubiquitous “A/B” split test, meaning a 50/50 list split to test one variable against another (graphics, copy, offer, layout, list, time of day, day of week, etc.).
But is an A/B test all you can or should do? If you have only a few thousand or fewer emails to work with, an A/B test may be all you can do to ensure statistically reliable results. However, if your list is too small, an A/B test might not make any sense. For example, if you only have a few hundred email addresses, splitting the list and conducting a single test will tell you nothing statistically beyond directional information. Instead, you may need to replicate the test over time, aggregate the results, and analyze your collective data over a longer period.
The first consideration is to quantify how many email addresses you need to test to ensure you have a representative sample and, more importantly, to ensure the results are reliable. There is a lot of math and science behind this topic, and fortunately many math/science/statistics sites offer free online sample-size calculators.
You must set up the test(s) correctly (with sufficient sample sizes and assumed response rates) on the front end to ensure that results on the back end are reliable, meaning with a confidence level you’re comfortable with (we recommend a 95% confidence level where possible). Again, there are online resources, such as significance calculators, to assist. The key is to avoid the common mistake of merely looking at results and assuming winners/losers based on seemingly different response rates.
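The calculators mentioned above typically apply the standard formula for estimating a proportion. As a hedged sketch of what they compute, here is the minimum sample size needed to measure a response rate to within a chosen margin of error; the example rates are hypothetical, and z = 1.96 corresponds to the 95% confidence level recommended above.

```python
import math


def min_sample_size(p, margin, z=1.96):
    """Minimum sample size to estimate a proportion p within +/- margin,
    using the normal approximation (z = 1.96 for ~95% confidence)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)


# Hypothetical example: expecting roughly a 3% click rate,
# measured to within +/- 0.5 percentage points at 95% confidence
n = min_sample_size(0.03, 0.005)
```

Note how quickly the required sample grows as the margin of error shrinks; this is why very small lists can only yield directional results.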
Before testing, you have to identify the goal or the question you’re trying to answer. We recommend that you actually write these down and then, as briefly and concisely as possible, describe the yardsticks you will use to determine your winner. As form follows function, the goals/objectives of the test, coupled with the means to measure results, should help drive copy, graphics, and/or layout to ensure the messages are properly structured and focused on whatever question you’re trying to answer.
Let’s say your goal is a higher click rate, and after an A/B test you find “A” has a 2.7% CTR and “B” has 2.85%. It is a common mistake to use subtraction and declare that “B” won by only 0.15%, which could lead you down the path of thinking it wasn’t a significant result (i.e., a virtual “tie”). Or maybe you routinely just pick the higher percentage as the winner and run with that. Using a proper percent increase/decrease calculation, we find this is actually a 5.56% increase from “A” to “B.”
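The relative-change calculation above can be sketched in a couple of lines, using the CTR figures from the example:

```python
def percent_change(a, b):
    """Relative percent change from a to b: (b - a) / a * 100."""
    return (b - a) / a * 100


# CTRs from the example above: A = 2.7%, B = 2.85%
lift = percent_change(2.7, 2.85)  # ~5.56% relative increase, not 0.15%
```

The point is that 0.15 is a difference in percentage points, while 5.56% is the relative lift, and the latter is what matters when comparing versions.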
That difference, however, may or may not be statistically significant, but as you can see it’s a much larger increase than originally assumed. To determine whether the results are statistically significant, use one of the calculators, plug in each version’s list size and the click percentage (or open percentage, or conversion rate, etc., depending on the key metric you’re analyzing), and it will instantly tell you whether the difference is enough to be reliable (at a 95% confidence level).
In this example, let’s pretend I sent “A” and “B” to a random 2,000 people each. The calculations indicate that this would not be enough of a difference to be statistically reliable. In fact, the “B” cell’s click rate would have to have been at least 3.81% in order for the difference to be reliably significant. However, if you didn’t analyze the results properly you wouldn’t know this.
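For readers who want to see what those significance calculators do under the hood, here is a minimal sketch of a pooled two-proportion z-test applied to the example above (2,000 recipients per cell; 2.7% of 2,000 is 54 clicks and 2.85% is 57). This is one standard method such calculators use, not necessarily the exact one any particular site implements.

```python
import math


def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test; |z| > 1.96 means significant at ~95%."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se


# A: 54 clicks of 2,000 (2.7%); B: 57 clicks of 2,000 (2.85%)
z = two_proportion_z(54, 2000, 57, 2000)
# |z| is well under 1.96, so this difference is not significant at 95% confidence
```

This matches the conclusion in the text: at 2,000 recipients per cell, a 2.7% vs. 2.85% difference is not reliable.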
The other way to maximize your results is to avoid doing a full-scale A/B test. If your database for an email marketing campaign is large enough (again, calculate the minimum sample size), you can do a different kind of split test. First, split your list 10%/90% (ensuring it’s random). Then split the 10% group in half so you have two small splits and the remaining 90%.
Deploy your test to the 10% splits, give as much time as possible for activity to occur (twenty-four hours if possible), analyze the results and then deploy the winner to the remaining 90%. That way you’ve done your best to maximize the campaign’s results without going “all in” on a typical full file A/B split.
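The 10%/90% split described above can be sketched as follows. The address list is hypothetical, and the seed is only there to make the random split repeatable:

```python
import random


def split_for_test(addresses, test_fraction=0.10, seed=None):
    """Randomly carve a list into two equal test cells plus a holdout.

    Returns (cell_a, cell_b, holdout): the two test cells together make up
    test_fraction of the list; the holdout gets the winning version later.
    """
    rng = random.Random(seed)
    shuffled = list(addresses)
    rng.shuffle(shuffled)  # randomize before splitting, as the text advises
    n_test = int(len(shuffled) * test_fraction)
    half = n_test // 2
    return shuffled[:half], shuffled[half:n_test], shuffled[n_test:]


# Hypothetical list of 1,000 subscribers
subscribers = [f"user{i}@example.com" for i in range(1000)]
cell_a, cell_b, holdout = split_for_test(subscribers, seed=42)
# 50 addresses in each test cell, 900 in the holdout
```

After deploying to the two cells and waiting for activity, the winning version goes to the holdout, as described above.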
As with gambling, learn the rules, do the math, analyze the data and place your bets. Do it right, and the odds will swing in your favor.