One of my mailing lists was asking questions today about an increase in invitation mailings from Spotify. I’d heard about them recently, so I started digging through my mailbox to see if I’d received one of these invites. I hadn’t, but it clued me into a blog post from early this year that I hadn’t seen before.
That article is full of FUD, and the author quite clearly doesn’t understand what the data he is relying on means. He also doesn’t provide enough information for us to repeat what he did.
But I think his take on the publicly available data is common. There are a lot of people who don’t quite understand what the public data means or how it is collected. We can use his post as a starting point for understanding what publicly available data tells us.
The author chooses 7 different commercial mailers as his examples. He claims the data on these senders will let us evaluate ESPs, but these aren’t ESPs. At best they’re ESP customers, but we don’t know that for sure. He claims that shared IPs mean shared reputation, which is true. But he never shows that these senders are on shared IPs. In fact, I would bet my own reputation on Pizza Hut having dedicated IP addresses.
The author chooses 4 different publicly available reputation services to check the “marketing emails” against. I am assuming he means he checked the sending IP addresses because none of these services let you check emails.
He then claims these 4 measures “give a representation of how an ESP operates. This includes whether it follows best principles and sends authenticated emails, unsubscribes Feed Back loop (FBL) complaints etc.”
Well, no, not even a little bit.
The 4 measures he included are SenderScore, Sender Base, Trusted Source and MxToolbox’s blacklist checker. The first 3 are proprietary scores generated by commercial companies. Sender Base is a proprietary reputation stream run by Cisco/Ironport. Trusted Source is a proprietary reputation evaluation run by McAfee.
In all cases, the scoring formulas are closely guarded secrets, and we don’t know much about how the numbers are generated. There are, though, a few things I’m comfortable saying about them.
Scores reflect information provided by receiving mail servers. These scores are sometimes, but not always, applicable to receivers that use a different filtering system. Likewise, good senders can have poor scores and poor senders can have good scores.
In many of the scores, volume plays an important role. Volume changes, whether up or down, can cause unexpected and transient changes in scores.
Publicly available reputation scores don’t actually tell you that much about the policies of an ESP or the deliverability at a certain ESP. Content is playing a bigger and bigger role in filtering at major ISPs, and good IP reputation scores aren’t sufficient to overcome bad content.
The only thing that actually tells you about delivery rates is actually looking at your delivery rates.
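The arithmetic there is trivial, which is rather the point: the inputs come from your own mail logs, not from someone else’s proprietary score. A minimal sketch (the function name and the counts are mine, purely for illustration):

```python
def delivery_rate(sent: int, bounced: int) -> float:
    """Fraction of attempted messages accepted by receiving servers.

    'sent' and 'bounced' are counts from your own sending logs.
    Note that accepted-but-foldered mail still counts as delivered
    here, which is why inbox placement needs separate measurement.
    """
    if sent == 0:
        raise ValueError("no mail sent")
    return (sent - bounced) / sent

# e.g. 50 bounces out of 10,000 delivery attempts
print(delivery_rate(10_000, 50))  # 0.995
```

No reputation-score lookup appears anywhere in that calculation, which is the problem with the author’s whole approach.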
The other source the author relied on to analyze deliverability is a scan of 100+ blocklists. He points out that some “ESPs” are listed on, and blocked by, those blocklists. He never mentions which ESPs are listed, or which blocklists are listing them. There are a lot of published blocklists that are not very widely used, and many senders, ESP and otherwise, don’t notice or care. Spending the time and energy to get delisted does nothing to improve delivery, so they just ignore the listing.
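Those blocklist scans, by the way, are nothing exotic: a DNSBL check is a DNS lookup anyone can run. A rough sketch of the mechanism (the function names are mine, and zen.spamhaus.org is just one example of a widely used zone): reverse the IPv4 octets, append the list’s zone, and do an A-record query. An answer means listed; NXDOMAIN means not listed.

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL query name: reversed IPv4 octets + list zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """True if the blocklist zone returns an A record for this IP."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN and friends: not listed
        return False

# Checking 1.2.3.4 against one zone queries this name:
print(dnsbl_query_name("1.2.3.4", "zen.spamhaus.org"))
# 4.3.2.1.zen.spamhaus.org
```

Run that against 100+ zones and you’ve reproduced the author’s “scan” — which is exactly why a raw count of listings, with no indication of which lists they are, tells you so little.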
As we’ve demonstrated here recently, even listings on widely used lists are not sufficient to demonstrate poor practices on the part of the sender. Sometimes the blocklists are wrong.
So the author wrote an entire blog post about analyzing deliverability, without actually analyzing deliverability.
And, when he reported the results of his analysis, he left out all the relevant information that would allow us to repeat it. We can’t look at the IP addresses (or the ESPs) he used as samples because he reported neither bit of information. We can’t look at the blocklists those IP addresses (or ESPs) are listed on because he didn’t report those either.
His delivery analysis is full of problems. Tomorrow we’ll look at the errant conclusions he drew from his “analysis.”