I woke up this morning to hear that a neutrino had been caught for speeding. Specifically, it allegedly exceeded the speed of light, although attorneys for the neutrino deny the charge. Now I do not claim quantum physics expertise, but I gather this is big news. As in, upending Einstein’s special theory of relativity news. Granted, it will not impact the way we go about our lives like a surge in oil prices, a drop in tax rates, or an invasion of Cuba might. No one apart from a few geeks at MIT or U of C will lose much sleep over this one.
But what is interesting is what the scientists in Europe did once they discovered this phenomenon: they expressed disbelief in their own findings, and opened up the research methodology and results to the wider scientific community for evaluation. In effect, they exposed themselves to disrobing on an international scale and having their three years of work torn apart by other scientists.
Within the scientific community, this is not uncommon. Scientists during Newton’s time often jealously guarded their mathematical and scientific discoveries, for fear of having a pretender stake a claim on their findings. Today, the notion of sharing and questioning across the community is well established and part of the linear scientific method:
- Define a question
- Observe
- Form a hypothesis
- Test the hypothesis
- Analyze the data
- Interpret and draw conclusions
- Publish the results
- Retest, usually by others outside the initial research team
The community seeks to validate or refute the assertion, as it no doubt will in scrutinizing the little neutrino’s behavior.
Putting a marketing hat on, does the scientific method apply to the development of brands, campaigns, and programs? For the most part it does, but often in a flawed fashion.
- Define a question – this happens a lot, such as ‘who are our best customers’, ‘why are sales declining’, ‘which subject line pulls the best response and conversion rates’, or the myriad other questions that marketers and their agencies/consultants seek to resolve.
- Observe – sometimes this step is what leads to the question in the first place. Declines in sales, response rates, conversion percentages, or whatever other metric is used typically lead to the questions, rather than the other way around.
- Form a hypothesis – here is where marketing sometimes goes astray: subjective beliefs about what the reason should be are fed into the mix, and the further steps in the method are aligned to prove the erroneous hypothesis. This is usually self-preservation (or job protection, more accurately). For example, a decline in sales is more comfortably viewed as a sales force problem than as a failure of marketing to effectively manage the brand’s perceptions, provide timely leads, or deliver good customer service.
- Test – with many companies, the notion of testing and refining is well established. Perhaps the foremost experts are financial services marketers like American Express, who over the years honed the ‘test, learn, refine’ method for direct marketing and applied it to their digital and social marketing efforts. But for many companies, ‘test’ is a dirty word, translated as ‘we don’t know what will work, can’t make a decision, and will delay impact on our overall results’. Companies often don’t think medium term, especially when the CEO is demanding results NOW. Testing just doesn’t fit within that mindset.
- Analyze – if you’re looking for a career in marketing, brush up on your mathematical/statistical skills. Analysis and interpretation of data is a vital component of any campaign. As long as it is done correctly, that is!
- Interpret and draw conclusions – here is where shrewd marketers can warp or skew the analysis to suit their hypothesis. A client I know once had incontrovertible proof from customer research that their loyalty scheme was ineffectual. Yet the millions invested in developing, launching, and promoting the scheme were far too much (aka heads would have rolled) to allow this facet of the analysis to come to the surface. The interpretation was skewed, other results were highlighted, and the loyalty scheme continued.
- Publish the results – publishing is typically internal, and within marketing it falls into one of two categories: a. “yay, we’re great”; or b. move on to the next campaign. Unless the campaign is successful and serves the career purposes of the marketing team, the final results are often either ignored or explained away. Yet in my experience, clever marketing people can take a ‘bad’ campaign and, through judicious spin, make it sound like a success for the company. Occasionally findings are published externally at a conference or seminar. But when was the last time you saw a ‘failed’ campaign as a case study, unless it was swiftly offset by a success later in the presentation? Opening up hypotheses and results to the wider marketing community is not feasible until well after the event in question, lest competitors and critics have a field day.
- Retest – sometimes retesting is done, though not by the same marketing team unless they’re opting for the ‘control’ versus ‘best performing pack/email’ methodology of refinement. ‘Retest’ in marketing circles is typically done by a new CMO or marketing leader, usually under the banner of ‘the previous person screwed it up, but I’ll get it right this time’.
Yet what would be refreshing is a marketing team who are NEVER satisfied with the status quo. Who look for new ideas constantly, no matter the source. Who believe that failing but learning on a program or campaign is sometimes as important as succeeding. And that refining, testing, and retesting should be a way of life – not anathema.
Unfortunately, with the CEO/CFO demanding speed of light results, it’s pretty unlikely.