Here’s why we’ve moved away from the 100-point scale
There has been considerable discussion over the years regarding the value of the 100-point scale for rating wines.
When the scale was first introduced and popularized by Robert Parker, it did make sense. The 100-point scale works when there is only one person reviewing all the wines. In that context, it is quite clear (or at least should be) that when the reviewer scores one wine 94 points and another 87 points, the reviewer believes the higher-scoring wine is the better-quality wine. With the inclusion of price, the reader is also able to distinguish which wines the reviewer believes provide greater value than others. For example, given a $100 wine that scored 91 points and a $14 wine that scored 88 points, the $14 wine should generate greater interest because it represents, in the opinion of the reviewer, a tremendous value and is affordable to a greater number of wine consumers.
But when there are multiple reviewers, the 100-point scale loses its context, even when the reviewers are experienced and regularly taste and are familiar with a wide range of wines. Part of the issue is that one reviewer’s 90 points is another reviewer’s 86 points. Both may believe that the wine is excellent, but where the wine falls on each reviewer’s personal scale may differ. This can cause considerable confusion for readers, given that both reviewers may have the exact same opinion of the wine’s quality, even though their scores seem to indicate otherwise.
The scoring range of reviewers also appears to be narrowing, making it more difficult to distinguish between well-made and not-so-well-made wines. Many reviewers, it seems, score most wines between 86 and 92 points. This crowding, in my opinion, diminishes the value of reviews to the reader.
Another issue with the 100-point scale is that wines that score below 90 points are often dismissed as inferior. There once was a time when a wine scoring 80 points was considered good quality. But, today, a wine scoring 80 points is perceived as undrinkable. Reviewers, readers, retailers and producers have all contributed to this vicious cycle.
Producers, understandably, use 90+ point scores to help market their wines. This collaterally raises the profile of the reviewer and publication, often resulting, subconsciously or not, in score escalation. Scores continue to climb, with an inordinate number of wines receiving 97+ points. In fact, given the penchant of some reviewers to anoint too many wines with their 100-point oil, it seems perfection is not that uncommon. Readers, as a result, are led to believe that wines scoring under 90 points are not worth their attention.
Complicating matters, many readers only look at the score, dismissing the reviewer's description of the wine and losing sight of whether, stylistically, the wine is something they may actually enjoy. And retailers often take the easy way out by posting high scores to sell wines rather than engaging and educating their clients to determine which wines may actually appeal to them. Speaking from personal experience, as someone who owned a retail wine shop for more than 20 years, treating each client as an individual helps not only in evolving and broadening each client's palate, but also in opening their minds to not-so-well-known wines of great quality and value that they may, and often do, fall in love with. By taking the latter approach, wine sellers have the ability to elevate wine culture in their community, which allows all sectors of the wine industry to benefit. Everyone in the industry needs to accept that not every wine, regardless of a high score, will appeal to every wine drinker's taste.
Perhaps the most controversial issue caused by the chase for points is producers making wines in a certain style to appeal to particular reviewers in hopes of receiving higher scores. The so-called Parker effect often results in wines losing their sense of place and becoming formulaic, shifting the focus to the points received rather than growing balanced, terroir-based wines.
So, what do we do? I believe that there isn’t one, unique solution to this issue. However, it is important that we put the focus back on the wine as opposed to the score the wine receives. It’s also important to understand that not all reviewers will agree on a particular bottle, and that’s okay. Even experienced tasters will have a certain level of subjectivity based on their expectations, views, experiences and influences. Part of the beauty of wine is its diversity. There may well be wines that all can agree are great quality (or not), but without some divergence in views, we would likely exist in a sea of homogeneous mediocrity.
It's time for reviewers to put the focus back on the wine and write reviews based on a wine's varietal character, representation of style and/or region, balance, and price-quality ratio, and to expressly indicate whether they believe the wine is exceptional, recommended, worth a try or, constructively, not.
And readers need to focus on how the wine is described and who is reviewing the wine. Follow reviewers whose taste appears to match yours. By reading the reviews (as opposed to just looking at the scores) and getting to understand the tastes of various reviewers, you will likely discover a wider range of wines that appeal to you versus pigeonholing yourself into wines based on criteria having nothing to do with your preferences. Focus on price point and style. Learn to trust your own palate and make it less about the numbers — doing so will open doors to discovery.
Let’s cultivate wine lovers instead of point seekers.