July 2018
Routing Trend
In the MNE Driving Routing mode, you evaluate routes to identify Major Navigational Errors (MNEs) that
would cause a bad user experience. However, we have observed analysts mistakenly flagging MNE-Against
Traffic for minor misalignments of the blue line.
Per Routing Guidelines Ch. 3.5, a rating of MNE-Against Traffic should only be applied in cases where
a user is instructed to drive the wrong way down a road. The most common examples of Against Traffic
MNEs are:
1. The blue line turns the wrong way onto a one-way street or the wrong way into a traffic circle
2. A two-way street becomes a one-way street and the blue line continues the wrong way on that street
3. A one-way street switches direction and the blue line continues the wrong way on that street
These errors are obvious and severe. In other instances, the blue line aligns with the satellite layer
in a way that may appear to be Against Traffic but is not, in fact, routing the user into oncoming traffic.
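The distinction above can be summarized as a simple decision rule: Against Traffic applies only when the route actually sends the user the wrong way down a one-way segment, never for mere blue-line misalignment. The sketch below is illustrative only; the function name and direction fields are our assumptions, not an actual rating-tool API.

```python
# Hypothetical sketch of the Against Traffic decision rule described
# above. Field names and values are illustrative assumptions.

def is_against_traffic(segment_one_way: bool,
                       allowed_direction: str,
                       travel_direction: str) -> bool:
    """Return True only when the route actually instructs the user to
    drive the wrong way down a one-way segment (an MNE), not for mere
    misalignment of the blue line on the satellite layer."""
    if not segment_one_way:
        # Two-way road: misalignment alone is never Against Traffic.
        return False
    return travel_direction != allowed_direction

# Blue line drawn on the wrong side of an undivided two-way road:
# at most a minor non-MNE misalignment, never Against Traffic.
print(is_against_traffic(False, "eastbound", "westbound"))  # False
# A street becomes one-way and the blue line continues the wrong way:
print(is_against_traffic(True, "eastbound", "westbound"))   # True
```

Note that all three numbered cases above reduce to the second branch: the segment is one-way (or has become one-way) and the instructed travel direction opposes it.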
Example
Let’s take a look at this route in the United States, a country with right-hand traffic, where the user is
heading west. In the Satellite Layer, the blue line falls on the eastbound side of the road, but we can tell
from the Standard Layer that we are not actually instructing the user to drive against traffic. Although the
road is wide, it is not a divided (dual-carriageway) road; it is an undivided (single-carriageway) road, and
it is depicted correctly in the Standard Layer, where the blue line follows the middle of the road. We might
consider this minor misalignment (less than 50 meters) a Non-MNE issue, but it is not an example of
MNE-Against Traffic. Per Routing Guidelines Ch. 3.1.2, minor, unnoteworthy non-MNE issues like this
merit neither a rating action nor a comment explaining the misalignment.
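The guidance above maps a measured misalignment to a rating action. The sketch below is a loose illustration only: the 50-meter figure comes from this example rather than a stated guideline threshold, and the function name and return strings are ours.

```python
# Illustrative sketch of the Ch. 3.1.2 guidance above, assuming the
# road is mapped correctly in the Standard Layer. The 50 m cutoff is
# taken from this example, not from a formal guideline threshold.

def misalignment_action(offset_meters: float) -> str:
    """Suggest a rating action for a blue-line misalignment that is
    not routing the user against traffic."""
    if offset_meters < 50:
        # Minor, unnoteworthy non-MNE: no rating action, no comment.
        return "no action"
    # Larger misalignments may be noteworthy non-MNE issues.
    return "flag as non-MNE issue with comment"

print(misalignment_action(20))  # no action
```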
Example
The query is for [31 Lin], with the user and fresh viewport location both in Saint John, New Brunswick, Canada.
Using the Canada Post official website and other online resources, we confirm that 31 Linton Rd, Saint John, NB,
Canada (which corresponds to Result #4) exists in the real world. None of the other results shows any evidence of
existing.
Result #1: This is the closest suggestion to the fresh viewport. Since this suggestion is closer than the address that
exists (Result #4), do not demote it for distance. Relevance is rated Excellent. Address is rated Incorrect - Address
does not exist. Do not use Business/POI is closed or Does Not Exist for address-type results.
Result #2: This is the second-closest suggestion. Again, this suggestion is closer than the address that exists (Result #4).
Do not demote the relevance of this result for not existing; instead, indicate this in the Address rating by selecting
Incorrect - Address does not exist. Relevance is still rated Excellent.
Result #3: Measuring the distance from the viewport, Result #3 is 9.9 km away, while the address that exists (Result #4)
is 9.2 km away. Relevance is rated Good. Address is rated Incorrect - Address does not exist.
Result #4: This is the valid address, as confirmed from the Canada Post official website. Relevance rating is Excellent,
even if there are other non-existent suggestions closer to the fresh viewport (such as Results #1 and #2). Address is rated
Correct.
Result #5: This result is more than 11.5 km from the fresh viewport. We demote for distance because an address that
exists (Result #4, 9.2 km) is closer. Relevance is rated Good. Address is rated Incorrect - Address does not exist.
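The reasoning for Results #1 through #5 follows one rule: demote Relevance for distance only when an address that exists is closer to the fresh viewport, and record existence problems in the Address rating rather than Relevance. A minimal sketch of that rule, with illustrative function and field names of our own choosing:

```python
# Sketch of the demotion logic walked through above. The function
# name and two-tier Excellent/Good scale are simplifications for
# illustration, not the full guideline rating scale.

def rate_suggestion(distance_km: float, exists: bool,
                    nearest_existing_km: float) -> tuple[str, str]:
    """Return (relevance, address) ratings for one suggestion, given
    the distance of the nearest address that exists."""
    address = "Correct" if exists else "Incorrect - Address does not exist"
    # Demote Relevance for distance only when an address that exists
    # is closer to the fresh viewport than this suggestion.
    relevance = "Good" if distance_km > nearest_existing_km else "Excellent"
    return relevance, address

nearest_existing = 9.2  # Result #4, confirmed via Canada Post

print(rate_suggestion(9.2, True, nearest_existing))   # Result #4: ('Excellent', 'Correct')
print(rate_suggestion(9.9, False, nearest_existing))  # Result #3: demoted to 'Good'
print(rate_suggestion(5.0, False, nearest_existing))  # Results #1/#2: 'Excellent' despite not existing
```

Note how the existence check and the distance check stay independent: a non-existent suggestion closer than Result #4 keeps Excellent relevance, with its problem captured only in the Address rating.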
Trend 2: Impact of Issues with Classification on Name Accuracy Ratings
In Search and Autocomplete, you will need to evaluate the accuracy of POI/Business classification in order to determine
the final Name Accuracy. Not all results will include a classification, and we do not demote when it is not present or is
listed as N/A. When the classification is present, it needs to be appropriate for the result shown, as it will directly
impact the rating of Name Accuracy.
According to the Maps Search Guidelines Ch. 6.3.4. Classification and the Autocomplete Guidelines Ch. 5.3.4.
Classification, the Classification will impact the Rating of Name Accuracy as shown below:
* Refer to Maps Search and Autocomplete Guidelines Ch. 6 and 5 respectively, to review how to rate the Name/
Title correctness of a result.
Examples:
Rating relevance for short query strings is challenging because the user intent is often unclear. The viewport may be small
without any obvious results within or just outside it. In these cases, rate with leniency to avoid excluding results that
the user may find useful, particularly when very few real-world results match the query. Favor prominent results over
lesser-known suggestions. Follow the flowchart in the Autocomplete Guidelines Ch. 4 Relevance as a guide.
Example
The fresh viewport (FVP) covers one square kilometer and is over a small
town in Oregon. The user location is within the FVP. The query string is
[pros]. There are no prominent results within or just outside the FVP that
match the query string. All the suggestions match the query string, so the
focus is on user intent.
Five suggestions are returned outside the FVP.
Prospect St. is a small street but one of the closest results and is therefore
a possible primary intent. However, because it is not prominent, both
Excellent and Good are possible ratings.
ProStyles Salon is one of the closest results. There are two ProStyles
Salons in Medford, but this result is the closer of the two. Relevance is
rated Excellent.