This post is more of a musing on a potential use case for AR, something I'm calling "Digital Fruit". The idea stems from the need for a quick way to distinguish the qualities of something. In nature it is mostly easy to distinguish dangerous from safe. Or rather, once a rule is learned, it's easy to apply that rule to many classes of things. For instance, it is dangerously simple to learn that small berries that are round, shiny, and hard are not for eating, while berries shaped like a blackberry are probably safe. Other examples: don't pet an animal with fangs, and steer clear of brightly colored reptiles. There was some long-forgotten training period in which we were new to life, but once we learned, it was easy to run on autopilot and lazily apply the rule to get the general gist of the expected outcome, be that eating a piece of fruit or interacting with an animal.

Because AR already interfaces with a cluttered space full of information, I propose that any augmented information take a form where 90% of its meaning can be ascertained from shape and color alone, codifying the information as much as possible. That would let the augmented information actually help make intelligent decisions, without having to understand the full genus of the thing. Imagine being able to pare down decisions like "take the train or the bus" just by identifying two shapes and colors, with no reading required. Imagine digital fruit in a HUD. It's a much easier mental lift than display text or a gif any day.
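To make the idea concrete, here is a minimal sketch of what a "digital fruit" encoding might look like in code. Everything here is hypothetical: the `glyph` function, the transit attributes, and the thresholds are invented for illustration, not from any real AR API.

```python
# A sketch of the "digital fruit" idea: collapse an option's key
# qualities into a (shape, color) pair so a glance conveys most of
# the meaning. All names and thresholds below are hypothetical.

def glyph(delay_minutes: int, crowded: bool) -> tuple[str, str]:
    """Map transit conditions to a shape/color 'fruit'.

    Shape encodes the broad class (crowded vs comfortable),
    color encodes severity, mimicking the berry heuristic:
    one learned rule, applied lazily, no reading required.
    """
    shape = "round" if crowded else "blackberry"  # round/shiny signals caution
    if delay_minutes <= 2:
        color = "green"   # safe: board without thinking
    elif delay_minutes <= 10:
        color = "yellow"  # hesitate
    else:
        color = "red"     # avoid
    return shape, color

# A rider comparing train vs bus reads only two glyphs:
print(glyph(1, crowded=False))   # an on-time, comfortable option
print(glyph(15, crowded=True))   # a delayed, crowded option
```

The point of the sketch is that the rider never parses text: two glyphs, two learned rules, and the decision is pared down before conscious reading even starts.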