
In this post, we are experimenting with the ability of major Generative AI models to analyze and deduce information from a VFR approach chart. The French aviation authorities (DGAC) publish and regularly maintain VFR approach charts that help general aviation pilots plan and conduct their approaches safely.

Typically, a pilot needs this information before attempting to approach an airfield. The chart contains useful details such as approach and airfield frequencies, the types of aircraft accepted, field elevation, runway length, pattern altitude, and Visual Reference Points.

Here is an example of a French VFR approach plate for Lannion Côte de Granit Airfield.




In today's experiment, we will try to extract information from this chart using Anthropic's Claude 3.5 Sonnet model.

Prior to writing this blog post, we ran a few experiments uploading the PDF files of the approach charts. The results were mixed, as only the written text was taken into account by the AI. The same applies to OpenAI's GPT-4o.

However, for all visual information, we had more success uploading PNGs directly and asking the models to answer queries based on them. We will demonstrate a few cases in this blog post.


Extracting Visual Approach Briefing Information


We asked Claude 3.5 the following question after uploading the LFRO PNG representation of the approach chart:


"Given this approach chart, can you give me a short briefing about the runway's length, orientation, and slopes, frequencies to contact; field elevations, pattern altitude, entry points, and any specifics that would allow me to initiate my approach in day VFR? I intend to approach the airfield from the south."


Claude responded with the following as shown in the picture.





Claude's response was mostly correct. It recognized the frequencies, entry points, and runway lengths. However, it failed to recognize the pattern orientation and pattern altitude: it claimed that the pattern is right-hand for both runways, whereas only runway 11 has a right-hand pattern. It was also unable to reason about which visual reference point to use when approaching from the south, offering both OL (north) and S (south).
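For readers who would like to reproduce this query programmatically rather than through the chat interface, here is a minimal sketch using Anthropic's Messages API and the official Python SDK. The file name lfro_vac.png and the exact model string are assumptions on our part; any PNG export of the chart works.

```python
# Minimal sketch (not production code): send the LFRO chart as a PNG to
# Claude 3.5 Sonnet through Anthropic's Messages API.
import base64
import anthropic

with open("lfro_vac.png", "rb") as f:  # assumed file name for the chart export
    chart_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": chart_b64}},
            {"type": "text",
             "text": "Given this approach chart, can you give me a short briefing about "
                     "the runway's length, orientation, and slopes, frequencies to contact, "
                     "field elevations, pattern altitude, entry points, and any specifics "
                     "that would allow me to initiate my approach in day VFR? "
                     "I intend to approach the airfield from the south."},
        ],
    }],
)

print(message.content[0].text)  # the model's briefing
```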


We conducted the same experiment with GPT-4o, which also has vision capabilities. The result is very similar, with a few notable differences, as shown below.



GPT-4o did recognize the different left-hand and right-hand patterns depending on the runway in use; however, it was not able to identify the entry points that Claude did. A subtle but important difference.
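The GPT-4o run can be sketched in the same way with OpenAI's Python SDK; again a sketch under the same assumptions, with the prompt abbreviated here.

```python
# Minimal sketch (not production code): the same chart sent to GPT-4o through
# OpenAI's Python SDK, passing the image as a base64 data URL.
import base64
from openai import OpenAI

with open("lfro_vac.png", "rb") as f:  # same assumed file name as above
    chart_b64 = base64.b64encode(f.read()).decode("utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Given this approach chart, give me a short day-VFR approach "
                     "briefing. I intend to approach the airfield from the south."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{chart_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```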


The Need for Human Supervision


For now, our conclusion is that the models, although capable of extracting information from approach charts, are not yet able to provide advisories on their own. The data needs to be curated and verified, especially in a highly regulated and demanding domain like aviation. With Pilot Briefer, we intend to address that problem by designing hybrid systems that get us 80% of the way: combining the "opinions" of different models, curating the data against official and reliable sources, and keeping a human in the loop. This is feasible because the volume of data is limited.
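To make the idea of a hybrid, redundant system more concrete, here is a purely hypothetical sketch of the kind of cross-check we have in mind: fields extracted by two models are compared with each other and with an official source, and anything that does not match is queued for human review. All names and values here are illustrative, not an existing Pilot Briefer API and not data from the LFRO chart.

```python
# Hypothetical sketch: cross-check briefing fields extracted by two models
# against an official source and flag disagreements for human review.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BriefingField:
    name: str                      # e.g. "pattern_altitude"
    claude_value: str
    gpt4o_value: str
    official_value: Optional[str]  # e.g. from the published VAC, when digitised

def needs_human_review(field: BriefingField) -> bool:
    """Auto-accept only when both models agree and match the official value."""
    def norm(v: str) -> str:
        return v.strip().lower()
    models_agree = norm(field.claude_value) == norm(field.gpt4o_value)
    matches_official = (
        field.official_value is not None
        and norm(field.claude_value) == norm(field.official_value)
    )
    return not (models_agree and matches_official)

# Illustrative values only: a disagreement like the one described above is flagged.
field = BriefingField("pattern_altitude", "1300 ft", "1000 ft", "1300 ft")
print(needs_human_review(field))  # True -> sent to a human reviewer
```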

The current state of the art is promising but leaves room for improvement. Having redundant systems is a great architectural pattern, and it should apply to Generative AI-based models as well.







Artificial intelligence has taken various forms over the centuries, from inventions like the 15th-century printing press and the Pascaline through Enigma and the microprocessor, and its definition has continuously evolved. In the last century, growing computational power enabled significant progress in a particularly interesting branch of AI: machine learning.


Reversing the programming paradigm with Machine Learning


Machine learning differs from traditional computer programming, in which computers follow precise, sequential instructions, like a kitchen recipe, to process data. With machine learning, computers learn to achieve an outcome by looking at numerous variations of the same problem and their solutions.

For example, in aviation, we can use machine learning to identify an aircraft manufacturer from a photograph. The AI must be trained with thousands of images of different plane models, each labeled with its manufacturer. Through this training, the AI develops its own logic to classify images by manufacturer. After learning from thousands of images, the AI can accurately determine the manufacturer from an unlabeled photo.
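As a purely illustrative sketch of what such training looks like in code, here is a minimal example that reuses a network pretrained on generic images and teaches it our manufacturer labels. The directory layout (one folder of photos per manufacturer) and all parameters are assumptions for illustration.

```python
# Illustrative sketch: train an image classifier that labels aircraft photos
# by manufacturer, assuming a layout of data/<manufacturer>/<photo>.jpg.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("data", transform=transform)   # one folder per manufacturer
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on generic images, then learn our labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # a few passes over the labelled photos
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```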


Learning by reading and writing: Generative AI


The latest trend, generative AI, combines multiple branches of AI. Its primary purpose is to extract knowledge from patterns and sequences in text and to generate plausible new combinations of that knowledge. Similar to a child learning to distinguish words and colors, this new generation of AI has learned from vast amounts of text and images written and drawn by humans on the internet, and it derives probable solutions to problems through its computational ability.


How Generative AI can be used in General Aviation


Let's use another simple aviation example to illustrate the basics of generative AI: during takeoff, the most likely action after "rotation" is to pitch for a climb attitude and maintain the recommended climb speed as per the manufacturer's specifications. These sequences are found in all flight manuals and aviation literature.


If I instruct ChatGPT as follows:


"During takeoff, after rotation I should..."


The AI will predict the most probable actions after rotation, such as pitching for a climb attitude, performing checks, retracting the landing gear and flaps, and so on, as shown below:





While Generative Artificial Intelligence lacks the capability to fly an aircraft, it excels at predicting the likely text continuation after the phrase "during takeoff, after rotation I should...". By analyzing billions of texts containing these words in various sequences, AI demonstrates its proficiency in engaging in discussions on diverse topics, including aviation.


Fortunately for us pilots, this intelligence, also known as a language model, has "read" and trained on numerous texts and images related to aviation. We will follow up in subsequent posts on other more practical use cases.
