The Software Side of the Quanser Self-Driving Car Research Studio

Welcome back to the second post in this blog series, covering the development of our Quanser Self-Driving Car Research Studio. My first post looked at the hardware aspects of QCar, the feature vehicle of the studio. Today, I would like to focus on software aspects.

Admittedly, the software side isn’t quite as flashy as talking about the hardware, but stick with me, because the software is at the core of making all the magic happen. (Don’t tell our software guys I said that though, or it might go right to their heads!) So let’s look at some exciting new software additions – and maybe a few things you didn’t know that our software can already do.

QUARC: Bigger on the Inside

At the heart of all our products is our software, called QUARC. Many of our products are focused on teaching or researching various aspects of control theory, so traditionally, we present this using MathWorks MATLAB/Simulink. Simulink allows you to graphically draw control diagrams that look exactly like what you might see in a textbook. QUARC enhances the Simulink library to bring new, real-time functionality – there are over 400 blocks available in QUARC.

QUARC uses Simulink’s code generation capabilities in concert with our custom cross-compilers to automatically build and deploy code to remote targets (the NVIDIA Jetson TX2 in the case of the QCar). Once deployed, QUARC enables Simulink to provide you with real-time data from your vehicle, as well as real-time parameter changes. Finally, and probably most importantly, QUARC provides a level of hardware abstraction that improves and promotes code reusability, but I’ll come back to that in a bit.

Autonomous vehicles span many different aspects of engineering and computer science, and not everyone lives in the MATLAB/Simulink world. But Simulink is just one piece of QUARC. Did you know that QUARC also provides support in Java, C, C++, C#, and VB? It is a very powerful collection of tools and application programming interfaces (APIs). The QUARC codebase has over 3 million lines of code, and that doesn’t even include all of our custom cross-compilers! That is more code than the Space Shuttle, the F-22 Raptor, or the Hubble Space Telescope have!

The “New” Hot Programming Language

If you’re already working in the autonomous vehicle space, then Python is an obvious omission from the list of languages QUARC supports. Despite being almost 30 years old, Python has seen a massive surge in popularity over the last five years, and in artificial intelligence and machine learning it has become the dominant language. That’s why, as a key part of the Self-Driving Car Research Studio project, we are adding Python 3 support to QUARC.

But wait! you cry. What about Python 2.7?! Well, with that version scheduled to sunset on January 1st, 2020, many of the latest APIs only provide Python 3 support. So we opted to move directly to Python 3.

Python on the Radar

QUARC is compartmentalized into many sub-libraries, but there are three in particular that we chose to wrap for Python support: HIL, Stream, and Media:

  • HIL (Hardware-in-the-Loop)
    The HIL API is one of the foundational APIs of QUARC. Remember the hardware abstraction I mentioned as a key feature of QUARC? With our HIL API, you begin by calling our HIL Open function and specifying which piece of hardware you’re talking to. After that, we have function calls to read analog inputs, write PWM outputs, set encoder counts, and so on. So, let’s say you previously wrote an algorithm for your QBot 2e ground robot that uses sensor fusion to combine IMU readings with wheel encoders to estimate its position. Now you want to move that algorithm to your brand-new QCar. All you need to do is change the parameter in your HIL Open that says you are now accessing a QCar instead of a QBot, and the rest of your code stays the same (see the first sketch after this list). QUARC provides you with a unified interface for every target, every platform, and every language.

    Python will be part of our beta release of QUARC 2020, with initial support for the QCar’s Ubuntu on the TX2, the QBot 2e’s Raspbian on the Raspberry Pi, and Windows. That also means you’ll have Python access to all of our current data acquisition hardware supported on those platforms.
  • Stream
    The Stream API is all about communications. If you want to customize your hardware to be specific to your research, you may need access to serial ports, SPI, or I2C. Like the HIL API, the Stream API unifies communications with a protocol-agnostic interface. This means that the TX2 SPI port can be accessed the same way as the high-performance SPI port we provide through the QCar’s custom hardware. Going cross-platform, the same code can then be used to access an SPI port on a QBot. With a few parameter changes, the same functions could talk to an I2C port. If you build a library of code to access custom sensors or other hardware, QUARC helps make your hardware-specific code more portable.

    Outside of the hardware protocols, the Stream API also supports pipes. This feature lets you create communication channels between separate processes running on your QCar with minimal overhead. Of course, you could code everything in a single process with the same language, but this gives you the flexibility to use multiple languages if you want. For example, you could use Python for the AI algorithm, Simulink for the motion controller, and C++ for image processing.

    The final core use of the Stream API is networking with TCP/IP and UDP. In this area, there are plenty of options for Python already. Using communications for control usually means a lot of small but time-sensitive packets (try the “nagle” option!). Using communications for video means moving a lot of data as efficiently as possible – with eight possible video streams on the QCar, that’s gigabits of data per second! The Stream API gives you access to lots of parameters so you can optimize each of your communication streams (see the second sketch after this list).
  • Media
    Speaking of video: with the release of the Self-Driving Car Research Studio and QCars, we will be making our Media API publicly accessible for the first time. Like our other APIs, it offers hardware abstraction and unification. You can access the depth camera on the QCar with the same code that accesses the depth camera on the QBot, even though under the hood these cameras operate very differently. You could plug a webcam into one of the QCar’s USB 3 ports and access it the same way you access one of our high-speed CSI cameras. Our software team optimized the Media API for each of our target platforms. In the case of the QCar, the 2D media functions leverage the ability to DMA the CSI camera data directly to the GPU to manage format conversions. With that, we can efficiently translate the raw camera data into other formats that are directly compatible with the language of your choice, so you can get high-speed or high-resolution video from the 360-degree camera coverage (see the third sketch after this list).
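
To make the portability point concrete, here is a minimal sketch of what reading and writing I/O could look like through the new Python 3 HIL binding. The module, class, and method names below are my assumptions, modeled on the HIL Open / read / write pattern described above – treat them as illustrative until the QUARC 2020 beta ships.

    # Hypothetical Python 3 HIL sketch (module and method names assumed).
    import array
    from quanser.hardware import HIL  # module path assumed

    card = HIL()
    card.open("qbot2e", "0")  # swap "qbot2e" for "qcar"; nothing else changes

    # Read two analog input channels into a preallocated buffer.
    analog_channels = array.array("I", [0, 1])
    voltages = array.array("d", [0.0] * len(analog_channels))
    card.read_analog(analog_channels, len(analog_channels), voltages)

    # Command PWM output 0 (e.g. a drive motor) to a 50% duty cycle.
    pwm_channels = array.array("I", [0])
    duty = array.array("d", [0.5])
    card.write_pwm(pwm_channels, len(pwm_channels), duty)

    card.close()

The thing to notice is the open call: the board type string is the only place the target hardware appears, which is what makes moving an algorithm from the QBot 2e to the QCar a one-parameter change.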
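
In the same spirit, here is a hedged sketch of a Stream client in Python. The names and the URI option syntax are again assumptions on my part; the point is that the transport is selected entirely by the URI string, so the send-and-receive logic does not change between TCP/IP, serial, SPI, or I2C.

    # Hypothetical Python 3 Stream sketch (names and URI options assumed).
    from quanser.communications import Stream  # module path assumed

    # A TCP/IP stream tuned for small, time-sensitive control packets;
    # "nagle=off" stands in for the per-stream options mentioned above.
    uri = "tcpip://192.168.2.10:18000?nagle=off"

    stream = Stream()
    stream.connect(uri, False, 64, 256)  # blocking I/O, small buffers (order assumed)

    command = bytearray(b"\x01\x42")  # some application-defined packet
    stream.send(command, len(command))
    stream.flush()

    reply = bytearray(8)
    received = stream.receive(reply, len(reply))

    stream.shutdown()
    stream.close()

Swap the URI for a serial or I2C address and everything after the connect call stays the same.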
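
Finally, a sketch of what grabbing frames through the Media API might look like from Python. The class and constructor arguments here are illustrative assumptions, since the interface has not been published yet, but the open/read/close pattern, and the idea that one identifier could name a CSI camera, a USB webcam, or a depth stream, follow directly from the abstraction described above.

    # Hypothetical Python 3 Media sketch (class and arguments assumed).
    import numpy as np
    from quanser.multimedia import VideoCapture  # module path assumed

    WIDTH, HEIGHT, FPS = 640, 480, 30
    frame = np.empty((HEIGHT, WIDTH, 3), dtype=np.uint8)  # 8-bit color buffer

    camera = VideoCapture("0", FPS, WIDTH, HEIGHT)  # "0" = first camera
    try:
        for _ in range(FPS * 5):        # grab roughly five seconds of video
            if camera.read(frame):      # fills the NumPy buffer in place
                print("mean pixel intensity:", frame.mean())
    finally:
        camera.close()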

The Robot Elephant in the Room

It’s also time to address the robot elephant in the room: ROS, the Robot Operating System. While it has been around since the late 2000s, its explosive growth began in 2016 with the first alpha version of ROS 2, followed by interest from both Amazon and Microsoft in 2017. Although there are lots of options for robotic middleware, ROS has become a dominant player, with a dedicated branch of the research community developing their work around the ROS framework.

ROS isn’t really an operating system, but rather a collection of tools and APIs for interfacing with hardware at a low level and communicating between heterogeneous devices. So what are we doing with ROS? We’re not quite ready to talk about the details, other than to say it’s coming for the QCar in early 2020! With the addition of our new Python and C APIs to QUARC, you will have all the parts you need to pursue your own development. I promise to bring you more details soon.

More Simulation

A new feature released in QUARC 2019 SP1 back in October is HIL Simulation. This simple yet powerful block lets you simulate any of our DAQs or devices. In the HIL Initialize block, you would change the board identifier from ‘0’ to ‘0@tcpip://localhost:18000’, for example. Your controller would then be talking to a virtual device of your own design instead. When you write an analog output with a HIL Write, it arrives at the HIL Simulation as an analog input. A digital output from the simulation block arrives as a digital input on your HIL Read block, and so forth. This is a great way to seamlessly switch between physical and virtual hardware.
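
The same redirection can be expressed outside of Simulink. Using the assumed Python HIL binding sketched earlier, the only difference from talking to real hardware is the board identifier string:

    # Hypothetical sketch: pointing the HIL API at a simulated device.
    from quanser.hardware import HIL  # module path assumed, as before

    card = HIL()
    # "0" alone would open physical board 0; the URI suffix redirects the
    # same calls to a HIL Simulation listening on localhost, port 18000.
    card.open("qcar", "0@tcpip://localhost:18000")
    # ...reads and writes here are identical in both cases...
    card.close()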

A simulation doesn’t need to be a distinct switch between virtual and physical, though. In fact, simulation and hardware-in-the-loop are more of a spectrum. In the demo we showed earlier this year at ICRA, the QCar communicated with a remote simulation of the vehicle dynamics running on a PC, with a virtual world created in Unreal Engine 4. Data from the virtual RGB and depth cameras was transmitted to the QCar, so all the image processing was done onboard the TX2. This controlled the physical drive motor and steering. The steering command and measured wheel velocity were then returned to the PC to drive the virtual QCar, completing the loop.

Going to the other extreme, the QCar could be driving around the physical world, but you could use augmented reality to inject virtual objects into the real data. We’ll show you more details of our augmented reality experiments and how this can be used with QCar in the coming months.

Coming Up…

At this point, our Self-Driving Car Research Studio is almost finalized. We still have another month or so of validating all the hardware, then we’ll start moving it over to our manufacturing team. Our software development team has done some amazing work to get us up and running, and they’re still adding more features. Our applications team has been hard at work implementing image processing algorithms, testing AI, and creating different driving scenarios. Their goal is to give you a great starting point so you can implement, validate, and publish faster! In the next blog post, we’ll take a look at what they have been up to. Ultimately, our driving motivation (pun intended) is to give you the tools to do your research your way.

Don’t miss future Self-Driving Car Research Studio blog posts, news, and updates. Sign up to be the first one to get them!
