I recently made a custom but straightforward GUI application using SDL and C++. It doesn’t have a lot of moving parts, just a few numbers changing on button press, so I expected it to be quite fast. On my old Intel i3 Ubuntu laptop it is, but on the Linux-based microcontroller boards I developed it for, not so much. I tried it on two different controller boards: the ever-classic Raspberry Pi 3B+ and a HummingBoard Edge, which sports an i.MX6 dual-core processor. On the Raspberry Pi, running my application would max out all 4 cores and overheat the board in a matter of minutes. The HummingBoard fared a little better; the heat sink kept it from overheating, but both of its cores were maxed. In both cases the program ran extremely slowly, producing a single new frame every 2-3 seconds.
Now my initial reaction was that something was wrong with my code, so I went back and ran one of Lazy Foo’s tutorials on the Pi (http://lazyfoo.net/tutorials/SDL/24_calculating_frame_rate/index.php). A simple frame counter maxed out 2 of the Pi’s 4 cores and hit a maximum frame rate of about 40.
Later I found out that there is an option on the newer Pi to enable hardware graphics acceleration, and sure enough it ran my GUI program at 60fps using a fraction of a single core. Of course enabling acceleration should speed things up, but should the difference really be that drastic? This got me thinking back to when I was reading up on SDL Textures and how they are stored in VRAM and interact directly with the graphics processor. What exactly happens with Textures when you have no hardware graphics acceleration? The HummingBoard and my first test on the Pi had no graphics acceleration available; is this why my performance seems so ridiculously bad? If that is the case, is there any fix besides rewriting my whole program using Surfaces?
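For reference, here is a minimal sketch (assuming SDL2 and its SDL_GetRendererInfo call) of the kind of check I could run to see which backend SDL actually picked on each board; the window title and dimensions are just placeholders:

```cpp
#include <SDL.h>
#include <cstdio>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::printf("SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_Window* win = SDL_CreateWindow("renderer check",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        640, 480, SDL_WINDOW_SHOWN);

    // Ask for a hardware-accelerated renderer; SDL may still hand back
    // a software renderer if no usable GPU driver is available.
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    SDL_RendererInfo info;
    if (ren != nullptr && SDL_GetRendererInfo(ren, &info) == 0) {
        std::printf("renderer:    %s\n", info.name);
        std::printf("accelerated: %s\n",
            (info.flags & SDL_RENDERER_ACCELERATED) ? "yes" : "no");
        std::printf("software:    %s\n",
            (info.flags & SDL_RENDERER_SOFTWARE) ? "yes" : "no");
    }

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

My guess is that on the boards without acceleration this would report a software renderer, meaning every Texture operation is being done on the CPU rather than in VRAM, but I would appreciate confirmation on whether that explains the slowdown.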