Properly doing surface/texture code and structure

I’m creating my first major application with SDL after making multiple small ones. Right now I want to make sure I set myself up properly for the future so I don’t run into problems, but I’ll describe the main one I’m running into and hopefully someone can shed some light on the issue:

Right now my game is a tile-based one, so I have a bunch of static textures that never really need modifying. Drawing a bunch of them on screen in fullscreen accelerated mode yields 1000fps+, which is awesome. Note that I have read the article on why FPS numbers are not the end-all-be-all because of how FPS is measured (so I understand a drop from 1000 to 500 is not as drastic as 30 to 15, for example).

The performance hit I’m getting is coming from my console text rendering. Right now let’s say we have 10 lines being rendered from a stored log of messages. Every frame (yes, this is a bad idea on paper; I know) I use TTF to create a surface, convert it to a texture with SDL_CreateTextureFromSurface(), and then push that to the renderer. Creating a texture from a surface is quite expensive and is probably causing the major performance hit. I might be doing it wrong (I’m freeing everything and have no memory leaks), but I don’t understand how creating so many strings causes such a significant performance hit. Maybe creating strings is just expensive in general, but this leads me to a few questions:

  1. Since my game is heavily ‘sprite’ based, would it be better to:
  • Stick with surfaces and just do SDL_UpdateWindowSurface()?
  • Do textures by writing everything to a ‘screen’ surface and push that as a texture at the end?
  • Cache as much as possible to textures and strictly do textures?
  2. My large fps drop is coming from the expensive operation of SDL_CreateTextureFromSurface() when attempting to take console log text and render it in the console every frame. What is the best way to handle this? Some thoughts I have are:
  • Keep everything as surfaces, write them to one surface, and push that to a texture at the end
  • Convert strings to a texture and only create new textures when new strings are needed for rendering (so if we’re displaying lines 10-20, then cache lines 10-20 as textures and just draw those, and only update when we want to see 11-21 or such, so only 10 textures are in memory at a time and creation is only done when there’s an update)
  • Compose an alphabet of textures, and then just dynamically render the strings letter by letter each frame
    *Note that I am running SDL_CreateTextureFromSurface() every frame for x number of strings (create surface with TTF, surface => texture, rendercopy); see the sketch after this list
  3. Are surfaces better with software and textures better with hardware/accelerated? Or is it best to always go textures now no matter what?

  4. Should I keep the surfaces around in memory with the textures? I don’t know if it’s easier to access pixel data that way instead of constantly polling the texture

  5. Would there be any reason to have a texture that is ‘streaming’? This may be answered previously, but if there is, why would one use that rather than doing it all in surfaces and pushing it to the renderer at the very end?
    How expensive is making a texture from a large surface? Like should I expect a significant processing power hit from, let’s say, converting a final surface that’s 800x600 to a texture before pushing?

  6. Slightly off topic: Is there a need to use SDL_Delay() at all? Or should I use SDL_Delay(1) at least once after a frame is rendered to possibly not choke the operating system?
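
For reference, the per-frame path from question 2 currently looks roughly like this (just a sketch; font, renderer, log_lines, visible_lines and line_height stand in for my actual objects, and TTF_RenderText_Blended stands in for whichever TTF render call I end up using):

/* every frame, for every visible console line (this is the expensive part) */
for (int i = 0; i < visible_lines; ++i) {
    SDL_Color white = { 255, 255, 255, 255 };
    SDL_Surface *line = TTF_RenderText_Blended(font, log_lines[i], white);
    SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, line); /* expensive */
    SDL_Rect dst = { 8, 8 + i * line_height, line->w, line->h };
    SDL_RenderCopy(renderer, tex, NULL, &dst);
    SDL_FreeSurface(line);
    SDL_DestroyTexture(tex);
}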

Any input or help is greatly appreciated :)

2013/9/28, NuclearDonkey:

  5. Would there be any reason to have a texture that is ‘streaming’? This may
    be answered previously, but if there is, why would one use that rather than
    doing it all in surfaces and pushing it to the renderer at the very end?

Yes, streaming textures exist exclusively for this reason: textures that
get updated constantly. Of course, the idea is that you do all the
changes CPU-side and then upload the final result to the texture. It’s
just that streaming textures are a lot faster for this purpose.
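
Roughly, the pattern looks like this (just a sketch; console_surface and
console_rect stand in for whatever you actually keep around, and I’m
assuming the surface uses a 4-byte-per-pixel format that matches the
texture):

/* created once */
SDL_Texture *console_tex = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STREAMING,
    console_surface->w, console_surface->h);

/* whenever the CPU-side pixels change */
void *pixels;
int pitch;
if (SDL_LockTexture(console_tex, NULL, &pixels, &pitch) == 0) {
    for (int y = 0; y < console_surface->h; ++y) {
        /* copy row by row, since the texture pitch may differ */
        memcpy((Uint8 *)pixels + y * pitch,
               (Uint8 *)console_surface->pixels + y * console_surface->pitch,
               console_surface->w * 4);
    }
    SDL_UnlockTexture(console_tex);
}

/* every frame */
SDL_RenderCopy(renderer, console_tex, NULL, &console_rect);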

How expensive is making a texture from a large surface? Like should I expect
a significant processing power hit from, let’s say, converting a final surface
that’s 800x600 to a texture before pushing?

That’s about 1.8MB per conversion (800 × 600 × 4 bytes), so doing it every
frame is almost 110MB per second (assuming 60FPS)… ouch.

  6. Slightly off topic: Is there a need to use SDL_Delay() at all? Or should
    I use SDL_Delay(1) at least once after a frame is rendered to possibly not
    choke the operating system?

Keep processing events every frame, that’s how the operating system
knows that the program is not stuck (and will adapt the scheduler
accordingly).
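
In other words, drain the event queue once per frame even if you ignore
most of it (a sketch; ‘running’ is whatever flag your main loop already
uses):

SDL_Event event;
while (SDL_PollEvent(&event)) {
    if (event.type == SDL_QUIT)
        running = 0;
    /* handle anything else you care about here */
}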


  2. My large fps drop is coming from the expensive operation of
    SDL_CreateTextureFromSurface() when attempting to take console log text and
    render it in the console every frame. What is the best way to handle this?
    Some thoughts I have are:
  • Compose an alphabet of textures, and then just dynamically render the
    strings letter by letter each frame
    *Note that I am running SDL_CreateTextureFromSurface() every frame for x
    number of strings (create surface with TTF, surface => texture, rendercopy)

This is what I’d suggest. Depending on the design of your tile engine,
it might work well as a basis for this sort of system (note: I’d
suggest placing each letter in the same texture, so that if the entire
texture was displayed at once it’d look like a list of letters).

If you actually do this, then I’d definitely suggest keeping the
result in a different file than everything else, so that you’ll have
an easier time refining and reusing your “textured console” in the
future.
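
As a rough sketch of what I mean (assumptions: printable ASCII only, a
monospace font, and a renderer that supports render targets; the
ConsoleFont name and helper functions are mine, not anything from SDL):

#include <SDL.h>
#include <SDL_ttf.h>

#define FIRST_GLYPH 32   /* ' ' */
#define GLYPH_COUNT 95   /* ' ' .. '~' */

typedef struct {
    SDL_Texture *atlas;  /* one wide strip, one cell per glyph */
    int glyph_w, glyph_h;
} ConsoleFont;

ConsoleFont console_font_create(SDL_Renderer *renderer, TTF_Font *font)
{
    ConsoleFont cf;
    SDL_Color white = { 255, 255, 255, 255 };

    TTF_GlyphMetrics(font, 'M', NULL, NULL, NULL, NULL, &cf.glyph_w); /* advance */
    cf.glyph_h = TTF_FontLineSkip(font);

    cf.atlas = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                 SDL_TEXTUREACCESS_TARGET,
                                 cf.glyph_w * GLYPH_COUNT, cf.glyph_h);
    SDL_SetTextureBlendMode(cf.atlas, SDL_BLENDMODE_BLEND);

    /* draw every glyph into the atlas once, up front */
    SDL_SetRenderTarget(renderer, cf.atlas);
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
    SDL_RenderClear(renderer);
    for (int i = 0; i < GLYPH_COUNT; ++i) {
        SDL_Surface *s = TTF_RenderGlyph_Blended(font, (Uint16)(FIRST_GLYPH + i), white);
        if (!s)
            continue;
        SDL_Texture *t = SDL_CreateTextureFromSurface(renderer, s);
        SDL_Rect dst = { i * cf.glyph_w, 0, s->w, s->h };
        SDL_RenderCopy(renderer, t, NULL, &dst);
        SDL_DestroyTexture(t);
        SDL_FreeSurface(s);
    }
    SDL_SetRenderTarget(renderer, NULL);
    return cf;
}

/* per frame: one RenderCopy per character, no surface or texture creation */
void console_font_draw(SDL_Renderer *renderer, const ConsoleFont *cf,
                       const char *text, int x, int y)
{
    for (; *text; ++text, x += cf->glyph_w) {
        if (*text < FIRST_GLYPH || *text > '~')
            continue;
        SDL_Rect src = { (*text - FIRST_GLYPH) * cf->glyph_w, 0, cf->glyph_w, cf->glyph_h };
        SDL_Rect dst = { x, y, cf->glyph_w, cf->glyph_h };
        SDL_RenderCopy(renderer, cf->atlas, &src, &dst);
    }
}

You build the atlas once after opening the font (and again only if the
font or size changes); after that a console line costs one RenderCopy
per character instead of a surface-to-texture upload per string.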

  • Convert strings to a texture and only create new textures when new strings
    are needed for rendering (so if we’re displaying lines 10-20, then cache
    lines 10-20 as textures and just draw those, and only update when we want to
    see 11-21 or such, so only 10 textures are in memory at a time and creation
    is only done when there’s an update)

For SDL I probably wouldn’t do this, with one exception: Ryan has
mentioned a method of using a shader script to convert an 8-bit
paletted image into a 32-bit “true color” image. I haven’t ever spent
the time to figure out if this is possible (or how to do it if it is),
but if you’ve used shaders before then you might try using one to
treat a one-dimensional image as a reference into an “alphabet
texture”.

Regrettably, I think this may be beyond the current SDL2 API (there was
a recent conversation about some sort of buffering of the render
state), so this probably belongs firmly in the “for future
consideration” category. This is also certainly more complex, but if
you do keep the text display code as a separate piece of code from all
of the other rendering, then this version should be able to use the
same interface as the normal version, and thus would be a relatively
straightforward path for a future upgrade.

  3. Are surfaces better with software and textures better with
    hardware/accelerated? Or is it best to always go textures now no matter
    what?

In software renderers, textures are implemented AS surfaces, so little
difference there. For hardware/accelerated, textures will usually be
better because:

  1. They’re already on the graphics card, and thus can use a
    hopefully-faster local bus on the card, instead of having to go
    through both that local bus AND the motherboard bus;
  2. GPUs stereotypically have a large number of somewhat simplified
    processor cores that act as an array, instead of 1 to 8 cores that act
    independently. Thus, anything that the graphics cores have the
    capability to handle (note: because they’re special-purpose, they
    apparently have limits on conditionals), AND which can be broken down
    into a set of mostly identical procedures (remember: limited
    branching) is something that the GPU will automatically be faster at,
    because while the CPU could do one more complex version, the GPU could
    do 8 (or 800) simpler versions.

So, for software it mostly doesn’t matter. For hardware, it can
potentially be a massive difference, if you can keep the code simple
enough.

  4. Should I keep the surfaces around in memory with the textures? I don’t
    know if it’s easier to access pixel data that way instead of constantly
    polling the texture

If you want to occasionally (or often) read the pixel data, and you
AREN’T going to be modifying it, then yes, keep a copy of the surface
in the same location as you keep the texture, so that you can avoid
transferring data back and forth between main memory and the graphics
card. If you’re going to be modifying either of them then you’ll need
to sync them, which will reduce the effectiveness of this buffering
technique (though if you only modify small portions, or read even twice
as often as you modify, then this can potentially still be a good idea).
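
Concretely, something like this pairing (just a sketch; the CachedImage
name is mine, and SDL_LoadBMP stands in for however you actually load
your images):

typedef struct {
    SDL_Surface *surface;  /* CPU-side copy, for reading pixels */
    SDL_Texture *texture;  /* GPU-side copy, for rendering */
} CachedImage;

CachedImage cached_image_load(SDL_Renderer *renderer, const char *bmp_path)
{
    CachedImage img = { NULL, NULL };
    img.surface = SDL_LoadBMP(bmp_path);
    if (img.surface)
        img.texture = SDL_CreateTextureFromSurface(renderer, img.surface);
    return img;
}

/* read pixels via img.surface->pixels, render via img.texture;
   if you ever modify one of them, remember to sync the other */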

  6. Slightly off topic: Is there a need to use SDL_Delay() at all? Or should
    I use SDL_Delay(1) at least once after a frame is rendered to possibly not
    choke the operating system?

I’d suggest using SDL_Delay() in conjunction with some sort of time
function (possibly the ordinary C/C++ functions, or SDL_GetTicks()) so
that you don’t wind up wasting time running at 70 fps instead of 60;
something like the sketch below. That having been said, I recall there
being some minimum time where SDL just deals with it by busy-waiting,
so in that case you won’t be able to gain any extra favor from the
scheduling algorithms with this.


  1. It is just better to use SDL_Textures all around, but you could do
    option (b) if you really wanted to.
  2. Use bitmap fonts (option c).
  3. Surfaces are better in software, yes, and textures in software are really
    surfaces behind the scenes. I would architect my engine to use
    SDL_Textures. That means not doing the stuff that is slow, such as
    calling SDL_CreateTextureFromSurface() multiple times per frame.
  4. You can’t poll textures in SDL (well, you can if you are using the
    Direct3D renderer).
  5. A streaming texture can be locked and you can update it with new pixel
    data. Other textures cannot be locked.
  6. Using SDL_Delay() can be a good idea if your game has time to spare.



Pallav Nawani
IronCode Gaming Private Limited
Website: http://www.ironcode.com
Twitter: http://twitter.com/Ironcode_Gaming
Facebook: http://www.facebook.com/Ironcode.Gaming
Mobile: 9997478768

The performance hit I’m getting is coming from my console text rendering. Right now let’s say we have 10 lines being rendered from a stored log of messages. Every frame (yes, this is a bad idea on paper; I know) I use TTF to create a surface, convert it to a texture with SDL_CreateTextureFromSurface(), and then push that to the renderer. Creating a texture from a surface is quite expensive and is probably causing the major performance hit. I might be doing it wrong (I’m freeing everything and have no memory leaks), but I don’t understand how creating so many strings causes such a significant performance hit. Maybe creating strings is just expensive in general, but this leads me to a few questions:

I’m assuming this is a Quake-style console. Given that, I’ve got
some thoughts.

  1. Create the texture once and recreate it as rarely as possible.
    You need only ever re-create it when the size of the displayed
    console gets bigger than what you can actually display with the
    texture you’ve got. If your game runs at 800x600 and your console is
    only allowed to be half-height, 800x300 is needed. If the user
    changes the game resolution to 640x480, 800x300 still works. If they go
    up to 1024x768, well, why aren’t you using OpenGL? :D (Kidding,
    mostly.) No, at that point you’d have to actually re-allocate the
    texture to 1024x384. Even ten-year-old graphics cards handle textures
    of that size, though hopefully not an endless number of them, since a
    32bpp texture at that size is 1.5 megabytes(!)

  2. The surface likewise should be kept unless you know you need to
    make it bigger. You might think resizing it smaller may reclaim some
    memory, but again not necessarily. Memory fragmentation is annoying,
    and even though the OS can sometimes clean it up on you, you’d rather
    not have the performance hit if it needs to do so.

See Generally Off-Topic (and somewhat simplified) History Lesson for
why that is.

  3. Render to that surface only when the content of the console changes,
    not the position. Theoretically you only have to re-render the new
    text, since you can just blit the old to, e.g., scroll a line, but I
    bet you’re going to find an insignificant difference between clearing
    the surface and re-rendering versus moving pre-rendered pixels,
    clearing the free space, and writing to it. You then stream the
    updated surface into the texture (see the sketch after this list).
    That’s still moving potentially 1.5 MiB across your system bus at
    1024x384, but how often do you do that? Besides, a pure software
    renderer would have to do it at 30+ frames per second, which is why a
    lot of those software renderers back in the day ran at 320x200x8bpp:
    it’s a lot easier to move 64KiB on a 386 than 1.5MiB! ;)

  4. If you don’t need to read back pixels aside from the occasional
    screenshot, your best bet is usually to put your graphics into
    textures (and hopefully offloaded to the GPU) as quickly as possible
    and do your compositing there. If you’ve got sprites and tiles, load
    ’em and leave ’em is usually a good rule of thumb, unless you can’t,
    as with your console.
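
To make that flow concrete, a sketch (console_dirty, console_surface,
console_tex, console_rect and render_console_text_to_surface() are
placeholders for however you structure it; the texture is the one you
allocated once at the largest size you need, and SDL_UpdateTexture()
could just as well be a lock/unlock pair on a streaming texture):

if (console_dirty) {
    /* CPU-side TTF drawing into the kept surface, only when the log changed */
    render_console_text_to_surface(console_surface);
    SDL_UpdateTexture(console_tex, NULL,
                      console_surface->pixels, console_surface->pitch);
    console_dirty = 0;
}
/* every frame this is just a copy of an already-uploaded texture */
SDL_RenderCopy(renderer, console_tex, NULL, &console_rect);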

Of course, if you had a pre-rendered font, you could load that as a
texture. There were libraries for SDL 1.x that did so, but obviously
that costs you Unicode support.

Joseph

Generally Off-Topic History Lesson: Back in the DOS daze (and on
other dated platforms where a game takes over the entire system),
games would typically allocate memory at startup (generally all of it)
and use their own very simplistic memory manager. This memory
is your heap, and the offset of the first unused space in it is your
heap pointer.

They’d load all of the common resources first and save the heap
pointer upon doing so as a high water mark. Resources used only by
the current level/etc continue to be loaded into the heap. The
local heap equivalent of free() likely does very little, if it even
exists, until you decide you’re done with all of the previous level’s
data at once. Then you erase everything down to the high water mark,
basically by just replacing the heap pointer with the high water mark
location.

Then you have your stack, which generally goes at the top of the
heap and builds down. Need a temporary string buffer to use a printf
variant? Stack alloc. Same for any alloc you’ll free by the end of
the function.

This explains many of the glitches and bugs in those old games,
doesn’t it? Smashed heap and/or stack. :)

These days you don’t take over the whole computer, and the OS still
does memory management for you. Heap allocations tend to be done in
pages, and CPUs pretty much since the 386 allow OSes to swap memory
around physical space and even out to the HD. And you can free and
reallocate unused parts of pages. But the page isn’t freed until all
allocations into it are freed (ref counting), and the OS generally
cannot clean up the contents of a page.

Incidentally, you’ll note that Windows and classic MacOS use things
called handles. These are pointers to opaque structures that happen
to contain pointers to stuff. These handles are used by the OS to be
able to move memory allocations for the OS around without telling you
about it. The APIs that spawned them predate CPUs that can remap
memory like that.

If you ever play with microcontrollers (such as those by Atmel,
Microchip, Parallax, and even some ARM-based devices nowadays)
you’ll have exactly the same memory structure to deal with: RAM
allocated in a heap from the bottom up, with a stack allocated
(basically by calling functions in your C code) from the top down.
And it’s all too easy, with memory fragmentation, to have the heap
smash the stack.


I needed to display a console similar to VGA. This took much more pain than I expected, so I’ll share the code to spare others. It’s in Python so you’ll have to tweak it for C, but the point is you can see all the SDL2 calls needed to get something to work. I think I extracted the relevant parts.

The font glyphs are stored as a single RGBA texture, 256 glyphs wide and one glyph tall. The char is from the old extended ASCII font (cp850) whereas the font used (Lucida Console) is in Unicode. The font mapping is done once when the font texture is created. This simple texture works because the number of chars is small. To support a full font, the code would have to adapt to (cache) the chars used.

Since it’s modelled after VGA, it’s 80x25 with one byte of char and one byte of color information. Drawing is basically a lot of fill rects, each with a character blended on top. There’s nothing smart done for blank lines or scrolling cases.

The code is not optimized much but it was good enough for now so I moved on.

I detected but didn’t render underlines.

I didn’t use SDL_ttf strings for every draw because I 1) wanted only textures used, 2) had extreme color needs, and 3) didn’t need/want font issues like kerning.

Maybe this helps.

-Roger

from os import path, getenv
from sys import argv

import sdl2
import ctypes
import sdl2.sdlttf as ttf

class Console():

    def __init__(self, cpu, port):
        self.cpu = cpu  # draw()/draw_text() read the emulated machine through this
        self.columns = 80
        self.rows = 25
        self.window = None
        self.renderer = None

        sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)

        ttf.TTF_Init()

        # need a font that displays extended ascii 195 (unicode 251c), like Lucida Console
        for f in ["lucon.ttf"]:
            self.font = ttf.TTF_OpenFont(f, 20)  # look locally first
            if not bool(self.font):  # font != None
                self.font = ttf.TTF_OpenFont(path.join(getenv('windir'), "Fonts", f), 20)

            if bool(self.font):
                #print 'found ' + f + ' named ' + ttf.TTF_FontFaceFamilyName(self.font)
                break

        if not bool(self.font):
            raise Exception('no monospace fonts found')
        self.font_texture = None

        # remember: we do not automatically display a window unless
        # the code initializes the video.

    def font_set(self):
        # render images of all 256 chars into one wide strip (self.font_texture)
        self.font_texture = sdl2.SDL_CreateTexture(self.renderer,
            sdl2.SDL_PIXELFORMAT_RGBA8888, sdl2.SDL_TEXTUREACCESS_TARGET,
            self.char_width * 256, self.char_height)
        sdl2.SDL_SetRenderTarget(self.renderer, self.font_texture)
        sdl2.SDL_SetRenderDrawColor(self.renderer, 0, 0, 0, 0)  # transparent black
        sdl2.SDL_RenderClear(self.renderer)
        sdl2.SDL_SetRenderDrawBlendMode(self.renderer, sdl2.SDL_BLENDMODE_NONE)

        r = sdl2.SDL_Rect(0, 0, self.char_width, self.char_height)
        white = sdl2.SDL_Color(255, 255, 255, 255)  # SDL_Color, as TTF_RenderGlyph_* expects
        for c in range(1, 256):
            # map the cp850 byte value to the font's unicode code point
            #char_surface = ttf.TTF_RenderGlyph_Solid(self.font, ord(chr(c).decode('cp850')), white)
            char_surface = ttf.TTF_RenderGlyph_Blended(self.font, ord(chr(c).decode('cp850')), white)
            char_texture = sdl2.SDL_CreateTextureFromSurface(self.renderer, char_surface)

            r.x = self.char_width * c
            sdl2.SDL_RenderCopy(self.renderer, char_texture, None, r)

            sdl2.SDL_FreeSurface(char_surface)
            sdl2.SDL_DestroyTexture(char_texture)

        sdl2.SDL_SetRenderTarget(self.renderer, None)
        sdl2.SDL_SetTextureBlendMode(self.font_texture, sdl2.SDL_BLENDMODE_BLEND)  # blend over the background fill

    def reconfigure(self):
        app_name = path.splitext(path.basename(argv[0]))[0]
        width = ctypes.c_int()
        ttf.TTF_GlyphMetrics(self.font, ord('0'), None, None, None, None, width)  # advance of '0'
        self.char_width = width.value
        self.char_height = ttf.TTF_FontLineSkip(self.font)

        self.window = sdl2.SDL_CreateWindow(app_name,
            sdl2.SDL_WINDOWPOS_UNDEFINED,
            sdl2.SDL_WINDOWPOS_UNDEFINED,
            self.columns * self.char_width, self.rows * self.char_height, 0)

        self.renderer = sdl2.SDL_CreateRenderer(self.window, -1, sdl2.SDL_RENDERER_ACCELERATED)
        sdl2.SDL_SetRenderDrawColor(self.renderer, 0, 0, 0, 255)  # opaque black
        sdl2.SDL_RenderClear(self.renderer)
        sdl2.SDL_RenderPresent(self.renderer)

        if self.font_texture is None:
            self.font_set()

    def draw(self):
        if self.window:
            e = sdl2.SDL_Event()

            if sdl2.SDL_PollEvent(ctypes.byref(e)) != 0:
                if e.type == sdl2.events.SDL_QUIT:
                    sdl2.SDL_DestroyTexture(self.font_texture)
                    sdl2.SDL_DestroyWindow(self.window)
                    self.window = None
                    sdl2.SDL_DestroyRenderer(self.renderer)
                    self.renderer = None
                    sdl2.SDL_Quit()
                    self.cpu.keep_going = False
                    return  # exit to avoid the draw below

                elif e.type == sdl2.events.SDL_KEYDOWN:
                    print "key %d (%d)" % (e.key.keysym.sym, e.key.keysym.scancode)
                    print e.key.keysym.sym
                    self.handle_keypress(e.key.keysym.sym)  # defined elsewhere in the program

            self.draw_text()

    def draw_text(self):
        r = sdl2.SDL_Rect(0, 0, self.char_width, self.char_height)
        char_r = sdl2.SDL_Rect(0 * self.char_width, 0, self.char_width, self.char_height)

        sdl2.SDL_SetRenderDrawBlendMode(self.renderer, sdl2.SDL_BLENDMODE_NONE)
        back_last = 0
        palette_index = back_last * 3
        # self.palette_data is the emulated VGA palette (set elsewhere in the program).
        # The VGA RGB palette is 6 bits per color channel,
        # so shift it up to fill a full 8 bits per channel.
        sdl2.SDL_SetRenderDrawColor(self.renderer,
            self.palette_data[palette_index] << 2,
            self.palette_data[palette_index + 1] << 2,
            self.palette_data[palette_index + 2] << 2,
            0xff)

        fore_last = 1
        palette_index = fore_last * 3
        sdl2.SDL_SetTextureColorMod(self.font_texture,
            self.palette_data[palette_index] << 2,
            self.palette_data[palette_index + 1] << 2,
            self.palette_data[palette_index + 2] << 2)

        screen_p = 0xb8000  # start of VGA text-mode memory in the emulated machine
        screen_y = 0
        for y in range(self.rows):
            screen_x = 0

            for x in range(self.columns):
                c = self.cpu.m.m[screen_p]              # character byte
                attribute = self.cpu.m.m[screen_p + 1]  # color/attribute byte

                r.x = screen_x
                r.y = screen_y
                back = (attribute >> 4) & 0b1111
                if back != back_last:
                    back_last = back
                    palette_index = back * 3
                    # the VGA RGB palette is 6 bits per color channel,
                    # so shift it up to fill a full 8 bits per channel.
                    sdl2.SDL_SetRenderDrawColor(self.renderer,
                        self.palette_data[palette_index] << 2,
                        self.palette_data[palette_index + 1] << 2,
                        self.palette_data[palette_index + 2] << 2,
                        0xff)

                sdl2.SDL_RenderFillRect(self.renderer, r)
                # could draw/merge all rects first, then go back and render chars

                fore = attribute & 0b1111
                if fore != fore_last:
                    fore_last = fore
                    palette_index = fore * 3
                    sdl2.SDL_SetTextureColorMod(self.font_texture,
                        self.palette_data[palette_index] << 2,
                        self.palette_data[palette_index + 1] << 2,
                        self.palette_data[palette_index + 2] << 2)

                char_r.x = c * self.char_width
                res = sdl2.SDL_RenderCopy(self.renderer, self.font_texture, char_r, r)

                u = (attribute & 0b1110111) == 0b0000001  # underline attribute: detected but not rendered
                if u:
                    print 'underline ' + chr(c)

                screen_x += self.char_width
                screen_p += 2

            screen_y += self.char_height

        sdl2.SDL_RenderPresent(self.renderer)
