Example Of "Ray Casting" With SDL1.2+OpenGL?

Hi.

I’m trying to find a demo with source of
"Ray Casting" using SDL 1.2 and OpenGL.

“Ray Casting” is a fast and simple 3D display technique.
(it was used in the original Doom game)

Does anyone know of a site that shows this?
I found good source examples at this site:
http://lodev.org/cgtutor/raycasting.html

but I don’t know how to use it with SDL 1.2 + OpenGL.
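For anyone landing here later, the heart of the technique in that tutorial is a grid-walk (DDA) loop that is independent of SDL or OpenGL. A minimal sketch, assuming a lodev-style square map (the map and all names here are illustrative, not from the tutorial verbatim):

```c
#include <assert.h>
#include <math.h>

static const int worldMap[5][5] = {
    {1,1,1,1,1},
    {1,0,0,0,1},
    {1,0,0,0,1},
    {1,0,0,0,1},
    {1,1,1,1,1},
};

/* Step a ray from (posX,posY) along (dirX,dirY) one grid cell at a time
   until it enters a wall cell. Returns the perpendicular distance to the
   wall (the value a raycaster uses to size the screen column). */
double cast_ray(double posX, double posY, double dirX, double dirY,
                int *hitX, int *hitY)
{
    int mapX = (int)posX, mapY = (int)posY;
    double deltaDistX = (dirX == 0) ? 1e30 : fabs(1.0 / dirX);
    double deltaDistY = (dirY == 0) ? 1e30 : fabs(1.0 / dirY);
    double sideDistX, sideDistY;
    int stepX, stepY, side = 0;

    if (dirX < 0) { stepX = -1; sideDistX = (posX - mapX) * deltaDistX; }
    else          { stepX =  1; sideDistX = (mapX + 1.0 - posX) * deltaDistX; }
    if (dirY < 0) { stepY = -1; sideDistY = (posY - mapY) * deltaDistY; }
    else          { stepY =  1; sideDistY = (mapY + 1.0 - posY) * deltaDistY; }

    for (;;) {  /* DDA: always advance into the nearer next cell boundary */
        if (sideDistX < sideDistY) { sideDistX += deltaDistX; mapX += stepX; side = 0; }
        else                       { sideDistY += deltaDistY; mapY += stepY; side = 1; }
        if (worldMap[mapY][mapX] > 0) break;   /* entered a wall cell */
    }
    *hitX = mapX; *hitY = mapY;
    return (side == 0) ? sideDistX - deltaDistX : sideDistY - deltaDistY;
}
```

The SDL/OpenGL part is only the final step: draw one vertical strip per screen column, with height inversely proportional to the returned distance.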

I want to get into some simple 3D now
and I think “Ray Casting” would be a good place to start.

Thanks!

Jesse "JeZ+Lee"
16BitSoft
Video Game Design Studio
www.16BitSoft.com

I want to get into some simple 3D now
and I think “Ray Casting” would be a good place to start.

I think it’s a bad place to start, because raycasting was a semi-fake
3D used only while the hardware was still too slow to do real textured
3D in software.

If you want to learn OpenGL basics, a classic starting point is the NeHe
OpenGL tutorials:

http://nehe.gamedev.net/

They all come with examples that also use the SDL backend.

--
Bye,
Gabry

Hi,

On 4/29/11 4:12 PM, Jesse Palser wrote:

I’m trying to find a demo with source of
"Ray Casting" using SDL 1.2 and OpenGL.

“Ray Casting” is a fast and simple 3D display technique.
(it was used in the original Doom game)

Raycasting these days is part of 3D graphics, but not really used as the
rendering technique. Doom itself had a simple software rasterizer, which
you cannot implement on top of OpenGL, since OpenGL does the
rasterization for you.

If you want to dive into 3D graphics, have a look at the OpenGL
superbible and learn some basic linear algebra which is very helpful for
3D graphics.

In general OpenGL makes it incredibly easy to dive into 3D graphics
these days, especially if you ignore the fixed function pipeline
completely and dive into shaders directly.

Regards,
Armin

Hi All,

I’d like to point out that even though ray casting is no longer used as a
primary rendering technique, it is nevertheless immensely useful for things
like figuring out what a camera is pointing at, whether or not there is a
line that can join two world objects or entities without hitting anything
else, and other queries about reaching a part of space from another.

Recently I’ve been diving into this subject, and it’s very interesting. I’ve
found plenty of pages, papers and forum posts on it, yet I would still
be glad to hear anything knowledgeable people have to add,
because many things seem to be left to the reader’s imagination.

Thus, I agree with the comments below, yet highly recommend people indulge
in talking about raycasts.

If people would rather this discussion be kept outside this list, since it
is slightly off-topic, then I’ll gladly share my email with all who wish to
talk about it.

Have a nice day!

Christian

On Fri, Apr 29, 2011 at 10:31 AM, Armin Ronacher <armin.ronacher at active-4.com> wrote:

Hi,

On 4/29/11 4:12 PM, Jesse Palser wrote:

I’m trying to find a demo with source of
"Ray Casting" using SDL 1.2 and OpenGL.

“Ray Casting” is a fast and simple 3D display technique.
(it was used in the original Doom game)

Raycasting these days is part of 3D graphics, but not really used as the
rendering technique. Doom itself had a simple software rasterizer, which
you cannot implement on top of OpenGL, since OpenGL does the
rasterization for you.

If you want to dive into 3D graphics, have a look at the OpenGL
superbible and learn some basic linear algebra which is very helpful for
3D graphics.

In general OpenGL makes it incredibly easy to dive into 3D graphics
these days, especially if you ignore the fixed function pipeline
completely and dive into shaders directly.

Regards,
Armin


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

Hi,

On 4/29/11 4:50 PM, Christian Leger wrote:

I’d like to point out that even though ray casting is no longer used as
a primary rendering technique, it is nevertheless immensely useful for
things like figuring out what a camera is pointing at, whether or not
there is a line that can join two world objects or entities without
hitting anything else, and other queries about reaching a part of space
from another.

Certainly. Raycats are very useful for object picking, lighting
calculations and a lot more.

Regards,
Armin

http://www.yurwelcome.com/wp-content/uploads/2010/05/laser-cats.jpg

On 29/04/2011 15:58, Armin Ronacher wrote:

Certainly. Raycats are very useful for object picking, lighting
calculations and a lot more.

No, no they are really not much use at all. They take too long.

Object picking is best done using the clipping hardware. Every 3D API
has a clipping API that uses the clipping hardware to do picking.
Lighting is best done using the rendering hardware to do the lighting
based on the surface normal of the thing being lighted.

In cases where you can’t use hardware picking, such as when you want to
do things in the scene graph that aren’t going to the screen, you
still use clipping to eliminate most of the graph and find the items
you want.

The question of whether there is a line segment that connects two points
without passing through another object is also best done by clipping
to a volume that contains the two points and all points in between.
Anything that isn’t clipped out is possibly in the way.

OTOH Ray Tracing (not ray casting) is a technique that lets you do
photo realistic rendering with true reflections and shadows. Amazing
stuff.

If someone wants to learn 3D graphics my suggestion is to go to the
web site of a college that is language and culture appropriate for
you, look up the entry level computer graphics programming course, and
find the text book they are using. Then get a copy of that text book
and read it. Do the problems at the ends of chapters. And get on with
learning. A lot of colleges are putting entire courses online for
free. You can just go and read the material. You might have to get the
text books, but that is cheap compared to taking the classes.

Or, you might even start by asking people to recommend web sites and
books to read. Do not believe that you are qualified to figure out
what is the right first thing to study. Do try to do a lot of
programming and expect to throw most of it away. :)

Bob Pendleton

On Fri, Apr 29, 2011 at 10:26 AM, Tim Angus wrote:

On 29/04/2011 15:58, Armin Ronacher wrote:

Certainly. Raycats are very useful for object picking, lighting
calculations and a lot more.

http://www.yurwelcome.com/wp-content/uploads/2010/05/laser-cats.jpg





Bob, you never cease to amaze me. This is excellent advice and very well put.
Those who wish to learn programming (or really any discipline) would be well
advised to take it seriously. I speak both as a former software engineer and
as a current college instructor.

Jeff

On Saturday 30 April 2011 16:22, Bob Pendleton wrote:

If someone wants to learn 3D graphics my suggestion is to go to the
web site of a college that is language and culture appropriate for
you, look up the entry level computer graphics programming course, and
find the text book they are using. Then get a copy of that text book
and read it. Do the problems at the ends of chapters. And get on with
learning. A lot of colleges are putting entire courses online for
free. You can just go and read the material. You might have to get the
text books, but that is cheap compared to taking the classes.

Or, you might even start by asking people to recommend web sites and
books to read. Do not believe that you are qualified to figure out
what is the right first thing to study. Do try to do a lot of
programming and expect to throw most of it away. :)

No, no they are really not much use at all. They take too long.

Object picking is best done using the clipping hardware. Every 3D API
has a clipping API that uses the clipping hardware to do picking.
Lighting is best done using the rendering hardware to do the lighting
based on the surface normal of the thing being lighted.

In cases where you can’t use hardware picking, such as when you want to
do things in the scene graph that aren’t going to the screen, you
still use clipping to eliminate most of the graph and find the items
you want.

The question of whether there is a line segment that connects two points
without passing through another object is also best done by clipping
to a volume that contains the two points and all points in between.
Anything that isn’t clipped out is possibly in the way.

OTOH Ray Tracing (not ray casting) is a technique that lets you do
photo realistic rendering with true reflections and shadows. Amazing
stuff.

If someone wants to learn 3D graphics my suggestion is to go to the
web site of a college that is language and culture appropriate for
you, look up the entry level computer graphics programming course, and
find the text book they are using. Then get a copy of that text book
and read it. Do the problems at the ends of chapters. And get on with
learning. A lot of colleges are putting entire courses online for
free. You can just go and read the material. You might have to get the
text books, but that is cheap compared to taking the classes.

Or, you might even start by asking people to recommend web sites and
books to read. Do not believe that you are qualified to figure out
what is the right first thing to study. Do try to do a lot of
programming and expect to throw most of it away. :)

Bob Pendleton

Bob,

I’m curious to know what is this clipping you refer to. Using which
algorithms? Which data structures? Are there examples of well-known
technologies that proceed in the manner you refer to? Or papers or websites
that explain this? I’m really curious, because after 5-6 years of learning
about game engine technologies (admittedly I’ve stayed away from
commercial-only engines), I’ve never heard of clipping as an
all-encompassing, best approach for object-picking or line-of-sight
computations.

I’m thinking right now about the example where you have two entities in a
world, one or both of which do not have a camera that results in a rendering
of a scene, where we want to know if they can see each other, or 'shoot’
each other, or what have you. What clipping mechanism is appropriate for
this? If we have world geometry stored in a BSP tree or an octree, then
there is a pretty widely-used approach of casting rays, which checks tree
nodes for ray-plane intersections. This is fast enough for real-time
interaction involving dozens of entities all shooting each other or anything
you want, with very varied world geometries. One example I use daily is the
Sauerbraten engine. How is this a bad approach?

Thanks,

Christian

On Sat, Apr 30, 2011 at 7:22 PM, Bob Pendleton wrote:

No, no they are really not much use at all. They take too long.

Object picking is best done using the clipping hardware. Every 3D API
has a clipping API that uses the clipping hardware to do picking.
Lighting is best done using the rendering hardware to do the lighting
based on the surface normal of the thing being lighted.

Incorrect. In fact, the selection and feedback API has been removed from
OpenGL as of version 3.0, and hasn’t been updated since OpenGL 1.0 or
so. Even on those platforms where it is still supported, it runs in
software, and in most cases it runs slowly because nobody
uses it any more and there’s no support for it in hardware. So, no, this
is not an appropriate choice.

The traditional method for doing this in a modern application is,
indeed, ray casting, and Brian Hook has kindly posted a very good
explanation of the technique here:
http://bookofhook.com/phpBB/viewtopic.php?t=485 (Obligatory Disclaimer:
I checked the math, but I am credited as Nichola Vining for some
reason.) If you’re worried about speed, then yes, you use bounding
primitives and an acceleration structure. Bounding primitives are a good
idea; an acceleration structure may be overkill depending on how few
things you have on the screen.

The question of whether there is a line segment that connects two points
without passing through another object is also best done by clipping
to a volume that contains the two points and all points in between.
Anything that isn’t clipped out is possibly in the way.

It is unclear what you’re talking about here, but I certainly wouldn’t
use a clipping approach (at least in the sense of the Sutherland-Hodgman
algorithm, which is the standard algorithm for clipping things (lines?
polygons?)) I would just shoot a ray through the world, see what it
intersects, and then see if the point of intersection that occurs with
anything in the world appears between the two points on the line that we
are concerned with. If you want a line segment, remember from high
school mathematics that a line in space is defined by the equation
L = o + t*d, where t is a scalar, bounded between two values t0 and t1,
and o and d are vectors. Simply do your line/everything intersection as
per normal, and then see if your intersection point falls between the
values of t0 and t1 for your line segment.

David Eberly’s book “Geometric Tools for Computer Graphics” is an
extremely good reference for intersection tests. Another useful resource
is the complete guide to How To Intersect Anything With Anything Else
at http://www.realtimerendering.com/intersections.html

N.

On 4/30/2011 4:22 PM, Bob Pendleton wrote:

Hi,

On 5/1/11 8:53 PM, Christian Leger wrote:

I’m curious to know what is this clipping you refer to. Using which
algorithms? Which data structures? Are there examples of well-known
technologies that proceed in the manner you refer to? Or papers or
websites that explain this? I’m really curious, because after 5-6 years
of learning about game engine technologies (admittedly I’ve stayed away
from commercial-only engines), I’ve never heard of clipping as an
all-encompassing, best approach for object-picking or line-of-sight
computations.

I suppose it refers to the now deprecated OpenGL select mode support
thing. I think it worked by having a buffer that attaches a value to a
pixel, then doing a screen space -> world space transformation, finding
what pixel value was hit, and then going from that value back to
the object in question.

That is still possible, but you will most likely have to use multiple
render targets in modern OpenGL for something similar.

Regards,
Armin

Hi,

I’m curious to know what is this clipping you refer to. Using which
algorithms? Which data structures? Are there examples of well-known
technologies that proceed in the manner you refer to? Or papers or
websites that explain this? I’m really curious, because after 5-6 years
of learning about game engine technologies (admittedly I’ve stayed away
from commercial-only engines), I’ve never heard of clipping as an
all-encompassing, best approach for object-picking or line-of-sight
computations.

I suppose it refers to the now deprecated OpenGL select mode support
thing. I think it worked by having a buffer that attaches a value to a
pixel, then doing a screen space -> world space transformation, finding
what pixel value was hit, and then going from that value back to
the object in question.

That is still possible, but you will most likely have to use multiple
render targets in modern OpenGL for something similar.

I’ve never even bothered to read the red book’s section on picking, always
assuming it was really outdated.

My guess is, though I haven’t performed benchmarks, that to render a scene,
even a partial, skeletal one, in order to decide if one object is visible
to another (again, referring to entities with ‘eyes’ in a virtual world), is
on the far side of excessive. If I’m wrong I’d love to learn it.

Christian

On Mon, May 2, 2011 at 7:17 AM, Armin Ronacher <armin.ronacher at active-4.com> wrote:

On 5/1/11 8:53 PM, Christian Leger wrote:

I should note that Blender performs very badly on AMD consumer cards because the drivers have an unaccelerated path for this, whereas NVIDIA consumer cards have an accelerated path.

The Blender devs insist it is AMD’s problem; I tend to disagree because consumer cards are optimized for games, and games have never used this feature, so there’s no compelling reason to
expect it to be fast on consumer cards, although there is definitely a marketing angle to this (consumer cards lacking certain features is nothing new).

On 05/02/2011 04:17 AM, Armin Ronacher wrote:

I suppose it refers to the now deprecated OpenGL select mode support
thing. I think it worked by having a buffer that attaches a value to a
pixel and then doing a screen space -> world space transformation and
finding what pixel value was hit and then going from that value back to
the object in question.


LordHavoc
Author of DarkPlaces Quake1 engine - http://icculus.org/twilight/darkplaces
Co-designer of Nexuiz - http://alientrap.org/nexuiz
"War does not prove who is right, it proves who is left." - Unknown
"Any sufficiently advanced technology is indistinguishable from a rigged demo." - James Klass
"A game is a series of interesting choices." - Sid Meier

I use a pixel color selection buffer to perform mouse selection of objects.
And with a simple fragment shader

Hi,

I’m curious to know what is this clipping you refer to. Using which
algorithms? Which data structures? Are there examples of well-known
technologies that proceed in the manner you refer to? Or papers or
websites that explain this? I’m really curious, because after 5-6 years
of learning about game engine technologies (admittedly I’ve stayed away
from commercial-only engines), I’ve never heard of clipping as an
all-encompassing, best approach for object-picking or line-of-sight
computations.

I suppose it refers to the now deprecated OpenGL select mode support
thing. I think it worked by having a buffer that attaches a value to a
pixel, then doing a screen space -> world space transformation, finding
what pixel value was hit, and then going from that value back to
the object in question.

That is still possible, but you will most likely have to use multiple
render targets in modern OpenGL for something similar.

I’ve never even bothered to read the red book’s section on picking, always
assuming it was really outdated.

On May 2, 2011 5:12 PM, “Christian Leger” <chrism.leger at gmail.com> wrote:

My guess is, though I haven’t performed benchmarks, that to render a scene,
even a partial, skeletal one, in order to decide if one object is visible
to another (again, referring to entities with ‘eyes’ in a virtual world), is
on the far side of excessive. If I’m wrong I’d love to learn it.

Christian

(Sorry accidentally hit send)

I use a pixel color selection buffer to perform mouse selection of objects.
And with a simple fragment shader that just sets fragment color to the
current object id in
an off-screen RenderBuffer/FrameBufferObject, this is close to free on a GTX
460
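For reference, a sketch of the id-to-color bookkeeping behind this scheme (names are illustrative; in the real setup the fragment shader writes the encoded color into the off-screen FBO and the pixel under the mouse is read back with glReadPixels and decoded):

```c
#include <assert.h>

/* Pack a 24-bit object id into an RGB triple (what the picking-pass
   fragment shader would output for every fragment of that object). */
static void id_to_rgb(unsigned id, unsigned char rgb[3])
{
    rgb[0] = (id >> 16) & 0xFF;   /* red   = high byte   */
    rgb[1] = (id >>  8) & 0xFF;   /* green = middle byte */
    rgb[2] =  id        & 0xFF;   /* blue  = low byte    */
}

/* Unpack the pixel read back from the off-screen buffer into the id. */
static unsigned rgb_to_id(const unsigned char rgb[3])
{
    return ((unsigned)rgb[0] << 16) | ((unsigned)rgb[1] << 8) | rgb[2];
}
```

24 bits of id is plenty for most scenes; disable blending, multisampling and dithering on the picking pass so the color read back is exact.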

/Jacob

2011/5/2 Christian Leger <chrism.leger at gmail.com>

On Mon, May 2, 2011 at 7:17 AM, Armin Ronacher <armin.ronacher at active-4.com> wrote:

Hi,

On 5/1/11 8:53 PM, Christian Leger wrote:

I’m curious to know what is this clipping you refer to. Using which
algorithms? Which data structures? Are there examples of well-known
technologies that proceed in the manner you refer to? Or papers or
websites that explain this? I’m really curious, because after 5-6 years
of learning about game engine technologies (admittedly I’ve stayed away
from commercial-only engines), I’ve never heard of clipping as an
all-encompassing, best approach for object-picking or line-of-sight
computations.

I suppose it refers to the now deprecated OpenGL select mode support
thing. I think it worked by having a buffer that attaches a value to a
pixel, then doing a screen space -> world space transformation, finding
what pixel value was hit, and then going from that value back to
the object in question.

That is still possible, but you will most likely have to use multiple
render targets in modern OpenGL for something similar.

I’ve never even bothered to read the red book’s section on picking, always
assuming it was really outdated.

My guess is, though I haven’t performed benchmarks, that to render a scene,
even a partial, skeletal one, in order to decide if one object is visible
to another (again, referring to entities with ‘eyes’ in a virtual world), is
on the far side of excessive. If I’m wrong I’d love to learn it.

Christian




“It is no measure of health to be well adjusted to a profoundly sick
society.” - Krishnamurti

Disregarding for a moment the obsolescence of the selection and feedback
APIs, and elaborating a bit more on the acceleration structures you
mention…

As fast as GPUs might be, hardware selection is linear with respect to
the number of polygons. Yes, there’s clipping against the frustum before
rasterization, but still, it’s linear: if you double the number of
polygons, on average, you double the time it takes to query a selection.

OTOH, using ray casting can be sub-linear (think logarithmic). If the
number of polygons is small, you might just as well throw all of them to
the ray casting engine and query for the nearest intersection. But if
you have many many polygons, you might already have some sort of spatial
structure (k-d trees, octrees, portals, hierarchical bounding volumes,
etc…) which can be used to speed up the ray-polygon intersection tests
a lot.

As an example, using such structures (and several caching and
approximation techniques) is precisely how ray tracing can remain
manageable in complex scenes.

Again, if the number of polygons is small, any technique works.
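Most of those structures bottom out in a cheap ray-versus-bounding-box rejection before any per-polygon work. A minimal sketch of the usual "slab" method, with illustrative names (assumes IEEE division so axis-parallel rays produce infinities that fall out correctly):

```c
#include <assert.h>

/* Returns 1 if the ray with origin o and direction d (t >= 0) intersects
   the axis-aligned box [lo, hi], 0 otherwise. For each axis the ray enters
   a "slab" at t0 and leaves at t1; the ray hits the box iff the three
   [t0, t1] intervals overlap. */
int ray_hits_aabb(const double o[3], const double d[3],
                  const double lo[3], const double hi[3])
{
    double tmin = 0.0, tmax = 1e30;
    for (int i = 0; i < 3; i++) {
        double inv = 1.0 / d[i];        /* +/-inf for axis-parallel rays */
        double t0 = (lo[i] - o[i]) * inv;
        double t1 = (hi[i] - o[i]) * inv;
        if (t0 > t1) { double tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;       /* tighten the entry time  */
        if (t1 < tmax) tmax = t1;       /* tighten the exit time   */
        if (tmin > tmax) return 0;      /* slabs do not overlap: miss */
    }
    return 1;
}
```

Walking a k-d tree or octree is then just this test per node, recursing only into children whose boxes the ray actually hits.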

-Gato

On 05/01/2011 03:11 PM, Nicholas Vining wrote:

On 4/30/2011 4:22 PM, Bob Pendleton wrote:

No, no they are really not much use at all. They take too long.

Object picking is best done using the clipping hardware. Every 3D API
has a clipping API that uses the clipping hardware to do picking.
Lighting is best done using the rendering hardware to do the lighting
based on the surface normal of the thing being lighted.

Incorrect. In fact, the selection and feedback API has been removed
from OpenGL as of version 3.0, and hasn’t been updated since OpenGL
1.0 or so. Even on those platforms where it is still supported, it
runs in software, and in most cases it runs slowly because
nobody uses it any more and there’s no support for it in hardware. So,
no, this is not an appropriate choice.

The traditional method for doing this in a modern application is,
indeed, ray casting, and Brian Hook has kindly posted a very good
explanation of the technique here:
http://bookofhook.com/phpBB/viewtopic.php?t=485 (Obligatory
Disclaimer: I checked the math, but I am credited as Nichola Vining
for some reason.) If you’re worried about speed, then yes, you use
bounding primitives and an acceleration structure. Bounding primitives
are a good idea; an acceleration structure may be overkill depending
on how few things you have on the screen.

No, no they are really not much use at all. They take too long.

Object picking is best done using the clipping hardware. Every 3D API
has a clipping API that uses the clipping hardware to do picking.
Lighting is best done using the rendering hardware to do the lighting
based on the surface normal of the thing being lighted.

Incorrect. In fact, the selection and feedback API has been removed from
OpenGL as of version 3.0, and hasn’t been updated since OpenGL 1.0 or so.
Even on those platforms where it is still supported, it runs in software,
and in most cases it runs slowly because nobody uses it any more
and there’s no support for it in hardware. So, no, this is not an
appropriate choice.

That is a damn shame. I didn’t know I was that out of date on those
APIs. Back in the bad old days you could very efficiently set up a
clipping volume, send a set of primitives down the pipe and get back
the ones that intersected the clipping volume. You could do that
without ever drawing anything, and it was (it still should be) much
faster than doing the same thing in software.

The traditional method for doing this in a modern application is, indeed,
ray casting, and Brian Hook has kindly posted a very good explanation of the
technique here: http://bookofhook.com/phpBB/viewtopic.php?t=485 (Obligatory
Disclaimer: I checked the math, but I am credited as Nichola Vining for some
reason.) If you’re worried about speed, then yes, you use bounding
primitives and an acceleration structure. Bounding primitives are a good
idea; an acceleration structure may be overkill depending on how few things
you have on the screen.

The question of whether there is a line segment that connects two points
without passing through another object is also best done by clipping
to a volume that contains the two points and all points in between.
Anything that isn’t clipped out is possibly in the way.

It is unclear what you’re talking about here, but I certainly wouldn’t use a
clipping approach (at least in the sense of the Sutherland-Hodgman

Funny you should mention Hodgman; I used to work for him. Best
technical/product development manager I ever met.

algorithm, which is the standard algorithm for clipping things (lines?
polygons?)) I would just shoot a ray through the world, see what it
intersects, and then see if the point of intersection that occurs with
anything in the world appears between the two points on the line that we are
concerned with. If you want a line segment, remember from high school
mathematics that a line in space is defined by the equation L = o + td where
t is a scalar, bounded between two values t0 and t1, and o and d are
vectors. Simply do your line/everything intersection as per normal, and then
see if your intersection point falls between the values of t0 and t1 for
your line segment.

David Eberly’s book “Geometric Tools for Computer Graphics” is an extremely
good reference for intersection tests. Another useful resource is the
complete guide to How To Intersect Anything With Anything Else at
http://www.realtimerendering.com/intersections.html

N.

You use the term “ray casting” for testing a ray against a data
structure that represents your graphic world. That’s fine. Let me
describe in a little more detail what I was talking about. Basically,
how do you do that ray casting?

First thing you have to do is find the primitives that might intersect
the ray. You can use any kind of data structure you want to accelerate
the process. The right data structure lets you drop from O(n) to
O(log n) and might let you get down to constant time. This process is
part of the clipping path in any rendering pipeline. The second part
of the process is detailed clipping. That is where you use something
like Sutherland-Hodgman to clip the primitives to the actual clipping
volume. The whole process is called clipping. No matter where you stop
it, you wind up with a list of primitives that you can check your ray
against. But, you only want to do the check against the ones that are
likely to intersect the ray.
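To illustrate the detailed-clipping step named above, here is a toy Sutherland-Hodgman pass that clips a 2D polygon against a single half-plane (x <= xmax); a full clipper just repeats this once per clip boundary. Names are made up for the sketch:

```c
#include <assert.h>

typedef struct { double x, y; } Pt;

/* Clip the convex polygon in[0..n-1] against the half-plane x <= xmax.
   Writes the surviving vertices to out (sized for up to 2*n points)
   and returns how many there are. */
int clip_right(const Pt *in, int n, double xmax, Pt *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Pt a = in[i], b = in[(i + 1) % n];      /* each polygon edge a->b */
        int a_in = a.x <= xmax, b_in = b.x <= xmax;
        if (a_in) out[m++] = a;                 /* keep vertices inside   */
        if (a_in != b_in) {                     /* edge crosses the line: */
            double t = (xmax - a.x) / (b.x - a.x);
            Pt p = { xmax, a.y + t * (b.y - a.y) };
            out[m++] = p;                       /* emit the intersection  */
        }
    }
    return m;
}
```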

You said, “shoot a ray” and then talked about bounding primitives and
I said “The question of whether there is a line segment that connects
two points without passing through another object is also best done by
clipping to a volume that contains the two points and all points in
between. Anything that isn’t clipped out is possibly in the way.” The
clipping volume is a bounding primitive. The stuff that is possibly in
the way is the stuff you have to intersect with the ray (line segment)
to see if there is an intersection.

When you are rendering an image the clipping phase often also includes
things like back face culling and the use of a z-buffer (yes, the
z-buffer is part of the clipping system, at least from my point of
view).

Assuming that you are the Nicholas Vining I just looked up on
LinkedIn, then congratulations on your MSCS. I got mine in ’85. It is a
good degree to have.

Bob Pendleton

On Sun, May 1, 2011 at 3:11 PM, Nicholas Vining wrote:

On 4/30/2011 4:22 PM, Bob Pendleton wrote:



