0xDEADBEEF?

  • 0xCFCFCFCF and 0xDFDFDFDF are pretty high in system memory, so
    dereferencing them is likely to cause a SigSegV.

There’s no guarantee that your selector is going to base anything at
0x000. 0xcfcfcfcf may not be 3GB away. It boggles the mind why anyone
would want to do anything that could even have a remote possibility of
being valid when the NULL approach is infallible. Plus, some
OSes/environments allocate data and code separately - some together, and
some are initialized to default differently as well. Working off non-known
state data is a BAD thing.

  • They are NOT NULL (or 0) so they can’t be detected with “if(ptr)
    {printf(ptr->string);}”… This is IMPORTANT because you SHOULD NOT be
    trying to read from uninitialized memory… especially to determine IF
    it has valid data.

I’m really confused on this one. If you’re trying to determine if it has
valid data, how would you know when one nonzero pointer is valid and one
nonzero pointer is not, whereas 0 is always invalid?

  • When the program crashes, it’s pretty easy to look at the stack and
    figure out what happened…

By the time the program crashes, you’re well and gone from where the real
problem occurred, whereas a NULL is catchable every time.

-->Neil
-------------------------------------------------------------------------------
Neil Bradley                  What are burger lovers saying
Synthcom Systems, Inc.        about the new BK Back Porch Griller?
ICQ #29402898                 “It tastes like it came off the back porch.” - Me

No… you misunderstand me!.. The INTENT is to FIND pointers that
weren’t set to NULL!

I agree that NULL should be used to indicate a pointer that is empty.
That is not what I’m saying. There is a difference between an EMPTY
pointer and an UNINITIALIZED pointer!.. An uninitialized pointer can be
any value!!! There is no way to say “if(!is_initialized(ptr)) {
initialize_ptr(&ptr);}”… This is a tool to help find pointers that
should have been initialized (probably to NULL) but weren’t. If instead
you set all uninitialized memory to 0x00, you can’t distinguish between an
EMPTY and an UNINITIALIZED pointer, so code that really is broken works
when it should instead fail.
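For concreteness, here is a minimal sketch of the kind of debug-only fill being described. The wrapper name, the fill byte, and the use of NDEBUG to gate it are illustrative choices, not code from the original posts:

```c
#include <stdlib.h>
#include <string.h>

#define UNINIT_FILL 0xCF  /* debug-only fill byte; any "noisy" value works */

/* Allocate memory and, in debug builds only, stamp it with the fill
   pattern.  Code that reads the block before initializing it is then
   likely to crash (0xCFCFCFCF is a bad pointer on most systems) or at
   least stand out in a memory dump, instead of silently "working" on
   zeroed memory.  Release builds (NDEBUG defined) skip the memset. */
void *debug_malloc(size_t n)
{
    void *p = malloc(n);
#ifndef NDEBUG
    if (p != NULL)
        memset(p, UNINIT_FILL, n);
#endif
    return p;
}
```

The point of the pattern is exactly that correct code never tests for it: the stamp only exists to make broken code fail loudly in debug builds.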

-Loren

On Wed, 2002-08-28 at 15:14, Atrix Wolfe wrote:

  • They are NOT NULL (or 0) so they can’t be detected with “if(ptr)
    {printf(ptr->string);}”… This is IMPORTANT because you SHOULD NOT be
    trying to read from uninitialized memory… especially to determine IF
    it has valid data.

wouldn’t it be better to check first for a null value before reading data
from it? Cause it could still cause an exception, and if it doesn’t, you
could get some random junk data, leading bugs to happen later in the code,
making for hard-to-trace bugs, right? I’m a fan of setting pointers to null,
but of course my programming conditions might not be everybody’s. I work
alone for one, and not on embedded systems (:

oh ok, point taken, lol (:

-Atrix

> ----- Original Message -----

From: linux_dr@yahoo.com (Loren Osborn)
To:
Sent: Wednesday, August 28, 2002 3:31 PM
Subject: Re: [SDL] 0xDEADBEEF ???



SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

but of course my programming conditions might not be everybody’s. I work
alone for one, and not on embedded systems (:
No… you misunderstand me!.. The INTENT is to FIND pointers that
weren’t set to NULL!

You can’t do this unless your environment supports this. There’s no
guarantee that the memory that your program was given is fresh & clean or
was part of an old program that was running. FWIW, MSVC sets memory
to 0xe5 (I think) by default when in debug mode, but 0s when not.

I agree that NULL should be used to indicate a pointer that is empty.
That is not what I’m saying. There is a difference between an EMPTY
pointer and an UNINITIALIZED pointer!.. An uninitialized pointer can be
any value!!!

Sure, but it also can be a value that happens to be a valid pointer!

There is no way to say “if(!is_initialized(ptr)) {
initialize_ptr(&ptr);}”…

I think there’s an IsValidPtr() macro in MSVC if I’m not mistaken, but
it’ll only tell you if it’s within the program’s data/code range - not if
it points to valid data.

This is a tool to help find pointers that
should have been initialized (probably to NULL) but weren’t. If instead
you set all uninitialized memory to 0x00, you can’t distinguish between an
EMPTY and an UNINITIALIZED pointer, so code that really is broken works
when it should instead fail.

I adopted a policy of making malloc/free wrapper functions that zero all
memory on allocation, and zero the pointer on a free operation, and in 6
years have never had this problem again. :wink:
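A minimal sketch of what such wrappers might look like — the names zmalloc/zfree are invented here, since the post doesn’t give code:

```c
#include <stdlib.h>

/* Allocate zeroed memory, so every embedded pointer field starts out
   as NULL (on the common platforms where NULL is all-bits-zero) and
   plain "if (ptr)" checks are reliable from the moment of allocation. */
void *zmalloc(size_t n)
{
    return calloc(1, n);
}

/* Free through a pointer-to-pointer and NULL out the caller's
   variable, so a stale pointer can't be dereferenced or double-freed
   without tripping an ordinary NULL check first. */
void zfree(void **pp)
{
    if (pp != NULL && *pp != NULL) {
        free(*pp);
        *pp = NULL;
    }
}
```

The void ** shape forces callers to write zfree((void **)&p); the cast is the usual practical compromise, though strictly speaking it skirts C’s aliasing rules.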


ROFL

0x0B00B135!!!

Or the Java magic number (first 4 bytes of a .class file):

0xCAFEBABE

Supposedly some cute waitress strolled by when they were
trying to decide what that signature should be. :slight_smile:

But this is getting way off topic …

On Wed, 28 Aug 2002, Atrix Wolfe wrote:

You had something that threw a trap on 0xdeadbeef but not on 0x0?

Motorola MC68K based system. Accessing non-existent memory caused a bus
error. 0x0 referenced real memory; no bus error.

The signature was also useful for determining maximum depth of stack.
It can also be determined by other methods as well (0x55aa, whatever).

What I had in mind was runtime stack checks at each procedure entry. Takes
CPU cycles that may affect the behavior of some embedded systems.

Figuring maximum depth of stack will still take
something going and looking at how much stack has been used.

The stack usage check was initiated by the user. It was nothing more than a
means for the developers to guess how much stack space should be provided for
each process. The developer entered a special command at a terminal connected
to the device, a background task went and checked stack usage for all
processes, and let the developer know what percentage of stack had been used
for each. We then added a healthy fudge factor to the max stack used and set
that as the stack size.
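The stack-painting trick described above can be sketched like this. This is a self-contained simulation — real firmware would paint each task’s actual stack region at boot, and the fill byte is arbitrary:

```c
#include <stddef.h>
#include <string.h>

#define STACK_FILL 0xAA  /* arbitrary signature byte */

/* Paint a task's stack region with the signature before it runs. */
void stack_paint(unsigned char *stack, size_t size)
{
    memset(stack, STACK_FILL, size);
}

/* Later, measure the high-water mark.  Assuming the stack grows
   downward from stack[size], any low bytes still holding the
   signature were never touched; the rest is the maximum depth used. */
size_t stack_high_water(const unsigned char *stack, size_t size)
{
    size_t untouched = 0;
    while (untouched < size && stack[untouched] == STACK_FILL)
        untouched++;
    return size - untouched;
}
```

The background task described above would run stack_high_water over each process’s stack on demand, which costs nothing at procedure-call time.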

This is getting really off-topic, but the info may be useful to someone out
there. :wink:

We’re talking Linux/Windows, here. Better to use NULL.

Agreed. That’s why I brought this up specifically in reference to embedded
systems. Whoever wrote the code that triggered this thread apparently thought
it was a good idea for Windows also.

On Wednesday 28 August 2002 02:21 pm, you wrote:

  • 0xCFCFCFCF and 0xDFDFDFDF are pretty high in system memory, so
    dereferencing them is likely to cause a SigSegV.

There’s no guarantee that your selector is going to base anything at
0x000. 0xcfcfcfcf may not be 3GB away. It boggles the mind why anyone
would want to do anything that could even have a remote possibility of
being valid when the NULL approach is infallible. Plus, some
OSes/environments allocate data and code separately - some together, and
some are initialized to default differently as well. Working off non-known
state data is a BAD thing.

This is true… What you don’t see is that this is a debugging tool to find
uninitialized memory. The problem with your approach is that you need to
initialize memory to 0x00 always. This is inefficient when the memory
will be changed right away. In “my” method (I’m not going to actually
take credit for this myself, as I’ve seen it in several books) you only
need to artificially initialize memory when debugging. Otherwise, the
program itself is responsible for setting sane values. This method
simply ensures that the program does not assume anything about
uninitialized memory (the program should never check for 0xCF or 0xDF
explicitly)…

  • They are NOT NULL (or 0) so they can’t be detected with “if(ptr)
    {printf(ptr->string);}”… This is IMPORTANT because you SHOULD NOT be
    trying to read from uninitialized memory… especially to determine IF
    it has valid data.

I’m really confused on this one. If you’re trying to determine if it has
valid data, how would you know when one nonzero pointer is valid and one
nonzero pointer is not, whereas 0 is always invalid?

The idea is that a 0 pointer is valid but empty, but a 0xcfcfcfcf or
0xdfdfdfdf pointer is invalid… It isn’t possible to guarantee that
neither of these addresses is valid memory, but given that they are so
high in the address space, their invalidity is likely. REMEMBER, if the
program is working properly, it should never check for 0xcfcfcfcf or
0xdfdfdfdf…

  • When the program crashes, it’s pretty easy to look at the stack and
    figure out what happened…

By the time the program crashes, you’re well and gone from where the real
problem occurred, whereas a NULL is catchable every time.

I’ve worked with this for over two years, and it has made tracking down
uninitialized memory bugs very easy. I can’t say it will be easier for
you personally, but that is my experience.

-Loren

On Wed, 2002-08-28 at 15:47, Neil Bradley wrote:

Exactly. That’s why those easy-to-spot byte codes are used to prevent use
of uninitialized data. It really does work if you’re using a graphical
debugger with watches and stuff.
And it’s not only about pointers anyway. It could be integers, floats,
strings, anything you can put into malloc()ed memory. Stuff like all CCs or
CFs simply draws attention when you’re looking at memory dumps. Plus, it
happens to be an illegal pointer on many platforms. What more do you want?

Of course, valgrinding is even better :slight_smile:

cu,
Nicolai

On Thursday 29 August 2002 00:47, Neil Bradley wrote:

  • 0xCFCFCFCF and 0xDFDFDFDF are pretty high in system memory, so
    dereferencing them is likely to cause a SigSegV.

There’s no guarantee that your selector is going to base anything at
0x000. 0xcfcfcfcf may not be 3GB away. It boggles the mind why anyone
would want to do anything that could even have a remote possibility of
being valid when the NULL approach is infallible. Plus, some
OSes/environments allocate data and code separately - some together, and
some are initialized to default differently as well. Working off
non-known state data is a BAD thing.

I like your thinking. There have been many times when making games or
programs when I was like “man, initializing pointers to null wastes CPU
cycles when I could just code tighter and get away with less
initialization”. I never have had the balls to, cause I know it’s asking
for trouble, but what you’re talking about could make that a reality.

Very cool stuff
-Atrix


but of course my programming conditions might not be everybody’s. I work
alone for one, and not on embedded systems (:
No… you misunderstand me!.. The INTENT is to FIND pointers that
weren’t set to NULL!

You can’t do this unless your environment supports this. There’s no
guarantee that the memory that your program was given is fresh & clean or
was part of an old program that was running. FWIW, MSVC sets memory
to 0xe5 (I think) by default when in debug mode, but 0s when not.

I have yet to see an environment that prevents you from writing a
malloc wrapper (or from overloading operator new in C++).

The “dirtiness” of the memory buffer is actually the thing this
debugging tool is trying to make your program agnostic to. It is
actually SIMULATING a dirty memory buffer that is most likely to make
your program choke on it. Bear in mind that this is only intended to be
done while debugging.

I agree that NULL should be used to indicate a pointer that is empty.
That is not what I’m saying. There is a difference between an EMPTY
pointer and an UNINITIALIZED pointer!.. An uninitialized pointer can be
any value!!!

Sure, but it also can be a value that happens to be a valid pointer!

The program should never check for 0xcfcfcfcf… Once the pointers are
initialized, no pointers should ever contain that value. It is simply
to force your program to crash if it DOESN’T initialize a pointer.

Admittedly I do occasionally add an
assert(ptr_that_should_be_initialized != 0xcfcfcfcf);
when I suspect I’m choking on uninitialized memory, but:

  • That only compiles into the debug version of the program
  • It simply causes the program to crash there and then if not true.
  • It usually isn’t even the problem.
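A sketch of how such a debug-only spot check might be packaged. The 0xCFCFCFCF value mirrors the discussion, but the macro names and packaging are invented here:

```c
#include <assert.h>
#include <stdint.h>

/* The fill pattern the debug allocator stamps into fresh memory. */
#define UNINIT_PTR ((void *)(uintptr_t)0xCFCFCFCFu)

/* Debug-only spot check: fires if the pointer still holds the fill
   pattern, i.e. was never initialized.  assert() compiles away under
   NDEBUG, so release builds pay nothing, and correct programs never
   compare against the pattern anywhere else. */
#define ASSERT_INITIALIZED(p) assert((void *)(p) != UNINIT_PTR)
```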

There is no way to say “if(!is_initialized(ptr)) {
initialize_ptr(&ptr);}”…

I think there’s an IsValidPtr() macro in MSVC if I’m not mistaken, but
it’ll only tell you if it’s within the program’s data/code range - not if
it points to valid data.

My point is the program should never check if the memory is uninitialized
(unless you are tracking down a particular bug… and then only in the
debug build)…

This is a tool to help find pointers that
should have been initialized (probably to NULL) but weren’t. If instead
you set all uninitialized memory to 0x00, you can’t distinguish between an
EMPTY and an UNINITIALIZED pointer, so code that really is broken works
when it should instead fail.

I adopted a policy of making malloc/free wrapper functions that zero all
memory on allocation, and zero the pointer on a free operation, and in 6
years have never had this problem again. :wink:

This works, but has one major problem… It must ALWAYS do that…
Initializing memory needlessly is wasteful in a program that is speed
critical… When I compile a production build, the memory doesn’t get
memset(), which saves time, and since I bulletproofed it against
uninitialized memory, it isn’t a problem.

Hope that helped,

-Loren

On Wed, 2002-08-28 at 15:59, Neil Bradley wrote:

Okay, very informative, but this thread is starting to devolve. Please
cease and continue your regularly scheduled programming. :slight_smile:

Thanks,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

This is from the Jargon File, by the way:
http://www.tuxedo.org/~esr/jargon/index.html

On Wed, Aug 28, 2002 at 02:13:26PM -0700, Atrix Wolfe wrote:

check this out, my friend posted it to me


Matthew Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

Not at all strange. I’ve used that value for decades to initialize memory in
embedded systems in order to catch invalid pointers, data not properly
initialized, and other coding bugs. It has the advantages of being an odd
address (many pointer bugs), is an invalid opcode for many CPUs, very likely
points to non-existent memory, etc. Initializing memory with ‘deadbeef’ is
enabled in development code and disabled in production (shippable) code. I
highly recommend the practice.

Depending on your system, though, couldn’t “deadbeef” theoretically point
to some sensible area of memory?

Nick

Murlock writes:
[…]

so 0x5 is valid… at least for data

Yeah, I was thinking about word aligned data. Cons cells on the
brain.

Actually, most reasonably modern CPUs will read and write any word size on odd addresses. Do not assume anything about this - it may even change between generations of the same CPU family.

Learnt that lesson when I replaced my Amiga 500 with an Amiga 3000 (68000->68030); I saw my s/w blitter do strange things, but the last thing I checked was pointer/index alignment, as the 68000 would have thrown a h/w exception… heh

//David

.---------------------------------------
| David Olofson
| Programmer

david.olofson at reologica.se
Address:
REOLOGICA Instruments AB
Scheelevägen 30
223 63 LUND
Sweden
---------------------------------------
Phone: 046-12 77 60
Fax: 046-12 50 57
Mobil:
E-mail: david.olofson at reologica.se
WWW: http://www.reologica.se

`-----> We Make Rheology Real

On Wed, 28/08/2002 14:56:10 , jvalenzu at icculus.org wrote:

I have also used variations of 0xDEADBEEF in the debug build of my
applications and I have caught many bugs early.

0xDEADBEEF and other high values are also useful for catching
uninitialized non-pointer data accesses. For instance, reading an
uninitialized data word and calling malloc() will result in
malloc(0xDEADBEEF), which will very probably fail. Also, you will
almost certainly notice that something is wrong in a loop like
"for (i=0; i < 0xDEADBEEF; i++) {…}".

Conversely, initializing memory to zero is good for hiding bugs. In
many systems, malloc(0) returns a non-NULL pointer, and
"for (i=0; i < 0; i++) {…}" might seem to work well.
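A tiny self-contained illustration of that point. The helper function is purely hypothetical; it simulates reading a 4-byte length field out of memory that a debug allocator stamped with the DE AD BE EF byte pattern:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: simulates reading an uninitialized 32-bit
   length field from memory stamped with the DE AD BE EF pattern.
   The result is a huge, obviously bogus number on either endianness,
   easy to spot in a debugger or in a failed malloc(), where
   zero-filled memory would read back as a plausible-looking 0. */
uint32_t read_uninitialized_length(void)
{
    unsigned char stamped[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    uint32_t len;
    memcpy(&len, stamped, sizeof len);
    return len;
}
```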

/Bjorn Gustavsson