
Reverse-engineering protection (built-in and active)
Protecting Your Software
All programs involved in security present vulnerabilities; it is up
to the programmers to either eliminate them or provide detection. By
reverse-engineering a program, an attacker can make it do or not do
anything he pleases. Button captions and string constants can be
modified to trick the user into compromising security, and a simple
swapping of pointers could result in sensitive information being
leaked. The simplest counter-measure to these types of attacks is
ensuring no program files have been modified. A more versatile
tactic involves digital signatures and a program loader.
Attackers will usually try cracking the program before breaking the
cryptosystem; most programs are surprisingly weak against
reverse-engineering.
A program is a series of instructions stored locally; if the user
can run the program, then he can reverse-engineer it and change it at
will. With a good debugger, an attacker can trace the program
through its interface messages and procedure calls. It is important
to understand that all programs can be reverse-engineered, but the
task can be made very difficult. It has been common practice for
years to: camouflage security-related functions, sprinkle decoy
function calls, and implement anti-debugging procedures. These
practices do not make programs tamper-proof in any way, but they
certainly make them tamper-resistant. There is no simple solution to
this problem, but the true goal remains to prevent the program from
being modified.
Strings contained within a program have tremendous influence, and
can be a security weakness. They are used to communicate with the
user; if strings are replaced with misleading information, an
attacker could remove "security compromised" messages, and prevent
the user from detecting intrusion. These attacks only require a
simple hex editor, and are very easy to achieve. There are many ways
to make such an attack difficult. If the strings are encrypted properly,
an attacker would have to: reverse-engineer the program, find the
key, extract the decryption functions, solve the encryption
functions (they aren't in the program!), encrypt the fake strings,
and swap the strings. With clever usage of pointers, the key can be
sprinkled throughout the source code, and recombined without making
any function calls that would attract an attacker's attention.
Strings must be hidden to ensure users are not shown false messages
that would compromise security.
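As an illustration, here is a minimal Delphi sketch of the idea: the
string is compiled in as encrypted bytes and decoded only at the moment
it is shown, with the key recombined from constants kept apart in the
source. The XOR scheme and every name below are assumptions made for
the example; a real implementation should use a proper cipher, as
described above.

{ Minimal Delphi sketch: keep a user-visible string XOR-encrypted and
  decode it only at the point of use.  The key is recombined from two
  constants that would live in different units in real code, so no
  single "GetKey" call stands out.  The XOR scheme and every name here
  are assumptions for illustration only; use a real cipher in practice. }
program HiddenStrings;

{$APPTYPE CONSOLE}

const
  // Two halves of the key; keep them far apart in real source code.
  KeyPartA = $3C;
  KeyPartB = $55;

// XOR every character with the recombined key byte.
// The same routine both encrypts and decrypts.
function XorCrypt(const S: AnsiString): AnsiString;
var
  I: Integer;
  Key: Byte;
begin
  Key := KeyPartA xor KeyPartB;   // recombined inline, no function call
  SetLength(Result, Length(S));
  for I := 1 to Length(S) do
    Result[I] := AnsiChar(Byte(S[I]) xor Key);
end;

var
  Cipher, Plain: AnsiString;
begin
  // In a shipping program only the encrypted bytes would be compiled in;
  // this call stands in for the offline encryption step.
  Cipher := XorCrypt('Security compromised - file has been modified!');

  // Decode only at the moment the message is shown to the user.
  Plain := XorCrypt(Cipher);
  Writeln(Plain);
end.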
Pointers offer abundant opportunities to wreak data-leaking havoc.
Imagine a situation where a pointer to a record of the user's
program settings was swapped with a pointer to the user's key, and
the key was written to the registry, out in the open for the
attacker to grab. These situations aren't heard of very often
because programs are unprotected, and simpler means of obtaining data
are usually available to attackers. If you name the key "key",
nothing is left to the attacker's imagination. It is very easy to
add:
"#define key btnOkClicked"
at the beginning of the source code to rename all references to
"key" while keeping the source code clear and consistent.
"btnOkClicked" is very misleading and will be ignored by attackers
for a very long time unless the procedure you call it from is called
"getKey". It is important that the naming scheme does not reveal
which pointers, variables, and procedures are security related. It
is simple enough to make a spoofed version of the source code that
will be compiled with ambiguous names for the release version, so that
reverse-engineering reveals no significant information from the
naming scheme. Names give attackers significant clues about a
pointer's use; use this as an advantage to confuse and misdirect
attackers.
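Delphi has no token-replacing #define like the C preprocessor, so the
equivalent tactic is simply to give security-critical identifiers
innocuous names in the sources you compile for release. The short
sketch below is only an illustration; the unit, class and field names
are invented for the example.

{ Delphi has no token-replacing #define, so the equivalent is simply to
  give security-critical identifiers innocuous names in the release
  sources.  Everything below is invented for illustration. }
unit SettingsCache;

interface

type
  TSettingsCache = class
  private
    // Actually holds the session key, not a button state; neither the
    // map file nor a disassembly listing will hint at that.
    FBtnOkClicked: array[0..15] of Byte;
  public
    procedure RefreshLayout(const Data: array of Byte); // really: store the key
  end;

implementation

// Misleading name on purpose: copies the key bytes into the field.
procedure TSettingsCache.RefreshLayout(const Data: array of Byte);
var
  I: Integer;
begin
  for I := 0 to High(FBtnOkClicked) do
    if I <= High(Data) then
      FBtnOkClicked[I] := Data[I]
    else
      FBtnOkClicked[I] := 0;
end;

end.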
Developers often run checksums or hash functions on the executable
and compare the result with a value hard-coded into the program. A few
problems emerge from this counter-measure. An attacker only
has to change a conditional statement from "if = then" to "if <>
then" to follow the same procedure regardless of whether the
checksums match or not. Of course you can camouflage this conditional
statement, but dodging these counter-measures is no problem for an
experienced cracker. Simply verifying whether a file has changed is not
enough; ideally, modifying the executable should render it unusable.
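The sketch below shows the weak form of this check in Delphi: a simple
additive checksum of the running executable compared against a
hard-coded value, guarded by exactly the kind of single conditional an
attacker can flip. The checksum routine, file handling and expected
value are assumptions for the example; the closing comment points at
the stronger approach of feeding the checksum into the decryption
itself.

{ Delphi sketch of a self-checksum.  EXPECTED_SUM is a placeholder that a
  post-build tool would patch into the binary (and exclude from the
  summed range); the additive checksum and all names are assumptions
  made for this example. }
program SelfCheck;

{$APPTYPE CONSOLE}

uses
  SysUtils, Classes;

const
  EXPECTED_SUM: LongWord = 0;  // patched in after the build

// Very simple additive checksum over an entire file.
function FileChecksum(const FileName: string): LongWord;
var
  FS: TFileStream;
  Buf: array[0..4095] of Byte;
  Count, I: Integer;
begin
  Result := 0;
  FS := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
  try
    repeat
      Count := FS.Read(Buf, SizeOf(Buf));
      for I := 0 to Count - 1 do
        Inc(Result, Buf[I]);
    until Count = 0;
  finally
    FS.Free;
  end;
end;

var
  Sum: LongWord;
begin
  Sum := FileChecksum(ParamStr(0));  // checksum our own executable

  // Weak form: one branch an attacker can flip from = to <>.
  if Sum = EXPECTED_SUM then
    Writeln('File intact')
  else
    Writeln('File modified!');

  // Stronger idea: feed Sum into the string/key decryption instead, so a
  // patched executable produces garbage rather than one handy conditional.
end.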
Digital signatures offer a very sound and comforting solution. By
signing the executable you ensure two things: anyone can run the
program, and only you can create or modify it. For an attacker to
modify the program, he would have to break the digital signature
algorithm or obtain the private key. You can either use an unsigned portion
of the program to decipher the rest of the program or use an
external program; this is referred to as a program loader. In either
case the attacker will only know the public key. If an attacker
modifies the executable he will not be able to sign it properly,
making detection simple. Another variety of this protection involves
getting the public key from a server to prevent the attacker from
modifying the public key hard-coded into the program.
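Here is a hedged sketch of the loader idea in Delphi: a small stub
reads the real executable and a detached signature file, verifies the
signature using only the public key, and launches the program if the
check passes. The file names and the VerifySignature placeholder are
assumptions; substitute the verification routine of whatever crypto
library you actually use.

{ Delphi sketch of a program loader: a small stub that verifies a
  detached signature over the real executable before launching it.  The
  file names and the VerifySignature stub are assumptions; substitute
  the verification routine of your crypto library. }
program Loader;

{$APPTYPE CONSOLE}

uses
  Windows, Classes;

const
  TargetExe = 'Program.exe';  // the protected executable
  SigFile   = 'Program.sig';  // detached signature created at release time

// Placeholder for a real public-key verification routine; only the
// public key ships with the loader.
function VerifySignature(Data, Signature: TMemoryStream): Boolean;
begin
  Result := False;  // replace with the real verification call
end;

function LoadFile(const FileName: string): TMemoryStream;
begin
  Result := TMemoryStream.Create;
  Result.LoadFromFile(FileName);
end;

var
  Data, Sig: TMemoryStream;
begin
  Data := LoadFile(TargetExe);
  Sig  := LoadFile(SigFile);
  try
    if VerifySignature(Data, Sig) then
      WinExec(PAnsiChar(AnsiString(TargetExe)), SW_SHOWNORMAL)
    else
      Writeln('Signature check failed - the executable has been modified.')
  finally
    Data.Free;
    Sig.Free;
  end;
end.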
It is recommended to combine counter-measures. For example, digital
signatures can be used in conjunction with a checksum. This method
is behind Microsoft's Authenticode signatures, part of the CryptoAPI
Tools. Certified programs are guaranteed to come from the specified
source and to be unmodified. The problem with Authenticode is that the
program is only protected as online content; once it is on your
hard-drive, there is no guarantee. Because of the complexity of
designing programs that employ digital signatures natively, very few
implementations exist, but it is likely that some variety of them will
be built into most applications in the next decade as security concerns
climb higher among consumers' priorities.
(C)Copyright DrMungkee 2000