Originally Posted by macFanDave
I'll never understand how buffer overflow attacks even get started.
Back when I was programming regularly in C, I'd use strlen() to check whether strings were within a limit, and strncpy() to truncate them to a safe length if necessary.
Are programmers these days too lazy to check string lengths before passing them to potentially dangerous code? Or do they think performance would suffer if they wasted clock cycles on safety?
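Just to make the question concrete before answering it: the pattern being described is roughly this -- a minimal sketch, not production code, and note strncpy()'s famous gotcha that it does NOT NUL-terminate on truncation:

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: if 'input' is longer than 15 characters plus the NUL,
       strcpy() writes past the end of 'buf' -- the classic overflow. */
    void risky(const char *input)
    {
        char buf[16];
        strcpy(buf, input);           /* no length check at all */
        printf("%s\n", buf);
    }

    /* Safer: bounded copy plus explicit termination, since strncpy()
       leaves the buffer unterminated when it truncates. */
    void safer(const char *input)
    {
        char buf[16];
        strncpy(buf, input, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';   /* truncate to a safe length */
        printf("%s\n", buf);
    }

    int main(void)
    {
        safer("a string much longer than sixteen bytes");
        return 0;
    }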
My guess is that they mostly occur within (closed-source) libraries... If I use an API that refers to a closed library, I don't necessarily know what my buffer limitations are... In Microsoft's case, right through XP (which is still the majority of their install base), there are literally hundreds of legacy APIs, some of which are undocumented, some of which are part of legacy libraries that haven't been rewritten in a decade...
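To illustrate what "not knowing your buffer limitations" looks like in practice -- the API name and its behavior here are invented for illustration, standing in for the kind of undocumented legacy call described above:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for a hypothetical closed-source API. Its documentation
       says only "fills in the user's data path", with no buffer size. */
    void LegacyGetUserDir(char *out)
    {
        /* Internally it assumes a 260-byte (MAX_PATH-style) buffer. */
        strcpy(out, "C:\\Documents and Settings\\A Very Long User Name"
                    "\\Application Data\\Some Vendor\\Some Product");
    }

    int main(void)
    {
        char path[64];           /* is 64 bytes enough? the docs never say */
        LegacyGetUserDir(path);  /* overflows, with no bug in the caller's code */
        printf("%s\n", path);
        return 0;
    }

The caller did nothing wrong by its own lights; the overflow lives in the gap between the library's assumption and the caller's guess.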
I believe IE7 and IE8 run in a sandbox. And in Vista and Windows 7 the code couldn't execute without the user's permission.
IE7 is sandbox-y, but still passes executable code to the kernel without user intervention or knowledge. I have no information on IE8. In Windows 6.0 ("Vista") and Windows 6.1 ("7"), most functions requiring administrator-level access require user intervention (also true on Unix operating systems, including OS X, though Windows 6.x waives the password requirement)-- but in every OS, it's possible by a variety of means to bypass these security features and gain superuser ("Administrator" in Windows, "root" in Unix) access without user intervention (or knowledge).
Such bypass methods are called "security vulnerabilities", and exist in every operating system ever devised. Windows 6.x is the first Windows version to offer a tool for user intervention to grant superuser access to a process thread. The lack of this feature in previous Windows editions has a lot to do with why Windows in general, and IE in particular, have historically been stupidly easy to hijack: with no way to perform many reasonable and critical functions, like installing software, other than logging out and back in as an Administrator, Microsoft, their users, and their developers came up with a variety of workarounds, all of which created opportunities to exploit...
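For the Unix side of that comparison, the gatekeeping looks something like this sketch: a process can check its own privilege level, but there is no sanctioned API for granting itself root. Elevation happens outside the process (sudo, setuid binaries, the UAC-style prompt), which is exactly why exploits target kernel and setuid bugs instead:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* A process can ask whether it already has superuser rights... */
        if (geteuid() == 0) {
            puts("effective uid 0: privileged operations will succeed");
        } else {
            /* ...but there is no become-root call it can simply make.
               An exploit "bypasses" this by abusing a kernel or setuid
               bug, not by invoking any legitimate interface. */
            puts("not root: install-style operations will fail (EPERM)");
        }
        return 0;
    }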
Glad I caught this post before posting mine, as it's dead-on right. As much as I love the Mac and feel it's more secure, I still have to realize that if Macs owned 90% of the market we'd be seeing much of the same thing Windows users go through. Maybe less, but still much of the same. Attacks are fewer for a number of reasons, but market share is definitely #1.
We can probably safely assume that greater market share will eventually result in greater effort from malware engineers-- but there's strong evidence that Mac OS X (like all other Unix variants) is actually harder to write malware for than Windows 5.x and below. It's probably a bit early in Windows 6.x's lifecycle to say whether it's still easier to write malware for than Unix (my guess is that it is, but I personally can't say that with certainty).
Notably, for example, as Mac OS X pushes towards 10% of the OS install base, we do NOT see anything like 10% (not even 1%, probably not even 0.1%) of the malware install base on OS X.
Currently, Windows 6.x is estimated at 25% of the ~90% that's running Windows-- a total of about 22% of the install base, or a bit more than twice as many 6.x (Vista / Win7) machines as Mac OS X machines. I'll be interested to see if the malware install base continues to be proportionally higher on Windows 6.x than on Mac OS X, as the 6.x install base grows.