Q&A: Cyber Crime's Chief Investigator
- By Kathleen Richards
- April 22, 2008
Howard A. Schmidt served as Microsoft's first
Chief Security Officer and helped found Redmond's Trustworthy Computing initiative.
Howard A. Schmidt has used technology to thwart crime since his early career
as a policeman and pioneer in computer forensics. He started working with the
U.S. Air Force in the early '90s, helping the Office of Special Investigations
to counter some "hacks" in the DOD systems, building better processes
to protect the systems and, in his words, "a switch flipped." He began
to focus on information security. Schmidt continued in his role as an information
security advisor to the government for more than 30 years, working for the FBI,
the U.S. Air Force and the Bush administration after Sept. 11, 2001.
Recruited by Microsoft in the mid-'90s, Schmidt served as the company's first
chief security officer and in April 2001 helped launch the Trustworthy Computing
initiative. In 2003, he became the CSO of eBay. Today, Schmidt is the president
and CEO of R&H Security Consulting LLC. He sits on multiple boards and
advises companies and non-profits. RDN Senior Editor Kathleen Richards
caught up with Schmidt the week after the RSA Conference to find out where security
in a Web 2.0 world is headed.
How did you become interested in security and technology?
In the mid-'70s I was a ham radio operator and I built my first computer in
1976 and was involved in bulletin board systems and that sort of thing through
the '70s and '80s. And when I became a policeman, one of the things we were
living with at that time was sort of the older MIS departments that weren't
real keen on moving over to a more distributed PC environment. So I wrote a
couple of grants and got some federal money to put together my own sort of
in-house network of PC databases for organized crime investigations. Because of
that, once we started to see criminals using computers -- everything from keeping
ledgers of their drug stuff to writing plans on how to rob banks -- I started
to work in computer forensics and started to do some of the early development
in that area.
Today you think of security as a business process?
Correct. I think in the early days of security, we viewed security as the necessary
evil -- myself included -- the cost center, the bad guys out there. And in the
past few years, we've fully recognized that we have to do the business of security.
So is the idea that if you follow the process, you'll produce more secure software?
The business looks to define process with a desired outcome to generate revenue,
to run an HR system, whatever the desired state would be, and with that, security
has got to be baked into that from the very outset. So it is not just a matter
of creating an application that has a really good user interface, where it is
on or under budget, and easy to use, it has also got to be secure, so that has
got to be part of the business plan itself.
What kind of tools should developers be using?
We have to look across the entire spectrum. We should not be asking our developers
to develop software and then throw it over the fence and say, OK, Quality Assurance
will find the problems with it. We should be giving the developers the tools
right from the very outset to do the software scanning and the source code analysis.
And that does two things. One, it helps them develop better code as they discover
things through the automated scanning process on the base code itself. But it
also, once it gets to Quality Assurance, gives them the ability to focus more
on quality issues rather than security problems, which you can eliminate in
the first round.
The second thing, when you look at the compiled binaries and stuff like that,
the way those things work, generally we look at the pen test side of the thing.
We can't ignore that because that is really one of those things when you put
it on the production environment, there may be other linkages somewhere that
may create a security flaw in the business process even while the code itself is sound.
Then clearly the third level of that is in a Web application, Web 2.0 environments,
for example. Now you have the ability not just to pull information down but
to interact directly -- this creates a really, really dynamic environment, and
even simple things like cross-site scripting and SQL injection have to be tested
for, at the end result once things are out in the wild.
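The SQL injection case Schmidt mentions is simple to demonstrate. Here is a minimal sketch using Python's built-in sqlite3 module, contrasting a naively concatenated query with a parameterized one; the table, data, and input string are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Untrusted input crafted to subvert a naively built query.
user_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as a literal value only.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # → [('alice',)] -- the injection matches every row
print(safe)        # → [] -- the same input matches nothing
```

The same habit of binding values rather than splicing strings is what the automated scanners he describes look for.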
How can developers reconcile the tradeoffs between data accessibility and security?
One, they are not mutually exclusive. And I think that is one thing that we
have seen with the efforts at Oracle, Microsoft -- all the big companies. They
have really, really focused on this whole software dev lifecycle -- building
security in from the very beginning. Because, as I think you would agree, data
is the gold, the silver and the diamonds of the world we live in today. That
is where the value is, and protecting it through better security in applications
is essential.
One of the things that I'm concerned about is the small and medium size developers
that are developing a lot of the things that we see on our laptops and desktops.
Things to burn DVDs with, things to play music with, which oftentimes aren't
recognized as critical applications, but nonetheless still have interactions
with the Internet, still have vulnerabilities and still give a bad guy a way
to get into your system or server.
Another aspect of this is, even those companies that buy the majority of their
software from large software houses that are doing better, that are doing the
code analysis, that are sort of looking at this from a 360 [degree] perspective.
They are writing their own local applications to interface between applications,
and I worry because they can inadvertently be introducing vulnerabilities in
the software that they are developing because they are not using the same rigor
that some of the bigger guys are using now in doing some of this software code
analysis.
What's your take on identity management protocols like OpenID and Windows CardSpace?
Those are things that I've been looking for, for years. I think many of us agree
that the more complex it is to do something, the more security will suffer.
We tell people all the time, use strong passwords, change your passwords frequently.
So obviously, with more accessibility, particularly in a Web 2.0 world, we
have to manage more user IDs and more passwords to do it right, and what happens,
just by pure human nature, is that people will not be as robust about their
security. So using an OpenID or using some sort of an
identity management schema gives us the ability to have strong identity management,
multifactor generally, and have the ability to have it recognized across multiple
environments, whether it is ecommerce, online banking, interacting with the
government. That is a big plus. But as we move forward in this we should not
let the software applications become our Achilles' heel, with really good
systems undermined by vulnerabilities in the applications that we are using
with them.
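The multifactor schemes Schmidt alludes to commonly rest on time-based one-time passwords. As a rough sketch, this is the TOTP algorithm from RFC 6238 (the HMAC-SHA1 variant behind many authenticator tokens); the key below is the RFC's own test key, and the function name is illustrative:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte slice.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s, 8 digits.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59, digits=8))  # → 94287082
```

Because client and server derive the code independently from a shared secret and the clock, the scheme works "across multiple environments" in exactly the sense described above.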
You worked at Microsoft for five years and were one of the founders of
its Trustworthy Computing Strategies Group. Craig Mundie outlined an "End
to End Trust" model at the recent RSA conference. What's your take -- is
there something new there?
I don't know that there is something new. I think it is just a continuation
of the fact that there is no single point solution in any of these things in
any environment. It is not a hardware solution. It is not a software solution.
It is not a business process solution. It is not an identity management solution.
All of these pieces -- and the way I like to explain it -- have you ever been
to the Taj Mahal? The Taj Mahal is not just a building, it is actually comprised
of literally millions and millions of little tiles that are sort of inlays that
make up the entire thing -- it is just this huge mosaic...When you put it all
together, you have this fantastic building. I relate that to where we are today
-- the hardware guys, the firmware people, the big software houses, the small
software houses, the ISPs. Individually they all work, but when you put them
together you get something really fantastic.
So the direction that Craig is talking about is clearly I think that sort of
a concept. The problem is that when someone is building red tiles and someone
else is building blue tiles, somebody has got to reach an agreement somewhere
on how those things are going to link up -- and once again I had no role in
creating Craig's speech or anything else -- it is my interpretation of it. We
need to spend some time looking for the way that those tiles that we are all
building will fit together. But the end result when we get together is we are
all going to be securing our part of cyberspace to make it better for all of us.
Does Microsoft's recent interoperability
pledge change the security equation?
It does. When you start looking at it, one of the complaints that people had
over the years is the inability to write security-related APIs because they
didn't know what it was going to do with the other ones. So having
access to the APIs, knowing what function calls are out there, knowing how the
security that you implement is going to impact that is going to once again take
us a step further.
In what other ways can developers address security that have yet to catch on?
It is really hard to do these things, when you're told you've got two weeks
to get this done, and here's the budget you've got to do it with, so to go back
over, and spend some focused time looking at security stuff is a challenge in
some environments. And that is one of the things that we need to figure out.
The development community is going to figure out a way to say, "Sure we
can pump it out, but do you want to have something where six months from now
we are spending three times the resources to fix it -- the reputation hit and
the impact it would have on the business and everything else?"
And then the second piece of that is the use of automated tools. When you start
looking at some of the development that we have had over the years, when people
are dealing with 25,000 or 75,000 lines of code in certain applications, it
is not feasible by any stretch of the imagination to have somebody manually
going over that. And in the past few years, there's actually been the development
of automated processes to do software analysis at all the different levels. There
are now tools available that make that more reliable but also make it a lot
quicker. We've got to make it so that they get those tools in their hands as
early as possible.
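The kind of automated source analysis described above can be sketched in a few lines with Python's ast module: parse the code and walk the tree looking for calls that scanners typically flag. The RISKY_CALLS set and the scan function are illustrative toys, not any particular product's rule set:

```python
import ast

# Calls that source scanners commonly flag as risky (illustrative list).
RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def scan(source):
    """Return (line, call-name) pairs for risky calls found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
            else:
                continue
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

# The sample is only parsed, never executed.
sample = "import os\nuser = input()\nos.system('ping ' + user)\neval(user)\n"
print(scan(sample))  # → [(3, 'os.system'), (4, 'eval')]
```

Real analyzers add data-flow tracking and far richer rules, but even this toy shows why such checks are cheap enough to run at every build rather than only in Quality Assurance.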
What did you find noteworthy at the recent RSA Security Conference?
As we develop greater dependency on mobile devices, the bad guys will start
using unsigned applications on mobile devices to commit the next generation of cyber
crimes and we need to look at it now and build that into the phones that we
will start using in the near future.