Subject: Re: [PATCH 0/6] Intel Secure Guard Extensions
On May 2, 11:37am, "Austin S. Hemmelgarn" wrote:
} Subject: Re: [PATCH 0/6] Intel Secure Guard Extensions

Good morning, I hope the day is starting out well for everyone.

> On 2016-04-29 16:17, Jarkko Sakkinen wrote:
> > On Tue, Apr 26, 2016 at 09:00:10PM +0200, Pavel Machek wrote:
> >> On Mon 2016-04-25 20:34:07, Jarkko Sakkinen wrote:
> >>> Intel(R) SGX is a set of CPU instructions that can be used by
> >>> applications to set aside private regions of code and data. The code
> >>> outside the enclave is disallowed to access the memory inside the
> >>> enclave by the CPU access control.
> >>>
> >>> The firmware uses PRMRR registers to reserve an area of physical memory
> >>> called Enclave Page Cache (EPC). There is a hardware unit in the
> >>> processor called Memory Encryption Engine. The MEE encrypts and decrypts
> >>> the EPC pages as they enter and leave the processor package.
> >>
> >> What are non-evil use cases for this?
> >
> > I'm not sure what you mean by non-evil.

> I would think that this should be pretty straightforward. Pretty
> much every security technology integrated in every computer in
> existence has the potential to be used by malware for various
> purposes. Based on a cursory look at SGX, it is pretty easy to
> figure out how to use this to hide arbitrary code from virus
> scanners and the OS itself unless you have some way to force
> everything to be a debug enclave, which entirely defeats the stated
> purpose of the extensions. I can see this being useful for tight
> embedded systems. On a desktop which I have full control of
> physical access to though, it's something I'd immediately turn off,
> because the risk of misuse is so significant (I've done so on my new
> Thinkpad L560 too, although that's mostly because Linux doesn't
> support it yet).

We were somewhat surprised to see Intel announce the SGX driver for
Linux without a bit more community preparation, given the nature of
the technology; then again, given the history of opacity around it,
perhaps we shouldn't have been. We thought it might be useful to offer
a few thoughts on this technology as discussion around integrating the
driver moves forward.

We have been following and analyzing this technology since the first
HASP paper was published detailing its development. We have been
working to integrate, at least at the simulator level, portions of
this technology in solutions we deliver. We have just recently begun
to acquire validated reference platforms to test these
implementations.

I told my associates the first time I reviewed this technology that
SGX has the ability to be a bit of a Pandora's box and it seems to be
following that course.

SGX belongs to a genre of solutions collectively known as Trusted
Execution Environments (TEEs). The intent of these platforms is to
support data and application confidentiality and integrity in the face
of an Iago threat environment, i.e. a situation where a security
aggressor has complete control of the hardware and operating system,
up to and including the OS 'lying' to the application about what it is
doing on the application's behalf.
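
For those unfamiliar with the term, the small sketch below illustrates
the defensive posture an Iago threat model forces on trusted code: the
kernel may 'lie' about where it mapped a buffer, so the trusted side
has to validate every pointer the OS hands back. The function and
parameter names are purely illustrative and not taken from any SDK.

    /*
     * Hedged sketch: why trusted code cannot take OS-supplied values
     * at face value under an Iago threat model.  The classic example
     * is a kernel that returns a buffer pointer overlapping the
     * enclave itself, tricking the trusted side into overwriting its
     * own stack or heap.  The enclave base/size parameters are
     * hypothetical values the trusted runtime knows about itself.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Reject any OS-supplied buffer that overlaps [base, base+size). */
    static bool buffer_is_outside_region(const void *p, size_t len,
                                         uintptr_t base, size_t size)
    {
            uintptr_t lo = (uintptr_t)p;
            uintptr_t hi = lo + len;

            if (hi < lo)                  /* length wraps the address space */
                    return false;
            return hi <= base || lo >= base + size;
    }

    int consume_untrusted_buffer(void *buf, size_t len,
                                 uintptr_t enclave_base, size_t enclave_size)
    {
            if (!buffer_is_outside_region(buf, len, enclave_base,
                                          enclave_size))
                    return -1;            /* refuse to act on the lie */
            /* ... only now is it safe to copy results out to buf ... */
            return 0;
    }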

There are those, including us, who question the quality of the
security guarantee that can be provided, but that doesn't diminish the
usefulness of, or demand for, such technology. If one buys the notion
that all IT delivery will move into the 'cloud', there is certainly a
rationale for a guarantee that clients can push data into a cloud
without concern for whether the platform is compromised or being used
to spy on the user's application or data.

As is the case with any security technology, the only way that such a
guarantee can be made is to have a definable origin or root of trust.
At the current time, and this may be the biggest problem with SGX, the
only origin for that root of trust is Intel itself. Given the nature
and design of SGX this is actually a bilateral root of trust since
Intel, by signing a developer's enclave key, is trusting the developer
to agree to do nothing nefarious while being shrouded by the security
guarantee that SGX provides.

It would be helpful and instructive for anyone involved in this debate
to review the following URL, which details Intel's SGX licensing
program:

https://software.intel.com/en-us/articles/intel-sgx-product-licensing

That page details what a developer is required to do in order to
obtain an enclave signing key which will be recognized by an SGX
capable processor. Without a valid signing key an SGX capable system
will only launch an enclave in 'debug' mode, which allows the enclave
to be single stepped and examined in a debugger and obviously
invalidates any TEE based security guarantees which SGX is designed to
effect.
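
As a rough illustration of where that debug/production distinction
lives architecturally, the fragment below checks the DEBUG bit in an
enclave's ATTRIBUTES word; the structure is a simplified placeholder,
not the full SECS or SIGSTRUCT layout.

    /*
     * Hedged sketch: the debug/production distinction is carried in
     * the enclave's ATTRIBUTES field (bit 1 = DEBUG in the SGX
     * architecture).  A debug enclave can be single stepped and its
     * memory read with EDBGRD, so none of the confidentiality
     * guarantees hold for it.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define SGX_ATTR_DEBUG  (1ULL << 1)   /* ATTRIBUTES.DEBUG */

    struct enclave_attributes {           /* illustrative subset only */
            uint64_t flags;
            uint64_t xfrm;
    };

    /* True if the enclave was launched in debug mode and therefore
     * offers no TEE guarantee against an inspecting OS or debugger. */
    static bool enclave_is_debug(const struct enclave_attributes *attr)
    {
            return attr->flags & SGX_ATTR_DEBUG;
    }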

Intel is obviously cognizant of the risk surrounding illicit uses of
this technology, since the agreement clearly calls out that, by having
their key signed, a developer agrees not to implement nefarious or
privacy-invasive software. Given the known issues that Certificate
Authorities have with validating certificate recipients, the security
guarantee inherent in such an agreement is questionable.

So, giving Intel the benefit of the doubt, the licensing issues
surrounding SGX are probably more about cognizance of the security
risk associated with the technology than a quest for world domination
and control. They probably have enough on their hands with attempting
to convert humanity to FPGAs and away from devices which are capable
of maintaining a context of execution... :-)

The much bigger debate is about the utility of the security guarantee
inherent in SGX. Anyone who is interested in understanding all of the
issues surrounding this technology would do well to start by reading
the Haven paper in which Microsoft Research discussed how SGX could be
used to run unmodified Windows applications within an SGX TEE.

I think Intel was somewhat sobered by the follow-on paper in which
Microsoft demonstrated that, in an Iago environment, an interloper was
capable of determining with accuracy levels greater than 60% what was
being done in an SGX TEE. Matt Hoekstra was very quick to call out
the need for the community to understand and develop side channel
remediation strategies; anyone familiar with the field would have
anticipated side channels to be the primary threat to attempting to
induct security into a completely insecure environment.
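
To make the side channel point concrete, the fragment below is the
basic FLUSH+RELOAD timing primitive that this class of attack builds
on. It assumes an x86 target with clflush and rdtscp, the latency
threshold is left to the reader, and it is only meant to show how
little an adversary needs beyond control of the caches to observe what
an enclave touches.

    /*
     * Hedged sketch: a bare-bones FLUSH+RELOAD timing probe, the
     * primitive behind the cache side channels discussed above.
     * A "fast" reload after a flush means some other party, e.g.
     * code running inside an enclave, touched the line in between.
     */
    #include <stdint.h>
    #include <x86intrin.h>

    /* Evict one cache line and make sure the flush has completed. */
    static void flush_line(const volatile uint8_t *line)
    {
            _mm_clflush((const void *)line);
            _mm_mfence();
    }

    /* Time a single access to the line with rdtscp. */
    static uint64_t probe_latency(const volatile uint8_t *line)
    {
            unsigned int aux;
            uint64_t start, end;

            start = __rdtscp(&aux);
            (void)*line;                  /* the timed access */
            end = __rdtscp(&aux);
            return end - start;
    }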

One of the participants in the thread suggested that one of the
'non-evil' uses of this technology would be to run virtual machines
and, by extension, containers. It is pretty obvious at this point that
Intel has shifted to advocating limiting the scope of SGX to providing
protection to isolated applications and data, think password managers,
rather than as a wholesale execution shroud. That is the role we have
leveraged SGX for in our security supervisor.

At the end of the day it is very, very difficult to give up complete
platform control and still maintain integrity and confidentiality
guarantees about an application and the data it is operating on. With
its recent re-organization Intel is obviously going to be heavily
focused on the cloud, and the notion of TEEs is seductive,
particularly for data for which there is no ex post facto redress in
the face of compromise.

I think the only way forward to make all of this palatable is to
embrace something similar to what has been done with Secure Boot. The
Root Enclave Key will need to be something which can be reconfigured
by the platform owner through the BIOS/EFI firmware. That model would
take Intel off the hook from a security perspective and establish
platform trust as a bilateral relationship between a service provider
and client.
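
For what it is worth, the hardware hooks for such a model appear to
exist in the form of the IA32_SGXLEPUBKEYHASH MSRs, which hold the
hash of the key permitted to sign launch enclaves. The sketch below
simply dumps them through the standard Linux msr driver to show where
an owner-controlled launch key hash would live; whether firmware
leaves those registers writable to the platform owner is, of course,
entirely platform dependent.

    /*
     * Hedged sketch: read the IA32_SGXLEPUBKEYHASH0..3 MSRs
     * (0x8c-0x8f), which hold the SHA-256 hash of the key allowed to
     * sign launch enclaves.  If firmware leaves them writable, the
     * platform owner rather than Intel controls the launch root.
     * Requires root and the msr driver (modprobe msr).
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_SGXLEPUBKEYHASH0 0x8c

    int main(void)
    {
            int fd = open("/dev/cpu/0/msr", O_RDONLY);
            uint64_t hash[4];
            int i;

            if (fd < 0) {
                    perror("open /dev/cpu/0/msr");
                    return 1;
            }
            for (i = 0; i < 4; i++) {
                    if (pread(fd, &hash[i], sizeof(hash[i]),
                              IA32_SGXLEPUBKEYHASH0 + i) != sizeof(hash[i])) {
                            perror("pread");
                            close(fd);
                            return 1;
                    }
            }
            printf("launch key hash: %016llx%016llx%016llx%016llx\n",
                   (unsigned long long)hash[3], (unsigned long long)hash[2],
                   (unsigned long long)hash[1], (unsigned long long)hash[0]);
            close(fd);
            return 0;
    }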

This note is probably a study in TL;DR but this technology is not
inherently evil nor a ubiquitous security solution. It is simply
another arrow in the quiver and as Satya Nadella from Microsoft has
pointed out, the mess we have gotten ourselves in is not going to be
amenable to a simple or single fix.

In the TL;DR department I would highly recommend that anyone
interested in all of this read MIT's 170+ page review of the
technology before jumping to any conclusions.... :-)

Best wishes for a productive week.

Greg

}-- End of excerpt from "Austin S. Hemmelgarn"

As always,
Dr. G.W. Wettstein, Ph.D. Enjellic Systems Development, LLC.
4206 N. 19th Ave. Specializing in information infra-structure
Fargo, ND 58102 development.
PH: 701-281-1686
FAX: 701-281-3949 EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"If you plugged up your nose and mouth right before you sneezed, would
the sneeze go out your ears or would your head explode? Either way I'm
afraid to try."
-- Nick Kean
