US20010018675A1 - Method and apparatus for compliance checking in a trust-management system - Google Patents

Method and apparatus for compliance checking in a trust-management system

Info

Publication number
US20010018675A1
Authority
US
United States
Prior art keywords
assertion
policy
request
credential
acceptance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/780,892
Inventor
Matthew Blaze
Joan Feigenbaum
Martin Strauss
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US09/780,892
Publication of US20010018675A1
Status: Abandoned


Classifications

    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/06: Asset management; Financial planning or analysis
    • G06F2211/009: Trust

Abstract

A method and apparatus are provided for compliance checking in a trust-management system. A request r, a policy assertion (ƒ0, POLICY), and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. Each assertion may be monotonic, authentic, and locally bounded. An acceptance record set S is initialized to {(Λ, Λ, R)}, where Λ represents a distinguished null string, and R represents the request r. Each assertion (ƒi, si), where i represents the integers from n−1 to 0, is run and the result is added to the acceptance record set S. This is repeated mn times, where m represents a number greater than 1, and an acceptance is output if any of the results in the acceptance record set S comprise an acceptance record (0, POLICY, R).

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present application claims the benefit of U.S. provisional patent application Ser. No. 60/074,848 entitled “Compliance Checking in the PolicyMaker Trust Management System” to Matthew A. Blaze, Joan Feigenbaum and Martin J. Strauss and filed on Feb. 17, 1998. [0001]
  • FIELD OF THE INVENTION
  • The invention relates to trust-management systems. More particularly, the invention relates to a method and apparatus for compliance checking in a trust-management system. [0002]
  • COPYRIGHT NOTICE
  • A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. [0003]
  • BACKGROUND OF THE INVENTION
  • Emerging electronic commerce services that use public-key cryptography on a mass-market scale require sophisticated mechanisms for managing trust. For example, a service that receives a signed request for action may need to answer a basic question: “is the key used to sign this request authorized to take this action?” In some services, the question may be more complicated, requiring techniques for formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Matt Blaze, Joan Feigenbaum and Jack Lacy, “Decentralized Trust Management,” Proc. IEEE Conference on Security and Privacy (May 1996) (hereinafter “Blaze, Feigenbaum and Lacy”), the entire contents of which is hereby incorporated by reference, discloses such a trust-management problem as a component of network services and describes a general tool for addressing it, the “PolicyMaker” trust-management system. [0004]
  • As will be explained, the heart of the trust-management system is an algorithm for compliance checking. The inputs to the compliance checker are a “request,” a “policy” and a set of “credentials.” The compliance checker returns a “yes” (acceptance) or a “no” (rejection), depending on whether the credentials constitute a proof that the request complies with the policy. Thus, a central challenge in trust management is to find an appropriate notion of “proof” and an efficient algorithm for checking proofs of compliance. [0005]
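  • To make this interface concrete, the following minimal Python sketch (illustrative only; the names Record, Assertion, and naive_checker, and the use of None for the null string, are conveniences rather than anything defined by the patent) shows the checker's inputs and its yes/no output, using the acceptance-record representation defined later in the detailed description. It runs each assertion only once, which is weaker than the iterated algorithm described there.

    from typing import Callable, Set, Tuple

    # An acceptance record (i, s_i, R_ij): assertion number, source ID, action string.
    Record = Tuple[object, object, str]
    # Abstractly, an assertion maps an acceptance set to an acceptance set.
    Assertion = Callable[[Set[Record]], Set[Record]]

    def naive_checker(request_string: str, policy: Assertion,
                      credentials: list) -> bool:
        # The initial acceptance set {(None, None, R)} stands in for {(Λ, Λ, R)}.
        records: Set[Record] = {(None, None, request_string)}
        for f in credentials + [policy]:
            records |= f(records)      # agglomerate each assertion's output
        # "Yes" (acceptance) iff the policy approved the original action string.
        return (0, "POLICY", request_string) in records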
  • Unfortunately, the compliance-checking problem may be mathematically undecidable in its most general form. Moreover, the compliance-checking problem is still non-deterministic polynomial time (NP) hard even when restricted in several natural ways. [0006]
  • Blaze, Feigenbaum and Lacy discloses the trust-management problem as a distinct and important component of security in network services. Aspects of the trust-management problem include formulation of policies and credentials, deferral of trust to third parties, and a mechanism for “proving” that a request, supported by one or more credentials, complies with a policy. A comprehensive approach to trust management independent of the needs of any particular product or service is disclosed along with a trust-management system that embodies the approach. [0007]
  • In particular, the PolicyMaker system comprises policies, credentials, and trust relationships that are expressed as functions or programs (or parts of programs) in a “safe” programming language. A common language for policies, credentials, and relationships makes it possible for applications to handle security in a comprehensive, consistent, and largely transparent manner. [0008]
  • The PolicyMaker system is also expressive enough to support the complex trust relationships that can occur in large-scale network applications. At the same time, simple and standard policies, credentials, and relationships can be expressed succinctly and comprehensibly. [0009]
  • The PolicyMaker system provides local control of trust relationships. Each party in the network can decide in each transaction whether to accept the credential presented by a second party or, alternatively, which third party it should ask for additional credentials. Local control of trust relationships, as opposed to a top-down centralized approach, eliminates the need for the assumption of a globally known, monolithic hierarchy of “certifying authorities.” Such hierarchies do not scale easily beyond single “communities of interest” in which trust can be defined unconditionally from the top down. [0010]
  • The PolicyMaker mechanism for checking that a set of credentials proves that a requested action complies with local policy does not depend on the semantics of the application-specific request, credentials or policy. This allows different applications with varying policy requirements to share a credential base and a trust-management infrastructure. [0011]
  • Three examples of application-specific requests, and local policies with which they may need to comply, will now be described. Although individually the examples are of limited complexity, collectively they demonstrate that an expressive, flexible notion of “proof of compliance” is needed. [0012]
  • As a first example, consider an e-mail system in which messages arrive with headers that include, among other things, the sender's name, the sender's public key, and a digital signature. When a recipient's e-mail reader processes an incoming message, it uses the public key to verify that the message and the signature go together (i.e., an adversary has not spliced a signature from another message onto this message). The recipient may also be concerned about whether the name and public key go together. In other words, could an adversary have taken a legitimate message-signature pair that he produced with his own signing key and then attached to it his public key and someone else's name? To address this concern, the recipient needs a policy that determines which name-key pairs are trustworthy. Because signed messages may regularly arrive from senders that he has never met, a simple private database of name-key pairs may not be sufficient. By way of example, a plausible policy might include the following: [0013]
  • (1) He maintains private copies of the name-key pairs (N1, PK1) and (N2, PK2). A reasonable interpretation of this part of the policy is that he knows the people named N1 and N2 personally and can get reliable copies of the public keys directly from them. [0014]
  • (2) He accepts “chains of trust” of length one or two. An arc in a chain of trust is a “certificate” of the form (PKi, (Nj, PKj), S). This is interpreted to mean that the owner Ni of PKi vouches for the binding between the name Nj and the public key PKj. This can also mean that Ni attests that Nj is trusted to provide certificates of this form. The party Ni signs (Nj, PKj) with his private key, producing the signature S. [0015]
  • (3) He insists that there be two disjoint chains of trust from the keys that he maintains privately to the name-key pair that arrives with a signed message. [0016]
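  • As a rough illustration, the certificate format of part (2) and the disjoint-chains test of part (3) could be coded as follows. This is a sketch with invented names (Certificate, chains_to, complies); real code would also verify each signature S against the issuer's key, which is omitted here.

    from dataclasses import dataclass
    from itertools import combinations

    @dataclass(frozen=True)
    class Certificate:
        """An arc (PK_i, (N_j, PK_j), S): issuer key, name-key binding, signature."""
        issuer_key: str
        subject_name: str
        subject_key: str
        signature: bytes

    def chains_to(name, key, trusted_keys, certs):
        """Enumerate chains of length one or two from the privately held
        keys of part (1) to the name-key pair attached to the message."""
        chains = []
        for c in certs:
            if (c.subject_name, c.subject_key) != (name, key):
                continue
            if c.issuer_key in trusted_keys:
                chains.append((c,))                   # chain of length one
            for c0 in certs:                          # chain of length two
                if c0.subject_key == c.issuer_key and c0.issuer_key in trusted_keys:
                    chains.append((c0, c))
        return chains

    def complies(name, key, trusted_keys, certs):
        # Part (3): there must be two chains with no certificate in common.
        return any(set(a).isdisjoint(b)
                   for a, b in combinations(chains_to(name, key, trusted_keys, certs), 2))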
  • As a second example, consider a loan request submitted to an electronic banking system. Such a request might contain, among other things, the name of the requester and the amount requested. A plausible policy for approval of such loans might take the following form: [0017]
  • (1) Two approvals are needed for loans of less than $5,000. Three approvals are needed for loans of between $5,000 and $10,000. Loans of more than $10,000 are not handled by this automated loan-processing system. [0018]
  • (2) The head of the loan division must authorize approvers' public keys. The division head's public key is currently PK3. This key expires on Dec. 31, 1998. [0019]
  • As a third example, consider a typical request for action in a web-browsing system, such as “view URL http://www.research.att.com/.” In constructing a viewing policy, a user may decide what type of metadata, or labels, she wants documents to have before viewing them, and whom she trusts to label documents. The user may insist that documents be rated (S≦2, L≦2, V=0, N≦2) on the sex (S), language (L), violence (V) and nudity (N) scales, respectively. She may trust self-labeling by some companies or any labels approved by certain companies. [0020]
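  • Such a viewing policy reduces to a simple predicate over a document's label, sketched below (the dictionary keys are invented for illustration; a real policy would also check who issued or approved the label):

    def label_acceptable(label: dict) -> bool:
        # Require ratings (S <= 2, L <= 2, V = 0, N <= 2); reject missing scales.
        try:
            return (label["S"] <= 2 and label["L"] <= 2
                    and label["V"] == 0 and label["N"] <= 2)
        except KeyError:
            return False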
  • Previous work on “protection systems” is loosely related to the concept of a trust-management system. Recent work that is similarly related to the present invention can be found in, for example, T. Y. C. Woo and S. S. Lam, “Authorization in Distributed Systems: A New Approach,” Journal of Computer Security 2, pp. 107-36 (1993). In addition, protection systems, as described by D. Denning, Cryptography and Data Security, Addison-Wesley, Reading (1982), address a similar, but not identical, problem. [0021]
  • M. A. Harrison, W. L. Ruzzo and J. D. Ullman, “Protection in Operating Systems,” Communications of the ACM 19, pp. 461-71 (1976) analyze a general protection system based on the “access matrix” model. In a matrix A, indexed by subjects and objects, cell A[s, o] records the rights of subject s over the object o; a set of transition rules describes the rights needed as preconditions to modify A and the specific ways in which A can be modified, by creating subjects and objects or by entering or deleting rights at a single cell. Harrison et al. showed that given (1) an initial state A0; (2) a set Δ of transition rules; and (3) a right r, it is undecidable whether some sequence δi1, . . . , δit of rules in Δ transforms A0 such that δit enters r into a cell not previously containing r, i.e., whether it is possible for some subject, not having right r over some object, ever to gain that right. On the other hand, Harrison et al. identify several possible restrictions on Δ and give decision algorithms for input subject to one of these restrictions. One restriction they consider yields a PSPACE-complete problem. [0022]
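  • The access-matrix model is easy to sketch in code. The command below is an invented example in the Harrison et al. style (not one from their paper), showing how a transition rule tests rights as preconditions before entering a new right into a cell:

    from collections import defaultdict

    # matrix[(s, o)] is the set of rights subject s holds over object o.
    matrix = defaultdict(set)

    def confer_right(actor, subject, obj, right):
        # Example transition rule: an actor holding "own" over obj may enter
        # `right` for `subject` into the cell A[subject, obj].
        if "own" in matrix[(actor, obj)]:
            matrix[(subject, obj)].add(right)

    matrix[("alice", "file1")].add("own")
    confer_right("alice", "bob", "file1", "r")   # bob gains read access
    assert "r" in matrix[("bob", "file1")]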
  • Independently, A. K. Jones, R. J. Lipton and L. Snyder, “A Linear Time Algorithm for Deciding Security,” Proceedings of the Symposium on Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos, pp. 33-41 (1976) define and analyze “take-grant” directed-graph systems. Subjects and objects are nodes; an arc a from node n1 to n2 is labeled by the set of rights n1 has over n2. If subject n1 has the “take” right over n2, and n2 has some right r over n3, then a legal transition is for n1 to take right r over n3. Similarly, if the subject n1 has the “grant” right over n2, and n1 has some right r over n3, then a legal transition is for n1 to grant right r over n3 to n2. Besides these transitions, subjects can create new nodes and remove their own rights over their immediate successors. Although rights are constrained to flow only via take-grant paths, take-grant systems do model nontrivial applications. [0023]
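  • The two take-grant transition rules translate directly into code (a toy sketch; node creation and removal of rights are omitted):

    from collections import defaultdict

    # rights[(a, b)] is the set of rights node a holds over node b.
    rights = defaultdict(set)

    def take(n1, n2, n3, r):
        # If n1 has "take" over n2 and n2 has r over n3, n1 may take r over n3.
        if "take" in rights[(n1, n2)] and r in rights[(n2, n3)]:
            rights[(n1, n3)].add(r)

    def grant(n1, n2, n3, r):
        # If n1 has "grant" over n2 and n1 has r over n3, n1 may grant r over n3 to n2.
        if "grant" in rights[(n1, n2)] and r in rights[(n1, n3)]:
            rights[(n2, n3)].add(r)

    rights[("n1", "n2")].add("grant")
    rights[("n1", "n3")].add("r")
    grant("n1", "n2", "n3", "r")                 # n2 acquires r over n3
    assert "r" in rights[("n2", "n3")]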
  • Jones et al. asked whether a right r over a node x possessed by n1, but not possessed by n2, could ever be acquired by n2. They showed that this question can be decided in time linear in the size of the original graph by depth-first search. Thus, Denning concludes that although safety in protection systems is usually undecidable, the results in, for example, Jones et al. demonstrate that safety can be decided feasibly in systems with sets of transition rules from a restricted though non-trivial set. The related results on compliance-checking described herein provide additional support for Denning's conclusion. [0024]
  • Having reviewed the basics of “protection systems,” it can be seen why they address a similar but not identical problem to the one addressed by the compliance-checking algorithm described herein. In the protection system world, there is a relatively small set of potentially dangerous actions that could ever be performed, and this set is agreed upon in advance by all parties involved. A data structure, such as an access matrix, records which parties are allowed to take which actions. This data structure is pre-computed offline, and, as requests for action arrive, their legitimacy is decided via a lookup operation in this data structure. “Transition rules” that change the data structure are applied infrequently, and they are implemented by a different mechanism and in a separate system module from the ones that handle individual requests for action. [0025]
  • In the trust-management system world, the set of potentially dangerous actions is large, dynamic, and not known in advance. A system provides a general notion of “proof of compliance” for use by diverse applications that require trust policies. The users of these applications and the semantics of their actions and policies are not even known to the compliance-checking algorithm; hence it is not possible for all parties to agree in advance on a domain of discourse for all potentially dangerous actions. The compliance-checking question “is request r authorized by policy P and credential set C?” is analogous to the question “can subject S eventually obtain right r by transition rules Δ” in the protection system world. However, a single instance of request processing, especially one that involves deferral of trust, can require a moderately complex computation and not just a lookup in a pre-computed data structure. Accordingly, an embodiment of the present invention formalizes the complexity of a general-purpose, working system for processing requests of this nature. In summary, a general purpose trust-management system is, very roughly speaking, a meta-system in the protection system framework. [0026]
  • In addition, an application-independent notion of compliance checking can be useful and can enhance security. Any product or service that requires proof that a requested transaction complies with a policy could implement a special-purpose compliance checker from scratch. One important advantage of a general purpose compliance checker is the soundness and reliability of both the design and the implementation of the compliance checker. Formalizing the notion of “credentials proving that a request complies with a policy” involves subtlety and detail. It is easy to get wrong, and an application developer who sets out to implement something simple to avoid an “overly complicated” syntax of a general-purpose compliance checker is likely to find that: (1) she has underestimated the complexity of the application's needs for expressiveness and proof or (2) her special-purpose compliance checker is not turning out so simple. [0027]
  • A general-purpose notion of proof of compliance can be explained, formalized, proven correct, and implemented in a standard package, to free developers of individual applications from the need to reinvent the system. Applications that use a standard compliance checker can be assured that the answer returned for any given input (such as a request, a policy, and a set of credentials) depends on the input, and not on any implicit policy decisions (or bugs) in the design or implementation of the compliance checker. As policies and credentials become more diverse and complex, the issue of assuring correctness will become even more important, and modularity of function (with a clean separation between the role of the application and the role of the compliance checker) will make further development more manageable. [0028]
  • Two important sources of complexity that are often underestimated are delegation and cryptography. Products and services that need a notion of “credential” almost always have some notion of “delegation” of the authority to issue credentials. The simplest case, unconditional delegation, is easily handled by a special-purpose mechanism. However, if the product or service grows in popularity and starts to be used in ways that were not foreseen when originally deployed, delegation can quickly become more complex, and a special-purpose language that restricts the types of conditional delegation that the service can use may become an impediment to widespread and imaginative use. [0029]
  • The general framework for compliance checking avoids this by letting delegation be described by ordinary programs. Similarly, an application such as a web browser may be designed to accommodate “safe surfing” policies configurable by parents but may not initially involve cryptographic functions. If the application is subsequently integrated into the wider world of electronic commerce, however, cryptography may be desired and cryptographic credentials, such as public-key certificates, may need to be incorporated into the application's notion of proof of compliance. If the application already uses a general-purpose notion of proof of compliance, this can be done without having to rethink and re-code the compliance-checker. [0030]
  • In addition, a general-purpose compliance checker can facilitate inter-operability. Requests, policies, and credentials, if originally written in the native language of a specific product or service, must be translated into a standard format understood by the compliance checker. Because a wide variety of applications will each have translators with the same target language, policies and credentials originally written for one application can be used by another. The fact that the compliance checker can serve as a locus of inter-operability may prove particularly useful in e-commerce applications and, more generally, in all settings in which public-key certificates are needed. [0031]
  • Another possible problem with a compliance-checking algorithm is the possibility of self-referencing assertions. For example, a digitally signed assertion by party A might represent “I approve this request if, and only if, party B approves this request” while an assertion by party B represents “I approve this request if, and only if, party A approves this request.” Although this request should perhaps be approved, a compliance-checking algorithm may not recognize this fact. [0032]
  • In view of the foregoing, it can be appreciated that a substantial need exists for a method, solvable in polynomial time and widely applicable, that checks the compliance of a request with a policy assertion based on credential assertions and solves the other problems discussed above. [0033]
  • SUMMARY OF THE INVENTION
  • The disadvantages of the art are alleviated to a great extent by a method and apparatus for compliance checking in a trust-management system. A request r, a policy assertion (ƒ0, POLICY), and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. Each assertion may be monotonic, authentic, and locally bounded. An acceptance record set S is initialized to the set {(Λ, Λ, R)}, where Λ represents an empty portion of the acceptance record, and R represents the request r. Each assertion (ƒi, si), where i represents the integers from n−1 to 0, is run and the result is added to the acceptance record set S. This is repeated mn times, where m represents a number greater than 1, and an acceptance is output if any of the results in the acceptance record set S comprise an acceptance record (0, POLICY, R). [0034]
  • With these and other advantages and features of the invention that will become hereinafter apparent, the nature of the invention may be more clearly understood by reference to the following detailed description of the invention, the appended claims and to the several drawings attached herein. [0035]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow diagram of a method of compliance checking for a trust-management system according to an embodiment of the present invention. [0036]
  • FIG. 2 is a block diagram of a compliance checker for a trust-management system according to an embodiment of the present invention. [0037]
  • DETAILED DESCRIPTION
  • The present invention is directed to a method and apparatus for compliance checking in a trust-management system. A general problem addressed by an embodiment of the present invention is Proof of Compliance (POC). The question is whether a “request” r complies with a “policy.” The policy is simply a function ƒ0 encoded in a programming system or language and labeled by, for example, a keyword such as “POLICY.” In addition to the request and the policy, a POC instance contains a set of “credentials,” which also include general functions. Policies and credentials are collectively referred to as “assertions.” [0038]
  • Credentials are issued by “sources.” Formally, a credential is a pair (ƒi, si) of function ƒi and source identifier (ID) si, which may be a string over some appropriate alphabet Π. Some examples of source IDs include public keys of credential issuers, URLs, names of people, and names of companies. In one embodiment of the present invention, with the exception of the keyword POLICY, the interpretation of source-IDs is part of the application-specific semantics of an assertion, and it is not the job of the compliance checker. From the compliance checker's point of view, the source-IDs are just strings, and the assertions encode a set of, possibly indirect and possibly conditional, trust relationships among the issuing sources. Associating each assertion with the correct source-ID is, according to this embodiment, the responsibility of the calling application and takes place before the POC instance is handed to the compliance checker. [0039]
  • The request r may be a string encoding an “action” for which the calling application seeks a proof of compliance. In the course of deciding whether the credentials (ƒ1, s1), . . . , (ƒn-1, sn-1) constitute a proof that r complies with the policy (ƒ0, POLICY), the compliance checker's domain of discourse may need to include other action strings. A request r may include, for example, a request to access or copy a data object, or to play a data object that contains, for example, audio content. [0040]
  • For example, if POLICY requires that r be approved by credential issuers s1 and s2, the credentials (ƒ1, s1) and (ƒ2, s2) may want a way to say that they approve r “conditionally,” where the condition is that the other credential also approve it. A convenient way to formalize this is to use strings R, R1 and R2 over some finite alphabet Σ. The string R corresponds to the requested action r. The strings R1 and R2 encode conditional versions of R that might be approved by s1 and s2 as intermediate results of the compliance-checking procedure. [0041]
  • More generally, for each request r and each assertion (ƒi, si), there is a set {Rij} of “action strings” that might arise in a compliance check. By convention, there is a distinguished string R that corresponds to the input request r. The range of assertion (ƒi, si) is made up of “acceptance records” of the form (i, si, Rij), the meaning of which is that, based on the information at its disposal, assertion number i, issued by source si, approves action Rij. A set of acceptance records is referred to as an “acceptance set.” It is by maintaining acceptance sets and making them available to assertions that the compliance checker manages “inter-assertion communication,” giving assertions the chance to make decisions based on conditional decisions by other assertions. The compliance checker starts with an “initial acceptance set” {(Λ, Λ, R)}, in which the one acceptance record means that the action string for which approval is sought is R and that no assertions have yet signed off on it or anything else. The checker runs the assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1) that it has received as input, not necessarily in that order and not necessarily once each, to determine which acceptance records are produced. Ultimately, the compliance checker approves the request r if the acceptance record (0, POLICY, R), which means “policy approves the initial action string,” is produced. Note that the use of the string “POLICY” herein is by way of example only, and any other information may of course be used instead. [0042]
  • Thus, abstractly, an assertion is a mapping from acceptance sets to acceptance sets. Assertion (ƒi, si) looks at an acceptance set A encoding the actions that have been approved so far, and the numbers and sources of the assertions that approved them. Based on this information about what the sources it trusts have approved, (ƒi, si) outputs another acceptance set A′. [0043]
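  • Continuing the two-credential example above, the conditional approvals and the policy can be written as three such mappings. This is an illustrative sketch (the function names, source IDs, and the action strings R1 and R2 are invented); each function sees the acceptance set built so far and returns the records it is willing to add:

    R, R1, R2 = "R", "R1-conditional", "R2-conditional"

    def f1(acc):
        # Credential (f1, s1): always approve R1 conditionally; approve R
        # itself once s2's conditional approval R2 appears in the set.
        out = {(1, "s1", R1)}
        if (2, "s2", R2) in acc:
            out.add((1, "s1", R))
        return out

    def f2(acc):
        # Credential (f2, s2): symmetric to f1.
        out = {(2, "s2", R2)}
        if (1, "s1", R1) in acc:
            out.add((2, "s2", R))
        return out

    def f0(acc):
        # Policy: approve the original action string once both issuers have.
        if (1, "s1", R) in acc and (2, "s2", R) in acc:
            return {(0, "POLICY", R)}
        return set()

  • Run repeatedly, as the compliance checker does, these mappings first produce the two conditional records, then both approvals of R, and finally (0, POLICY, R); a single pass in a fixed order would not suffice.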
  • The most general version of the compliance-checking problem, or “proof of compliance,” is: given as input a request r and a set of assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1), is there a finite sequence i1, i2, . . . , it of indices such that each ij is in {0, 1, . . . , n−1}, but the ij's are not necessarily distinct and not necessarily exhaustive of {0, 1, . . . , n−1}, and such that: [0044]
  • (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}),
  • where R is the action string that corresponds to the request r? [0045]
  • This general version of the problem is mathematically undecidable. A compliance checker cannot even decide whether an arbitrary assertion (ƒi, si) halts when given an arbitrary acceptance set as input, much less whether some sequence containing (ƒi, si) produces the desired output. Therefore, various special cases of POC will now be described, including one that is both useful and computationally tractable. [0046]
  • The statement “{(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} contains a proof that r complies with POLICY” means that (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}) is a “yes-instance” of this unconstrained, most general form of POC. If F is a, possibly proper, subset of {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} that contains all of the assertions that actually appear in the sequence (ƒit, sit) ∘ . . . ∘ (ƒi1, si1), then “F contains a proof that r complies with POLICY.” [0047]
  • In order to obtain a useful restricted version of POC, various pieces of information may be added to the problem instances. Specifically, the instance (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}) may be augmented in one or more of the following ways. [0048]
  • Global Run Time Bound [0049]
  • An instance may contain an integer d such that a sequence of assertions (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof that r complies with POLICY if the total amount of time that the compliance checker needs to compute (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}) is O(N^d). Here N is the length of the original problem instance, i.e., the number of bits needed to encode r, (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1), and d in some standard fashion. [0050]
  • Local Run Time Bound [0051]
  • An instance may contain an integer c such that (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof that R complies with POLICY if each (ƒij, sij) runs in time O(N^c). Here N is the length of the actual acceptance set that is input to (ƒij, sij) when it is run by the compliance checker. Note that the length of the input fed to an individual assertion (ƒij, sij) in the course of checking a proof may be considerably bigger than the length of the original problem instance (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c), because the running of assertions (ƒi1, si1), . . . , (ƒij-1, sij-1) may have caused the creation of many new acceptance records. [0052]
  • Bounded Number of Assertions in a Proof [0053]
  • An instance may contain an integer l such that (ƒi1, si1), . . . , (ƒit, sit) is considered a valid proof if t ≦ l. [0054]
  • Bounded Output Set [0055]
  • An instance may contain integers m and S such that an assertion (ƒi, si) can be part of a valid proof that r complies with POLICY if there is a set Oi = {Ri1, . . . , Rim} of m action strings, such that (ƒi, si)(A) ⊆ Oi for any input set A, and the maximum size of an acceptance record (i, si, Rij) is S. Intuitively, for any user-supplied request r, the meaningful “domain of discourse” for assertion (ƒi, si) is of size at most m: there are at most m actions that it would make sense for (ƒi, si) to sign off on, no matter what the other assertions in the instance say about r. [0056]
  • Monotonicity [0057]
  • Other variants of POC are obtained by restricting attention to instances in which the assertions have the following property: (ƒi, si) is “monotonic” if, for all acceptance sets A and B, A ⊆ B → (ƒi, si)(A) ⊆ (ƒi, si)(B). Thus, if (ƒi, si) approves action Rij when given a certain set of “evidence” that Rij is ok, it will also approve Rij when given a super-set of that evidence; it does not have a notion of “negative evidence.” [0058]
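  • For example (an illustrative sketch in the same Python style as the earlier snippets), the first assertion below is monotonic, while the second violates the property by treating a record as negative evidence:

    def monotonic_f(acc):
        # More evidence can only add approvals.
        return {(3, "s3", "R")} if (1, "s1", "R1") in acc else set()

    def non_monotonic_f(acc):
        # Not monotonic: for A = {} and B = {(4, "s4", "VETO")}, A is a subset
        # of B, yet non_monotonic_f(A) is not a subset of non_monotonic_f(B).
        return set() if (4, "s4", "VETO") in acc else {(5, "s5", "R")}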
  • Any of the parameters l, m, and S that are present in a particular instance may be written in unary so that they play an analogous role to n, the number of assertions, in the calculation of the total size of the instance. The parameters d and c are exponents in a run time bound and hence may be written in binary. [0059]
  • Any subset of the parameters d, c, l, m, and S may be present in a POC instance, and each subset defines a POC variant. Including a global run time bound d makes the POC problem decidable, as does including parameters c and l. [0060]
  • In stating and proving results about the complexity of POC, the notion of a “promise problem,” as discussed in S. Even, A. Selman and Y. Yacobi, “The Complexity of Promise Problems with Applications to Public-Key Cryptography,” Information and Control 61, pp. 159-174 (1984), may be used. In a standard decision problem, a language L is defined by a predicate R in that x ∈ L ⇔ R(x). In a promise problem, there are two predicates, the promise Q and the property R. A machine M solves the promise problem (Q, R) if, for all inputs for which the promise holds, the machine M halts and accepts x if and only if the property holds. Formally, ∀x[Q(x) → [M halts on x and M(x) accepts ⇔ R(x)]]. Note that M's behavior is unconstrained on inputs that do not satisfy the promise, and each set of choices for the behavior of M on these inputs determines a different solution. Thus, predicates Q and R define a family of languages, namely all L such that L = L(M) for some M that solves (Q, R). [0061]
  • The class NPP consists of all promise problems with at least one solution in NP. A promise problem is NP-hard if it has at least one solution and all of its solutions are NP-hard. To prove that a promise problem (Q, R) is NP-hard, it suffices to start with an NP-hard language L and construct a reduction whose target instances all satisfy the promise Q and satisfy the property R if and only if they are images of strings in L. [0062]
  • The following are POC variants that can be shown to be NP-hard, which is generally interpreted to mean that they are computationally intractable in the worst case. [0063]
  • Locally Bounded Proof of Compliance (LBPOC) [0064]
  • In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers c, l, m, and S. The “promise” is that each (ƒi, si) runs in time O(N^c). On any input set that contains (Λ, Λ, R), where R is the action string corresponding to request r, for each (ƒi, si) there is a set Oi of at most m action strings such that (ƒi, si) only produces output from Oi, and S is the maximum size of an acceptance record (i, si, Rij), where Rij ∈ Oi. Finally, the “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that: [0065]
  • 1. Each i[0066] j is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1};
  • 2. t ≦ l; and [0067]
  • 3. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)})? [0068]
  • Globally Bounded Proof of Compliance (GBPOC) [0069]
  • In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and an integer d. The “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that: [0070]
  • 1. Each i[0071] j is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of{0, 1, . . . , n−1};
  • 2. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}), where R is the action string corresponding to request r; and [0072]
  • 3. The computation of (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}) runs in total time O(N^d)? [0073]
  • Monotonic Proof of Compliance (MPOC) [0074]
  • In this case, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers l and c. The “promise” is that each assertion (ƒi, si) is monotonic and runs in time O(N^c). The “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that: [0075]
  • 1. Each i[0076] j is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1};
  • 2. t ≦ l; and [0077]
  • 3. (0, POLICY, R) ∈ (ƒit, sit) ∘ . . . ∘ (ƒi1, si1)({(Λ, Λ, R)}), where R is the action string corresponding to request r? [0078]
  • Each version of POC may be defined using “agglomeration” (ƒ2, s2) ★ (ƒ1, s1) instead of composition (ƒ2, s2) ∘ (ƒ1, s1). The result of applying the sequence of assertions (ƒi1, si1), . . . , (ƒit, sit) agglomeratively to an acceptance set S0 is defined inductively as follows: S1 ≡ (ƒi1, si1)(S0) ∪ S0 and, for 2 ≦ j ≦ t, Sj ≡ (ƒij, sij)(Sj-1) ∪ Sj-1. Thus, for any acceptance set A, A ⊆ (ƒit, sit) ★ . . . ★ (ƒi1, si1)(A). The agglomerative versions of the decision problems are identical to the versions already given, except that the acceptance condition is “(0, POLICY, R) ∈ (ƒit, sit) ★ . . . ★ (ƒi1, si1)({(Λ, Λ, R)})?” As used herein, “agglomerative POC,” “agglomerative MPOC,” etc., refer to the version defined in terms of ★ instead of ∘. [0079]
  • A trust-management system that defines “proof of compliance” in terms of agglomeration can make it impossible for an assertion to “undo” an approval that it (or any other assertion) has already given to an action string during the course of constructing a proof. This definition of proof may make sense if the trust-management system should guard against a rogue credential-issuer's ability to thwart legitimate proofs. Note that the question of whether the compliance checker combines assertions using agglomeration or composition is separate from the question of whether the assertions themselves are monotonic. [0080]
  • A compliance-checking algorithm according to a preferred embodiment of the present invention will now be described. A specific case of a POC problem associated with this embodiment will be explained. The promise that defines this special case includes some conditions that have already been discussed, namely monotonicity and bounds on the run time of assertions and on the total size of acceptance sets that assertions can produce. According to one embodiment of the present invention, however, another condition is considered, called “authenticity,” which could be ignored when proving hardness results. An authentic assertion (ƒi, si) produces acceptance records of the form (i, si, Rij). That is, it does not “impersonate” another assertion by producing an acceptance record of the form (i′, si′, Ri′j), for i′ not equal to i, or si′ not equal to si. [0081]
  • An embodiment of the present invention constructs proofs in an agglomerative fashion, and hence ★ is used in the following problem statement. Note that a variant of POC could be defined using ∘ as well. [0082]
  • Locally Bounded, Monotonic, and Authentic Proof of Compliance (LBMAPOC) [0083]
  • According to this embodiment of the present invention, the “input” is a request r, a set {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} of assertions, and integers c, m, and S. The “promise” is that each (ƒi, si) is monotonic, authentic, and runs in time O(N^c). On any input set that contains (Λ, Λ, R), where R is the action string corresponding to request r, for each (ƒi, si) there is a set Oi of at most m action strings such that (ƒi, si) produces output from Oi. Moreover, S is the maximum size of an acceptance record (i, si, Rij), such that Rij ∈ Oi. Finally, the “question” can be stated as follows: is there a sequence i1, . . . , it of indices such that each ij is in {0, 1, . . . , n−1}, but the ij need not be distinct or collectively exhaustive of {0, 1, . . . , n−1}, and (0, POLICY, R) ∈ (ƒit, sit) ★ . . . ★ (ƒi1, si1)({(Λ, Λ, R)})? [0084]
  • Referring now in detail to the drawings wherein like parts are designated by like reference numerals throughout, there is illustrated in FIG. 1 a flow diagram of a method of compliance checking for a trust-management system according to an embodiment of the present invention. The flow chart in FIG. 1 is not meant to imply a fixed order to the steps; embodiments of the present invention can be practiced in any order that is practicable. At step 110, a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1) are received, each credential assertion comprising a credential function ƒi and a credential source si. In addition, an acceptance record set S is initialized to {(Λ, Λ, R)} at step 110, where Λ represents a distinguished “null string” and R represents the initial request, r. [0085]
  • At step 120, j is initialized to 1. At step 130, each assertion (ƒi, si), for integers i from 0 to n−1, is run and the result is added to the acceptance record set S. If j does not equal mn at step 140, where m is a number greater than 1, j is increased by 1 at step 150 and step 130 is repeated. [0086]
  • If j does equal mn at step 140, it is determined at step 160 whether acceptance set S contains an acceptance record, such as (0, POLICY, R). If not, a rejection is output at step 170. If acceptance set S does contain the acceptance record, an acceptance is output at step 180. [0087]
  • The following pseudo-code demonstrates the algorithm according to one embodiment of the present invention, referred to herein as the “Compliance-Checking Algorithm version 1” (CCA1): [0088]
  • CCA1(r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, m): [0089]
  • { [0090]
  • S←{(Λ, Λ, R)} [0091]
  • I←{} [0092]
  • For j←1 to mn [0093]
  • { [0094]
  • For i←n−1 to 0 [0095]
  • { [0096]
  • If (ƒi, si) ∉ I, Then S′←(ƒi, si)(S) [0097]
  • If IllFormed((ƒi, si)), Then I←I∪{(ƒi, si)}, Else S←S∪S′ [0098]
  • } [0099]
  • } [0100]
  • If (0, POLICY, R) ∈ S, Then Output(Accept), Else Output(Reject) [0101]
  • } [0102]
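  • For concreteness, a minimal runnable Python rendering of CCA1 follows (hypothetical names; the ill_formed predicate stands in for the per-simulation promise checks whose pseudo-code is omitted, as discussed below):

    NULL = ""  # stands in for the distinguished null string Λ

    def cca1(request_action, assertions, m, ill_formed=lambda a, s: False):
        # assertions[0] is (f0, POLICY); assertions[i] is (fi, si).
        n = len(assertions)
        S = {(NULL, NULL, request_action)}      # S <- {(Λ, Λ, R)}
        ignored = set()                          # I <- {}
        for _ in range(m * n):                   # For j <- 1 to mn
            for i in range(n - 1, -1, -1):       # For i <- n-1 to 0
                if i in ignored:
                    continue
                produced = assertions[i][0](frozenset(S))
                if ill_formed(assertions[i], produced):
                    ignored.add(i)               # discard ill-formed assertion
                else:
                    S |= set(produced)           # agglomerate: never delete
        policy_source = assertions[0][1]
        return "Accept" if (0, policy_source, request_action) in S else "Reject"

    # Example: a credential from source "s1" approves R, and the policy
    # approves R once that credential's acceptance record is present.
    R = "subject=alice action=read object=file1"
    f1 = lambda S: {(1, "s1", R)}
    f0 = lambda S: {(0, "POLICY", R)} if (1, "s1", R) in S else set()
    assert cca1(R, [(f0, "POLICY"), (f1, "s1")], m=2) == "Accept"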
  • Note that an assertion (ƒi, si) is “ill-formed” if it violates the promise. If CCA1 discovers that (ƒi, si) is ill-formed, the assertion is ignored for the remainder of the computation. An assertion (ƒi, si) may be undetectably ill-formed. For example, there may be sets A ⊆ B such that (ƒi, si)(A) ⊄ (ƒi, si)(B), but such that A and B do not arise in this run of the compliance checker. The CCA1 algorithm may check for violations of the promise every time it simulates an assertion. Detailed pseudo-code for these checks is not included in CCA1, because it would not illustrate the basic structure of the algorithm. Instead, the predicate IllFormed() indicates that the checks may be done for each simulation. [0103]
  • Like the non-deterministic algorithms discussed above, CCA1 accepts if and only if the acceptance record (0, POLICY, R) is produced when it simulates the input assertions. Unlike the previous algorithms, however, it cannot non-deterministically guess an order in which to do the simulation; instead, it uses an arbitrary order. CCA1 also ensures that, if a proper subset F of the input assertions contains a proof that R complies with POLICY and every (ƒi, si) ∈ F satisfies the promise, then the remaining assertions cannot destroy all or part of the acceptance records produced by F during the simulation (and thereby destroy the proof), even if these remaining assertions do not satisfy the promise. CCA1 achieves this by maintaining one set of approved acceptance records from which no records are ever deleted, i.e., by agglomerating, and by discarding assertions that it discovers are ill-formed. [0104]
  • Note that CCA1 does mn iterations of the sequence (ƒn-1, sn-1), . . . , (ƒ1, s1), (ƒ0, POLICY), for a total of mn2 assertion-simulations. Recall that a set F = {(ƒj1, sj1), . . . , (ƒjt, sjt)} ⊆ {(ƒ0, POLICY), . . . , (ƒn-1, sn-1)} “contains a proof that r complies with POLICY” if there is some sequence k1, . . . , ku of the indices j1, . . . , jt, not necessarily distinct and not necessarily exhaustive of j1, . . . , jt, such that (0, POLICY, R) ∈ (ƒku, sku) ★ . . . ★ (ƒk1, sk1) ({(Λ, Λ, R)}). [0105]
  • Let (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, s) be an agglomerative LBMAPOC instance. Then the following hold: [0106]
  • 1. Suppose that F ⊆ {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} contains a proof that R complies with POLICY and that every (ƒi, si) ∈ F satisfies the promise of LBMAPOC. Then CCA1 accepts (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, s). [0107]
  • 2. If {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)} does not contain a proof that R complies with POLICY, then CCA1 rejects (r, {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}, c, m, s). [0108]
  • 3. CCA1 runs in time O(mn2(nms)c). [0109]
  • The only non-trivial claim above is (1). Let F = {(ƒj1, sj1), . . . , (ƒjt, sjt)} be a set that satisfies the hypothesis of (1). Each assertion in F is monotonic, and, as CCA1 runs assertions agglomeratively, it never deletes acceptance records that have already been produced but rather just adds new ones. Therefore, it may be assumed without loss of generality that F contains all of the well-formed assertions in {(ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1)}. [0110]
  • Let k1, . . . , ku be a sequence of indices, each in {j1, . . . , jt}, but not necessarily distinct and not necessarily exhaustive of {j1, . . . , jt}, such that (0, POLICY, R) ∈ (ƒku, sku) ★ . . . ★ (ƒk1, sk1) ({(Λ, Λ, R)}). Assume without loss of generality that no sequence of length less than u has this property. Let A1, . . . , Au be the acceptance sets produced by applying (ƒk1, sk1), . . . , (ƒku, sku). Because k1, . . . , ku is a shortest sequence that proves compliance using assertions in F, each set Ap must contain at least one action string that is not present in any of A1, . . . , Ap-1. Thus, u iterations of (ƒ0, POLICY) ★ (ƒ1, s1) ★ . . . ★ (ƒn-1, sn-1) would suffice for CCA1. At some point in the first iteration, (ƒk1, sk1) would be run, and because CCA1 adds but never deletes acceptance records, A1 or some superset of A1 would be produced. At some point during the second iteration, (ƒk2, sk2) would be run, and because A1 would be contained in its input, A2 or some superset of A2 would be produced, and so on. [0111]
  • Each (ƒjh, sjh) ∈ F satisfies the local boundedness promise, producing at most m distinct action strings in any computation that begins with {(Λ, Λ, R)}, regardless of the behavior of other (even ill-formed) assertions. Because |F| = t ≦ n, at most mn distinct action strings can be produced by assertions in F, and at most mn sets Ap can be produced if each is to contain a record that is not contained in any earlier set. Thus, u ≦ mn, and mn iterations of (ƒ0, POLICY) ★ (ƒ1, s1) ★ . . . ★ (ƒn-1, sn-1) suffice. [0112]
  • Note that cases (1) and (2) do not cover all possible inputs to CCA1. There may be a subset F of the input assertions that does contain a proof that r complies with POLICY but that contains one or more ill-formed assertions. If CCA1 does not detect that any of these assertions is ill-formed, because their ill-formedness is exhibited only on acceptance sets that do not occur in this computation, then CCA1 will accept the input. If it does detect ill-formedness, then, as specified here, CCA1 may or may not accept the input, perhaps depending on whether the record (0, POLICY, R) has already been produced at the time of detection. According to another embodiment of the present invention, CCA1 is modified to restart whenever ill-formedness is detected, after discarding the ill-formed assertion so that it is not used in the new computation. The point is simply that CCA1 should not be given a policy that trusts, directly or indirectly, a source of ill-formed assertions. Therefore, the policy author should know which sources to trust and modify the policy if a trusted source issues ill-formed assertions. [0113]
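  • A sketch of that restart variant, under the same hypothetical modeling as the CCA1 sketch above (reusing NULL, and assuming the detector never discards the policy assertion itself):

    def cca1_restarting(request_action, assertions, m, ill_formed):
        # On detecting an ill-formed assertion, discard it and recompute from
        # scratch so that none of its earlier outputs survive in the
        # acceptance set.
        live = list(assertions)
        while True:
            S = {(NULL, NULL, request_action)}
            offender = None
            for _ in range(m * len(live)):
                for f, source in reversed(live):   # i from n-1 down to 0
                    produced = f(frozenset(S))
                    if ill_formed((f, source), produced):
                        offender = (f, source)
                        break
                    S |= set(produced)
                if offender:
                    break
            if offender is None:
                policy_source = live[0][1]
                return ("Accept" if (0, policy_source, request_action) in S
                        else "Reject")
            live.remove(offender)  # restart without the ill-formed assertion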
  • FIG. 2 is a block diagram of a compliance checker for a trust-management system according to an embodiment of the present invention. An application 210 running on a user device 200 sends a request r to a trust-management platform input port 410 through a communication network 300 such as, for example, a Local Area Network (LAN), the Public Switched Telephone Network (PSTN), an intranet, an extranet or the Internet. A compliance-checking unit 450 coupled to the input port 410 receives the request along with a policy assertion (ƒ0, POLICY) associated with the request and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion including a credential function ƒi and a credential source si. Note that the input port 410 may be a single physical input port or several different input ports that may in turn be coupled to different networks or other devices. That is, the request, policy and credentials need not come from the same source or through the same channel. [0114]
  • The input port 410 is coupled to a compliance-checking unit 450, which may comprise, for example, the following (not shown in FIG. 2): a processing module with a Central Processing Unit (CPU); “memories” comprising a Random Access Memory (RAM) and a Read Only Memory (ROM); and a storage device. The memories and the storage device may store instructions adapted to be executed by the CPU to perform at least one embodiment of the method of the present invention. For the purposes of this application, the memories and storage device could include any medium capable of storing information and instructions adapted to be executed by a processor. Some examples of such media include, but are not limited to, floppy disks, CD-ROM, magnetic tape, hard drives, and any other device that can store digital information. In one embodiment, instructions are stored on the medium in a compressed and/or encrypted format. As used herein, the phrase “adapted to be executed by a processor” is meant to encompass instructions stored in a compressed and/or encrypted format, as well as instructions that have to be compiled or installed by an installer before being executed by the processor. [0115]
  • The compliance-checking unit 450 initializes an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r. The compliance-checking unit 450 runs assertion (ƒi, si) for integers i from 0 to n−1 and adds the result of each assertion (ƒi, si) to the acceptance record set S. This process is repeated mn times, where m represents a number greater than 1. The compliance-checking unit 450 may output an “acceptance,” such as through port 410 or some other communication port, if the acceptance record set S contains the acceptance record (0, POLICY, R). The compliance-checking unit 450 may instead, according to another embodiment of the present invention, perform the action R itself. [0116]
  • Thus, according to one embodiment of the present invention, the PolicyMaker system uses a notion of “proof that a request complies with a policy” that is amenable to definition and analysis. The choice of this notion of proof, however, is a subjective one and other notions of proof may also be used. [0117]
  • In deciding how a set of executable assertions can cooperate to produce a proof, a mechanism for “inter-assertion communication” of intermediate results may be used. For simplicity, assertions may communicate by outputting acceptance records that are input to other assertions. More sophisticated interactions, such as allowing assertions to call each other as subroutines, might be useful but may require a more complex execution environment. A trade-off might therefore exist between the cost of building and analyzing such an execution environment and the potential power to be gained by using more sophisticated interactions to construct proofs of compliance. [0118]
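  • As a hypothetical illustration of this communication mechanism (record format as in the sketches above), a credential from one source can approve an action only after reading another source's acceptance record, forming a simple delegation chain:

    # Hypothetical delegation example: source "s2" trusts "s1", so its
    # credential approves exactly the actions for which an acceptance record
    # from "s1" already appears in its input -- the two assertions communicate
    # only through acceptance records.
    def f2(S):
        approvals = {action for (i, source, action) in S if source == "s1"}
        return {(2, "s2", action) for action in approvals}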
  • The choice of a simple communication mechanism implies that a part of constructing a proof of compliance is choosing an order in which to execute assertions. According to an embodiment of the present invention, the responsibility of choosing this order rests with the compliance checker and not, for example, the calling application. Although the compliance checker's job could be made easier by requiring the calling application to give it the correct order as an input, such a requirement may not be consistent with the system's overall goals. For example, applications may need to use credentials issued by diverse and far-flung sources without having to make assumptions about the order in which these credentials communicate via acceptance records. In an extreme case, the issuing sources may not be aware of each other's existence, and no such assumptions by the calling application would be valid. Although the most general version of the POC problem allows assertions to be arbitrary functions, the computationally tractable version may only be correct when all assertions are monotonic. [0119]
  • In particular, according to one embodiment of the present invention, only monotonic policy assertions are guaranteed to produce a correct result, and this excludes certain types of policies that are used in practice, including those that use “negative credentials” such as revocation lists. Despite this restriction, the monotonicity requirement has certain advantages. Although the compliance checker may not handle all potentially desirable policies, it is at least analyzable and provably correct on a well-defined class of policies. Furthermore, the requirements of many non-monotonic policies can often be achieved by monotonic policies. For example, instead of requiring that an entity not appear on a revocation list, the system may require a “certificate of non-revocation.” The choice between these two approaches involves trade-offs among the (system-wide) costs of the two kinds of credentials and the benefits of a standard compliance checker with provable properties. Moreover, the restriction to monotonic assertions encourages a conservative, prudent approach to security: in order to perform a potentially dangerous action, a user must present an adequate set of affirmative credentials; potentially dangerous actions are not allowed “by default” simply because of the absence of negative credentials. [0120]
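  • As a hypothetical illustration of this rewriting, compare a non-monotonic revocation-list policy with its monotonic “certificate of non-revocation” counterpart (record formats as in the sketches above):

    def policy_with_revocation_list(S, R):
        # Non-monotonic: approval depends on the ABSENCE of a revocation
        # record, so adding evidence can retract a previous approval.
        revoked = any(action == "revoke:" + R for (_i, _s, action) in S)
        return set() if revoked else {(0, "POLICY", R)}

    def policy_with_nonrevocation_cert(S, R):
        # Monotonic: approval depends on the PRESENCE of a freshness
        # certificate, so adding evidence can only create or preserve
        # an approval.
        certified = any(action == "not-revoked:" + R for (_i, _s, action) in S)
        return {(0, "POLICY", R)} if certified else set()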
  • According to an embodiment of the present invention, the POC problem has been formulated in a way that allows assertions to be as expressive as possible. As a result, well-formedness promises such as monotonicity and boundedness, while formal and precise, may not be verifiable. Each assertion that conditionally trusts an assertion source for application-specific expertise (such as suitability for a loan) must also trust that source to write bounded and monotonic assertions and to trust other similar sources of assertions. The resulting notion of soundness is that, if there is no proof from a set of trusted, well-formed assertions, then CCA1 will not accept the input. [0121]
  • Full expressiveness, however, is just one goal of a trust-management system. Another goal is the clear separation of the trust relationships of assertions from programming details. To some extent, these goals are at odds—the compliance checker may not perform verifications on fully general programs, and thus an assertion writer may need to worry about some programming details. [0122]
  • Note that monotonic assertions may actually be written as, for example, AND-OR circuits, and bounded assertions may actually “declare” the finite set from which they will produce output. A compliance-checking algorithm could then easily detect the ill-formed assertions and discard them. This would free assertion writers of the burden of deciding when another writer is trusted to write bounded and monotonic code, just as requiring assertions to be written in a safe (and therefore restricted) language frees the assertion writer from worrying about certain application-independent programming details. This verifiability comes at a price: listing a finite output set is relatively inexpensive, but there are monotonic functions that require exponentially bigger circuits to express over a basis of AND and OR than they require over a basis of AND, OR, and NOT. See E. Tardos, “The Gap Between Monotone and Non-monotone Circuit Complexity is Exponential,” Combinatorica 8, pp. 141-142 (1988). In some applications it may be cheaper, on average, to write assertions that are verifiably bounded and monotonic than to determine the set of sources trusted (even indirectly) by a given assertion and to judge whether they are trusted to be monotonic and bounded. [0123]
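  • A hypothetical sketch of the “declared output set” idea: an assertion publishes the finite set Oi from which it promises to draw its action strings, so a checker can verify local boundedness directly rather than trust a promise:

    class DeclaredAssertion:
        def __init__(self, i, source, declared_outputs, f):
            self.i = i
            self.source = source
            self.declared = frozenset(declared_outputs)  # the finite set Oi
            self.f = f

        def run(self, S):
            produced = self.f(frozenset(S))
            for (i, source, action) in produced:
                # A record outside the declared set (or one that impersonates
                # another assertion) exposes this assertion as ill-formed.
                if (i, source) != (self.i, self.source) \
                        or action not in self.declared:
                    raise ValueError("ill-formed assertion detected")
            return produced

  • A checker could catch such an error, add the assertion to the ignored set I, and continue, exactly as CCA1 discards ill-formed assertions.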
  • According to another embodiment of the present invention, the compliance checker makes the original code of an assertion that produced a record available to other assertions reading that acceptance record. A conservative policy, then, before trusting assertions (ƒ1, s1) and (ƒ2, s2), could require and check that ƒ1 and ƒ2 be verifiably monotonic and bounded and that ƒ1 and ƒ2 each include specific standard code to check all assertions whose acceptance records (ƒ1, s1) and (ƒ2, s2) wish to trust. A complex monotonic assertion that needs to be written compactly using NOT gates can, if desired, still be used with the modified compliance algorithm. [0124]
  • Although various embodiments are specifically illustrated and described herein, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention. For example, although specific pseudo-code was used to describe one embodiment of the present invention, it will be understood that other compliance-checking algorithms will also fall within the scope of the invention. [0125]

Claims (25)

What is claimed is:
1. A method of compliance checking in a trust-management system, comprising:
a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents an empty portion of the acceptance record set S, and R represents the request r;
c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
d) repeating step (c) mn times, where m represents a number greater than 1; and
e) determining if the acceptance record set S includes (0, POLICY, R).
2. The method of
claim 1
, further comprising:
f) determining whether an assertion (ƒi, si) is ill-formed;
wherein step (c) is only performed for assertions (ƒi, si) that are not ill-formed.
3. The method of
claim 2
, further comprising:
g) initializing a set I to an empty set; and
h) adding any ill-formed assertions (ƒi, si) to set I.
4. The method of
claim 1
, wherein a request r is a request to access a data object.
5. The method of
claim 1
, wherein a request r is a request to make a copy of a data object.
6. The method of
claim 1
, wherein a request r is a request to play a data object that includes audio content.
7. The method of
claim 1
, wherein a credential function includes a subject, an action, and an object.
8. The method of
claim 1
, wherein the request r is a string encoding an action for which a calling application seeks a proof of compliance.
9. The method of
claim 1
, wherein R represents an action string corresponding with the request r.
10. The method of
claim 9
, wherein the action string R includes a subject, an action and an object.
11. The method of
claim 1
, wherein a credential assertion includes one of a public key, a uniform resource locator and a name.
12. The method of
claim 1
, wherein credential function ƒi is correlated with a credential source si by cryptographically signing the credential function ƒi with a private cryptographic key belonging to credential source si.
13. The method of
claim 1
, wherein each assertion is monotonic, authentic, and locally bounded.
14. A method of compliance checking in a trust-management system, comprising:
a) receiving a request;
b) receiving a policy associated with the request;
c) receiving a number of credentials, the policy and credentials comprising a number of monotonic, authentic, and locally bounded assertions; and
d) deciding whether the credentials prove that the request complies with the policy.
15. The method of
claim 14
, wherein a monotonic assertion approves an action when provided with a set of evidence if the assertion would approve the action when provided with a subset of that evidence.
16. The method of
claim 14
, wherein an authentic assertion produces acceptance records that do not impersonate another assertion.
17. The method of
claim 14
, wherein a locally bounded assertion is bounded in terms of a maximum runtime and a maximum size of acceptance sets that can be produced.
18. The method of
claim 14
, wherein the policy comprises a function ƒ0 encoded in a programming system.
19. A method of compliance checking in a trust-management system, comprising:
receiving (i) a request r to perform an action R and (ii) assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1);
executing, mn times, assertion (ƒi, si) for each integer i from n−1 to 0, the execution being performed using any information generated by previously executed assertions, m representing a number greater than 1; and
determining if (0, POLICY, R) has been generated.
20. An apparatus for compliance checking in a trust-management system, comprising:
a processor; and
a memory storing instructions adapted to be executed by said processor to receive a request R to perform an action and assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1), initialize an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string, iteratively run, mn times, assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and add the result of each assertion (ƒi, si) to the acceptance record set S, where m represents a number greater than 1, and determine if the acceptance record set S includes (0, POLICY, R).
21. A trust management platform, comprising:
an input port configured to receive a request, a policy assertion (ƒ0, POLICY), and credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si; and
a compliance checking unit coupled to said input port and configured to:
a) initialize an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents information corresponding with the request,
b) run assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and add the result of each assertion (ƒi, si) to the acceptance record set S,
c) repeat step (b) mn times, where m represents a number greater than 1, and
d) determine if acceptance record set S includes an acceptance record (0, POLICY, R).
22. A trust-management system, comprising:
means for receiving a request to perform an action r and a set of assertions (ƒ0, POLICY), (ƒ1, s1), . . . , (ƒn-1, sn-1); and
means for proving that the request r is consistent with the set of assertions.
23. A medium storing instructions adapted to be executed by a processor to perform steps including:
a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r;
c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
d) repeating step (c) mn times, where m represents a number greater than 1; and
e) determining whether the acceptance record set S includes (0, POLICY, R).
24. A method of compliance checking in a trust-management system, comprising:
a) receiving a request r, a policy assertion (ƒ0, POLICY) associated with the request r, and n−1 credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents a distinguished null string and R represents the request r;
c) for each integer i from n−1 to 0:
running assertion (ƒi, si) against the acceptance set S and adding the result to the acceptance record set S,
determining if the acceptance record set includes (0, POLICY, R), and
if the acceptance record set includes (0, POLICY, R), then stopping said method; and
d) repeating step (c) mn times, where m represents a number greater than 1.
25. A method of compliance checking in a trust-management system, comprising:
a) receiving credential assertions (ƒ1, s1), . . . , (ƒn-1, sn-1), each credential assertion comprising a credential function ƒi and a credential source si;
b) initializing an acceptance record set S to {(Λ, Λ, R)}, where Λ represents an empty portion of the acceptance record set S, and R represents a request;
c) running assertion (ƒi, si) on the acceptance set S for each integer i from n−1 to 0 and adding the result of each assertion (ƒi, si) to the acceptance record set S;
d) repeating step (c) mn times, where m represents a number greater than 1.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/780,892 US20010018675A1 (en) 1998-02-17 2001-02-09 Method and apparatus for compliance checking in a trust-management system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US7484898P 1998-02-17 1998-02-17
US25029699A 1999-02-17 1999-02-17
US09/414,778 US6256734B1 (en) 1998-02-17 1999-10-08 Method and apparatus for compliance checking in a trust management system
US09/780,892 US20010018675A1 (en) 1998-02-17 2001-02-09 Method and apparatus for compliance checking in a trust-management system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/414,778 Continuation US6256734B1 (en) 1998-02-17 1999-10-08 Method and apparatus for compliance checking in a trust management system

Publications (1)

Publication Number Publication Date
US20010018675A1 true US20010018675A1 (en) 2001-08-30

Family

ID=26756128

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/414,778 Expired - Lifetime US6256734B1 (en) 1998-02-17 1999-10-08 Method and apparatus for compliance checking in a trust management system
US09/780,892 Abandoned US20010018675A1 (en) 1998-02-17 2001-02-09 Method and apparatus for compliance checking in a trust-management system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/414,778 Expired - Lifetime US6256734B1 (en) 1998-02-17 1999-10-08 Method and apparatus for compliance checking in a trust management system

Country Status (1)

Country Link
US (2) US6256734B1 (en)


Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7269580B2 (en) * 2000-10-03 2007-09-11 Celcorp, Inc. Application integration system and method using intelligent agents for integrating information access over extended networks
US7581103B2 (en) 2001-06-13 2009-08-25 Intertrust Technologies Corporation Software self-checking systems and methods
US7165718B2 (en) * 2002-01-16 2007-01-23 Pathway Enterprises, Inc. Identification of an individual using a multiple purpose card
WO2004051437A2 (en) * 2002-12-02 2004-06-17 Elemental Security System and method for providing an enterprise-based computer security policy
US20040260946A1 (en) * 2003-06-20 2004-12-23 Cahill Conor P. User not present
WO2005040995A2 (en) * 2003-10-24 2005-05-06 Dynexus, Inc Systems and methods of establishment of secure, trusted dynamic environments and facilitation of secured communication exchange networks
US8104075B2 (en) * 2006-08-10 2012-01-24 Intertrust Technologies Corp. Trust management systems and methods
US20090192784A1 (en) * 2008-01-24 2009-07-30 International Business Machines Corporation Systems and methods for analyzing electronic documents to discover noncompliance with established norms
US20100205649A1 (en) * 2009-02-06 2010-08-12 Microsoft Corporation Credential gathering with deferred instantiation
CN101923615B (en) * 2010-06-11 2013-04-03 北京工业大学 Grey fuzzy comprehensive evaluation-based trust quantization method
JP5620781B2 (en) * 2010-10-14 2014-11-05 キヤノン株式会社 Information processing apparatus, control method thereof, and program
US9015037B2 (en) 2011-06-10 2015-04-21 Linkedin Corporation Interactive fact checking system
US9087048B2 (en) 2011-06-10 2015-07-21 Linkedin Corporation Method of and system for validating a fact checking system
US8185448B1 (en) 2011-06-10 2012-05-22 Myslinski Lucas J Fact checking method and system
US8768782B1 (en) 2011-06-10 2014-07-01 Linkedin Corporation Optimized cloud computing fact checking
US9176957B2 (en) 2011-06-10 2015-11-03 Linkedin Corporation Selective fact checking method and system
US20140019762A1 (en) * 2012-07-10 2014-01-16 Digicert, Inc. Method, Process and System for Digitally Signing an Object
US9210051B2 (en) * 2012-09-12 2015-12-08 Empire Technology Development Llc Compound certifications for assurance without revealing infrastructure
US9483159B2 (en) 2012-12-12 2016-11-01 Linkedin Corporation Fact checking graphical user interface including fact checking icons
US20150095320A1 (en) 2013-09-27 2015-04-02 Trooclick France Apparatus, systems and methods for scoring the reliability of online information
US10169424B2 (en) 2013-09-27 2019-01-01 Lucas J. Myslinski Apparatus, systems and methods for scoring and distributing the reliability of online information
US9972055B2 (en) * 2014-02-28 2018-05-15 Lucas J. Myslinski Fact checking method and system utilizing social networking information
US9643722B1 (en) 2014-02-28 2017-05-09 Lucas J. Myslinski Drone device security system
US8990234B1 (en) 2014-02-28 2015-03-24 Lucas J. Myslinski Efficient fact checking method and system
US9189514B1 (en) 2014-09-04 2015-11-17 Lucas J. Myslinski Optimized fact checking method and system


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5315657A (en) * 1990-09-28 1994-05-24 Digital Equipment Corporation Compound principals in access control lists
EP0520709A3 (en) * 1991-06-28 1994-08-24 Digital Equipment Corp A method for providing a security facility for remote systems management
US5666416A (en) * 1995-10-24 1997-09-09 Micali; Silvio Certificate revocation system
US5958050A (en) * 1996-09-24 1999-09-28 Electric Communities Trusted delegation system
US6049872A (en) * 1997-05-06 2000-04-11 At&T Corporation Method for authenticating a channel in large-scale distributed systems

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6058383A (en) * 1996-06-27 2000-05-02 Kent Ridge Digital Labs Computationally efficient method for trusted and dynamic digital objects dissemination
US6212634B1 (en) * 1996-11-15 2001-04-03 Open Market, Inc. Certifying authorization in computer networks

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928508B2 (en) 2000-08-04 2018-03-27 Intellectual Ventures I Llc Single sign-on for access to a central data repository
US8566248B1 (en) 2000-08-04 2013-10-22 Grdn. Net Solutions, Llc Initiation of an information transaction over a network via a wireless device
US20060200425A1 (en) * 2000-08-04 2006-09-07 Enfotrust Networks, Inc. Single sign-on for access to a central data repository
US8260806B2 (en) 2000-08-04 2012-09-04 Grdn. Net Solutions, Llc Storage, management and distribution of consumer information
US20090210293A1 (en) * 2000-08-04 2009-08-20 Nick Steele Information transactions over a network
US7389422B2 (en) 2001-09-13 2008-06-17 International Business Machines Corporation System for forwarding and verifying multiple digital signatures corresponding to users and contributions of the users in electronic mail
US20030050981A1 (en) * 2001-09-13 2003-03-13 International Business Machines Corporation Method, apparatus, and program to forward and verify multiple digital signatures in electronic mail
US20060190545A1 (en) * 2001-09-13 2006-08-24 Banerjee Dwip N Method, apparatus, and program to forward and verify multiple digital signatures in electronic mail
US20080235797A1 (en) * 2001-09-13 2008-09-25 International Business Machines Corporation Method, Apparatus, and Program to Forward and Verify Multiple Digital Signatures in Electronic Mail
US20080235345A1 (en) * 2001-09-13 2008-09-25 International Business Machines Corporation Method, Apparatus, and Program to Forward and Verify Multiple Digital Signatures in Electronic Mail
US20030074321A1 (en) * 2001-10-15 2003-04-17 Vidius Inc. Method and system for distribution of digital media and conduction of electronic commerce in an un-trusted environment
US8527752B2 (en) 2004-06-16 2013-09-03 Dormarke Assets Limited Liability Graduated authentication in an identity management system
US9398020B2 (en) 2004-06-16 2016-07-19 Callahan Cellular L.L.C. Graduated authentication in an identity management system
US11824869B2 (en) 2004-06-16 2023-11-21 Callahan Cellular L.L.C. Graduated authentication in an identity management system
US10904262B2 (en) 2004-06-16 2021-01-26 Callahan Cellular L.L.C. Graduated authentication in an identity management system
US10567391B2 (en) 2004-06-16 2020-02-18 Callahan Cellular L.L.C. Graduated authentication in an identity management system
US10298594B2 (en) 2004-06-16 2019-05-21 Callahan Cellular L.L.C. Graduated authentication in an identity management system
US20050283443A1 (en) * 2004-06-16 2005-12-22 Hardt Dick C Auditable privacy policies in a distributed hierarchical identity management system
US8504704B2 (en) 2004-06-16 2013-08-06 Dormarke Assets Limited Liability Company Distributed contact information management
US9245266B2 (en) * 2004-06-16 2016-01-26 Callahan Cellular L.L.C. Auditable privacy policies in a distributed hierarchical identity management system
US8959652B2 (en) 2004-06-16 2015-02-17 Dormarke Assets Limited Liability Company Graduated authentication in an identity management system
US20060005020A1 (en) * 2004-06-16 2006-01-05 Sxip Networks Srl Graduated authentication in an identity management system
US20060005263A1 (en) * 2004-06-16 2006-01-05 Sxip Networks Srl Distributed contact information management
US7657746B2 (en) * 2005-04-22 2010-02-02 Microsoft Corporation Supporting statements for credential based access control
US20060242688A1 (en) * 2005-04-22 2006-10-26 Microsoft Corporation Supporting statements for credential based access control
US8095969B2 (en) * 2006-09-08 2012-01-10 Microsoft Corporation Security assertion revocation
US20080066170A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Security Assertion Revocation
US8060931B2 (en) * 2006-09-08 2011-11-15 Microsoft Corporation Security authorization queries
US20080066175A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Security Authorization Queries
US8201215B2 (en) 2006-09-08 2012-06-12 Microsoft Corporation Controlling the delegation of rights
US8225378B2 (en) 2006-09-08 2012-07-17 Microsoft Corporation Auditing authorization decisions
US20110030038A1 (en) * 2006-09-08 2011-02-03 Microsoft Corporation Auditing Authorization Decisions
US7814534B2 (en) 2006-09-08 2010-10-12 Microsoft Corporation Auditing authorization decisions
US20080066169A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Fact Qualifiers in Security Scenarios
US20080066159A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Controlling the Delegation of Rights
US8584230B2 (en) 2006-09-08 2013-11-12 Microsoft Corporation Security authorization queries
US20080066158A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Authorization Decisions with Principal Attributes
CN102176226A (en) * 2006-09-08 2011-09-07 微软公司 Security authorization queries
US20080065899A1 (en) * 2006-09-08 2008-03-13 Microsoft Corporation Variable Expressions in Security Assertions
US20080066147A1 (en) * 2006-09-11 2008-03-13 Microsoft Corporation Composable Security Policies
US9282121B2 (en) 2006-09-11 2016-03-08 Microsoft Technology Licensing, Llc Security language translations with logic resolution
US8938783B2 (en) * 2006-09-11 2015-01-20 Microsoft Corporation Security language expressions for logic resolution
US8656503B2 (en) 2006-09-11 2014-02-18 Microsoft Corporation Security language translations with logic resolution
US20080066160A1 (en) * 2006-09-11 2008-03-13 Microsoft Corporation Security Language Expressions for Logic Resolution
US20080066171A1 (en) * 2006-09-11 2008-03-13 Microsoft Corporation Security Language Translations with Logic Resolution
US20080091859A1 (en) * 2006-10-17 2008-04-17 Hon Hai Precision Industry Co., Ltd. Test Method for verifying installation validity of a PCI device on an electronic device
US20090049514A1 (en) * 2007-08-15 2009-02-19 Nokia Corporation Autonomic trust management for a trustworthy system

Also Published As

Publication number Publication date
US6256734B1 (en) 2001-07-03


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION