List of Archived Posts

2005 Newsgroup Postings (05/18 - 05/31)

More Phishing scams, still no SSL being used
Brit banks introduce delays on interbank xfers due to phishing boom
Certificate Services
General PKI Question
Authentication - Server Challenge
Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY
Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY
Improving Authentication on the Internet
More Phishing scams, still no SSL being used
More Phishing scams, still no SSL being used
Revoking the Root
Revoking the Root
The Worth of Verisign's Brand
The Worth of Verisign's Brand
The Worth of Verisign's Brand
Now the crackpots are trying to make it their own
Outsourcing
The Worth of Verisign's Brand
Outsourcing
Improving Authentication on the Internet
First assembly language encounters--how to get started?
The Worth of Verisign's Brand
technical question about fingerprint usbkey
The Worth of Verisign's Brand
The Worth of Verisign's Brand
technical question about fingerprint usbkey
The Worth of Verisign's Brand
REPOST: Authentication, Authorization TO Firewall
REPOST: Authentication, Authorization TO Firewall
Improving Authentication on the Internet
Status of Software Reuse?
Improving Authentication on the Internet
Improving Authentication on the Internet
Improving Authentication on the Internet
The Worth of Verisign's Brand
The Worth of Verisign's Brand
Improving Authentication on the Internet
Secure FTP on the Mainframe
More Phishing scams, still no SSL being used
Behavior in undefined areas?
Friday question: How far back is PLO instruction supported?
The 8008
Development as Configuration
Development as Configuration
SqlServerCE and SOA - an architecture question
The 8008
Friday question: How far back is PLO instruction supported?
Listserver for DFSMS/HSM
defeating firewalls made easy
Where should the type information be?
XOR passphrase with a constant
Regarding interrupt sharing architectures!
Single Password - Linux & Windows
Single Password - Linux & Windows

More Phishing scams, still no SSL being used

From:  <lynn@garlic.com>
Date: Wed, 18 May 2005 16:52:20 -0700
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: More Phishing scams, still no SSL being used...
Peter Gutmann wrote:
This assumes that the OCSP responder has access to live CA data. Many responders are fed from CRLs, so you get the illusion of a quick response with all the drawbacks of a CRL (OCSP was specially designed to be 100% bug-compatible with CRLs, a much better name for it would be Online CRL-Query Protocol). As one PKI architect put it, "Being able to determine in 10ms that a certificate was good as of a week ago and not to expect an update for another week seems of little, if any, use to our customers".

there was this small client/server startup in silicon valley that wanted to do payment transactions on their server.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

that had this thing they called SSL. in the year that we worked with them, they moved to mountain view and changed their name from mosaic to netscape (trivia question: who owned the term "netscape" at the time?).

as part of doing credit card payments ... we specified that the webservers and the payment gateway had to do mutual authentication (this was before there was anything like mutual authentication defined in ssl).

along the way, we realized that the certificate part was essentially a facade since the payment gateway and the allowable webservers required conventional business processes for managing their relationship ... and that having certificates was purely an artifact of the existing code being implemented that way (as opposed to the public key operations relying on more traditional repositories for access to public keys).

This was also the period in which we coined the term "certificate manufacturing" to distinguish the prevalent deployment of certificates from the descriptions of PKIs commonly found in the literature.

The juxtaposition of credit card transactions and PKIs was also startling. The commonly accepted design point for PKIs was the offline email model from the early 80s ... where the recipient dialed their electronic post office, exchanged email and hung up. They then could be faced with attempting to deal with first-time email from a total stranger that they had never communicated with before. A certificate filled the role of providing information about total strangers on first contact when there were no other resources available (online or offline ... aka the letters of credit paradigm from sailing ship days).

imagine four quadrants defined by offline/online and electronic/non-electronic. in the 60s, the credit card industry was in the upper left quadrant: offline and non-electronic. They mailed out monthly revocation lists to all registered merchants. With new technology they could have moved into the offline/electronic quadrant (the online/non-electronic quadrant possibly not being practical). However, in the 70s, you saw the credit card industry moving directly to the online/electronic quadrant where they had real-time, online authorization of every transaction. In the mid-90s when there were suggestions that the credit card industry could move into the 20th century by doing PKI, certificate-based transactions ... I got to repeatedly point out that would represent regressing the credit card transaction state of the art by 20-30 years ... back to the archaic, non-electronic days of offline transactions and the mailed revocation booklets.

It was sometime after having repeatedly pointed out how archaic the whole PKI & CRL paradigm actually was (when real-time, online facilities are actually available) that OCSP showed up on the scene. It is somewhat a rube-goldberg fabrication that attempts to gain some of the appearance of having modern, online, real-time transactions ... while trying to preserve the fiction that certificates (from the offline & electronic quadrant) are even necessary.

The problem is that the original design point for PKI, CRLs, etc .... the offline & electronic quadrant ... is rapidly disappearing in the always-on, ubiquitously internet-connected environment.

The other market niche that PKIs, CRLs, etc have sometimes attempted to carve out for themselves has been the no-value transaction sector ... where the value of the transaction is not sufficient to justify the (rapidly decreasing) cost of an online transaction. The problem with trying to stake out a position in the no-value transaction market ... is that it is difficult to justify spending any money on CAs, PKIs, certificates, etc.

some amount of past posts on SSL certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

Another facet of the SSL certificate market has to do with SSL domain name server certificates (probably the most prevalent use). One of the justifications for SSL domain name server certificates was concern about the integrity of the domain name infrastructure. So browsers were set up to check the domain name in a typed-in URL against the domain name in the server certificate.
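
as a rough illustration of that browser-side check (this is not any browser's actual code; the host name and port below are just placeholders), python's standard ssl module can be asked to compare the name the user typed in against the name in the certificate the server presents:

    # hedged sketch: verify that the typed-in host name matches the name in
    # the server's certificate (the way browsers check SSL server certs)
    import socket
    import ssl

    typed_in_host = "www.example.com"   # placeholder for the typed-in URL's host

    ctx = ssl.create_default_context()  # check_hostname=True, CERT_REQUIRED

    with socket.create_connection((typed_in_host, 443)) as raw_sock:
        # server_hostname supplies the name to match against the certificate;
        # a mismatch raises ssl.SSLCertVerificationError
        with ctx.wrap_socket(raw_sock, server_hostname=typed_in_host) as tls:
            print(tls.getpeercert()["subject"])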

The business scenario has a certificate applicant going to a certificate authority (CA) to apply for an SSL domain name server certificate. They supply a bunch of identification ... and the certification authority then attempts the expensive, complex and error-prone process of matching the supplied identification information with the domain name owner identification information on file with the authoritative agency that is responsible for domain name ownership (aka the domain name infrastructure).

Now it turns out that the integrity concerns with the domain name infrastructure can extend to the domain name owner information on file ... putting any certification process by a certification authority (for ssl domain name certificates) at serious risk.

So, somewhat from the certification authority industry, there is a proposal that when people get domain names, they register a public key. All future communication with the domain name infrastructure is then digitally signed and verified with the onfile public key (the purpose is to improve the overall integrity of the domain name infrastructure). SSL certificate applicants can also digitally sign their SSL certificate applications to a certification authority. The certification authority can retrieve the onfile public key (from the domain name infrastructure) to verify the digital signature on the application, which turns a complex, expensive, and error-prone identification process into a much simpler, less expensive, and more reliable authentication process.

However, there are a couple of catch-22s for the PKI industry. First, improving the integrity of the domain name infrastructure mitigates some of the original justification for having SSL domain name certificates. Also, if the certification authority can build the trust basis for their whole operation on the onfile public keys at the domain name infrastructure ... it is possible others might realize that they could also do real-time retrieval of onfile public keys as part of the SSL protocol ... in place of relying on certificate-based public keys.

Brit banks introduce delays on interbank xfers due to phishing boom

From: <lynn@garlic.com>
Date: Thu, 19 May 2005 08:43:17 -0700
Newsgroups: news.admin.net-abuse.email
Subject: Re: Brit banks introduce delays on interbank xfers due to phishing boom
Vernon Schryver wrote:
That is a popular and well established brand of snake oil peddled by the usual suspects including PKI vendors. Authentication by itself cannot stop phishing any more than it can stop spam. Authentication is only part of authentication and authorization. Phishers would react to digitally signed legitimate banks mail just as spammers have reacted to SPF, by digitally signing their bait. If banks used PKI certificates for the key distribution problem, then phishers would buy throw-away $350 certs from Verisign to go with their $10 throw-away domain names....if they're not already doing that to make HTTPS to their web sites look safe to the suckers.

nominal ID theft involves skimming/harvesting some form of static data that can be used for fraudulent transactions. nominally there is some differentiation between

1) id theft involving static data that can be turned around and used to perform fraudulent transactions on existing accounts (authentication risk)

2) id theft involving static data that can be turned around and used to establish new accounts or operations (identification risk)

With the extremely prevalent use of static data resulting in both authentication and identification fraud ... there has been lots of skimming/harvesting going on. This skimming/harvesting can be viewed as being of two forms .... skimming/harvesting data-in-flight and skimming/harvesting data-at-rest.

Typically SSL (encrypted sessions) is viewed as a countermeasure for the skimming/harvesting data-in-flight threat. While skimming/harvesting of data-in-flight has been observed in other environments ... there seems to be little evidence of skimming/harvesting (eavesdropping) of internet data-in-flight (aka it seems to be much more a theoretical issue). There appear to be lots of examples of skimming/harvesting data-at-rest .... large databases of personal information and/or things like merchant transaction files .... slightly related reference ... security proportional to risk:
https://www.garlic.com/~lynn/2001h.html#61

A large proportion of threats lumped under ID theft actually involve static data used for authentication fraud (aka fraudulent transactions on existing accounts, as opposed to identification fraud ... establishing new accounts in the victim's name).

In the three-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


the static authentication data involved in id theft is frequently something you know data ... i.e. pins/passwords. Another example is your mother's maiden name or SSN, frequently used for authentication purposes.

These forms of static data authentication are frequently also referred to as shared-secrets (aka pins/passwords), and learning the shared-secret is sufficient to impersonate the victim in fraudulent transactions on their existing accounts. While payment cards nominally fall into something you have authentication ... their magstripes represent static data that has frequently been involved in skimming/harvesting activities, which enables the manufacture of counterfeit cards. In security terms, shared-secret and/or static data infrastructures are also subject to replay attacks (i.e. fraud from recording the shared-secret or static data and simply replaying what was recorded).

Skimming/harvesting of large "data-at-rest" databases of personal information has represented a significantly large return-on-investment for the crooks (cost of the attack vis-a-vis expected fraud enabled). there have been recent studies that claim at least 77 percent of these kinds of exploits involve insiders.

Phishing attacks involve getting unsuspecting victims to voluntarily give up their static data or other forms of shared-secrets. The internet has provided the crooks with technology for phishing attacks that starts to compare with traditional skimming/harvesting of data-at-rest in terms of fraudulent return on investment (cost of attack vis-a-vis amount of fraud enabled).

A traditional security technique for limiting exposure of pins/passwords (or other forms of static data shared-secrets) is to require that a different, unique shared-secret be used for every unique security domain. Among others, this is a countermeasure to an insider skimming/harvesting a shared-secret in one domain and then using it in another domain (aka the high school kid working for the local garage ISP getting your connection password ... and being able to use the same password with your home banking account).

Phishing attacks involve getting unsuspecting and naive victims to voluntarily give up such information (heavily leveraging internet technology to attack large numbers of people at very low cost and risk).

Possible countermeasures to phishing attacks involve

1) education of large numbers of potential victims,

2) minimizing crooks' ability to impersonate authorities as part of convincing people to divulge their information, and

3) minimizing the use of static data in authentication paradigms ... people can't divulge information that they don't know.

Note that the last countermeasure is also a countermeasure for the skimming/harvesting attacks (frequently by insiders) ... where there are no shared-secrets or static data that can be harvested by crooks to replay in future fraudulent transactions.

The extensive use of shared-secrets and static data is also vulnerable to humans being unable to cope with the rapidly increasing number of shared-secrets that they must memorize (for authentication purposes). many individuals are faced with electronically interacting with scores of unique, different security domains, each requiring its own unique password.

The PKI point highlighted in a recent post to the PKIX mailing list:
https://www.garlic.com/~lynn/aadsm19.htm#11 EuroPKI 2005 - Call for Participation

is that the majority of activity involved in PKIs seems to revolve around certification activity related to producing a certificate.

In the 3-factor authentication model, the verification of a digital signature with a public key can be viewed as a form of something you have authentication ... aka the subject has access to and use of the corresponding private key producing the digital signature. PKIs are not required in order to deploy a public key authentication infrastructure
https://www.garlic.com/~lynn/subpubkey.html#certless

The use of public key authentication operations can eliminate much of the burden and exposures associated with shared-secret and static data infrastructures; knowledge of the public key is not sufficient to impersonate the victim; this 1) eliminates the exploit of skimming/harvesting static data for use in replay attacks or fraudulent transactions and 2) eliminates the requirement to have unique authentication material for every unique, different security domain (the same public key can be registered in multiple domains). In the static data, shared-secret paradigm the same value (pin, password, mother's maiden name, etc) is used for both originating the request as well as verifying the request
https://www.garlic.com/~lynn/subintegrity.html#secrets

A public key can only be used for verifying the request ... it can't be used for originating the request. Divulging a public key is not sufficient for a crook to be able to perform a fraudulent transaction.

However, some forms of public key operations are still subject to phishing attacks. Many public key deployments have the private key resident in a software file. Phishing attacks involve convincing the victims to perform various operations ... typically giving up information that enables the crook to perform fraudulent operations. However, a phishing attack could also involve getting the victim (possibly without even knowing what they are doing) to transmit the software file containing their private key or shared-secret.

So one (public key environment) countermeasure against phishing attacks exposing the victim's private key is to guarantee that private keys are encapsulated in hardware tokens that can't be easily transmitted (even the hardware token owner has no direct access to the private key ... just access to operations that utilize the private key).

This is a countermeasure for the phishing attacks where the crooks are harvesting static data for later use in fraudulent transactions. Such private key hardware tokens are still vulnerable to other kinds of social engineering attacks where the crooks convince naive users to directly perform transactions on behalf of the crook.

The issue for the existing common PKI vendors ... is they frequently view their revenue flow from the manufacturing of certificates. Certificates are not necessary to deploy a public key authentication infrastructure.

The PKI, certificate design point is the offline email environment from the early '80s. The recipient dials up their local electronic post office, exchanges email, hangs up and starts reading their email. They encounter an email from a total stranger that they've never communicated with before and they have no method of obtaining any information about this stranger. Certificates were designed to supply some minimum amount of information where the recipient had no other recourse in an offline environment dealing with total strangers (the letters of credit model from sailing ship days).

The issue is that such a certificate, offline design point doesn't apply to common business relationships where prior communication is the norm and there are existing business process relationship management conventions (some of them having evolved over hundreds of years). A simple example is bank accounts, where there is not only information about the customer ... but also other things that represent dynamic and/or aggregated information (like current balance) that is not practical in a certificate model. Given that the relying party already has information about the originator and/or has real-time, online access to such information, then a stale, static certificate becomes redundant and superfluous.

recent related posting about SSL domain name certificates
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used

lots of past posts about SSL domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

Certificate Services

From: <lynn@garlic.com>
Newsgroups: microsoft.public.security
Subject: Re: Certificate Services
Date: Thu, 19 May 2005 21:30:11 -0700
Dan wrote:
Implementing WPA with RADIUS doesn't mean you HAVE TO install Certificate services, unless you are implementing EAP-TLS. You can always use PEAP-MS-CHAPV2 which will require username and password instead.

note that there have been certificate-less public key implementations done for both kerberos and radius. in fact the kerberos pk-init ietf draft for supporting public keys started out simply with certificate-less public key operation.
https://www.garlic.com/~lynn/subpubkey.html#kerberos
https://www.garlic.com/~lynn/subpubkey.html#radius

in principle, certificate-less operations maintain existing business processes for registering authentication material ... but replace the registration of a pin/password with the registration of a public key. then the user authenticates with a userid/digital signature .... where the digital signature is verified with the onfile public key.
https://www.garlic.com/~lynn/subpubkey.html#certless
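
a minimal sketch of that certificate-less flow (the library, names, and in-memory registry here are my own illustrative assumptions, not from any kerberos/radius implementation): registration stores a public key where a pin/password would otherwise go, and authentication verifies a digital signature against the onfile key:

    # hypothetical sketch of registering a public key in lieu of a password and
    # then authenticating a userid by verifying a digital signature
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    onfile_keys = {}          # stands in for the kerberos/radius account registry

    def register(userid, public_key_bytes):
        onfile_keys[userid] = Ed25519PublicKey.from_public_bytes(public_key_bytes)

    def authenticate(userid, message, signature):
        try:
            onfile_keys[userid].verify(signature, message)   # raises on failure
            return True
        except (KeyError, InvalidSignature):
            return False

    # client side: the private key is never divulged, only the digital signature
    priv = Ed25519PrivateKey.generate()
    register("lynn", priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw))
    msg = b"userid=lynn;op=login;nonce=4711"
    assert authenticate("lynn", msg, priv.sign(msg))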

the original design point for PKIs and certificates was the offline email model of the early 80s; the recipient dialed up their local electronic post office, exchanged email, hung up and found themselves with an email from a total stranger that they had never communicated with before. in this first-time stranger communication in the offline world, the recipient had no resources to determine information about the sender. this is somewhat the email analogy to the letters of credit paradigm from sailing ship days.

using somewhat abstract information theory, a certificate represents armored, stale, static, distributed, cached information. it is pushed by the sender to the relying-party ... so that the relying party can have information about the sender in the stranger, first-time communication where the relying party is offline and has no other recourse for obtaining any information about the stranger.

in the early 90s, there was some move for x.509 identity certificates by trusted third party certification authorities. however, it was somewhat difficult for a CA to predict exactly what identity information some unknown relying party in the future might require. As a result there was some move to grossly overload identity certificates with enormous amounts of privacy information.

in the mid-90s, various infrastructures (like financial institutions) were coming to realize that enormous amounts of identity information represented significant liability and privacy issues. as a result there were some efforts in the area of relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

where a certificate might only contain some form of an account number as certified information. the key owner would constantly digitally sign transactions with their private key and push the transaction, the digital signature, and the certificate to the relying party (who had originally issued the certificate and has a superset of all the information already on file, including the public key and the associated account record). in all cases the account selection (number, userid, or some other value) was also present in the digitally signed transaction.

when the relying-party receives the transaction, they pull the look-up value from the transaction, read the associated account information, retrieve the public key from the account, and using the onfile public key, verify the digital signature. In such scenarios, it is possible to demonstrate that such stale, static digital certificates are redundant and superfluous.

there was another downside in the case of financial payment transactions. the typical payment transaction is on the order of 60-80 bytes. the typical relying-party-only certificate from the mid-90s was on the order of 4k-12k bytes. The scenario of adding stale, static, redundant and superfluous digital certificates to every financial transaction did represent a factor of 100 times payload bloat added to each transmission (constantly sending redundant and superfluous information back to the financial institution that it already had on file).

General PKI Question

From: <lynn@garlic.com>
Newsgroups: microsoft.public.security
Subject: Re: General PKI Question
Date: Fri, 20 May 2005 08:45:22 -0700
Ted Zieglar wrote:
First, a bit of background: If you want to send an encrypted message, you encrypt the message with the intended recipient's public key. That way, only the intended recipient can decrypt the message (with their private key).

If you want to send a signed message, you encrypt the hash of your message with your own private key. If the recipient can decrypt the hash with your public key, that proves that the message came from you.

Now to your question: Where does one obtain someone's public key? Well, there are several methods but in general it works like this: If you're encrypting a message your software obtains it from a PKI. If you're signing a message your software will attach your digital certificate to the message. The digital certificate contains your public key.


basically there is asymmetric key cryptography as opposed to symmetric key cryptography. in symmetric key cryptography, the same key is used for both encrypting and decrypting the same message. in asymmetric key cryptography they are different keys.

a business process has been defined for asymmetric key cryptography where one key is designated as "public" and divulged to other parties and the other key is designated as "private" and is never divulged.

some additional business processes have been defined

1) digital signature authentication .... a secure hash is computed for the message, which is then encoded with the private key. other parties with the corresponding public key can decode the digital signature and compare the decoded secure hash with a freshly computed secure hash of the message. this will validate a) the origin of the message and b) whether the message has been modified.

2) confidential data transfer ... people will encode messages with the recipient's public key. only the recipient with their (never divulged) private key can decode the message. frequently, because of the overhead of asymmetric key cryptography ... a random symmetric key is generated, the message is encrypted with the symmetric key and the symmetric key is encoded with the public key. The encrypted message and encoded symmetric key are transmitted together. only the corresponding private key can decode the symmetric key ... which then, in turn, decodes the actual message.
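
a minimal sketch of that hybrid pattern (the particular algorithms and key sizes here are my own illustrative choices): the bulk message goes under a random symmetric key, and only that key is encoded with the recipient's public key:

    # illustrative sketch of the hybrid scheme described above
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_pub = recipient_priv.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    message = b"the actual (possibly large) message"

    # sender: a random symmetric key encrypts the message ...
    sym_key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
    ciphertext = AESGCM(sym_key).encrypt(nonce, message, None)
    # ... and only the symmetric key is encoded with the recipient's public key
    wrapped_key = recipient_pub.encrypt(sym_key, oaep)

    # recipient: the (never divulged) private key recovers the symmetric key,
    # which in turn decodes the actual message
    recovered_key = recipient_priv.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message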

In general, public keys can be registered with other parties ... in much the same way shared-secrets and other kinds of authentication material are registered today ... using well established business process relationship management processes (some that have evolved over hundreds of years, like bank accounts).

The initial kerberos pk-init ietf draft for adding public keys to kerberos implementations specified registering public keys in lieu of passwords
https://www.garlic.com/~lynn/subpubkey.html#kerberos
later, specifications were added so that certificate-based public keys could also be used

There have also been RADIUS implementations where public keys were registered in lieu of passwords and digital signature authentication operation was performed
https://www.garlic.com/~lynn/subpubkey.html#radius

From 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


digital signature authentication schemes are a form of something you have authentication .... only you have access and use of a (never divulged) private key.

Certificate-based public keys (PKIs) were designed to address the offline email scenario of the early 80s; the recipient dialed up their (electronic) post office, exchanged email, and hung up. They were then possibly faced with some email from a total stranger that they had never communicated with before. Certificates were somewhat the "letters of credit" analogy (from the sailing ship days) ... where the recipient/relying-party had no other means of obtaining information about the subject ... either locally (or, heaven forbid, using online, electronic means).

In the early 90s, there were x.509 identity certificates ... where the CAs, not being able to reliably predict what information some future relying party might need .... were looking at grossly overloading certificates with excessive amounts of privacy information. Later in the 90s, some number of infrastructures (like financial institutions) were realizing that identity certificates, grossly overloaded with excessive amounts of information, represented significant liability and privacy issues.

At this time, you saw some appearance of relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

where the information in a certificate was reduced to little more than a record lookup indicator (userid, account number, etc). a person would create a message, digitally sign it, and package the message, the digital signature and the certificate and send it off to the relying party. the relying party then would use the indicator in the base message to index the appropriate relationship record and retrieve the associated information (including the registered, onfile public key). the onfile public key would then be used to verify the digital signature (authenticating the message). It was trivial to demonstrate that the stale, static certificate was redundant and superfluous.

in the financial sector, these relying-party-only certificates were also being targeted at payment transactions. the typical payment message is on the order of 60-80 bytes. the typical relying-party-only certificate from the period was on the order of 4k-12k bytes. not only were the stale, static certificates redundant and superfluous, but they could also contribute a factor of 100 times in message payload bloat.

a basic issue is that the certificate design point was addressing the problems of an offline, unconnected world for first-time communication between total strangers. as the world transitions to ubiquitous online connectivity, certificates are looking more like horse buggies on an interstate with a 75mph speed limit.

we were asked to do some consulting with this small client/server startup in silicon valley that wanted to do some payment transactions on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and they had this thing called SSL that could encrypt internet transmission. slightly related, recent posting
https://www.garlic.com/~lynn/2005h.html#39 Attacks on IPsec

there was also a perceived integrity problem with the domain name infrastructure ... so SSL server domain name certificates were defined
https://www.garlic.com/~lynn/subpubkey.html#sslcert

where the browser would compare the domain name in the typed-in URL with the domain name in the digital certificate. Along with working on the specifics of payment gateway ... we also got to go around and do end-to-end business audits of several of the certification authorities that would be providing SSL server domain name certificates.

The process has an applicant for an SSL server domain name certificate providing loads of identification information. The certification authority then performs the expensive, complex, and error-prone identification matching process of checking the supplied identification material with the identification material on file with the authoritative agency for domain name ownership.

Note that the authoritative agency for domain name ownership is the same domain name infrastructure that has the integrity issues that give rise to the requirement for SSL server domain name certificates.

So somewhat from the certification authority industry, there is a proposal that SSL domain name owners register a public key at the same time they register a domain name ... as part of an attempt to improve the integrity of the domain name infrastructure (so that the information that goes into certification of SSL domain name certificates is also more reliable).

Now, somebody can digitally sign their SSL domain name certificate application. The CA (certification authority) can now retrieve the onfile public key from the domain name infrastructure to validate the applicant's digital signature ... note this is a certificate-less digital signature authentication using online, onfile public keys
https://www.garlic.com/~lynn/subpubkey.html#certless

this also has the side-effect of turning an expensive, complex, and error-prone identification process into a simpler and more reliable authentication process.

However, this integrity improvement represents something of a catch-22 for the CA PKI industry ...

1) improvements in the integrity of the domain name infrastructure mitigates some of the original requirement for SSL domain name certificates

2) if the CA PKI industry can base the trust of their whole infrastructure on certificate-less, real-time retrieval of onfile public keys .... it may occur to others that they could use the same public keys directly (potentially modifying the SSL protocol implementation to use public keys directly obtained from the domain name infrastructure rather than relying on stale, static certificates).

Authentication - Server Challenge

From: <lynn@garlic.com>
Newsgroups: microsoft.public.dotnet.framework.aspnet.webservices
Subject: Re: Authentication - Server Challenge
Date: Fri, 20 May 2005 12:26:09 -0700
de9me . via .NET 247 wrote:
- Client is a C# WinForm .NET app

-Webservice is hosted on a linux machine running BEA WebLogic

I want the client app to respond to the server challenge for authentication:

a) Using https (SSL), which would prompt user for a PKI certificate. b) A server challenge for BASIC authentication, which would pop-up a logon window requesting username/password. c) BASIC authentication over SSL, which would prompt for PKI cert and pop-up logon window requesting username/password.

This is the behavior that Internet Explorer provides to end users. How do I provide this same functionality in a C# .NET client app? Therefore, how do you capture/interpret a server challenge (non IIS web servers) or are there methods/libraries you can use to do the same thing that IE provides?


a lot of webservers provide a stub authentication routine ... for implementing client authentication. a frequent proposal in the past has been to radius-enable such stub routines ... then the infrastructure can use the full power of radius-based authentication/authorization/permissions/administration/etc operations for managing their clients.

There are a number of standard radius implementations that accept some asserted entity ... which is then authenticated from information maintained at radius and then the permissions &/or policies associated with that entity are established.

standard radius has been shared-secret based, supporting clear-text password or challenge/response protocols. there have also been enhancements to radius for supporting digital signature verification, where the shared-secret password registration is replaced with public key registration (all the administration and business practices for real-time relationship management are preserved).
https://www.garlic.com/~lynn/subpubkey.html#radius

the simple public key in lieu of a shared-secret password is effectively a certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless

depending on whether shared-secret clear-text or non-clear-text authentication is used ... the mechanism may or may not require an encrypted SSL channel.

Somewhat the design point for certificate-based public keys was the offline email environment of the early 80s. The recipient dialed up their (electronic) post office, exchanged email, hung up and was then possibly faced with handling first-time communication with a complete stranger. This is the letters of credit paradigm from the sailing ship days .... how does the relying party determine anything about a complete stranger on initial communication ... when there is no direct access to either local or remote information available to the relying party.

The early 90s saw some certificate-oriented operations attempting to deal with x.509 identity certificates and the inability to predict what information some unknown relying party in the future might require from a complete stranger. The tendency was to look at grossly overloading the identity certificate with enormous amounts of privacy information.

By the mid 90s, some infrastructures were starting to realize that x.509 identity certificates overloaded with enormous amounts of privacy information represented serious liability and privacy concerns. There was then some retrenchment to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

basically certificates that contained little more than an index to some kind of account/entity record (where all the real information was) bound to a public key. However, since this fundamentally violated the target environment that certificates were designed to address (an offline environment with first-time communication between total strangers), it was trivial to demonstrate that the stale, static certificates were redundant and superfluous. The subject generated a message, digitally signed the message and then packaged the message, the digital signature, and the digital certificate and sent it off to the relying party. The relying party extracted the record index/key from the initial message and retrieved the record (including the originally registered public key). The onfile public key was then used to validate the digital signature. The inclusion of the stale, static digital certificate in the transmission was redundant and superfluous.

The redundant and superfluous, stale, static digital certificate did represent something of an issue in proposals for use in payment transactions of the period. A typical payment message is on the order of 60-80 bytes. Even the typical relying-party-only certificate from the period was on the order of 4k-12k bytes. While the stale, static certificate was redundant and superfluous, it did have the potential of creating enormous payload bloat in the payment networks, increasing transmission requirements by a factor of one hundred times.

Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinky lights  WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules. (fwd)
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Fri, 20 May 2005 13:50:05 -0600
Morten Reistad writes:
There was a twice daily routine to go check the cabinet lights. There were several hundred things that were checked. They embraced SNMP when it arrived.

from my view, snmp had some interesting wars in the 80s. at interop 88
https://www.garlic.com/~lynn/subnetwork.html#interop88

snmp was still duking it out with the other contenders

there was some amount of osi out in force also (world governments were starting to mandate that the internet be eliminated and everything be converted to osi).

I got a couple of workstations in a booth diagonal from the booth where Case had snmp being demo'ed (about 10-15' away) ... and Case was convinced to help with an after-hours snmp port to the workstations (demo'ing it on machines other than his).

from my rfc index:
https://www.garlic.com/~lynn/rfcidx3.htm#1067
1067 -
Simple Network Management Protocol, Case J., Davin J., Fedor M., Schoffstall M., 1988/08/01 (33pp) (.txt=67742) (Obsoleted by 1098) (See Also 1065, 1066) (Refs 768, 1028, 1052) (Ref'ed By 1089, 1095, 1156, 1704)


or
https://www.garlic.com/~lynn/rfcauthor.htm#xaCaseJ
Case J. (case@snmp.com)
3412 3410 2572 2570 2272 2262 1908 1907 1906 1905 1904 1903 1902 1901 1628 1512 1452 1451 1450 1449 1448 1444 1443 1442 1441 1285 1157 1098 1089 1067 1028


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Blinky lights  WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules. (fwd)
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Fri, 20 May 2005 13:55:37 -0600
oh yes, past posts mentioning sr-71
https://www.garlic.com/~lynn/2000f.html#13 Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?

... there are a different set of stories from boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

doing the F16.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.security
Subject: Re: Improving Authentication on the Internet
Date: Fri, 20 May 2005 22:06:04 -0700
Frank Hecker wrote:
As I've said before, I don't think use of certs in general and SSL in particular should be artificially constrained to fit the perceived requirements of the Internet e-commerce market. To get back to Gerv's draft paper, I think his discussion is consistent with that approach:

He's proposing leaving the existing browser CA/SSL model and UI in place for legacy CAs and certs, and basically creating an extension to the model and UI specifically for SSL uses with financial implications. Certainly one can quibble with the various details of his proposal; for example, it may be that it would be more appropriate to give special treatment to only one additional class of cert, rather than the two classes ("shopping" and "banking"). However this general approach is IMO worth discussing.


we were asked to work with this small client/server startup in silicon valley that wanted to do payments on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and they had this thing called ssl. in the year we worked with them, they moved to mountain view and changed their name from mosaic to netscape (trivia question ... who had the original rights to the name netscape?).

so part of this was for what came to be called e-commerce. the customer would type in the url ... and it would go to an ssl shopping site. the browser would get back an ssl domain name certificate and check the domain name in the certificate against the name typed in by the customer. merchant webservers began complaining that running ssl for the shopping experience was cutting performance by 80-90 percent, and almost all such merchant sites now run the shopping experience w/o ssl. as a result there is no checking of what the customer typed into the browser for a URL against what site the user is actually visiting.

eventually the customer gets to the end of shopping and hits the check-out button ... which does supply a URL that specifies ssl. Now if this was really a fraudulent site ... it is highly likely that the crooks will have established some perfectly valid domain name and gotten a valid SSL certificate for it ... and they are likely to also make sure that whatever domain name the check-out button supplies ... corresponds to the domain name in some certificate that they have valid control over.

now when somebody applies for a SSL domain name certificate ... the certification authority usually goes to a great deal of trouble to validate that the entity is associated with an identifiable valid company (this basically is a complex, costly, and error-prone identification process). They then contact the domain name infrastructure and try and cross-check that the company listed as the owner of the domain is the same company that is applying for the certificate. Now if there has actually been a domain name hijacking ... there is some possibility that the crooks have managed to change the name of the company owning the domain to some valid dummy front company that they have formed. in which case the whole certificate authority process falls apart since it is fundamentally based on the integrity of the domain name infrastructure registry of true owners.

so there is a proposal to have domain name owners register a public key along with their domain name. then future communication from the domain name owner is digitally signed and the domain name infrastructure can verify the digital signature with the (certificate-less) onfile public key for that domain name. this supposedly mitigates some of the forms of domain name hijacking.

the proposal is somewhat backed by the SSL certification authority industry since it improves the integrity of the domain name infrastructure ... on which their ability to correctly certify the true domain name owner is based.

it has another side effect for the ssl certification authority industry ... rather than doing the expensive, time-consuming and error-prone identification process, they can require that ssl certificate applications also be digitally signed. they then have a much simpler, less-expensive, and more reliable authentication process by retrieving the (certificate-less) onfile public key for the domain name owner (from the domain name infrastructure).
https://www.garlic.com/~lynn/subpubkey.html#certless

it does represent something of a catch-22 for the ssl certification authority industry. if the integrity of the domain name infrastructure is improved, it somewhat mitigates one of the original justifications for having ssl certificates. another issue is that if the ssl certification authority can base the trust of their whole operation on the retrieval of (certificate-less) onfile public keys from the domain name infrastructure ... one could imagine that others in the world might also start trusting real-time retrieval of certificate-less, onfile public keys from the domain name infrastructure. There might even be a slightly modified version of SSL that used real-time retrieval of certificate-less, onfile public keys ... rather than a public key from a stale, static (and potentially revoked?) certificate.

part of this comes somewhat from the original design point for PKI certificates, which was the offline mail environment of the early 80s. A recipient would dial up their (electronic) post office, exchange email, and hang up. They then could be dealing with a first-time communication from a total stranger. This is somewhat a latter-day analogy to the letters of credit model from the sailing ship days ... where the relying party had no (other) method of validating a first-time interaction with a total stranger.

In the early 90s, there were x.509 identity certificates. The certification authorities were somewhat faced with not really being able to predict what information an unknown, future relying party might require about an individual. There was some tendency to want to grossly overload such identity certificates with excessive amounts of personal information.

Somewhat in the mid-90s, various institutions came to the realization that such identity certificates grossly overloaded with excessive personal information presented significant liability and privacy issues. There was some effort to retrench to something called a relying-party-only certificate
https://www.garlic.com/~lynn/subpubkey.html#rpo

which basically contained some sort of unique record index pointer (account number, userid, or other distinguishing value) and a public key. The subject would create a message (also containing the distinguishing value) and digitally sign it with their private key. They then would package the message, the digital signature and the certificate and send it off to the relying party. The relying party would extract the distinguishing index value from the message and retrieve the indicated record containing all the necessary relationship information about the originating entity (including their registered public key). They then could validate the digital signature using the onfile public key. In such situations it was trivial to prove that such a stale, static certificate was redundant and superfluous. Part of what made it so easy to prove they were redundant and superfluous was that the whole operation violated the original design point that certificates were meant to serve ... first-time communication between complete strangers where the relying party had absolutely no other method for establishing any information about the stranger they were communicating with.

There was also some look at these stale, static, redundant and superfluous digital certificates for payment transactions by customers with their respective financial institutions (again violating the basic design point environment that certificates were meant to serve). It turns out that the nominal payment message size is about 60-80 bytes. The nominal relying-party-only certificate from the mid-90s (even with only an account number and a public key) was on the order of 4k-12k bytes. Not only was the attachment of stale, static digital certificates to every payment transaction redundant and superfluous, but doing so would represent an enormous payload bloat of a factor of one hundred times.

More Phishing scams, still no SSL being used

From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: More Phishing scams, still no SSL being used...
Date: Sat, 21 May 2005 10:57:22 -0700
Gervase Markham wrote:
In other words, a nonce is a way of having a lifetime of zero for the OCSP request.

IMO, given other latencies which would be present in a system for revoking the cert of a phishing site, a near-equivalent level of security with much greater scalability could be achieved by having nonce-less operation, 1-minute timeouts, and using the TLS extensions which (I am told) allow the webserver to deliver the OCSP response rather than the OCSP responder itself. Then, the OCSP server has to service one request every 30 seconds per webserver, rather than one request per client connection.


so, sort of per the earlier postings
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used

you could use real-time, certificate-less, onfile public key retrieval from a trusted DNS infrastructure ... for use in establishing an encrypted SSL session (instead of obtaining the server public key from a certificate).

now for 20-some years, DNS has had a generalized mechanism for multi-level caching of information with per-entry cache expiration intervals (including at the lowest end-user end-point).

i think it was 1991(?) acm sigmod conference in san jose ... somebody raised a question about what was this x.5xx stuff going on .... and somebody else explained that it was a bunch of networking engineers attempting to reinvent 1960s database technology.

so the primary target for SSL has been client access for e-commerce. There have been studies that show that e-commerce activity is highly skewed ... with possibly only 200 sites accounting for upwards of 90 percent of activity. If you were looking specifically at public key serving within a DNS real-time retrieval paradigm .... with standard caching and cache entry expiration intervals to address any performance issues that might hypothetically crop up ... you are looking at a relatively small number of public keys that have to be cached to cover the majority of actual world-wide SSL activity.
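
purely as a hypothetical sketch of that kind of DNS-served public key with per-entry cache expiration (there is no standard DNS record type for this; the TXT record and the dnspython library below are just stand-ins):

    # hypothetical sketch: retrieve a server "public key" published in DNS
    # (TXT used as a stand-in record type) and cache it for the record's TTL
    import time
    import dns.resolver       # dnspython

    _key_cache = {}           # domain -> (expires_at, key_material)

    def onfile_public_key(domain):
        now = time.time()
        cached = _key_cache.get(domain)
        if cached and cached[0] > now:
            return cached[1]                        # still within the TTL
        answer = dns.resolver.resolve(domain, "TXT")
        key_material = b"".join(answer[0].strings)  # hypothetical key encoding
        _key_cache[domain] = (now + answer.rrset.ttl, key_material)
        return key_material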

More Phishing scams, still no SSL being used

From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.crypto
Subject: Re: More Phishing scams, still no SSL being used...
Date: Sat, 21 May 2005 13:20:09 -0700
.... and of course what you could really do is slightly tweak multiple A-record support ... add an option to the ip-address resolve request ... and piggy-back any available public key(s) on the response giving the server's ip-address(es). no additional transactions needed at all ... and you have the public key before you have even made the attempt to open the tcp session. if the server had also registered their crypto preferences when they registered their public key ... you could almost imagine doing the ssl session setup integrated with the tcp session setup.
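
a purely conceptual sketch of such a piggy-backed response (no such DNS option exists; every field name here is invented for illustration):

    # conceptual only: the ip-address answer and the server's public key /
    # crypto preferences arrive together, before any tcp session is opened
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class NameResolution:
        domain: str
        ip_addresses: List[str]                         # the usual multiple A-record answer
        public_keys: Optional[List[bytes]] = None       # piggy-backed server public key(s)
        crypto_preferences: Optional[List[str]] = None  # e.g. preferred ciphers

    # a client holding such a response could fold the ssl session setup into
    # the tcp session setup, since the key arrives with the ip-address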

when we were on the xtp technical advisory board .... xtp was looking at doing a reliable transaction in a 3-packet minimum exchange; by comparison tcp has a minimum 7-packet exchange ... and currently any ssl is then over and above the tcp session setup/tear-down. the problem with the current server push of the certificate is that the tcp session has to be operational before any of the rest can be done.

misc. past ssl certificate postings
https://www.garlic.com/~lynn/subpubkey.html#sslcert

misc. past xtp/hsp (and some osi) postings
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

Revoking the Root

From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.security
Subject: Re: Revoking the Root
Date: Sat, 21 May 2005 18:50:47 -0700
Ian G wrote:
Oh... so it's written in the standard. Are you saying that the standard defines no way to revoke a lost CA root? Or that it is impossible to revoke a CA root? They are two entirely different things.

OpenPGP does it, the keys can revoke themselves, and indeed the early docs used to exhort you to create a revocation certificate for emergency use.

I don't know what SPKI does - do roots in that system revoke themselves? In my own designs, we would just go the manual route and alert everyone.

No financial system permits such trust to be placed in a single point of failure; Most of the finance systems I have seen have defence in depth and layered "meltdown" plans. For example, there was Mondex's famous plan to distro new code into the cards if the crypto got cracked, and all of the cards schemes operated secret shadow accounting systems for meltdown motives (originally).


i think there was a discussion 8-9 years ago ... the root can sign a CRL revoking itself. the issue was what can a bad guy do with a stolen root private key. they can sign fraudulent certificates and they can sign revocations. however, supposedly revocation was only a one-way operation .... either the bad guys or the good guys might sign a revocation of the root key ... but it was not possible for anybody to sign anything that could unrevoke a revoked root key. The idea was that once somebody put the revocation of a root key in play (whether it was the good guys or the bad guys) nobody would be able to reverse the operation.

there were numerous statements by major financial transaction operations over the past ten years that they would never convert to a conventional PKI system because they would never deploy any kind of infrastructure that had single points of failure (or even a small number of points of failure) .... the compromise of a root key (no matter how unlikely) was viewed as a traditional single point of failure scenario. there was some study that major financial operations could be out of commission for 4-8 weeks assuming a traditional PKI-based operation and a compromise of the root private key. such positions were frequently mentioned in conjunction with systemic risk and the preoccupation of major financial operations with systemic risk vulnerabilities.

The basic motivation for mondex was the float that mondex international got when the next lower level business unit bought one of their "superbricks". Anybody lower than mondex international in the chain was just replacing float lost to the next higher level in the chain. This issue might be considered the primary financial driving force behind the whole mondex movement. It was so significant that mondex international began offering to split the float as an inducement for institutions to sign up (i.e. the organizations that would purchase a superbrick from them).

A spectre hanging over the whole operation was some statement by the EU central banks that basically said that mondex would be given a two year grace period to become profitable, but after that they would have to start paying interest on customer balances held in mondex cards (effectively float disappears from the operation ... and with it much of the interest in institutions participating in the effort).

mondex international did do some other things ... they sponsored a series of meetings (starting in san francisco) on internet standards work ... which eventually morphed into IOTP.

from my rfc index:
https://www.garlic.com/~lynn/rfcietff.htm

select Term (term->RFC#) in RFCs listed by section and then scroll down to
Internet Open Trading Protocol
3867 3506 3505 3504 3354


Clicking on the individual RFC numbers fetches the RFC summary for that RFC. Clicking on the "txt=nnn" field, retrieves the actual RFC.

misc past posts mentioning systemic risk:
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/99.html#156 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/aepay2.htm#fed Federal CP model and financial transactions
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aadsm2.htm#risk another characteristic of online validation.
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm3 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech7 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmail.htm#parsim parsimonious
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsmail.htm#vbank Statistical Attack Against Virtual Banks (fwd)
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aepay10.htm#13 Smartcard security (& PKI systemic risk) thread in sci.crypt n.g
https://www.garlic.com/~lynn/aepay10.htm#19 Misc. payment, security, fraud, & authentication GAO reports (long posting)
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2002c.html#7 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#31 You think? TOM
https://www.garlic.com/~lynn/2002l.html#5 What good is RSA when using passwords ?
https://www.garlic.com/~lynn/2003l.html#64 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2003m.html#11 AES-128 good enough for medical data?
https://www.garlic.com/~lynn/2004j.html#2 Authenticated Public Key Exchange without Digital Certificates?
https://www.garlic.com/~lynn/2004j.html#5 Authenticated Public Key Exchange without Digital Certificates?
https://www.garlic.com/~lynn/2004j.html#14 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento

Revoking the Root

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: <lynn@garlic.com>
Newsgroups: netscape.public.mozilla.security
Subject: Re: Revoking the Root
Date: Sat, 21 May 2005 19:15:43 -0700
Ian G wrote:
Whereas if a root cert was used, then that could only have been lifted in a very few places. The use of a root cert would then send a very strong signal back that would lead to how and when and where it was ripped off.

regarding the proposals that allow backup/copying of a private key (as a countermeasure to a physical single point of failure) ... when we were running the ha/cmp project:
https://www.garlic.com/~lynn/subtopic.html#hacmp

we coined the terms disaster survivability and geographic survivability to differentiate from simple disaster/recovery.

in any case, just having a process that allows copying of a private key, and having multiple copies, increases the vulnerability to diversion of copies for fraudulent purposes.

some recent studies claim that at least 77 percent of fraud/exploits involve insiders. from an insider fraud standpoint, diversion of the root private key becomes analogous to embezzlement.

bad things can happen with compromise of PKI root key and resulting fraudulent transactions.

however, the systemic risk of having a single PKI root key revoked and having to put the infrastructure thru restart/recovery from scratch is viewed as possibly an even worse prospect.

The Worth of Verisign's Brand

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Sun, 22 May 2005 06:50:23 -0600
"Anders Rundgren" writes:
If this list believes that users should make conscious decisions on what CAs to trust, you are on the wrong track, as this is impossible for mere mortals. A possible solution would be that you, for a fee, "outsourced" CA trust decisions to a party that has this as their prime business. Such a model would in fact add considerably more interesting stuff to the plot than just CA validity. It could actually claim that the reputation of an organization you are about to contact is not the best.

trust is a funny thing. in the non-association payment card world ... each merchant that accepts payment cards has a bilateral agreement (aka contract) with each financial institution issuing cards (for which they accept/trust cards, aka N*M contracts, aka N merchants, M issuers). in turn, each financial institution issuing cards has effectively a bilateral agreement with the consumers they issue a card to. for ten thousand merchants, a thousand issuing institutions, and a million customers ... with each merchant having a contract with every issuing institution, it would be 10K*1K contracts (on the merchant side) and 10**6 on the issuing side. this avoids the larger problem of every merchant having a contract with every customer with a payment card (or even for each of a customer's payment cards).

in the association payment card world ... the contract problem is slightly simplified. a merchant has a contract with their merchant financial institution (one per merchant). A merchant financial institution has a contract with the association (one per merchant financial institution). A consumer has a contract for each one of their payment cards with the respective card issuing financial institutions. Each card issuing financial institution has a contract with the association. Basically there is a three-level hierarchy of trust & contractual relationships. This mitigates the worst case scenario of having every customer sign a trust/contract with every merchant (say worst case with a billion customers and a million merchants ... a million billion contracts).
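
a quick back-of-the-envelope sketch (python, using the numbers above and a made-up count of 100 merchant financial institutions) of the contract-count difference between the two models:

merchants  = 10_000
issuers    = 1_000
consumers  = 1_000_000

# bilateral (non-association) model: every merchant contracts with every
# issuer, and every consumer contracts with their card issuing institution
bilateral = merchants * issuers + consumers            # 10,000,000 + 1,000,000

# association model: merchant->merchant bank, merchant bank->association,
# consumer->issuer, issuer->association
merchant_banks = 100                                    # hypothetical count
association = merchants + merchant_banks + consumers + issuers

print(bilateral, association)    # roughly 11,000,000 vs roughly 1,011,100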

The other characteristic that carries thru via the financial institution contracts with the associations is that the merchant financial institution is liable to the association on behalf of the merchants they sponsor ... and the consumer financial institution is liable to the association on behalf of the consumers they issue cards to. The merchant example is that merchant financial institutions both love & hate big-ticket merchants like airlines; they love the percent charges they get on the big-ticket transactions for accepting the liability ... but it is painful when an airline goes bankrupt and they have to make good on all outstanding charged tickets (which frequently has run to tens of millions).

The typical TTP CA trust model (from a business standpoint) is interesting ... since there is typically an explicit trust/contract between the CA and the key owners that they issue certificates to. However, there frequently is no explicit trust/contractual relationship that traces from relying parties to the CA (as you would find in the financial world, which traces a trust/contractual trail between every merchant and every customer issued a payment card, aka a merchant can trust a payment card via the thread of contracts thru their merchant financial institution, to the association, and then to the card issuing consumer financial institution, with a corresponding thread of explicit liability at each step).

In the federal PKI model ... they sort of made up for this by having GSA (as a representative of the federal gov. as a relying party) sign contracts with every approved CA issuing certificates. That way the federal gov. (as a relying party), when accepting a certificate and making some decision or obligation based on the trust in that certificate ... has a legal liability trail to the certificate issuing institution.

In the SSL certificate model, when a client/end-user makes any decision or obligation based on the trust in the SSL certificate, the client/end-user rarely has a contractual trail back to the SSL certificate issuing institution. In the payment card world, the merchant accepting a payment card has a contractual trail to the responsible financial institution accepting liability on behalf of the consumer they issued a card to. In a similar way, each consumer presenting a card to a merchant has a contractual trail to the responsible financial institution accepting liability on behalf of the merchant they represent.

When a merchant presents an SSL certificate to a consumer ... the consumer has no contractual trail to the SSL certificate issuing institution.

In the early development of this stuff that came to be called e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

there was some look at the business issues of payment associations issuing branded SSL certificates ... so that the presentation of such a branded SSL certificate carried with it trust/liability obligations similar to the branded association decals you see on physical store fronts. One problem was that what might be implied by placing trust in an SSL certificate (and therefore the associated possible liabilities) is quite a bit more ambiguous than what is defined for trust between merchant & consumer in a payment card transaction. If all such branded SSL certificates were only to explicitly cover the actual payment transaction and no other activity ... then it was much easier to define the liability issues (aka there are contractual/business issues that tend to be orthogonal to technical protocol issues ... and people working on technical protocol issues may not even have any concept of what they are about).

One of the issues has been that many merchant financial institutions were having a hard time even coming to grips with the reality of signing contracts allowing internet merchants to be credit card enabled. They could see it for brick&mortar institutions that were already MOTO authorized (i.e. internet credit card transactions mapping into the existing non-face-to-face, card-holder-not-present contractual parameters of mail-order/telephone-order transactions).

The issue was for a purely internet merchant that had no inventory, no physical property ... etc. ... no assets that could be forfeited in the event of a bankruptcy that would cover the risk exposure to the financial institution for taking liability of all possible outstanding transactions.

The other characteristic of the CA PKI certificate paradigm being targeted at the early 80s offline email paradigm ... was that in the payment card scenario every merchant transaction is online and passes thru the (liability accepting) merchant financial institution (or some agent operating on behalf of the merchant financial institution).

The CA PKI certificate paradigm for the early 80s offline email had the relying-party/recipient dialing their (electronic) postoffice, exchanging email, hanging up, and being faced with first-time communication from a total stranger ... where the relying-party had no other recourse for establishing any attributes regarding the stranger. The PKI certificate sort of filled an analogous role to the "letters of credit" from the sailing ship days. This is an offline push model where the subject is pushing the credential to the relying-party ... and the intended purpose was to address the environment where the relying-party had no real-time method for corroborating the information.

In the 90s, when some suggested that the credit card model should be brought into the modern era with certificates, I would comment that it would be regressing the payment card industry to the archaic, ancient non-electronic period of the '50s & '60s (when the merchant, as relying-party, had no recourse to online information and had to rely on the revocation booklets mailed out every month, and then every week).

The payment card industry transitioned to the online model in the 70s and left the old-fashioned offline model (that the CA PKI model uses) mostly in history.

In any case, the issue is that the merchant financial institution, accepting liability on behalf of the merchant, gets to see every financial transaction in real time, as it is happening. At any point in time, the merchant financial institution has an approximate idea of the aggregate, outstanding financial liability it has per merchant (because it is seeing and aggregating the transactions in real time) and could choose to shut it off at any moment.

One of the financial institutions' objections to the CA PKI certificate model ... was that there could be an incremental financial liability every time the merchant presented the certificate ... and there was no provision for an issuing financial institution (that chose to stand behind such a paradigm) to calculate their potential, outstanding risk. The issue of not knowing their potential liability exposure at any moment was somewhat orthogonal to not knowing how to deal with operations that might not have any assets ... and therefore there was nothing to recover in forfeiture if a bankruptcy occurred.

That was somewhat the idea that CA PKI certificates ... in the modern online risk management world ... were ideally suited for no-value transactions (i.e. since the trust issue involved no value ... it would be easy to always know the outstanding, aggregated risk ... since you knew that summing values of zero ... still came up zero, no matter how many such events there were).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Sun, 22 May 2005 07:19:33 -0600
Anne & Lynn Wheeler writes:
accepting liability on behalf of the merchant, gets to see every financial transaction in real time as it is happening. At any point in time, the merchant financial institution has an approximate idea of the aggregate, outstanding financial liability it has per merchant (because it is seeing and aggregating the transactions in real time) and could choose to shut it off at any moment.

slight addenda ... the merchant financial institution ... accepting liability on behalf of the merchants they sponsor in the infrastructure, has one other mechanism.

basically, in a payment card transaction ... the card issuing financial institution comes back with a real-time promise to pay the merchant. the card issuing financial institution then transfers the promised funds to the merchant financial institution.

the merchant financial institution, in calculating the outstanding run-rate liability for any particular merchant ... can put a delay on actually making such funds available to the merchant ... aka they have some calculation on the risk history of the merchant and an idea (from real-time transactions) of the current outstanding liability. Another of the ways that a merchant financial institution can control the aggregate financial risk exposure they have per merchant ... is by delaying the actual availability of funds (in any default/bankruptcy by the merchant, since the funds haven't actually been released ... the delayed funds can be used by the merchant financial institution to help cover their outstanding financial liability on behalf of the merchant).

In the CA PKI model, unless you are dealing with purely no-value transactions ... there is a double whammy: the per-transaction risk is somewhat ambiguous ... and in the offline certificate push model ... there is no idea at all how many times a particular certificate has been pushed (basically multiplying an unknown number by another unknown number to come up with some idea of the outstanding liability at any specific moment).

somewhat in the business or value world ... trust frequently is translated into terms of financial liability.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Sun, 22 May 2005 10:39:28 -0600
"Anders Rundgren" writes:
How should a good system have been designed? The IETF should have recognized the obvious: e-mail is a TWO-DIMENSIONAL identity and thus trust structure. That is, domains (MTAs) should authenticate/encrypt to each other, preferably using the in fact not too useless SSL PKI. Then end-users should authenticate to the mail-servers. As they already do that for fetching mail it is odd that it is not required for sending mail. There is very little reason for end-to-end security in a corporate environment. In fact, archiving and automatic content control (including virus checks) mostly make encryption a bad choice in such environments.

there is a separate issue ... ISPs for a long time tended not to want to take responsibility (and therefore liability) for spam origination.

had this argument maybe ten years ago about ISPs filtering originating packets (from the ISP's customers before hitting the internet) based on things like bogus origin ip-address (various kinds of spoofing attacks ... not totally dissimilar to phishing attacks with bogus origin). even as late as 5-6 years ago, the counter arguments were that ISPs had neither the processing capacity nor the technology capability for recognizing incoming packets and filtering packets that had a bogus origin ip-address. However, in this period, ISPs were starting to do all kinds of other packet/traffic filtering & monitoring of their customers for things in violation of the terms & conditions of their service contract (proving that they did have the capacity and technology).

A possible scenario is that if ISPs somehow demonstrated that they were doing filtering/censoring on things coming from their customers before they got on the internet ... and if something actually got thru and reached a destination victim ... the destination victim might be able to turn around and sue the originator's ISP. I think that ISPs want to avoid being seen as financially liable for bad things that might be done by their customers.

the other counter argument raised was that even if responsible ISPs started censoring the activity of their customers ... there were enough irresponsible ISPs in the world that it wouldn't have any practical effect. However, there is a multi-stage scenario: 1) responsible ISPs might be able to do origin filtering on 90% of the bad traffic, 2) doing origin censoring rather than destination censoring eliminates a lot of infrastructure processing overhead between the origin and the destination, 3) for store & forward traffic, responsible ISPs could still perform entry censorship at the boundaries where it crosses from an irresponsible ISP to a responsible ISP.

in many of these situations it isn't whether the receiver can absolutely prove who the originator is .... it is whether the originator is generating traffic with a spoofed address that the originator's ISP would have reason to know isn't consistent with the originator's assigned address.

Not too long after the greencard incident ... we were on a business trip to scottsdale and having dinner at a restaurant in old town. Three people came in and were seated behind us (a man and a couple). The man spent most of the dinner explaining to the couple how to configure their service for commercial purposes ... and how he was going to be able to send out loads of spam on their behalf (if they would sign up with him); the techniques he had for staying ahead of the ISPs that might want to get around to shutting down one or another of his spam producing facilities (we suspected that the man doing the talking might have been involved somehow with the greencard incident).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Now the crackpots are trying to make it their own

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Now the crackpots are trying to make it their own
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.arch
Date: Sun, 22 May 2005 11:19:44 -0600
"Tom Linden" writes:
Can the cache be shared by separate processes?

ugh ... nearly 40 years ago as an undergraduate ... i started doing a bunch of work on page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock
on multi-user timesharing systems
https://www.garlic.com/~lynn/submain.html#timeshare
that was being turned around and shipped in commercial systems. actually, i was also doing the generalized resource scheduling algorithms that were also being shipped in commercial systems
https://www.garlic.com/~lynn/subtopic.html#fairshare

... anyway ... I was asked several times about the problem of low-bandwidth information leakage and what was I going to do about it.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Outsourcing

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing
Newsgroups: alt.folklore.computers
Date: Sun, 22 May 2005 14:49:19 -0600
greymaus writes:
'Twas an article in one of the financial papers, UK, about the rise of Asia, one of the reasons for the rise of india given was that they were not as hung up on moralities, veracities, as the Europeans... :)

there was an article in a hong kong paper around 90 or 91 about the competition for the emerging world-wide outsourcing market ... and the position that the province just across on the mainland wanted to play.

the article basically went into some amount of detail about why india was in a much better position to compete (vis-a-vis the nearby mainland province) in the emerging world-wide outsourcing market ... a primary issue was that india had a significantly better civil servant middle class ("left behind by the british") providing essential infrastructure support (that was needed/required to compete effectively in the emerging world-wide outsourcing market).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Sun, 22 May 2005 15:03:03 -0600
Anne & Lynn Wheeler writes:
there is a separate issue ... ISPs for a long time tended to not want to take responsibility (and therefor liability) for spam origination.

there is also the economic model that ISPs might be able to feature/charge for protecting their customers from the bad things that happen on the internet .... whereas there doesn't seem to be anybody to pay the ISP to protect the internet from bad things their customers might do to the internet (and in fact ... the customers that were planning on doing bad things to the internet might even be willing to pay more for the privilege).

a couple days ago i ran across a quicky comment about a new book called freakonomics ... and just now stopping by a local computer bookstore ... it is the first book you see at the door. it purports to be "a rogue economist explores the hidden side of everything"

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Outsourcing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing
Newsgroups: alt.folklore.computers
Date: Mon, 23 May 2005 09:10:16 -0600
Nick Spalding writes:
The fact that middle class mostly knows English doesn't do much harm either. -- Regards, Nick

english would be useful for dealing with english speaking customers.

the point of the HK newspaper article specifically was that the internal infrastructure was significantly better operated ... like how many weeks (months, years) it would take for a local business to get electricity, water, permits, phone ... and how reliable the electricity, water, phone, transportation, utility, etc, services were.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Mon, 23 May 2005 09:15:18 -0600
Gervase Markham writes:
I don't agree. Without transparency, you can't know how much security you have.

Nevertheless, quoting aphorisms is not particularly helpful.

The process will acquire more transparency; there are plans afoot to make that happen. But we had to start somewhere. Gerv


may be a 2nd order factor ... complexity tends to be related to security vulnerabilities ... transparency can be useful when dealing with complexity ... and therefore is a countermeasure for security complexity vulnerabilities. KISS can also be a countermeasure for security complexity vulnerabilities.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

First assembly language encounters--how to get started?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: First assembly language encounters--how to get started?
Newsgroups: alt.folklore.computers
Date: Mon, 23 May 2005 11:55:41 -0600
forbin@dev.nul (Colonel Forbin) writes:
In the Pentagon, it is often the only one.

in the late 70s, we used to joke that FSD's (federal systems division, sold to loral in the 90s) primary language was script (aka the document formatting application that supports GML ... precursor to SGML, XML, etc; most people thought of the language with respect to the command they typed as opposed to the language itself).

gml (& standardization as sgml)
https://www.garlic.com/~lynn/submain.html#sgml

brought to you courtesy of the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Mon, 23 May 2005 14:09:40 -0600
Anne & Lynn Wheeler writes:
trust is a funny thing. in the non-association payment card world ... each merchant that accepts payment cards has a bilateral agreement (aka contract) with each financial institution issuing cards (for which they accept/trust cards, aka N*M contracts, aka N merchants, M issuers). in turn, each financial institution issuing cards has effectively a bilateral agreement with the consumers they issue a card to. for ten thousand merchants, a thousand issuing institutions, and a million customers ... with each merchant having a contract with every issuing institution, it would be 10K*1K contracts (on the merchant side) and 10**6 on the issuing side. this avoids the larger problem of every merchant having a contract with every customer with a payment card (or even for each of a customer's payment cards).

using the non-association & non-merchant bank example ... then if SSL TTP PKI CAs were to apply normal business processes ... then every relying party for an SSL domain name server certificate would need to have a pre-existing contract with every SSL domain name server certificate issuing institution. Looking in a typical browser repository of trusted CA public keys ... there are possibly 40-50 (although some are multiples for the same business operation).

Taking one billion internet clients as a first order approximation to the estimated SSL domain name certificate relying parties ... then a gross, first under-approximation to the required number of such contracts would be 50 billion individually signed contracts.

in the payment card scenario this is mitigated by having the credential (payment cards) issuing institutions sign contracts with the brand associations (hierarchical legal trust rollup on the issuing side). Then the relying-party merchants have contracts with merchant financial institutions which in turn rollup with the merchant financial institutions having (legal trust) contracts with the payment associations.

in this sense, SSL TTP PKI CAs are more efficient than an approximately analogous real business operation (aka payment cards as issued credentials which require explicit business processes between all the parties) by not conforming to standard business practices (having an explicit legal trust relationship between the relying parties and the credential issuing institutions). An example where this was addressed has been the Federal PKI ... where the federal gov., as a relying party, signed explicit (legal trust) contracts with each authorized certificate issuing certification authority.

one of the things on the table (when originally pulling together the current e-commerce infrastructure) was that the same financial infrastructure that took liability for merchant transactions would also issue SSL domain name certificates (which in addition to proving domain name ownership would also indicate the liability accepting relationships). However, for whatever reasons, that option was not followed.

The current scenario is that the SSL domain name certificates basically represent some due diligence in checking with the domain name infrastructure as to the true domain name owner. However, there is (nominally) no related, explicit, contractual chain of legal trust that can be followed from relying parties to the certificate issuing operations.

Also, as oft repeated, one of the motivating factors in the perceived need for domain name due diligence (aka SSL domain name certificates by any other name) has been integrity concerns with regard to the domain name infrastructure .... and how can a client really be sure that the server they are talking to is actually the real server related to the domain name they typed in as part of the URL. This becomes somewhat ambiguous when one realizes that the domain name infrastructure is the authoritative agency for domain name ownership ... and the same authoritative agency that certification authorities have to check with regarding true domain name ownership.

misc. past ssl certificate postings:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

technical question about fingerprint usbkey

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: technical question about fingerprint usbkey
Newsgroups: sci.crypt
Date: Tue, 24 May 2005 08:31:40 -0600
"frozt" writes:
i saw them fool those things with a melted gummy bear on techtv...

the issue can be security proportional to risk.

something like 30 percent of debit cards are reputed to have the PINs written on them. part of this is the serious proliferation of shared-secrets ... and the difficulty of people being forced to remember scores of different shared-secrets.
https://www.garlic.com/~lynn/subintegrity.html#secrets

some number of the fingerprint scenarios are targeted at the pin/password market where there is significant, common practice for people writing down their pin/password.

in this case the issue comes down to what is simpler, easier for a crook (having stolen a card)

1) to lift a pin written on the card and fraudulently enter the pin

2) to lift a fingerprint possibly left on the card and fraudulently enter the fingerprint.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Tue, 24 May 2005 08:25:33 -0600
"Anders Rundgren" writes:
Mutual authentication is not rocket science but in order to work you need OTPs or PKI. That is, it is time to let passwords RIP.

for the original stuff that was going to turn into this stuff called e-commerce ... we specified mutual authentication SSL ... before there was a thing in SSL for mutual authentication.

however, it is evident that the design point for certificates & PKI based infrastructure is for strangers that have never communicated before. in this original mutual authentication deployment that we had specified ... it was between (merchant) webservers and the payment gateway.

however, it quickly became clear that there had to be a prior contract between merchant webservers and their respective payment gateway, and that the use of certificates in the SSL establishment was purely an artificial artifact of the existing SSL implementation.

in actual fact, before the SSL session was ever established ... the merchant webserver had a preconfigured set of data on what payment gateway they were going to contact and the payment gateways had preconfigured information on which merchants they would process for. Once the SSL session was established ... this preconfigured authentication was exercised w/o regard for any certificates. The use of certificates as an authentication mechanism was purely a facade and an artificial artifact of the use of the existing SSL implementation ... and in no way represented the real (online) business authentication process.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
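
purely as an illustration of the point (and not the actual implementation), a minimal python sketch, using only the python standard library and made-up host names and fingerprints, of trusting a preconfigured table of registered gateways rather than the certificate chain presented during the SSL handshake:

import hashlib, socket, ssl

# hypothetical preconfigured table: gateway host -> sha256 of its certificate
REGISTERED_GATEWAYS = {
    "gateway.example.com": "preconfigured-sha256-fingerprint-goes-here",
}

def connect_to_gateway(host, port=443):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE       # the CA/certificate chain is ignored
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    # the real check is against the preconfigured relationship data,
    # not against anything a certification authority vouched for
    if hashlib.sha256(der).hexdigest() != REGISTERED_GATEWAYS.get(host):
        sock.close()
        raise ValueError("peer is not a registered gateway")
    return sock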

relying parties and business parties have well established processes for maintaining information about their business relationships (some of these well established business relationship processes have evolved over hundreds of years). passwords are an authentication technology that has been managed using these relationship business management processes. it is possible to use the existing business relationship processes for managing other kinds of authentication material, including public keys.

certificates are not intrinsically a substitute for replacing all of the existing, well established business relationship processes, nor are they a mandatory requirement as the only means of managing public key authentication material in well-established business relationships.

the design point for PKI and certificates was the offline email paradigm of the early 80s, where a recipient would dial their (electronic) postoffice, exchange email, and then hang up. The recipient (relying party) was then possibly faced with processing first-time email from a total stranger. the role of the certificate was analogous to the letters of credit from the sailing ship days, the relying party lacking any prior information of their own regarding the stranger and/or any timely, direct access to a certifying authority.

this is analogous to the early days of the payment card industry, where the plastic card was the credential and there were weekly mailed booklets to every merchant (relying party) listing the revoked credentials. in the 70s, the payment card industry quickly moved into the modern, online world of real-time transactions (even between relying parties that were strangers that never had any prior contact). in the mid-90s when the suggestions were made that the payment card industry could move into modern times by converting to (offline, stale, static) certificates ... my observation was that moving to certificates would actually represent regressing 30 years to an offline model ... rather than the real, modern, online model that they had been using for over 20 years.

It is perfectly possible to take well established business processes used for managing relationships ... and "RIP" shared-secret authentication technology ... by substituting public key registration in lieu of shared-secret (pin/password) registration. businesses are not likely to regress to stale, static certificates for the management of timely and/or aggregated information ... like current account balance. From there it is a trivial step-by-step process to prove that stale, static certificates are redundant and superfluous between relying parties that have existing business relationships.
https://www.garlic.com/~lynn/subpubkey.html#certless

the original pk-init draft standard for kerberos specified only certificate-less management of public keys, treating public keys as authentication material in lieu of shared-secrets and leveraging the existing extensive online management of roles and permissions ... which are typically implicit once authentication has been performed. aka it is not usual that authentication is performed just for the sake of performing authentication acts ... authentication is normally performed within the context of permitting a specific set of permissions (in the financial world, some of these permissions can be related to real-time, aggregated information like current account balance)
https://www.garlic.com/~lynn/subpubkey.html#kerberos

similarly it is possible to take another prevalent relationship management infrastructure, RADIUS, and substitute digital signatures and the registration of public keys in lieu of shared-secrets ... and maximize the real-time, online management and administration of authentication and permissions within a synergistic whole environment.
https://www.garlic.com/~lynn/subpubkey.html#radius

in any sort of value infrastructure, if it is perceived advantageous to have real-time management, administration and access to permissions, authorization and other kinds of authentication information ... then in such an environment, it would seem not only redundant and superfluous but also extremely archaic to rely on the offline certificate paradigm designed for first-time communication between total strangers (where the stale, static certificate substitutes for direct, real-time access to a trusted authority).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Tue, 24 May 2005 13:52:23 -0600
"Anders Rundgren" writes:
Mutual authentication is not rocket science but in order to work you need OTPs or PKI. That is, it is time to let passwords RIP.

so passwords by definition imply a pre-established relationship and a relationship administration and management infrastructure.

in the mid-90s some were complaining ... so what if stale, static certificates were redundant and superfluous in an environment involving a pre-established relationship and an existing relationship administration and management system ... they can't actually hurt anything. However, that doesn't take into account the redundant and superfluous overhead costs of actually doing the redundant and superfluous certificate-oriented processing where there already is an established administrative and management relationship system. The other scenario is that some might get confused and decide to rely on the stale, static, redundant and superfluous certificate data in lieu of actually accessing the real data.

the other scenario would be to leverage certificate-based operations in a no-value scenario ... and eliminate any established relationship administrative and management infrastructure. Say, a membership environment, where any member could "buy" (obtain) any resource possible and there was no need to perform per member reconciliation. Say a bank ... that would allow any customer to perform as many withdrawals as they wanted ... regardless of their current balance (in fact, totally eliminating the concept of a financial institution even having to keep track of customer balances ... as being no-value and superfluous).

however, the truth is ... with regard to value infrastructure, there tends to be a requirement for a relationship administrative and management infrastructure (some of the methodology has been evolving for hundreds of years) that tracks and accumulates information on individual relationships ... even dynamically and in real time.

for value infrastructures that are managing and administrating relationships with tried & true established methodology ... then certificate-oriented PKIs become redundant and superfluous ... as are the stale static certificates themselves.

the issue then is that in a mature and well established administrative and management infrastructure it is straightforward to upgrade any shared-secret (identity information, SSN#, mother's maiden name, pin, password) oriented authentication infrastructure
https://www.garlic.com/~lynn/subintegrity.html#secrets

with a digital signature infrastructure where public keys are registered as authentication information in lieu of shared-secrets and digital signature validation (using the public key) is used in lieu of shared-secret matching.
https://www.garlic.com/~lynn/subpubkey.html#certless

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

technical question about fingerprint usbkey

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: technical question about fingerprint usbkey
Newsgroups: sci.crypt
Date: Tue, 24 May 2005 14:32:13 -0600
"frozt" writes:
i saw them fool those things with a melted gummy bear on techtv...

the other is that a major skimming fraud with ATM machine overlays and pin-hole camera (you even see them on crime-shows these days) is picking up the magstripe and the pin-hole camera recording which keys were used to enter the pin. current pin-hole camera technology is having a harder time picking up fingerprint for counterfeiting than it is having picking up keys entered for counterfeit PIN entry.

it isn't that there aren't fingerprint vulnerabilities ... but they are more difficult than some common PIN vulnerabilitys ... aka lost/stolen card with pin written on the card ... or ATM overlay picking out PIN from keys used on pin-pad.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Tue, 24 May 2005 23:45:41 -0600
Ian G writes:
As an observation, what's happening on the litigation front suggests that the scene is now set for this conflict of goals to be tested in court. There are now 4 separate thrusts in litigation testing the assumptions of Internet security (two of these are not public). Which means that patience is exhausted, and what is presented as security is no longer taken at face value.

one might also be tempted to make the case that in a situation where there are two parties with an ongoing relationship and there are well established infrastructures for managing that relationship (in some cases involving methodologies that have evolved over hundreds of years) ... the introduction of any external operation interfering in the management of that relationship ... like a TTP CA ... is detrimental to efficient business operation.

there is a case made that the exploding use of electronic, online access has created a severe strain on the shared-secret authentication paradigm ... people having to memorize scores of unique pin/passwords.
https://www.garlic.com/~lynn/subintegrity.html#secrets

asymmetric cryptography created a business solution opportunity.

In the shared-secret paradigm, the same datum is used to both originate as well as authenticate. Persons having access or gaining access to the authentication information also have the information to fraudulently impersonate and originate.

The business solution applied to asymmetric cryptography was to designate one of the paired keys as "public" and freely available for authentication purposes. The business process then defines the other of the paired keys as "private", to be kept confidential and never divulged. The business process defines that only the private key (which can never be divulged) can be used to originate a digital signature ... and only the public key is used to verify the digital signature.

from the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


the validation of a digital signature with a specific public key implies something you have authentication ... i.e. the originator has access and use of the corresponding private key (which has always been kept confidential and has never been divulged).

Attacks on authentication material files involving public key authentication don't open the avenue of impersonation (as can occur when using shared-secrets).

Therefore registering public keys as authentication material in existing relationship administrative and management infrastructures acts as a countermeasure to individuals compromising those files and being able to use the information for impersonation and fraud.
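
a minimal sketch of that substitution (python, assuming the pyca/cryptography package; the account names and helper functions are purely illustrative) ... the institution registers only the public key, and authentication is a digital signature over a fresh server challenge:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

accounts = {}                    # account number -> registered public key

def register(account, public_key):
    accounts[account] = public_key            # no shared-secret is stored

def authenticate(account, sign_fn):
    challenge = os.urandom(32)                 # fresh per-transaction data
    signature = sign_fn(challenge)             # performed with the private key
    try:
        accounts[account].verify(signature, challenge,
                                 ec.ECDSA(hashes.SHA256()))
        return True                            # "something you have" verified
    except InvalidSignature:
        return False

# usage: the key owner keeps the private key; the institution only ever
# sees (and can only ever leak) the public key
priv = ec.generate_private_key(ec.SECP256R1())
register("acct-123", priv.public_key())
ok = authenticate("acct-123",
                  lambda m: priv.sign(m, ec.ECDSA(hashes.SHA256())))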

The business role of CAs and certificates ... especially TTP CAs, is to provide information for relying parties in situations involving first-time contact between strangers where the relying party has no recourse to any resources for determining information about the originator.

In situations where two parties have an established, ongoing relationship and there are well established facilities for administering and managing that relationship, the stale, static, offline-paradigm certificates are redundant and superfluous.

It is possible that the significant paradigm mismatch between well established relationship administrative and management infrastructures and CA TTPs (targeted at addressing the problem of first-time communication between two strangers) is responsible for at least some of the discord.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REPOST: Authentication, Authorization TO Firewall

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REPOST: Authentication, Authorization TO Firewall
Newsgroups: comp.security.firewalls
Date: Wed, 25 May 2005 10:37:11 -0600
"Greenhorn" writes:
Do firewalls provide dynamically defined access control i.e., can they act as access controllers. e.g., it should be able to do the following, a user tries to access a resource, the packets would come to the firewall, if they are HTTP packets and the user is new (from IP address not being in the authenticated list), the packets would be redirected to a webproxy, the webproxy tries to get the user authenticated by a AAA server (say RADIUS), the firewall would get an authorization message from the AAA server (or webproxy), saying the time the user must be allowed access, the resources he can access etc. The firewall would provide that access.

Can this be done by the firewalls in the market such as Checkpoint firewall-1


authentication/authorization boundary checkers are frequently being referred to as portals (when used as boundary interface to the internet). the application firewalls and packet-filter routers have frequently just been somewhat transparent boxes that filter out identifiable bad stuff.

boxes that clients interact with for authentication/authorization function are frequently referred to as portals.

authorization policy filtering based on origin ip-address, possibly by time-of-day ... could be an administrative function that updated/changed packet filtering router rules at different times of the day. This frequently would be a push operation from the policy and administrative infrastructure ... rather than a pull function from the individual boxes.
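
a trivial sketch (python, with a made-up policy table; router_update_fn stands in for whatever administrative interface actually rewrites the router's filter rules) of that kind of push operation:

from datetime import datetime

policy = [
    # (origin ip-address, allowed from hour, allowed until hour)
    ("192.0.2.10",  8, 18),
    ("192.0.2.11",  0, 24),
]

def current_ruleset(now=None):
    hour = (now or datetime.now()).hour
    return [ip for ip, start, end in policy if start <= hour < end]

def push_rules(router_update_fn):
    # the policy/administrative side recomputes and pushes; the router
    # itself never pulls anything
    router_update_fn(current_ruleset())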

authentication tends to be asserting some characteristic (like an account number or userid) and then providing some information supporting that assertion ... from the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


using ip-address origin is more akin to identification w/o necessarily requiring any proof (or additional interaction demanding proof).

authorization frequently tends to be taking some characteristic (either from simple identification or from an authentication process) and looking up the related permissions defined for that characteristic (like which systems an authenticated userid can access).

RADIUS was originally developed by livingston for their modem concentrator boxes (i.e. it provided the authentication boundary for userid/login authentication for dial-up modem pools). It has since grown into a generalized IETF standard for AAA
https://www.garlic.com/~lynn/subpubkey.html#radius

In the original livingston case ... the modem concentrator provided both the RADIUS boundary authentication/authorization as well as the traffic routing function in the same box. This continues as the dominant technology used world-wide by ISPs to authenticate their dial-in customers.

the boxes that are routing traffic between intranet and internet are frequently not exposed to clients as separate functional boxes ... as was the case with the modem-pool routers that managed the boundary between the ISP intranet and their dial-in customers.

there is a related but different kind of administrative boundary situation for DSL/cable customers. They typically have a uniquely identifiable box or (non-ip) address. DHCP requests come in from these boxes ... if the boxes are associated with a registered, up-to-date account ... an administrative policy will return DHCP responses that enable access to generally available ISP services. However, if the box is not associated with a registered, up-to-date account ... the DHCP response can configure them so that all their DNS requests and the resulting ip-address responses go to an in-house sign-up service (regardless of the domain name supplied by your browser ... it would always get back the same ip-address directing it to a webservice associated with administrative signup). You tend to find a similar setup/configuration for hotel high-speed internet service and many of the wireless ISP service providers.

in this scenario ... the dynamic administrative policy isn't based on ip-address (as an identification) but some other lower level hardware box address (enet mac address, cable box mac address, etc).
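
a minimal sketch (python, with made-up addresses) of that administrative decision ... a registered box gets normal service, an unregistered box gets DNS answers that all point at the signup portal:

registered_macs = {"00:11:22:33:44:55"}
PORTAL_IP = "10.0.0.1"              # hypothetical in-house signup server

def dhcp_policy(mac_address):
    if mac_address in registered_macs:
        return {"dns": ["198.51.100.53"], "portal": False}
    # unregistered: hand out a resolver that answers every query with the
    # portal's address, so any URL the browser asks for lands on signup
    return {"dns": [PORTAL_IP], "portal": True}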

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REPOST: Authentication, Authorization TO Firewall

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REPOST: Authentication, Authorization TO Firewall
Newsgroups: comp.security.firewalls
Date: Wed, 25 May 2005 12:04:05 -0600
roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
All models of the PIX support (from PIX 5.1 onward) RADIUS downloadable access-lists. This suggests an alternative approach to the pay-for-use question: if one were using a 515/515E, 525, or 535 with the 7.0 software, then the downloadable access list could be time-based. When the time ran out, then it could be arranged so that the user fell into a deny-everything situation.

The user interface would be a bit different, though: instead of the user getting challenged for a username/password and told that it is no longer valid, the user simply would suddenly not be able to get to anywhere. Same effect, but perhaps more confusing for the user.


long ago and far away ... for the original e-commerce payment gateway
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we started out with administratively pushing permitted/allowed ip-addresses (webservers that had valid contracts to use the payment gateway) into routers.

this was also in the early days of haystack labs, wheel group, and some others.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Wed, 25 May 2005 12:14:21 -0600
Nelson B writes:
Ah, I was wondering when paradoxes would enter this discussion. CA self revocation: Everything I say is a lie.

"I think not" said Descartes, who promptly vanished.


the original scenario was that a CA could only assert that they were no longer valid ... they could never assert the reverse. So only a valid CA could declare themselves no longer valid ... or bad guys that had compromised the private key could declare the CA no longer valid ... but the inverse couldn't be asserted.

so if the bad guys wanted to do a DOS after having compromised the private key ... then they could, at most, declare the CA no longer valid ... which by definition is what you want to happen anyway when a key has been compromised.

the other thing that they could do ... was hope that the CA went unrevoked as long as possible ... so that they could use the compromised private key to generate fraudulent certificates.

However, specifically with respect to revoking a CA ... you could either do it or not do it ... nobody could ever undo it.

So the bad guys could either say nothing (about the CA) or lie about the CA by using the compromised private key to revoke the CA. However, by definition, if the private key has been compromised then what you want anyway is a revocation of the CA.

The only thing that the valid CA could do is say nothing (about themselves) or revoke themselves. If the real CA has made a decision to revoke itself ... then there isn't much else you can do about it.

In any case, self-revocation is a special case of "everything else I've said is a lie". Once it asserts that special case ... then it is no longer able to assert anything more (and it is somewhat immaterial whether that special case was a lie or not).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Status of Software Reuse?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Status of Software Reuse?
Newsgroups: alt.folklore.computers
Date: Wed, 25 May 2005 13:06:38 -0600
for some topic drift ... cp67/cms had an update command; it basically was used to merge an "update" deck with the base software source and produce a temporary file that was then assembled/compiled.

it was oriented towards 80 column "card" records and sequence numbers in cols. 73-80. The cp67/cms assembler source had the convention of an ISEQ assembler statement ... to indicate that the sequence numbers in the sequence number field should be checked (from physical card deck days when decks could be dropped and shuffled).

the control commands were of the form


./ d nnnnnn <mmmmmm> (delete records from <to>)
./ r nnnnnn <mmmmmm> (replace records from <to> with following source)
./ i nnnnnn          (insert new source after record nnnnnn)

it started out essentially being a single change file application.
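
a simplified sketch (python) of what applying such a change file amounts to ... it ignores the "$" form, ISEQ checking and the other details, and assumes each source record carries its sequence number in cols. 73-80:

def seq(record):
    # sequence number from cols. 73-80 of an 80-column card image
    return int(record[72:80])

def apply_update(source, update):
    out, i, j = [], 0, 0
    while j < len(update):
        parts = update[j].split(); j += 1              # a "./" control card
        op, start = parts[1].lower(), int(parts[2])
        end = int(parts[3]) if len(parts) > 3 and parts[3].isdigit() else start
        # copy unaffected records; an insert keeps record 'start' itself
        limit = start if op in ("d", "r") else start + 1
        while i < len(source) and seq(source[i]) < limit:
            out.append(source[i]); i += 1
        if op in ("d", "r"):                           # drop the old range
            while i < len(source) and seq(source[i]) <= end:
                i += 1
        if op in ("r", "i"):            # new source follows the control card
            while j < len(update) and not update[j].startswith("./"):
                out.append(update[j]); j += 1
    out.extend(source[i:])
    return out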

as an undergraduate ... i was making an enormous amount of source code changes to cp67 and cms ... and the default process required you to manually type in the sequence number field (cols. 73-80) for all new source records.

I got tired of this and created an update preprocessor that supported "$" for replace & insert commands


./ r nnnnnn <mmmmmm> <$ <aaaaaaa <bb>>>
./ i nnnnnn          <$ <aaaaaaa <bbb>>>

where it would generate a temporary update file (for feeding into the update command) that had the sequence number field automatically generated. It could default to choosing number & increment based on previous/following cards ... or you could specify a starting number and any increments.

i believe it was the virtual 370 project that really kicked off the multi-level update effort. The "H" modifications to cp67 (running on a real 360/67) added support for virtual 370 machines ... which had some number of new/different control operations and instructions. The "I" modifications were applied after the "H" modifications and produced a cp67 kernel that ran on the (real) 370 architecture (rather than real 360/67 hardware).

this was a set of execs (command processor files) that used a "cntl" file to select updates and their sequence for incrementally applying a hierarchical set of update files. This would iteratively process "$" update files ... generating a temporary update file and applying the temporary update file to the source file ... creating a temporary source file. The first iteration involved updating the base source file ... additional iterations would update the previously generated temporary source file.
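
and a compact sketch of that multi-level flow (reusing apply_update from the earlier sketch; read_update is a hypothetical helper returning the records of one update file, and the "$" sequence-number preprocessing step is ignored) ... the cntl file supplies the update file names in order, and each pass is applied to the output of the previous pass:

def apply_update_levels(base_source, cntl_file_names, read_update):
    src = base_source
    for name in cntl_file_names:        # order taken from the "cntl" file
        src = apply_update(src, read_update(name))
    return src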

I had replicated/archived an H/I system (all the source and all the processes and files needed to generate running systems) on multiple tapes. Unfortunately the datacenter i was using in the mid-80s had an internal operational problem ... where they had a rash of operators mounting valid tapes as scratch tapes and destroying data. The H/I archive tapes were wiped out in this period.

As a small reprieve ... not too long earlier ... Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist

was looking for early examples of the multi-level source update process. i managed to pull a complete package (execs, control files, executables, etc) from the h/i archive tapes (prior to their getting wiped).

In the early time-frame, an MIT student (who has since become quite well known for work he has done on the internet) was given the task of writing an application that would attempt to merge multiple independent update hierarchies. This is a sort of software re-use ... in the sense that the same common source was possibly used by different organizations for developing different target solutions.

As use of the multi-level update feature became more widespread ... the "$" preprocessing support and the iterative application were merged into the base update command. Rather than creating multiple iterative temporary files ... it would manage everything in memory, applying things as it went along ... and not producing a temporary source file until after the last update had been applied.

misc. past posts about the cp67 h/i operating system work
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries

a few past posts on cms source update
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003.html#62 Card Columns
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Wed, 25 May 2005 13:11:42 -0600
Ian G writes:
Sure, that's obvious. But, Lynn, can you shed any light on why the standards didn't include a mechanism? You seem to be intimating that the original PKI concept included it.

i have some memory of the exchanges taking place during the protocol process ... i would have to dig back thru the archives to see if i can find the actual copies. it might also be possible to use a search engine to find archived copies somewhere on the web.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Wed, 25 May 2005 13:46:03 -0600
I thought the discussion might have been pkix &/or x9f related ... as an easier step than starting to search my own archives ... i've done a quick web search ...

one entry in pkix thread
http://www.imc.org/ietf-pkix/old-archive-01/msg01776.html

here are recent m'soft articles mentioning the subject:
http://www.microsoft.com/technet/itsolutions/wssra/raguide/CertificateServices/CrtSevcBP_2.mspx
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/security/mngpki.mspx

i also believe that it showed up in the x9f5 work on PKI CPS ... but i would have to check my archives ... however here is a pointer to a verisign cps ... which the search engine claims contains words on revoking a CA (ra, etc):
http://www4.ncsu.edu/~baumerdl/Verisign.Certification.Practice.Word.doc

another verisign related reference:
http://www.verisign.com/repository/cis/CIS_VTN_CP_Supplement.pdf

also, i remember OCSP coming on the scene sometime after I had been going on for awhile about how CRLs were 1960s technology (at least in the payment card business) .... from before the payment card business moved into the modern online world with online authentication & authorization (moving away from having to manage credentials/certificates that had been designed for an offline paradigm).

one might assert that OCSP is a rube-goldberg solution trying to preserve some facade of the usefulness of certificates (designed to solve real-world offline paradigm issues) in an online world (somehow avoiding having to make a transition to a straight online paradigm and preserving the appearance that stale, static, redundant and superfluous certificates serve some useful purpose).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Wed, 25 May 2005 14:25:13 -0600
Anne & Lynn Wheeler writes:
also, i remember OCSP coming on the scene sometime after I had been going on for awhile about how CRLs were 1960s technology (at least in the payment card business) .... from before the payment card business moved into the modern online world with online authentication & authorization (moving away from having to manage credentials/certificates that had been designed for an offline paradigm).

for instance in an offline credential scenario ... you would place the person's date of birth in the credential

several years ago, we did a survey of corporate databases for security issues ... one was a field by field analysis of types of information and the vulnerability. for instance ... any information where we could find a business process that made use of that kind of information for authentication ... was labeled as having an "id theft" vulnerability. several business processes made use of knowledge about date-of-birth for authentication purposes ... and therefore date-of-birth was given an id-theft attribute (we made claims at the time about doing semantic analysis rather than purely syntactic security analysis).

these kinds of information were the types of things being looked at in the early 90s to grossly overload x.509 identity certificates ... in the anticipation that some random relying-parties in the future (after the certificate was issued) might find the information to be of some use. it was issues like these that prompted some institutions in the mid-90s to retrench to relying-party-only certificates ... effectively containing only a pointer to the real information in some accessible database. however, these databases were typically part of an overall relationship management and administrative function ... which made the function of the stale, static certificate redundant and superfluous.

Now in the late 90s, fstc
http://www.fstc.org/

was looking at the fast protocol, which would respond to certain kinds of questions with yes/no. it would utilize the existing 8583 interconnect network structure ... but extend 8583 transactions to include non-payment questions ... like whether the person is an adult or not. The real-world, online authoritative agency could simply respond yes/no to the adult question w/o divulging a birth date (which represents an identity theft vulnerability).
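
a rough sketch of the yes/no idea in python (the record layout and the cutoff age are hypothetical, and nothing here models the actual 8583 message formats):

from datetime import date

# record held only by the online authoritative agency (hypothetical layout)
account_record = {"account": "1234", "birth_date": date(1980, 6, 1)}

def is_adult(record, as_of=None, adult_age=18):
    # answer the yes/no question without ever releasing the birth date
    as_of = as_of or date.today()
    bd = record["birth_date"]
    age = as_of.year - bd.year - ((as_of.month, as_of.day) < (bd.month, bd.day))
    return age >= adult_age

# the relying party only ever sees the boolean, never the birth date
print(is_adult(account_record))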

part of the issue prompting the fast protocol was the appearance of a number of online services selling transactions on whether a person was an adult or not. these online services were having people register and supply credit card information for doing a "$1 auth" transaction that would never clear. Their claim was that since people who had credit cards had to sign a legal contract, and to sign a legal contract you had to be an adult ... then anybody who performed a valid credit card transaction must be a valid adult. As a pseudo credit card merchant they paid maybe 25 cents to perform the "$1 auth" to get back a valid authorization response. Since they never cleared the transaction ... it never actually showed up as a transaction on the statement (although there was a $1 reduction in the subject's open-to-buy for a period).

An issue was that the adult verification services were making quite a bit of money off of being able to perform transactions that brought in only 25 cents to the financial institution ... and the whole thing involved online, real-time responses ... no stale, static, redundant and superfluous certificates designed to address real-world offline paradigm issues (and which also tended to unnecessarily expose privacy and even identity theft related information).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Wed, 25 May 2005 14:50:43 -0600
"Anders Rundgren" writes:
Replacing the _indeed_ stale cert info with a stale signed account claim would not have any major impact on this scenario except for a few saved CPU cycles.

SSL is by no means perfect but frankly, nobody has come up with a scalable solution that can replace it. To use no-name certs is not so great as it gives users hassles


i got to do some amount of the early work on the original aspects of SSL deployments ... so we went thru almost all these issues over and over again when we were doing it originally

now for a small topic drift ... slightly related posting
https://www.garlic.com/~lynn/2005i.html#33 Improving Authentication on the Internet

in the above ... fast could have certificate-less, digitally signed transactions approving the operation. in much the same way that x9.59 transactions
https://www.garlic.com/~lynn/x959.html#x959

could be certificate-less and digitally signed ... fast transactions could involve matters other than approving a specific amount of money (i.e. a standard payment transaction getting back approval that the issuing institution stood behind the amount of the transaction). in much the same way that an x9.59 transaction wouldn't be viewed as valid unless the corresponding digital signature correctly verified ... the requirement to have the subject's digital signature on other types of requests would also serve to help protect their privacy.

the fast age thing was of interest ... because it eliminated having to divulge a birth date (an identity theft issue) while still confirming whether or not a person was an adult. There was also some fast look at zip-code verification in addition to age verification. Some number of people were proposing that certificates could follow the driver's license offline credential model ... and that anything that might be on a driver's license (and more) would be fair game to put into a certificate. This overlooked the fact that driver's licenses were really offline paradigm credentials ... and as the various relying parties acquired online connectivity ... there was less & less of a requirement for information content on the driver's license itself (it could migrate more to the relying-party-certificate model with little more than an account number pointing to the information in an online repository ... little things like aggregated information ... number of outstanding parking tickets ... etc).

the "fast" issue (especially age verification, not actually age ... just yes/no as to being an adult) for the financial institutions was that while quite a bit of money is being made by the online age verification services (... and there is almost no incrmental costs needed to add such an option to the existing 8583 infrastructure and providing internet access) most of the money flow into the age verification operations comes from a segment of the internet market that many find embarrassing ... and as a result many financial institutions are ambivalent about being involved.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Worth of Verisign's Brand

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Worth of Verisign's Brand
Newsgroups: netscape.public.mozilla.crypto
Date: Wed, 25 May 2005 15:11:55 -0600
Anne & Lynn Wheeler writes:
the "fast" issue (especially age verification, not actually age ... just yes/no as to being an adult) for the financial institutions was that while quite a bit of money is being made by the online age

for some topic drift ... "don't know" could be a valid response ... having worked on the original sql/relational system/r
https://www.garlic.com/~lynn/submain.html#systemr

and some number of other kinds of data organizations ..... i've made a couple of posts in comp.databases.theory over the years about 3-value logic ... which is a difficult problem for many relational paradigms.

some specific postings on 3-value logic
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004f.html#2 Quote of the Week
https://www.garlic.com/~lynn/2004l.html#75 NULL
https://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#15 Amusing acronym

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Improving Authentication on the Internet

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Improving Authentication on the Internet
Newsgroups: netscape.public.mozilla.security
Date: Thu, 26 May 2005 13:10:45 -0600
Anne & Lynn Wheeler writes:
several years ago, we did a survey of corporate databases for security issues ... one was a field by field analysis of types of information and the vulnerability. for instance ... any information where we could find a business process that made use of that information for authentication ... was labeled as having an "id theft" vulnerability. several business processes made use of knowledge about date-of-birth for authentication purposes ... and therefore date-of-birth was given an id-theft attribute (we made claims at the time about doing semantic analysis rather than purely syntactic security analysis).

the semantic analysis was with regard to how birth dates could be used in various ways

1) authentication .... can you supply the matching value
2) grouping ... adult, child, senior citizen
3) current age ... say for life insurance

in cases #2 & #3, we claimed that answers could be returned w/o returning the actual birth date.

the problem is fundamental shared-secret issues
https://www.garlic.com/~lynn/subintegrity.html#secrets

security guidelines tend to mandate that a unique shared-secret is required for every unique security domain. The vulnerability is that somebody with access to the shared-secret in one security domain can perform authentication impersonation fraud in another security domain (say your local neighborhood garage isp and your online banking operation).

for some people this has led to them having to manage and administer scores of unique shared-secrets.

at the same time, it has been recognized that people have a hard time remembering even one or two such pieces of data ... so many infrastructures (for a long time) have used personal information as authentication shared-secrets (birth date, mother's maiden name, SSN#, place of birth, etc).

this puts the infrastructures that expect people to manage and administer scores of unique shared-secrets quite at odds with the infrastructures that recognize such a task is fundamentally out-of-sync with long years of human nature experience ... and have chosen instead to use easier-to-remember personal information for authentication shared-secrets.

the problem with personal information authentication shared-secrets is that the same information tends to crop up in lots of different security domains. as a result, any personal information that might be used in any domain as personal information authentication shared-secret ... becomes identity theft vulnerable.

we had been asked to come in and consult on something (that was going to be called e-commerce) with this small client/server company in silicon valley that had this technology called ssl
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

among other things we eventually went around and did some end-to-end business walkthrus with these organizations called certification authorities about something called an ssl domain name certificate
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

now they were leveraging this basic business process called public keys. basically there is this cryptography stuff called asymmetric cryptography ... where keys used to decode stuff are different than the keys used to encode stuff. A business process is defined using this technology that is frequently called public keys. Basically the business process designates one of a key-pair as "public", freely available, and the other of the key-pair as "private", kept confidential and never divulged. There is an adjunct authentication business process called digital signature authentication .... where the private key is used to encode a hash of a message. The relying party can calculate a hash of the same message and use the corresponding public key to decode the digital signature to come up with the originally calculated hash. If the two hashes are the same, it demonstrates that the message hasn't been altered and authenticates the originator.
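
a minimal sketch of the register/sign/verify flow using the (assumed available) third-party python "cryptography" package ... a modern signature scheme bundles the hashing step internally, but the business process is the same:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept confidential, never divulged
public_key = private_key.public_key()        # freely distributed / registered

message = b"some transaction or request"
signature = private_key.sign(message)        # originator digitally signs

try:
    # relying party verifies with the registered public key: demonstrates the
    # message hasn't been altered and authenticates the originator
    public_key.verify(signature, message)
    print("verified")
except InvalidSignature:
    print("verification failed")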

from three factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


digital signature verification is a form of something you have authentication, demonstrating that the originator has access to and use of a specific private key.

possibly somewhat because of early work on SSL and digital signatures, we were brought in to work on both the cal. and fed. digital signature legislation ... minor refs
https://www.garlic.com/~lynn/aepay11.htm#61 HIPAA, privacy, identity theft
https://www.garlic.com/~lynn/aepay12.htm#4 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aadsm17.htm#23 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#24 Privacy, personally identifiable information, identity theft
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda

there were these other business operations, frequently referred to as trusted third-party certification authorities (TTP CAs), which were interested in a business case that involves a $100/annum certificate for every person in the US (basically a $20b/annum revenue flow).

Now possibly the most prevalent internet authentication platform is RADIUS ... used by the majority of ISPs for authenticating client service. This was originally developed by Livingston for their modem pool processors (long ago and far away, I was once involved in putting together a real RADIUS installation on a real Livingston box).
https://www.garlic.com/~lynn/subpubkey.html#radius

a couple recent radius related postings
https://www.garlic.com/~lynn/2005i.html#2 Certificate Services
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#4 Authentication - Server Challenge
https://www.garlic.com/~lynn/2005i.html#23 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005i.html#27 REPOST: Authentication, Authorization TO Firewall
https://www.garlic.com/~lynn/2005i.html#28 REPOST: Authentication, Authorization TO Firewall

RADIUS has since become an ietf standard and there are some number of freely available RADIUS implementations. The default RADIUS primarily uses shared-secret based authentication. However, there are implementations that simply upgrade the registration of a shared-secret with the registration of a public key ... and perform digital signature verification (something you have authentication) instead of shared-secret matching.

For lots of infrastructures it would be possible to preserve and leverage their existing relationship management and administrative infrastructures (handling both authentication and authorization information) by simply replacing shared-secret registration with public key registration.
https://www.garlic.com/~lynn/subpubkey.html#certless
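
a hedged sketch of that registration swap (not RADIUS itself; the record layout and names are hypothetical) ... the relationship record keeps its authorization fields, only the authentication field changes from a shared-secret to a registered public key:

import hmac

# hypothetical relationship/administrative records
accounts = {
    "alice": {"auth": ("shared-secret", "s3kr1t"), "permissions": ["dialup"]},
    "bob":   {"auth": ("public-key", b"...registered public key bytes..."),
              "permissions": ["dialup", "admin"]},
}

def authenticate(userid, evidence, verify_signature):
    # verify_signature(public_key, evidence) -> bool is supplied by whatever
    # digital signature scheme the deployment registered keys for
    kind, value = accounts[userid]["auth"]
    if kind == "shared-secret":
        return hmac.compare_digest(value, evidence)    # something you know
    return verify_signature(value, evidence)           # something you have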

However, there is lots of publicity about TTP CA-based PKI infrastructures, possibly because it has represented a $20b/annum revenue flow to the TTP-CA PKI industry.

By comparison, upgrading an existing relationship administrative and management infrastructure represents a cost factor for relying-party infrastructures (as opposed to the motivation of a $20b/annum revenue flow for the TTP CA-based PKI industry). In the case of freely available RADIUS implementations, there is little or no revenue flow to motivate stake-holders for widely deployed digital signature authentication (leveraging existing business practices and relationship management and administration infrastructures) ... other than the big impact it would have on the harvesting of shared-secrets for the authentication fraud flavor of identity theft.

The issue is especially significant for infrastructures that rely heavily on widely available shared-secret information as a means of authentication (in many cases common personal information). Another flavor is transaction-based operations where the authentication information is part of the transactions and therefore part of long-term transaction logs/files ... an example from the security proportional to risk posting:
https://www.garlic.com/~lynn/2001h.html#61

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure FTP on the Mainframe

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure FTP on the Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 27 May 2005 14:18:07 -0600
Sabireley@ibm-main.lst (Steve Bireley) writes:
Something to be aware of when using SSL/TLS with FTP is how these sessions will make it through a firewall. If your users will be coming through the Internet to your mainframe FTP server, you may have some difficulty unless you plan for it up front. The FTP protocol requires two connections, a Control connection and a Data connection. Normally, a firewall scans the data on the control port looking for the PASV response from the server that tells the client how to connect the data port. Since the data stream is encrypted, the firewall cannot get this information. This issue is further compounded when you add Network Address Translation in the firewall.

To handle the first case, your FTP server must be able to define a narrow range of ports that it will assign as data ports for the data connection. This can be one or more ports. These ports must then be open on the firewall. The PASV response from the host will contain the IP address and port to which the client will connect the data connection. The firewall will have an open range of ports to accommodate the data connection.

If NAT is enabled in the firewall, then the FTP server will send back its true IP address and port in the PASV response, rather than the public IP address and port. Since the firewall cannot see the PASV response, it cannot fix it on the way through as it does with clear text FTP. To get around this, some FTP clients and servers support EPSV rather than PASV. In this case, the FTP server only returns the port number and the client assumes the IP address to be the same as the control port. In other cases, the FTP client can be configured to always connect the data connection to the same IP as the control connection.
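
a small sketch of why the firewall cares about seeing that reply: the PASV text carries the (possibly private) server address, while EPSV carries only a port and the client reuses the control-connection address (reply formats per RFC 959/2428; the sample values below are made up):

import re

def parse_pasv(reply):
    # "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" -> (ip, port)
    h1, h2, h3, h4, p1, p2 = map(int, re.search(
        r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply).groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

def parse_epsv(reply, control_ip):
    # "229 Entering Extended Passive Mode (|||port|)" -> (ip, port)
    return control_ip, int(re.search(r"\(\|\|\|(\d+)\|\)", reply).group(1))

print(parse_pasv("227 Entering Passive Mode (10,0,0,5,19,136)"))   # private addr leaks through NAT
print(parse_epsv("229 Entering Extended Passive Mode (|||5001|)", "203.0.113.7"))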


There is an RFC about the problems converting FTP from arpanet (host protocol) to IP (internetworking protocol).

Arpanet had a lot of similarities to JES2 networking ... homogeneous networking, host to front end processor (in arpanet case, called an IMP), limited number of nodes, no gateway functionality, etc.

some minor references (the following NCP references are not to the mainframe variety):
https://www.garlic.com/~lynn/internet.htm#27 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/internet.htm#28 Difference between NCP and TCP/IP protocols

the above makes reference to RFC721 ... out-of-band control signals in a host-to-host protocol ... and some of the difficulties of converting application like FTP to TCP.

from my RFC index
https://www.garlic.com/~lynn/rfcietff.htm

summary for RFC 721
https://www.garlic.com/~lynn/rfcidx2.htm#721
721
Out-of-band control signals in a Host-to-Host Protocol, Garlick L., 1976/09/01 (7pp) (.txt=13566) (Refs 675)


as always ... clicking on the ".txt=nnnn" field in an rfc summary fetches the actual RFC.

now, the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

originated at science center
https://www.garlic.com/~lynn/subtopic.html#545tech

in many ways was a lot more robust ... it didn't have the network size limitation and effectively had a type of gateway function in every node. this contributed to the internal network being larger than the arpanet/internet until approx. the summer of 1985. the big change in the arpanet was the great changeover on 1/1/83 to internetworking protocol ... and getting gateway functionality. arpanet had almost reached its limit of 255 nodes at the switchover.

jes2 sort of had a 255 node limitation ... jes2 networking had come from hasp ... quite a bit of it having originated at TUCC. They used the hasp pseudo device table to define nodes. A typical hasp/jes2 installation might have 60-80 pseudo devices defined ... so that actually only left maybe 170-190 positions for defining network nodes.

jes2 had (at least) two problems ... one was that network control information was all jumbled up with other control information in the header and it would trash any incoming traffic if either the origin node or the destination node were not defined in the limited table.

shortly after the 1/1/83 conversion to internetworking protocol, the internal network passed 1000 nodes
https://www.garlic.com/~lynn/internet.htm#22

and was way beyond any JES2 network addressing capability at the time.

because of the tendency for trashing traffic where it didn't have the origin and/or destination nodes defined ... and its inability to even come close to defining all the nodes in the internal network ... jes2 nodes were pretty much relegated to boundary nodes.

the other problem was the jumbling of information in jes2 control headers. a jes2 system at a different version or release level could have slightly different header definitions and result in crashing other jes2 systems and bringing whole mvs systems crashing down.

the standard internal networking software developed an extensive library of gateway code for different hasp and jes2 releases and versions. When a jes2 system would crash, bringing down the mvs system, it would frequently be blamed on the standard internal networking software not correctly protecting one jes2 from another jes2. It quickly became the responsibility of the gateway code in the standard internal networking nodes to correctly re-arrange jes2 header information to correspond to the version/level of the jes2 system it was directly communicating with (regardless of what jes2 system such traffic might have originated from).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

More Phishing scams, still no SSL being used

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More Phishing scams, still no SSL being used...
Newsgroups: netscape.public.mozilla.crypto
Date: Fri, 27 May 2005 15:38:26 -0600
pgut001@cs.auckland.ac.nz (Peter Gutmann) writes:
Actually I have recently run into a situation that isn't always-on, it's remote data sensing, which involves hauling a laptop out to various isolated locations and then sending the results in from a hotel room at night. I'm not exactly sure what the benefit of carrying a CRL around with you as opposed to just rejecting the data when it's submitted that evening is, you could also just carry around a list of remote sites whose keys you don't trust any more rather than a CRL.

Anyway, it seems like a lot of effort to be maintaining a whole PKI model just for special-case situations like this.


from the late 90s ... truth is stranger than fiction.

a large institution was looking at converting their customer base from shared-secret authentication to public key authentication. what they were going to do was upgrade their software to handle public keys and register public keys for all of their clients.

then they were to ship their master client account file off to a TTP CA, which would munge and reformat the bits in the account records and generate a digital certificate for each account record, appropriately digitally signed (selectively leaving out many bits and fields because of privacy concerns). for this re-formatting of each account record and the CA's digital signature ... the institution would only be charged $100/annum for every account record processed (well in excess of $1b US).

the institution would then distribute the resulting certificates to each of their clients so that in the future ... the clients could create an electronic message and digitally sign it. The client would package the electronic message, the digital signature and the ($100/annum) digital certificate and send it off to the institution. The institution would receive the transmission, pull the account number from the message, retrieve the appropriate account record, and validate the digital signature with the on-file public key (from the account record). They could then disregard the stale, static, stunted, abbreviated, redundant and superfluous ($100/annum) digital certificate and continue processing the message.

executives eventually scrapped the project before they actually got into sending off the master account file.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Behavior in undefined areas?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Behavior in undefined areas?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 28 May 2005 09:53:05 -0600
"Del Cecchi" writes:
Ah yes. Check out the story in "the mythical man month" (a classic that everyone here should read) about doing the 7094 emulation software for S/360. It was found that the documentation was insufficient to get the software to work correctly, and reference had to be made to actual hardware. Of course we are far advanced from the early 60's, but the hardware still doesn't lie....

I've told the story about the joint cambridge
https://www.garlic.com/~lynn/subtopic.html#545tech

endicott project for the cp67 H&I kernels. Basically the "H" cp67 updates were to create virtual machines that conformed to the 370 architecture definition (instead of 360/67 definition). The "I" cp67 updates were for the cp67 kernel to run on a 370 architecture.

In regular use was


CP67-L on real 360/67 providing 360/67 virtual machines
  CP67-H in 360/67 virtual machine providing 370 virtual machines
    CP67-I in 370 virtual machine providing 370 virtual machines
      CMS in 370 virtual machine

a year before the first 370/145 engineering machine was running.

when endicott finally got a 370/145 engineering machine with virtual memory hardware support running ... they wanted to have a copy of CP67-I kernel to validate the hardware operation.

So the kernel was booted on an engineering machine that had something like a knife switch in lieu of a real "IPL" button ... and it failed. After some diagnostics ... it turned out that the engineers had implemented two of the new extended "B2" opcodes reversed.

Normal 360 instruction opcodes are one byte ... 370 introduced new "B2" opcodes ... where the 2nd byte is actually the instruction opcode. Some vague recollection was that the reversed "B2" opcodes were RRB (reset reference bit) and PTLB (purge table lookaside buffer). The kernel was then quickly patched to correspond to the (incorrect) engineering implementation and the rest of the tests ran fine.

the "L", "H", "I" designations come somewhat from the initial pass at doing multi-level source update system was being built to support the effort. this initial pass was hierarchical ... so "L" updates were applied to the base source, then the "H" updates could be applied, and then "L" source updates. recent multi-level source update posting
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?

in the vm/370 release 3 time-frame, one of the vm370 product test people had developed an instruction execution regression program to test vm370 compliance. however, running the full test suite would actually cause vm370 to crash. in that time frame ... I got pegged to release a bunch of my system enhancements as the "resource manager"
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

we developed an automated benchmarking process where we could define arbitrary workloads and configurations for validating and calibrating the resource manager ... eventually we did a series of 2000 benchmarks that took over 3 months elapsed time
https://www.garlic.com/~lynn/submain.html#bench

however ... leading up to that effort ... we found a number of defined workloads (extreme conditions) that also would reliably (predictably) crash the kernel. so i undertook an effort to rewrite sections of the kernel to eliminate all the kernel failures triggered by either our benchmark tests or the instruction execution regression program that the guy in product test had. these changes were also included as part of the released "resource manager" changes.

misc. past h/i postings:
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Friday question: How far back is PLO instruction supported?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Friday question: How far back is PLO instruction supported?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 28 May 2005 13:18:45 -0600
edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
No. PLO was introduced with G3 CMOS. (I can still remember being jealous that our "little" P/390 running VM and VSE had PLO while our "big" two-way 9672-R22 running MVS didn't.) PLO was later retrofitted to G2 along with several other features in an effort to expand the scope of the first Architectural Level Set.

and the precursor to PLO is compare&swap ... done by C.A.S. at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

the first thing we had to come up with was a mnemonic that was charlie's initials.

charlie had been doing a lot of fine-grain multiprocessor locking on cp67 ... when he invented compare&swap. initially, trying to get it into the 370 architecture ... the pok owners of the 370 architecture redbook ... aka ... drift warning ... a superset of the principle of operations ... basically done in cms script with conditionals ... one conditional got you the full redbook ... the other conditional got the subset published as the principle of operation ... this redbook is named for the color of the 3-ring binder it was distributed in ... different from the commonly known "redbooks" available to customers ...
http://www.redbooks.ibm.com/
... now returning to regular schedule programming ...

... the pok owners of the 370 architecture redbook .... said that there wasn't any justification for an smp-specific (locking) instruction ... that the POK operating system people were more than happy with a global kernel spinlock using test-and-set. In any case, to get compare&swap justified for 370 architecture ... we had to come up with a justification for compare&swap use that wasn't smp specific; thus were born the descriptions for compare&swap in multi-threaded (interruptable) code (whether running on a uniprocessor or a multiprocessor).
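
a toy python model of that non-smp-specific use case (a compare&swap retry loop in interruptable multi-threaded code) ... the lock inside the cell only simulates the atomicity the real instruction provides in hardware:

import threading

class AtomicCell:
    # toy stand-in for a word of storage with an atomic compare-and-swap
    def __init__(self, value=0):
        self._value, self._lock = value, threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:                 # simulates single-instruction atomicity
            if self._value == expected:
                self._value = new
                return True
            return False

def add(cell, n):
    # classic pattern: fetch, compute, compare&swap, retry on interference
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + n):
            return

counter = AtomicCell()
threads = [threading.Thread(target=add, args=(counter, 1)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())   # 8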

misc. smp, multiprocessor, compare&swap, etc postings
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Followup-To: y
Date: Sat, 28 May 2005 13:27:02 -0600
stanb45@dial.pipex.com (Stan Barr) writes:
Part of the skill of using a mangle was to insert the material in such a way as to minimise creasing and make ironing easier. Important when your irons had to be heated on the stove or the fire.

(Due to a strange set of circumstances, I have done washing with a dolly and dolly tub, and a washboard, squeezed the water out with a Victorian cast-iron and wood mangle, and ironed with cast-iron irons(!) heated by the fire. I must be one of the last people in Britain to have done so!)


but was this outside in the backyard using water from the rain barrel (heated over a fire) ... i.e. barrels under the house downspouts for collecting water for washing?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Development as Configuration

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Development as Configuration
Newsgroups: comp.databases.theory
Date: Sat, 28 May 2005 14:05:37 -0600
"dawn" writes:
There are other approaches for the "bump up" such as IDEs, a service-oriented architecture, OO with libraries of type definitions, various industry standards, and so on, each with its own charm and advances, but nothing strikes me as yet as getting us the next big productivity boost in application software development. The one I've decided really isn't going to get us there is code generation, so that is the one that makes me yawn the most when I hear it.

we were asked to do some consulting work with this small client/server startup in silicon valley that wanted to do some payment transactions
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

they had two people that had been at oracle who we had worked with on parallel oracle
https://www.garlic.com/~lynn/95.html#13

for ha/cmp and scalable distributed lock manager
https://www.garlic.com/~lynn/subtopic.html#hacmp

and were now at this startup responsible for something called the commerce server (this startup also had this technology called SSL). we worked with them on doing payment transactions and something called a payment gateway (the collection somewhat now referred to as e-commerce).

we frequently advised (possibly harangued) many people at the startup that taking a typical straightforward, well designed and well tested straightline application and turning it into a service typically could require 4-10 times the code (of the straightline application) and ten times the effort.

a simple example was that after the straightline application was built and tested .... we built something like a five-state by 20-30 possible failure mode matrix ... and required that each possible condition could be identified/diagnosed and recovered from (well documented remediation processes and possibly automated recovery). eventually we had something like a 40 page diagnostic document for webserver operations and payment gateway operations (and a lot more code).

this was somewhat in the timeframe of lots of object development platforms ... many primarily oriented towards quickly turning out fancy toy demos. one that was around in this era was taligent. we spent a one-week JAD with taligent exploring what would be required to take their existing infrastructure and transform it into a business critical platform for delivering service oriented applications (moving a lot of the traditional service oriented operations out of having to be repeatedly reimplemented in every application ... and into the underlying platform). The net was an estimate of approximately a 30 percent hit to their existing code base ... and approximately 1/3rd additional new frameworks (specifically oriented towards service oriented operations ... as opposed to the more commonly found frameworks involved in screen graphics).

we even took a pass at trying to drop some number of 2167a certification requirements into the infrastructure .... attempting to significantly reduce the repeated effort involved in building and deploying service oriented applications.

misc. past taligent refs:
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#60 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#76 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002m.html#60 The next big things that weren't
https://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003e.html#28 A Speculative question
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors

misc. past business critical &/or service related postings
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#32 Mainframes & Unix
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/98.html#18 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2001b.html#25 what is interrupt mask register?
https://www.garlic.com/~lynn/2001c.html#16 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#56 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2003.html#38 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004m.html#51 stop worrying about it offshoring - it's doing fine
https://www.garlic.com/~lynn/2004m.html#56 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005.html#18 IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#42 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#16 Today's mainframe--anything to new?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Development as Configuration

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Development as Configuration
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sat, 28 May 2005 14:39:54 -0600
re:
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration

a little more SOA topic drift ... but another frequent SOA characteristic is multi-tier architecture. Somewhat prior to starting ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

my wife had co-authored and presented the response to a gov. RFI for a large campus-like distributed environment. In the RFI response ... she had formulated the principles for multi-tier architecture. We then expanded on those principles and started presenting them in customer executive briefings as 3-tier architecture.

unfortunately this was in the SAA period ... which could be characterized as the company attempting to put the client/server (2-tier) genie back into the bottle ... which frequently put us at direct odds with the SAA crowd. We were also heavily pushing enet for connectivity. The SAA crowd were heavily in with the token-ring people, who were advocating corporate environments with something like 300 stations on a single lan (which aided in pushing the idea of a PC as an extremely thin client to the corporate mainframe). this is somewhat related to some terminal emulation postings:
https://www.garlic.com/~lynn/subnetwork.html#emulation

Somebody in the T/R crowd turned out a comparison of enet & T/R, making statements about enet typically degrading to 1mbit/sec (or less) effective thruput. This was about the time of an acm sigcomm paper showing typical enet degrading to 8.5mbit/sec effective thruput under a worst case scenario with all stations in a low-level device driver loop constantly transmitting minimum size packets.

various past postings on saa, t/r and coming up with 3-tier and middle layer architectures
https://www.garlic.com/~lynn/subnetwork.html#3tier

T/R disclaimer ... my wife is listed as co-inventor on one of the token passing patents from the 70s.

in this time-frame she was in pre-school and con'ed into going to POK to be in charge of loosely-coupled architecture (loosely-coupled being one of the SOA buzzwords). while there she authored Peer-Coupled Shared Data architecture which took years to show up in places like parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata

SqlServerCE and SOA - an architecture question

From: <lynn@garlic.com>
Newsgroups: microsoft.public.sqlserver.ce
Subject: Re: SqlServerCE and SOA - an architecture question
Date: Sat, 28 May 2005 13:23:28 -0700
Darren Shaffer wrote:
Replication and RDA both use HTTP transport to achieve their data synchronization but not a web services approach per se. You can certainly use web services and flesh out your own synchronization strategy between SQL Mobile and a larger SQL Server, however this will not perform well beyond about 1MB of payload data. While SOA attempts to encourage all of us to abstract our services as interfaces (a good thing), including RDA and/or merge replication in your architecture may be the simplest thing available to get the job done. I'd worry less about SOA compliance and follow another computer science tenet that has served me well - do the simplest thing that could possibly work.

recent postings in comp.databases.theory regarding other facets of service oriented architecture
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005i.html#43 Development as Configuration

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The 8008

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The 8008
Newsgroups: alt.folklore.urban,alt.folklore.computers
Date: Sat, 28 May 2005 17:00:44 -0600
a primary reason for collecting rain water for wash day ... was that the well water was extremely hard (had lots of minerals in it) ... a rain barrel was an effective way of getting "soft" water for wash day.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Friday question: How far back is PLO instruction supported?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Friday question: How far back is PLO instruction supported?
Newsgroups: bit.listserv.ibm-main
Date: Sat, 28 May 2005 18:12:25 -0600
Chris_Craddock@ibm-main.lst (Craddock, Chris) writes:
I am constantly amazed at the gymnastics some people will go through to "support" back level "customers". One thing you can say for sure about customers who are not on reasonably current hardware and software... they aren't spending any money! Why would anyone go to that amount of trouble when there's no revenue in it anyway? Charity?

originally software was provided free as part of enabling customers to use the hardware. the unbundling announcement of 6/23/69 and the pricing of software were somewhat prompted by fed. gov. anti-trust litigation.

more recently there has been some defection from the mainframe to other platforms. there is some potential future business in keeping customers at least on some mainframe platform ... as opposed to the possibility that they will drift away to other platforms.

some recent postings on software pricing theme
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#53 "Best practices" or "Best implementations"?
https://www.garlic.com/~lynn/2005g.html#54 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Listserver for DFSMS/HSM

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Listserver for DFSMS/HSM
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 30 May 2005 09:30:45 -0600
Raymond.Noal@ibm-main.lst (Raymond Noal) writes:
Dear List,

Is there a list server for IBM's DFSMS / HSM topics?


"official" list
http://www.lsoft.com/lists/listref.html

history of listserv
http://www.lsoft.com/products/listserv-history.asp

listserv somewhat grew up on bitnet ... from an internal corporate precursor.

the internal network was larger than arpanet/internet from just about the start until possibly mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

bitnet (and earn) was an application of some of the internal network technology (but bitnet nodes weren't included in the calculation of internal network nodes)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

an old posting referencing startup of earn
https://www.garlic.com/~lynn/2001h.html#65

recent posting discussing some internal network characteristics
https://www.garlic.com/~lynn/2005i.html#37 Secure FTP on the Mainframe

some of the internal listserv evolution has been blamed on stuff i was doing. there were some number of studies in the early 80s about what was going on. one involved assigning a researcher to study what i was doing. for 9 months they sat in the back of my office taking notes on how i communicated. they also had access to all my incoming & outgoing email and logs of all my instant messages for analysis. the study also turned into a stanford phd thesis (joint between the computer ai and language depts) and material for subsequent papers and books. some discussion in a collection of postings on cmc (computer mediated communication)
https://www.garlic.com/~lynn/subnetwork.html#cmc

the listserv search page
http://www.lsoft.com/lists/list_q.html

doesn't list either dfsms or hsm. also the bit.listserv usenet groups (bitnet listserv groups gatewayed to usenet) don't indicate anything obviously dfsms/hsm related.

it does have some listings for ADSM (now TSM) ... which grew up out of an internal backup/archive tool that I originally wrote
https://www.garlic.com/~lynn/submain.html#backup

doing a dfsms search in google groups turns up references primarily in bit.listserv.ibm-main (the ibm-main listserv group gatewayed to usenet)
http://groups-beta.google.com/groups?as_q=dfsms&num=100&scoring=d&hl=en&as_epq=&as_oq=&as_eq=&as_ugroup=&as_usubject=&as_uauthors=&lr=&as_drrb=q&as_qdr=&as_mind=1&as_minm=1&as_miny=1981&as_maxd=30&as_maxm=5&as_maxy=2005&safe=off

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

defeating firewalls made easy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: defeating firewalls made easy
Newsgroups: comp.security.firewalls
Date: Mon, 30 May 2005 10:04:36 -0600
ibuprofin@painkiller.example.tld (Moe Trin) writes:
Agreed - first we don't use windoze, and second, we don't use a web browser because that's the only application you've learned. For example, I'm using an application called 'mailx' which is a derivative of 'mail' which has been around since the mid 1980s - before microsoft invented networking or whatever. Finally, you have to remember that those of us who are serious about networks don't use a toy firewall - and it doesn't offer any services and is locked away from unauthorized people. Our firewalls are not administrated over the internal (never mind DMZ or external) network.

and even more historical drift (from recent posting in another n.g.) ....
https://www.garlic.com/~lynn/2005i.html#47

another aspect from posting in this n.g.
https://www.garlic.com/~lynn/2005i.html#27
https://www.garlic.com/~lynn/2005i.html#28

slashdot ... History of Netscape and Mozilla (browser)
http://slashdot.org/articles/05/05/29/1314243.shtml?tid=114&tid=1

some from the commerce server side
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and from SOA view point
https://www.garlic.com/~lynn/2005i.html#42
https://www.garlic.com/~lynn/2005i.html#43

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Mon, 30 May 2005 11:29:30 -0600
small postscript to earlier reply
https://www.garlic.com/~lynn/2005e.html#17 Where should the type information be?

stu had created cms script command at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

for document formatting. later at the science center, in '69, G, L, and M invented GML (which has since morphed into SGML, HTML, XML, FSML, etc)
https://www.garlic.com/~lynn/submain.html#sgml

and gml tag support was added to script document formatting. however, it was quickly realized that gml tags were useful for more than just specifying document formatting.

however, even before the invention of gml, bob adair was a strong advocate of self-describing data. the performance statistic gathering process ... which ran continuously on the cambridge machine ... wrote files to tape, and there was always a header that described the format and fields in the data ... so that even years later the data could be retrieved and analyzed.
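
as a rough illustration of the self-describing idea (just a sketch in python, not the actual 60s tape format ... the field names and layout are made up): the file carries its own header naming the fields and their formats, so a reader written long after the fact can still decode the records with nothing but the file itself.

import json
import struct

# hypothetical field list: name plus struct format code
FIELDS = [("timestamp", "I"), ("cpu_busy", "H"), ("page_rate", "H")]

def write_records(path, records):
    header = json.dumps({"version": 1, "fields": FIELDS}).encode()
    fmt = "<" + "".join(code for _, code in FIELDS)
    with open(path, "wb") as out:
        out.write(struct.pack("<I", len(header)))   # header length
        out.write(header)                           # self-describing header
        for rec in records:
            out.write(struct.pack(fmt, *rec))

def read_records(path):
    # the reader needs no out-of-band documentation -- the header says
    # what the fields are and how they are laid out
    with open(path, "rb") as inp:
        hlen = struct.unpack("<I", inp.read(4))[0]
        fields = json.loads(inp.read(hlen))["fields"]
        names = [name for name, _ in fields]
        fmt = "<" + "".join(code for _, code in fields)
        size = struct.calcsize(fmt)
        while (chunk := inp.read(size)) and len(chunk) == size:
            yield dict(zip(names, struct.unpack(fmt, chunk)))

write_records("stats.dat", [(100, 75, 12), (160, 80, 9)])
print(list(read_records("stats.dat")))

the design point is that the description travels with the data; if the field list changes, old files still decode correctly because each file explains itself.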

some of this legacy from the mid-60s was still around over ten years later when i was getting the resource manager ready to ship. we had over ten years of system performance statistics that could be used for workload and thruput profiling (not only from the cambridge system but also from a number of other internal systems as the methodology migrated into general use). some past posts about workload and performance profiling that went into calibrating the resource manager ... as well as the related technology evolving into things like capacity planning
https://www.garlic.com/~lynn/submain.html#bench

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

XOR passphrase with a constant

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XOR passphrase with a constant
Newsgroups: sci.crypt
Date: Mon, 30 May 2005 14:51:43 -0600
"Andrew" writes:
If I have a list of 10,000 good passphrases (whatever you consider 'good' to be) and XOR all of them with a constant of equal or differing length, before passing them through a hash function, for example MD5 or SHA, is the task of producing a collision, assuming an attacker has gained access to the entire list, made any less computationally challenging?

one-time-password .... was supposed to allow a person to carry around knowledge of a single passphrase ... and use it in multiple environments w/o the end-user needing any additional baggage.

the basic idea was repeated hashing of the passphrase ... the server would record N and the Nth hash of the passphrase. when the user connected, the server would send back N-1. the user would hash the passphrase N-1 times and send the result to the server. the server would hash it one more time and compare it with the previously recorded hash. if it matched, the user was authenticated ... the number would be decremented by 1 and the most recent hash recorded.

this was improved by having the server provide a salt & the number for the initialization ... and all subsequent iterations. the idea is that different servers would provide different salts ... so that end users would be able to use the same passphrase for all environments.
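
a minimal sketch of the hash-chain scheme described above (essentially the s/key / rfc 2289 style one-time password) ... python using sha-1 via hashlib; the passphrase, salt and count are made-up values, and real implementations fold the digest to 64 bits and encode the result differently:

import hashlib

def h(data):
    # one hash round (rfc 2289 folds the digest; skipped to keep this short)
    return hashlib.sha1(data).digest()

def chain(passphrase, salt, n):
    # hash the salted passphrase n times
    x = (salt + passphrase).encode()
    for _ in range(n):
        x = h(x)
    return x

# hypothetical initialization: server records N and the Nth hash
N = 1000
salt = "server-chosen-salt"
server_n, server_hash = N, chain("my one passphrase", salt, N)

def authenticate(response):
    # server hashes the response once more and compares with what it recorded
    global server_n, server_hash
    if h(response) == server_hash:
        server_n, server_hash = server_n - 1, response   # move one step down the chain
        return True
    return False

# one login: server sends the salt and N-1, user hashes the passphrase N-1 times
print(authenticate(chain("my one passphrase", salt, server_n - 1)))   # True
print(authenticate(chain("my one passphrase", salt, server_n - 1)))   # True again, now at N-2

each successful login moves the recorded hash one step back down the chain ... an eavesdropper who sees one response can't produce the next one, since hashing only goes forward.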

supposedly this is resistant to MITM attacks w/o the end user having to carry anything (other than the memory of the passphrase ... which hopefully won't be entered on a compromised device).

the attack is for the MITM to intercept the salt and number and substitute a one (or other sufficiently small value). The MITM gets back the hash for the first round ... and then can iterate the hash for the additional rounds for the correct number. MITM now has information to generate correct authentication values for potentially several hundred rounds (for a specific server).
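
continuing the sketch above (reusing h(), chain(), authenticate() and the server state from the previous fragment ... again purely illustrative), the substitution attack looks something like this: once the MITM has the first-round hash, every later round is just more hashing.

# the server's real challenge at this point is server_n - 1, but the mitm
# forwards a challenge of 1 to the victim instead
victim_response = chain("my one passphrase", salt, 1)    # h^1 of the salted passphrase

# from h^1 the mitm can compute h^k for any k >= 1 by hashing more,
# so it can answer the server's real challenge itself ...
challenge_n = server_n - 1
forged = victim_response
for _ in range(challenge_n - 1):
    forged = h(forged)
print(authenticate(forged))     # True -- the server accepts the mitm

# ... and it can precompute answers for the following rounds
# (challenge_n - 1, challenge_n - 2, ...) without ever contacting the victim again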

a possible countermeasure is for the end-user to carry some baggage to track what is going on ... like recording the most recent N that they had seen ... to recognize unauthorized uses (N having been decremented by more than it should have been). however, this violates the original design point justifying the implementation. if the end-user is going to be carrying some baggage to track things like previous hash iterations ... then they might as well go with something like digital signatures and have the public key recorded (at the server) rather than the Nth hash iteration of a passphrase.
https://www.garlic.com/~lynn/subpubkey.html#certless

couple past posts:
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003o.html#46 What 'NSA'?

other mitm posts
https://www.garlic.com/~lynn/subintegrity.html#mitmattack

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Regarding interrupt sharing architectures!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Regarding interrupt sharing architectures!
Newsgroups: comp.arch.embedded,comp.arch,alt.folklore.computers
Date: Tue, 31 May 2005 09:45:32 -0600
"ssubbarayan" writes:
At many points of time during my career path as an embedded developer I have come across the technology of interrupt sharing but still have not been fortunate enough to understand what it means. Can anyone explain to me how this interrupt sharing works? My most curious doubt is, how does the CPU come to know which device has interrupted it? Is there any special register which will have a reference for the devices sharing the same interrupt so that the CPU can verify it and find out who is the interrupt requester?

well in 360 ... there were channels, which were typically a shared i/o bus. the processor could mask interrupts from individual channels. enabling interrupts from a channel allowed any pending interrupts from devices on that channel to be presented.

an i/o interrupt would have the processor load a new PSW (program status word, contains instruction address, interrupt masking bits, bunch of other stuff) from the i/o new psw location (this new psw normally specified masking all interrupts) ... and store the current PSW into the i/o old psw field.

OS interrupt routines were frequently referred to as FLIHs (first level interrupt handlers), which would identify the running task, save the current registers, copy in the old PSW information (preserving the instruction address), etc. There was typically a FLIH specific to each interrupt type (i/o, program, machine check, supervisor call, external). The I/O FLIH would pick up the device address from the i/o old PSW field and locate the related control block structures.
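
very roughly (a toy model in python, not real 360 code ... the structure names are made up), the psw swap and i/o FLIH flow described above amounts to:

# fixed low-storage psw slots
lowcore = {
    "io_old_psw": None,
    "io_new_psw": {"addr": "io_flih", "masked": True},
}

def take_io_interrupt(current_psw, device_address):
    # hardware action: the current psw (with the device address as the
    # interruption code) goes into the old-psw slot, the new psw is loaded
    lowcore["io_old_psw"] = dict(current_psw, intcode=device_address)
    return lowcore["io_new_psw"]

def io_flih(running_task, registers, device_blocks):
    # software action: save the interrupted task's state, then find the
    # control blocks for the interrupting device
    old = lowcore["io_old_psw"]
    running_task["saved_regs"] = list(registers)
    running_task["resume_addr"] = old["addr"]
    return device_blocks[old["intcode"]]   # hand off for status processing / redrive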

the low-end and mid-range 360s typically had integrated channels ... i.e. they had microprocessing engines which were shared between the microcode that implemented the 360 instruction set and the microcode that implemented the channel logic.

an identified thruput issue in 360 was that the SIO (start i/o) instruction would interrogate the channel, control unit, and device as part of its operation. channel distances to a control unit could be up to 200'. the various propagation and processing delays could mean that an SIO instruction took a very long time.

for 370, they introduced a new instruction SIOF (start i/o fast) ... which would basically interrogate the channel and pass off the information and not wait to hear back from the control unit and device. a new type of I/O interrupt was also defined ... if there was unusual status in the control unit or the device ... in the 360, it would be indicated as part of the completion of the SIO instruction. With 370, the SIOF instruction had already completed ... so any unusual selection status back from the control unit or the device now had to be presented as a new i/o interrupt flavor.

however, interrupts themselves had a couple of thruput issues. first, in the higher performance cache machines, asynchronous interrupts could have all sorts of bad effects on cache hit ratios. also, on large systems there was an issue with device i/o redrive latency. in a big system there might be a queue of requests from different sources for the same device. a device would complete some operation and queue an interrupt. the time from when the interrupt was queued until the operating system took the interrupt, processed it, discovered there was some queued i/o for the device ... and redrove i/o to the device, could represent quite a bit of device idle time. compound this with systems that might have several hundred devices ... and it could represent a fair amount of inefficiency.

370-XA added bump storage and expanded channel function with some high-speed dedicated asynchronous processors. a new processor instruction could add requests to a device i/o queue managed by the channel processor. the processor could also specify that i/o completions were to be placed on a completion queue ... also visible to the processor. the channel processor could now do real-time queuing of device i/o completions and immediately restart the device with the next request in the queue.

this was also frequently referred to as i/o handling offload. the issue here was that some of the operating systems had a pathlength of a few tens of thousands of instructions to take an i/o interrupt and get around to doing an i/o device redrive.
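
a toy sketch of the difference (python, made-up structure ... not the actual 370-xa channel subsystem interface): with a per-device queue owned by the channel processor, the device is redriven the instant a request completes, and the completion just sits on a queue until the operating system gets around to it.

from collections import deque

class ChannelProcessor:
    # toy model of channel-processor-managed device queues
    def __init__(self):
        self.device_queues = {}      # device -> queued i/o requests
        self.completions = deque()   # picked up by the OS whenever it gets around to it

    def start_io(self, device, request):
        q = self.device_queues.setdefault(device, deque())
        q.append(request)
        return q[0] if len(q) == 1 else None     # device was idle: start it now

    def device_complete(self, device):
        q = self.device_queues[device]
        self.completions.append((device, q.popleft()))   # queue the completion for the OS
        return q[0] if q else None                       # redrive the device immediately

cp = ChannelProcessor()
cp.start_io("disk1", "read A")
cp.start_io("disk1", "read B")
print(cp.device_complete("disk1"))   # 'read B' -- next request started with no OS pathlength in the gap
print(list(cp.completions))          # [('disk1', 'read A')] still waiting for the OS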

in the late 70s ... just prior to introduction of 370-XA ... i was wandering around the disk engineering lab ... and they had this problem with regression testing of disk technology ("testcells") in engineering development. they had tried doing some of this in a traditional operating system environment ... and found that the operating system MTBF was something like 15 minutes. as a result they were scheduling stand-alone machine time among all the various testcells contending for regression time. so i thot i would rewrite an operating system i/o subsystem to be bullet proof against failures, so they could work with all testcells concurrently ... not having to wait for stand-alone test time.
https://www.garlic.com/~lynn/subtopic.html#disk

another thing i did was to clean up the total pathlength so device i/o redrive time was a few hundred instructions instead of a pathlength of a few tens of thousands of instructions. i claimed that this would significantly mitigate the requirement for doing i/o offload in 370-xa.

this was just a hypothetical argument (somewhat to highlight how inefficient some kernel implementation pathlengths were). a number of years earlier, I had done VAMPS ... a multiprocessor system (that never shipped to customers)
https://www.garlic.com/~lynn/submain.html#bounce

that had multiple close-in microcoded engines (in addition to the processor microcode engines) all on the same memory bus. i had designed a queued i/o offload interface for VAMPS.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Single Password - Linux & Windows

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Single Password - Linux & Windows
Newsgroups: alt.linux.redhat,alt.os.linux.redhat,comp.os.linux.redhat,comp.os.linux.security,linux.redhat,microsoft.linux,microsoft.public.windows.server.active_directory,microsoft.public.windows.server.general,microsoft.public.windows.server.setup,redhat.se
Date: Tue, 31 May 2005 14:24:18 -0600
"Jason Williard" writes:
I have 5 servers; 3 RedHat and 2 Windows 2003 Server. I would like to use a single password for all of these. When thinking about this, I had 2 thoughts that came to mind. 1) Setup the 2 Windows servers as domain controllers and find a mechanism to connect the linux machines to that. 2) Setup 2 of the linux machines with Samba to act as domain controllers. Unfortunately, I don't know which is the best option, or if either of these is the best.

What is the best way for me to do this? In the end, I would like to have the system administrators be able to login to any of the servers with a single login. As well, I would like to use the password for specific access for employees, such as pop3/imap/smtp and a few other integrated services.

Any suggestions or information would be appreciated.


in theory, the underlying m'soft (active directory) authentication mechanism is kerberos ... so it should be possible to deploy a common kerberos configuration (across both windows and many other operating systems).
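
as a very rough sketch (hypothetical realm and host names ... details vary by distribution and windows version, and account mapping / PAM setup is a separate exercise), the linux side might point its kerberos configuration (/etc/krb5.conf) at a windows domain controller acting as the KDC:

[libdefaults]
        default_realm = EXAMPLE.COM

[realms]
        EXAMPLE.COM = {
                kdc = dc1.example.com
                admin_server = dc1.example.com
        }

[domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM

(with EXAMPLE.COM standing in for the active directory domain name in upper case, and dc1.example.com for one of the windows domain controllers)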

(windows) kerberos interoperability
http://www.microsoft.com/resources/documentation/Windows/XP/all/reskit/en-us/Default.asp?url=/resources/documentation/Windows/XP/all/reskit/en-us/prdp_log_tjil.asp

windows kerberos security tutorial
http://www.mcmcse.com/win2k/guides/kerberos.shtml

from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

select Term (term->RFC#) in the RFCs listed by section and scroll down to kerberos:
kerberos
see also authentication , security
3962 3961 3244 3129 2942 2712 2623 1964 1510 1411


...

selecting any of the RFC numbers then brings up the summary for that RFC. in the summary field, selecting the ".txt=nnnn" field retrieves the actual RFC.

some past kerberos related postings:
https://www.garlic.com/~lynn/subpubkey.html#kerberos

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Single Password - Linux & Windows

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Single Password - Linux & Windows
Newsgroups: alt.linux.redhat,alt.os.linux.redhat,comp.os.linux.redhat,comp.os.linux.security,linux.redhat,microsoft.linux,microsoft.public.windows.server.active_directory,microsoft.public.windows.server.general,microsoft.public.windows.server.setup,redhat.se
Date: Tue, 31 May 2005 17:27:54 -0600
disclaimer .... kerberos was a project athena activity at MIT. DEC and IBM equally funded athena for $50m total (unrelated drift: ibm funded cmu for mach/andrew stuff alone for $50m). in any case, in a previous life, my wife and I got to periodically visit project athena for reviews ... including kerberos.

not too long ago ... i was at a SAML-based product description and they were describing cross-domain support. it looked to me like the same exact flows that had been presented for cross-domain kerberos (we happened to be visiting athena right in the middle of the cross-domain invention) ... except with saml messages instead of kerberos tickets.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/


previous, next, index - home