From: <lynn@garlic.com> Date: Wed, 18 May 2005 16:52:20 -0700 Newsgroups: netscape.public.mozilla.crypto Subject: Re: More Phishing scams, still no SSL being used...Peter Gutmann wrote:
that had this thing they called SSL. in the year that we worked with them, they moved to mountain view and changed their name from mosaic to netscape (trivia question: who owned the term "netscape" at the time?).
as part of doing credit card payments ... we specified that the webservers and the payment gateway had to do mutual authentication (this was before there was anything like mutual authentication defined in ssl).
along the way, we realized that the certificate part was essentially a facade since the payment gateway and the allowable webservers required conventional business processes for managing their relationship ... and that having certificates was purely an artifact of the existing code being implemented that way (as opposed to the public key operations relying on more traditional repositories for access to public keys).
This was also in the period that we coined the term "certificate manufacturing" to distinguish the prevalent deployment of certificates from the descriptions of PKIs commonly found in the literature.
The juxtaposition of credit card transactions and PKIs was also startling. The commonly accepted design point for PKIs was the offline email model from the early 80s ... where the recipient dialed their electronic post office, exchanged email and hung up. They then could be faced with attempting to deal with first-time email from a total stranger that they had never communicated with before. A certificate filled the role of providing information about total strangers on first contact when there were no other resources available (online or offline ... aka the letters of credit paradigm from sailing ship days).
imagine four quadrants defined by offline/online and electronic/non-electronic. in the 60s, the credit card industry was in the upper left quadrant; offline and non-electronic. They mailed out monthly revocation lists to all registered merchants. With new technology they could have moved into the offline/electronic quadrant (the online/non-electronic quadrant possibly not being practical). However, in the 70s, you saw the credit card industry moving directly to the online/electronic quadrant where they had real-time, online authorization of every transaction. In the mid-90s when there were suggestions that the credit card industry could move into the 20th century by doing PKI, certificate-based transactions ... I got to repeatedly point out that this would represent regressing the credit card transaction state of the art by 20-30 years ... back to the archaic days of non-electronic, offline transactions and the mailed revocation booklets.
It was sometime after having repeatedly pointed out how archaic the whole PKI & CRL paradigm actually was that OCSP showed up on the scene (when real-time, online facilities are actually available). It is somewhat a rube-goldberg fabrication that attempts to gain some of the appearance of having modern, online, real-time transactions ... while trying to preserve the fiction that certificates (from the offline & electronic quadrant) are even necessary.
The problem is that the original design point for PKI, CRLs, etc .... the offline & electronic quadrant ... is rapidly disappearing in the always-on, ubiquitous internet-connected environment.
The other market niche that PKIs, CRLs, etc have sometimes attempted to carve out for themselves has been the no-value transaction sector ... where the value of the transaction is not sufficient to justify the (rapidly decreasing) cost of an online transaction. The problem with trying to stake out a position in the no-value transaction market ... is that it is difficult to justify spending any money on CAs, PKIs, certificates, etc.
some amount of past posts on SSL certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
Another facet of the SSL certificate market has to do with SSL domain name server certificates (probably the most prevalent use). One of the justifications for SSL domain name server certificates was concern about the integrity of the domain name infrastructure. So browsers were set up to check the domain name in a typed-in URL against the domain name in a server certificate.
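A rough sketch of that browser-side check (this is illustrative, not actual browser code; python's standard ssl module does the certificate handling, and the hostname and port used are assumptions):

import socket
import ssl
from urllib.parse import urlparse

def cert_matches_typed_url(typed_url):
    host = urlparse(typed_url).hostname          # the domain name the user typed in
    ctx = ssl.create_default_context()           # normal trust-store based verification
    ctx.check_hostname = False                   # do the name comparison explicitly below
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()             # the server certificate as a parsed dict
    names = [v for (k, v) in cert.get("subjectAltName", ()) if k == "DNS"]
    return host in names                         # ignores wildcard matching for brevity

# cert_matches_typed_url("https://www.example.com/checkout")

In practice ssl.create_default_context() performs this same name check itself; it is disabled above only to make the typed-URL-versus-certificate comparison visible.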
The business scenario has a certificate applicant going to a certificate authority (CA) to apply for an SSL domain name server certificate. They supply a bunch of identification ... and the certification authority then attempts the expensive, complex and error-prone process of matching the supplied identification information with the domain name owner identification information on file with the authoritative agency that is responsible for domain name ownership (aka the domain name infrastructure).
Now it turns out that the integrity concerns with the domain name infrastructure can extend to the domain name owner information on file ... putting any certification process by a certification authority (for ssl domain name certificates) at serious risk.
So, somewhat from the certification authority industry, there is a proposal that when people get domain names, they register a public key. All future communication with the domain name infrastructure is then digitally signed and verified with the onfile public key (the purpose is to improve the overall integrity of the domain name infrastructure). SSL certificate applicants can also digitally sign their SSL certificate applications to a certification authority. The certification authority can retrieve the onfile public key (from the domain name infrastructure) to verify the digital signature on the application ... which turns a complex, expensive, and error-prone identification process into a much simpler, less expensive, and more reliable authentication process.
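An illustrative sketch of that flow, assuming python with the third-party 'cryptography' package; the registry dict, domain name, and application text are stand-ins for whatever the domain name infrastructure would actually keep on file:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# public key registered with the domain name infrastructure when the
# domain was registered (simulated here as a simple dict)
domain_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dns_onfile = {"example.com": domain_key.public_key()}

# domain name owner digitally signs their SSL certificate application
application = b"request: SSL server certificate for example.com"
signature = domain_key.sign(application, padding.PKCS1v15(), hashes.SHA256())

def ca_authenticate(domain, app, sig):
    # authentication replaces identification: retrieve the onfile public key
    # and verify the signature, instead of matching identification paperwork
    try:
        dns_onfile[domain].verify(sig, app, padding.PKCS1v15(), hashes.SHA256())
        return True
    except (KeyError, InvalidSignature):
        return False

print(ca_authenticate("example.com", application, signature))   # True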
However, there are a couple of catch-22s for the PKI industry. First, improving the integrity of the domain name infrastructure mitigates some of the original justification for having SSL domain name certificates. Also, if the certification authority can build the trust basis for their whole operation on the onfile public keys at the domain name infrastructure ... it is possible others might realize that they could also do real-time retrieval of onfile public keys as part of the SSL protocol ... in place of relying on certificate-based public keys.
From: <lynn@garlic.com> Date: Thu, 19 May 2005 08:43:17 -0700 Newsgroups: news.admin.net-abuse.email Subject: Re: Brit banks introduce delays on interbank xfers due to phishing boomVernon Schryver wrote:
1) id theft involving static data that can be turned around and used to perform fraudulent transactions on existing accounts (authentication risk)
2) id theft involving static data that can be turned around and used to establish new accounts or operations (identification risk)
With the extremely prevalent use of static data resulting in both authentication and identification fraud ... there has been lots of skimming/harvesting going on. This skimming/harvesting can be viewed as being of two forms .... skimming/harvesting data-in-flight and skimming/harvesting data-at-rest.
Typically SSL (encrypted sessions) is viewed as a countermeasure for
the skimming/harvesting data-in-flight threat. While
skimming/harvesting of data-in-flight has been observed in other
environments ... there seems to be little evidence of
skimming/harvesting (eavesdropping) of internet data-in-flight (aka
it seems to be much more of a theoretical issue). There appear to be lots of
examples of skimming/harvesting data-at-rest .... large databases of
personal information and/or things like merchant transaction files ....
slightly related reference ... security proportional to risk:
https://www.garlic.com/~lynn/2001h.html#61
A large proportion of threats lumped under ID theft actually involve static data used for authentication fraud (aka fraudulent transactions on existing accounts as opposed to identification fraud ... establishing new accounts in the victim's name).
In the three-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
id theft involving static authentication data frequently involves something
you know. These are pins/passwords. Other examples are your mother's
maiden name or SSN, frequently used for authentication purposes.
These forms of static data authentication are frequently also referred to as shared-secrets (aka pins/passwords) and learning the shared-secret is sufficient to impersonate the victim in fraudulent transactions on their existing accounts. While payment cards nominally fall into the something you have category of authentication ... their magstripes represent static data that has frequently been involved in skimming/harvesting activities enabling the manufacture of counterfeit cards. In security terms, shared-secret and/or static data infrastructures are also subject to replay attacks (i.e. fraud from recording the shared-secret or static data and simply replaying what was recorded).
Skimming/harvesting of large "data-at-rest" databases of personal information has represented a significantly large return-on-investment for the crooks (cost of the attack vis-a-vis expected fraud enabled). There have been recent studies that claim at least 77 percent of these kinds of exploits involve insiders.
Phishing attacks involve getting unsuspecting victims to voluntarily give up their static data or other forms of shared-secrets. The internet has provided the crooks with technology for phishing attacks that starts to compare with traditional skimming/harvesting of data-at-rest in terms of fraudulent return on investment (cost of attack vis-a-vis amount of fraud enabled).
A traditional security technique for limiting exposure of pins/passwords (or other forms of static data shared-secrets) is to require that a different, unique shared-secret be used for every unique security domain. Among others, this is a countermeasure to insiders skimming/harvesting a shared-secret in one domain and then using it in another domain (aka the high school kid working for the local garage ISP getting your connection password ... and being able to use the same password with your home banking account).
Phishing attacks involve getting unsuspecting and naive victims to voluntarily give up such information (heavily leveraging internet technology to attack large numbers of people at very low cost and risk).
Possible countermeasures to phishing attacks involve
1) education of large numbers of potential victims, 2) minimizing crooks' ability to impersonate authorities as part of convincing people to divulge their information, 3) minimizing the use of static data in authentication paradigms ... people can't divulge information that they don't know.
Note that the last countermeasure is also a countermeasure for the skimming/harvesting attacks (frequently by insiders) ... where there are no shared-secrets or static data that can be harvested by crooks to replay in future fraudulent transactions.
The extensive use of shared-secrets and static data is also vulnerable to humans being unable to cope with the rapidly increasing number of shared-secrets that they must memorize (for authentication purposes). Many individuals are faced with electronically interacting with scores of unique, different security domains, each requiring its own unique password.
The point about PKIs highlighted in a recent post to the PKIX mailing list:
https://www.garlic.com/~lynn/aadsm19.htm#11 EuroPKI 2005 - Call for Participation
is that the majority of activity involved in PKIs seems to revolve around certification activity related to producing a certificate.
In 3-factor authentication model, the verification of a digital
signature with a public key can be viewed as a form of something you
have authentication ... aka the subject has access and use of the
corresponding private key producing the digital signature. PKIs are
not required in order to deploy a public key authentication
infrastructure
https://www.garlic.com/~lynn/subpubkey.html#certless
The use of public key authentication operations can eliminate much of
the burden and exposures associated with shared-secret and static
data infrastructures; knowledge of the public key is not sufficient to
impersonate the victim; this 1) eliminates the exploit of skimming/harvesting
static data for use in replay attacks or fraudulent transactions and
2) eliminates the requirement to have a unique value registered for every
unique, different security domain (the same public key can be registered
with multiple domains). In the static data, shared-secret
paradigm the same value (pin, password, mother's maiden name, etc) is
used for both originating the request as well as verifying the request
https://www.garlic.com/~lynn/subintegrity.html#secrets
A public key can only be used for verifying the request ... it can't be used for originating the request. Divulging a public key is not sufficient for a crook to be able to perform a fraudulent transaction.
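A minimal sketch of that asymmetry, assuming python with the 'cryptography' package (the message and pin values are made up): with a shared-secret, the value the relying party stores is the same value a crook needs to originate a request; with a public key it is not.

import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

msg = b"debit account 42 by $100"

# shared-secret paradigm: the stored value is also the originating value,
# so anyone who learns it (insider, skimming, phishing) can originate requests
stored_secret = b"pin-1234"
assert hmac.new(stored_secret, msg, hashlib.sha256).digest() == \
       hmac.new(b"pin-1234", msg, hashlib.sha256).digest()

# public-key paradigm: the stored/divulged value can only verify
private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stored_public = private.public_key()            # the registered/divulged value
sig = private.sign(msg, padding.PKCS1v15(), hashes.SHA256())
stored_public.verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())    # verifies,
# but nothing in stored_public lets a crook produce 'sig' in the first place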
However, some forms of public key operations are still subject to phishing attacks. Many public key deployments have the private key resident in a software file. Phishing attacks involve convincing the victims to perform various operations ... typically giving up information that enables the crook to perform fraudulent operations. However, a phishing attack could also involve getting the victim (possibly w/o even knowing what they are doing) to transmit their software file containing a private key or shared-secret.
So one (public key environment) countermeasure against phishing attacks exposing the victim's private key is to guarantee that private keys are encapsulated in hardware tokens that can't be easily transmitted (even the hardware token owner has no direct access to the private key ... just access to operations that utilize the private key).
This is a countermeasure for the phishing attacks where the crooks are harvesting static data for later use in fraudulent transactions. Such private key hardware tokens are still vulnerable to other kinds of social engineering attacks where the crooks convince naive users to directly perform transactions on behalf of the crook.
The issue for the existing common PKI vendors ... is that they frequently view their revenue as flowing from the manufacturing of certificates. Certificates are not necessary to deploy a public key authentication infrastructure.
The PKI, certificate design point is the offline email environment from the early '80s. The recipient dials up their local electronic post office, exchanges email, hangs up and starts reading their email. They encounter an email from a total stranger that they've never communicated with before and they have no method of obtaining any information about this stranger. Certificates were designed to supply some minimum amount of information where the recipient had no other recourse in an offline environment dealing with total strangers (the letters of credit model from sailing ship days).
The issue is that such a certificate, offline design point doesn't apply to common business relationships where prior communication is the norm and there are existing business process relationship management conventions (some of them having evolved over hundreds of years). A simple example is bank accounts where there is not only information about the customer ... but also other things that represent dynamic and/or aggregated information (like current balance) that is not practical in a certificate model. Given that the relying party already has information about the originator and/or has real-time, online access to such information, then a stale, static certificate becomes redundant and superfluous.
recent related posting about SSL domain name certificates
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used
lots of past posts about SSL domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
From: <lynn@garlic.com> Newsgroups: microsoft.public.security Subject: Re: Certificate Services Date: Thu, 19 May 2005 21:30:11 -0700Dan wrote:
in principle, certificate-less operations maintain existing business
processes for registering authentication material ... but replaces the
registration of a pin/password with the registration of a public key.
then the user authenticates with a userid/digital signature .... where
the digital signature is verified with the onfile public key.
https://www.garlic.com/~lynn/subpubkey.html#certless
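a minimal sketch of that pattern, assuming python with the 'cryptography' package; the account table, userid, and message are illustrative stand-ins for whatever the relying party already keeps in its registration process:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

accounts = {}   # existing relying-party account records (simulated)

def register(userid, public_key):
    # same business process as registering a pin/password ... just
    # different authentication material recorded in the account record
    accounts[userid] = {"public_key": public_key}

def authenticate(userid, message, signature):
    record = accounts.get(userid)
    if record is None:
        return False
    try:    # digital signature verified with the onfile public key
        record["public_key"].verify(signature, message,
                                    padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
register("lynn", key.public_key())
msg = b"userid=lynn; op=login"
print(authenticate("lynn", msg, key.sign(msg, padding.PKCS1v15(), hashes.SHA256())))   # True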
the original design point for PKIs and certificates was the offline email model of the early 80s; the recipient dialed up their local electronic post office, exchanged email, hung up and found themselves with an email from a total stranger that they had never communicated with before. in this first-time stranger communication in the offline world, the recipient had no resources to determine information about the sender. this is somewhat the email analogy to the letters of credit paradigm from sailing ship days.
using somewhat abstract information theory, a certificate represents armored, stale, static, distributed cached information. it is pushed by the sender to the relying-party ... so that the relying party can have information about the sender in the stranger, first-time communication where the relying party is offline and has no recourse for obtaining any information about a stranger in a first-time communication situation.
in the early 90s, there was some move to x.509 identity certificates issued by trusted third party certification authorities. however, it was somewhat difficult for a CA to predict exactly what identity information some unknown relying party in the future might require. as a result there was some move to grossly overload identity certificates with enormous amounts of privacy information.
in the mid-90s, various infrastructures (like financial institutions)
were coming to realize that enormous amounts of identity information
represented significant liability and privacy issues. as a result
there were some efforts in the area of relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
where a certificate might only contain some form of an account number as certified information. the key owner would constantly digitally sign transactions with their private key and push the transaction, the digital signature, and the certificate to the relying party (who had originally issued the certificate and has a superset of all the information already on file, including the public key and the associated account record). in all cases the account selection (number, userid, or some other value) was also present in the digitally signed transaction.
when the relying-party receives the transaction, they pull the look-up value from the transaction, read the associated account information, retrieve the public key from the account, and using the onfile public key, verify the digital signature. In such scenarios, it is possible to demonstrate that such stale, static digital certificates are redundant and superfluous.
there was another downside in the case of financial payment transactions. the typical payment transactions is on the order of 60-80 bytes. the typical relying-party-only certificate from the mid-90s was on the order of 4k-12k bytes. The scenario for adding stale, static, redundant and superfluous digital certificates to every financial transaction did represent a factor of 100 times payload bloat added to each transmission (constantly sending redundant and superfluous information back to the financial institution that it already had on file).
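back-of-envelope for the payload bloat claim, using midpoint values from the figures above (python):

payment_msg = 70            # typical payment transaction, ~60-80 bytes
rpo_certificate = 7_000     # typical mid-90s relying-party-only certificate, ~4k-12k bytes
print((payment_msg + rpo_certificate) / payment_msg)    # ~101, i.e. roughly 100 times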
From: <lynn@garlic.com> Newsgroups: microsoft.public.security Subject: Re: General PKI Question Date: Fri, 20 May 2005 08:45:22 -0700Ted Zieglar wrote:
a business process has been defined for asymmetric key cryptography where one key is designated as "public" and divulged to other parties and the other key is designated as "private" and is never divulged.
some additional business processes have been defined
1) digital signature authentication .... a secure hash is computed for the message, which is then encoded with the private key. other parties with the corresponding public key can decode the digital signature and compare the decoded secure hash with a freshly computed secure hash of the message. this will validate a) the origin of the message and b) that the message has not been modified.
2) confidential data transfer ... people will encode messages with the recipient's public key. only the recipient with their (never divulged) private key can decode the message. frequently, because of the overhead of asymmetric key cryptography ... a random symmetric key is generated, the message is encrypted with the symmetric key and the symmetric key is encoded with the public key. The encrypted message and encoded symmetric key are transmitted together. only the corresponding private key can decode the symmetric key ... which, in turn, decodes the actual message. a rough sketch of both operations follows.
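this sketch assumes python with the 'cryptography' package; the message text and key sizes are illustrative, and real deployments wrap these steps in protocol/encoding details omitted here:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"meet at noon"

# 1) digital signature authentication: a secure hash of the message is
#    encoded with the sender's (never divulged) private key; anybody with
#    the corresponding public key can verify it
signature = sender.sign(message, padding.PKCS1v15(), hashes.SHA256())
sender.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())

# 2) confidential transfer: a random symmetric key encrypts the message,
#    and only that (small) symmetric key is encoded with the recipient's
#    public key ... avoiding asymmetric operations on the whole message
sym_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(sym_key).encrypt(nonce, message, None)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient.public_key().encrypt(sym_key, oaep)

# only the recipient's (never divulged) private key can decode the
# symmetric key ... which in turn decodes the actual message
unwrapped = recipient.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == message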
In general, public keys can be registered with other parties ... in much the same way shared-secrets and other kinds of authentication materials are registered today ... using well established business process relationship management processes (some that have evolved over hundreds of years, like bank accounts).
The initial kerberos pk-init ietf draft for adding public keys to
kerberos implementations specified registering public keys in lieu of
passwords
https://www.garlic.com/~lynn/subpubkey.html#kerberos
later, specifications were added so that certificate-based public keys
could also be used
There have also been RADIUS implementations where public keys were
registered in lieu of passwords and digital signature authentication
operation was performed
https://www.garlic.com/~lynn/subpubkey.html#radius
From 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
digital signature authentication schemes are a form of something you
have authentication .... only you have access and use of a (never
divulged) private key.
Certificate-based public keys (PKIs) were designed to address the offline email scenario of the early 80s; the recipient dialed up their (electronic) postoffice, exchanged email, and hung up. They were then possibly faced with some email from a total stranger that they had never communicated with before. Certificates were somewhat the "letters of credit" analogy (from the sailing ship days) ... where the recipient/relying-party had no other means of obtaining information about the subject ... either locally (or, heaven forbid, using online, electronic means).
In the early 90s, there were x.509 identity certificates ... where the CAs, not being able to reliably predict what information some future relying party might need .... were looking at grossly overloading certificates with excessive amounts of privacy information. Later in the 90s, some number of infrastructures (like financial institutions) were realizing that identity certificates, grossly overloaded with excessive amounts of information, represented significant liability and privacy issues.
At this time, you saw some appearance of relying-party-only
certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
where the information in a certificate was reduced to little more than a record lookup indicator (userid, account number, etc). a person would create a message, digitally sign it, and package the message, the digital signature and the certificate and send it off to the relying party. the relying party then would use the indicator in the base message to index the appropriate relationship record and retrieve the associated information (including the registered, onfile public key). the onfile public key would then be used to verify the digital signature (authenticating the message). It was trivial to demonstrate that the stale, static certificate was redundant and superfluous.
in the financial sector, these relying-party-only certificates were also being targeted at payment transactions. the typical payment message is on the order of 60-80 bytes. the typical relying-party-only certificate from the period was on the order of 4k-12k bytes. not only were the stale, static certificates redundant and superfluous, but they could also contribute a factor of 100 times in message payload bloat.
a basic issue is that the certificate design point was addressing the problems of an offline, unconnected world for first-time communication between total strangers. as the world transitions to ubiquitous online connectivity, certificates are looking more like horse buggies on an interstate with a 75mph speed limit.
we were asked to do some consulting with this small client/server
startup in silicon valley that wanted to do some payment transactions on
their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
and they had this thing called SSL that could encrypt internet
transmission. slightly related, recent posting
https://www.garlic.com/~lynn/2005h.html#39 Attacks on IPsec
there was also a perceived integrity problem with the domain name
infrastructure ... so SSL server domain name certificates were defined
https://www.garlic.com/~lynn/subpubkey.html#sslcert
where the browser would compare the domain name in the typed-in URL with the domain name in the digital certificate. Along with working on the specifics of payment gateway ... we also got to go around and do end-to-end business audits of several of the certification authorities that would be providing SSL server domain name certificates.
The process has an applicant for an SSL server domain name certificate providing loads of identification information. The certification authority then performs the expensive, complex, and error-prone identification matching process of checking the supplied identification material with the identification material on file with the authoritative agency for domain name ownership.
Note that the authoritative agency for domain name ownership is the same domain name infrastructure that has the integrity issues that give rise to the requirement for SSL server domain name certificates.
So somewhat from the certification authority industry, there is a proposal that SSL domain name owners register a public key at the same time they register a domain name ... as part of an attempt to improve the integrity of the domain name infrastructure (so that the information that goes into certification of SSL domain name certificates is also more reliable).
Now, somebody can digitally sign their SSL domain name certificate
application. The CA (certification authority) can now retrieve the
onfile public key from the domain name infrastructure to validate the
applicant's digital signature ... note this is a certificate-less
digital signature authentication using online, onfile public keys
https://www.garlic.com/~lynn/subpubkey.html#certless
this also has the side-effect of turning an expensive, complex, and error-prone identification process into a simpler and more reliable authentication process.
However, this integrity improvement represents something of a catch-22 for the CA PKI industry ...
1) improvements in the integrity of the domain name infrastructure mitigates some of the original requirement for SSL domain name certificates
2) if the CA PKI industry can base the trust of their whole infrastructure on certificate-less, real-time retrieval of onfile public keys .... it may occur to others that they could use the same public keys directly (potentially modifying the SSL protocol implementation to use public keys directly obtained from the domain name infrastructure rather than relying on stale, static certificates).
From: <lynn@garlic.com> Newsgroups: microsoft.public.dotnet.framework.aspnet.webservices Subject: Re: Authentication - Server Challenge Date: Fri, 20 May 2005 12:26:09 -0700de9me . via .NET 247 wrote:
There are a number of standard radius implementations that accept some asserted entity ... which is then authenticated from information maintained by radius, and then the permissions &/or policies associated with that entity are established.
standard radius implementations have been shared-secret based, supporting clear-text
password or challenge/response protocols. there have also been
enhancements to radius supporting digital signature verification
where the shared-secret password registration is replaced with public
key registration (all the administration and business practices for
real-time relationship management are preserved).
https://www.garlic.com/~lynn/subpubkey.html#radius
the simple public key in lieu of shared-secret password is effectively
a certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless
depending on whether shared-secret clear-text or non-clear-text authentication is used ... the mechanism may or may not require an encrypted SSL channel.
Somewhat, the design point for certificate-based public keys was the offline email environment of the early 80s. The recipient dialed up their (electronic) post office, exchanged email, hung up and was then possibly faced with handling first-time communication from a complete stranger. This is the letters of credit paradigm from the sailing ship days .... how does the relying party determine anything about a complete stranger on initial communication ... when there is no direct access to either local or remote information available to the relying party.
The early 90s saw some certificate-oriented operations attempting to deal with x.509 identity certificates and the inability to predict what information some unknown relying party in the future might require from a complete stranger. The tendency was to look at grossly overloading the identity certificate with enormous amounts of privacy information.
By the mid 90s, some infrastructures were starting to realize that
x.509 identity certificates overloaded with enormous amounts of
privacy information represented serious liability and privacy
concerns. There was then some retrenchment to relying-party-only
certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
basically certificates that contained little more than an index to some kind of account/entity record (where all the real information was) bound to a public key. However, since this fundamentally violated the target environment that certificates were designed to address (an offline environment with first-time communication between total strangers), it was trivial to demonstrate that the stale, static certificates were redundant and superfluous. The subject generated a message, digitally signed the message and then packaged the message, the digital signature, and the digital certificate and sent it off to the relying party. The relying party extracted the record index/key from the initial message and retrieved the record (including the originally registered public key). The onfile public key was then used to validate the digital signature. The inclusion of the stale, static digital certificate in the transmission was redundant and superfluous.
The redundant and superfluous, stale, static digital certificate did represent something of an issue in proposals for use in payment transactions of the period. A typical payment message is on the order of 60-80 bytes. Even the typical relying-party-only certificate from the period was on the order of 4k-12k bytes. While the stale, static certificate was redundant and superfluous, it did have the potential of creating enormous payload bloat in the payment networks, increasing transmission requirements by a factor of one hundred times.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules. (fwd) Newsgroups: alt.folklore.urban,alt.folklore.computers Date: Fri, 20 May 2005 13:50:05 -0600Morten Reistad writes:
snmp was still duking it out with the other contenders
there was some amount of osi out in force also (the world govs. were starting to mandate the internet be eliminated and everything be converted to osi).
I got a couple workstations in a booth diagonally across from a booth where case had snmp being demo'ed (about 10-15' away) ... and case was convinced to help with an after-hours snmp port to the workstations (demoing it on machines other than his).
from my rfc index:
https://www.garlic.com/~lynn/rfcidx3.htm#1067
1067 -
Simple Network Management Protocol, Case J., Davin J., Fedor M.,
Schoffstall M., 1988/08/01 (33pp) (.txt=67742) (Obsoleted by 1098)
(See Also 1065, 1066) (Refs 768, 1028, 1052) (Ref'ed By 1089,
1095, 1156, 1704)
or
https://www.garlic.com/~lynn/rfcauthor.htm#xaCaseJ
Case J. (case@snmp.com)
3412 3410 2572 2570 2272 2262 1908 1907 1906 1905 1904 1903 1902
1901 1628 1512 1452 1451 1450 1449 1448 1444 1443 1442 1441 1285
1157 1098 1089 1067 1028
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELY with slide rules. (fwd) Newsgroups: alt.folklore.urban,alt.folklore.computers Date: Fri, 20 May 2005 13:55:37 -0600oh yes, past posts mentioning sr-71
... there are a different set of stories from boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
doing the F16.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: <lynn@garlic.com> Newsgroups: netscape.public.mozilla.security Subject: Re: Improving Authentication on the Internet Date: Fri, 20 May 2005 22:06:04 -0700Frank Hecker wrote:
and they had this thing called ssl. in the year we worked with them, they moved to mountain view and changed their name from mosaic to netscape (trivia question ... who had the original rights to the name netscape?).
so part of this was for what came to be called e-commerce. the customer would type in the url ... and it would go to an ssl shopping site. the browser would get back an ssl domain name certificate and check the domain name in the certificate against the name typed in by the customer. merchant webservers began complaining that running ssl for the shopping experience was cutting performance by 80-90 percent and almost all such merchant sites now run w/o ssl. as a result there is no checking of what the customer typed in to the browser for a URL against what site the user is actually visiting.
eventually the customer gets to the end of shopping and hits the check-out button ... which does supply a URL that specifies ssl. Now if this was really a fraudulent site ... it is highly likely that the crooks will have established some perfectly valid domain name and gotten a valid SSL certificate for it ... and they are likely to also make sure that whatever domain name the check-out button supplies ... corresponds to the domain name in some certificate that they have valid control over.
now when somebody applies for a SSL domain name certificate ... the certification authority usually goes to a great deal of trouble to validate that the entity is associated with an identifiable valid company (this basically is a complex, costly, and error-prone identification process). They then contact the domain name infrastructure and try and cross-check that the company listed as the owner of the domain is the same company that is applying for the certificate. Now if there has actually been a domain name hijacking ... there is some possibility that the crooks have managed to change the name of the company owning the domain to some valid dummy front company that they have formed. in which case the whole certificate authority process falls apart since it is fundamentally based on the integrity of the domain name infrastructure registry of true owners.
so there is a proposal to have domain name owners register a public key along with their domain name. then future communication from the domain name owner is digitally signed and the domain name infrastructure can verify the digital signature with the (certificate-less) onfile public key for that domain name. this supposedly mitigates some of the forms of domain name hijacking.
the proposal is somewhat backed by the SSL certification authority industry since it improves the integrity of the domain name infrastructure ... on which their ability to correctly certify the true domain name owner is based.
it has another side effect for the ssl certification authority industry
... rather than doing the expensive, time-consuming and error-prone
identification process, they can require that ssl certificate
applications also be digitally signed. they then have a much simpler,
less expensive, and more reliable authentication process by retrieving the
(certificate-less) onfile public key for the domain name owner (from
the domain name infrastructure).
https://www.garlic.com/~lynn/subpubkey.html#certless
it does represent something of a catch-22 for the ssl certification authority industry. if the integrity of the domain name infrastructure is improved, it somewhat mitigates one of the original justifications for having ssl certificates. another issue is if the ssl certification authority can base the trust of their whole operation on the retrieval of (certificate-less) onfile public keys from the domain name infrastructure ... one could imagine that others in the world might also start trusting real-time retrieval of certificate-less, onfile public keys from the domain name infrastructure. There might even be a slightly modified version of SSL that used real-time retrieval of certificate-less, onfile public keys ... rather than a public key from a stale, static (and potentially revoked?) certificate.
part of this comes somewhat from the original design point for PKI certificates, which was the offline email environment of the early 80s. A recipient would dial up their (electronic) post-office, exchange email, and hang up. They then could be dealing with a first-time communication from a total stranger. This is somewhat a latter-day analogy to the letters of credit model from the sailing ship days ... where the relying party had no (other) method of validating a first-time interaction with a total stranger.
In the early 90s, there were x.509 identity certificates. The certification authorities were somewhat faced with not really being able to predict what information an unknown, future relying party might require about an individual. There was some tendency to want to grossly overload such identity certificates with excessive amounts of personal information.
Somewhat in the mid-90s, various institutions came to the realization
that such identity certificates grossly overloaded with excessive
personal information presented significant liability and privacy
issues. There was some effort to retrench to something called a
relying-party-only certificate
https://www.garlic.com/~lynn/subpubkey.html#rpo
which basically contained some sort of unique record index pointer (account number, userid, or other distinguishing value) and a public key. The subject would create a message (also containing the distinguishing value) and digitally sign it with their private key. They then would package the message, the digital signature and the certificate and send it off to the relying party. The relying party would extract the distinguishing index value from the message and retrieve the indicated record containing all the necessary relationship information about the originating entity (including their registered public key). They then could validate the digital signature using the onfile public key. In such situations it was trivial to prove that such a stale, static certificate was redundant and superfluous. Part of what made it so easy to prove they were redundant and superfluous was that the whole operation violated the original design point that certificates were meant to serve ... first-time communication between complete strangers where the relying party had absolutely no other method for establishing any information about the stranger they were communicating with.
There was also some look at these stale, static, redundant and superfluous digital certificates for payment transactions by customers with their respective financial institutions (again violating the basic design point environment that certificates were meant to serve). It turns out that the nominal payment message size is about 60-80 bytes. The nominal relying-party-only certificate from the mid-90s (even with only an account number and a public key) was on the order of 4k-12k bytes. Not only was the attachment of stale, static digital certificates to every payment transaction redundant and superfluous, but doing so would represent an enormous payload bloat of a factor of one hundred times.
From: <lynn@garlic.com> Newsgroups: netscape.public.mozilla.crypto Subject: Re: More Phishing scams, still no SSL being used... Date: Sat, 21 May 2005 10:57:22 -0700Gervase Markham wrote:
you could use a realtime, certificate-less, onfile public key retrieval from a trusted DNS infrastructure ... for use in establishing an encrypted SSL session (instead of obtaining the server public key from a certificate).
now for 20 some years, DNS has had generalized mechanism for multi-level caching of information with per entry cache expiration interval (including at the lowest end-user end-point).
i think it was 1991(?) acm sigmod conference in san jose ... somebody raised a question about what was this x.5xx stuff going on .... and somebody else explained that it was a bunch of networking engineers attempting to reinvent 1960s database technology.
so the primary target for SSL has been client access for e-commerce. There have been studies that show e-commerce activity is highly skewed ... with possibly only 200 sites accounting for upwards of 90 percent of activity. If you were looking specifically at public key serving within a DNS real-time retrieval paradigm .... with standard caching and cache entry expiration intervals to address any performance issues that might hypothetically crop up ... you are looking at a relatively small number of public keys that have to be cached to cover the majority of actual world-wide SSL activity.
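a sketch of that caching idea in python ... per-entry expiration over a real-time, certificate-less public key retrieval; the lookup function is a hypothetical placeholder standing in for whatever record type the domain name infrastructure would serve the onfile key in:

import time

def fetch_onfile_public_key(domain):
    # hypothetical: query the domain name infrastructure for the public key
    # registered with the domain; returns (key_material, ttl_seconds)
    return (b"-----BEGIN PUBLIC KEY-----...", 3600)

class OnfileKeyCache:
    def __init__(self):
        self._cache = {}                          # domain -> (key, expires_at)

    def get(self, domain):
        entry = self._cache.get(domain)
        if entry and entry[1] > time.time():      # honor per-entry TTL, as DNS caching does
            return entry[0]
        key, ttl = fetch_onfile_public_key(domain)     # real-time retrieval
        self._cache[domain] = (key, time.time() + ttl)
        return key

cache = OnfileKeyCache()
cache.get("example.com")

with activity skewed toward a couple hundred sites, a cache no more elaborate than this would hold the keys covering the bulk of world-wide SSL traffic.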
From: <lynn@garlic.com> Newsgroups: netscape.public.mozilla.crypto Subject: Re: More Phishing scams, still no SSL being used... Date: Sat, 21 May 2005 13:20:09 -0700.... and of course what you could really do is slightly tweak multiple A-record support ... and add an option to the ip-address resolve request ... and piggyback any available public key(s) on the response giving the server's ip-address(es). no additional transactions needed at all ... and you have the public key before you have even made the attempt to open the tcp session. if the server had also registered their crypto preferences when they registered their public key ... you could almost imagine doing the ssl session setup integrated with the tcp session setup.
when we were on the xtp tab .... xtp was looking at doing reliable transactions in a 3-packet minimum exchange; by comparison tcp has a minimum 7-packet exchange ... and currently any ssl is then over and above the tcp session setup/tear-down. the problem with the current server push of the certificate is that the tcp session has to be operational before any of the rest can be done.
misc. past ssl certificate postings
https://www.garlic.com/~lynn/subpubkey.html#sslcert
misc. past xtp/hsp (and some osi) postings
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
From: <lynn@garlic.com> Newsgroups: netscape.public.mozilla.security Subject: Re: Revoking the Root Date: Sat, 21 May 2005 18:50:47 -0700Ian G wrote:
there were numerous statements by major financial transaction operations over the past ten years that they would never convert to a conventional PKI system because they would never deploy any kind of infrastructure that had single points of failure (or even a small number of points of failure) .... the compromise of a root key (no matter how unlikely) was viewed as a traditional single point of failure scenario. there was some study that major financial operations could be out of commission for 4-8 weeks assuming a traditional PKI-based operation and a compromise of the root private key. such positions were frequently mentioned in conjunction with systemic risk and the preoccupation of major financial operations with systemic risk vulnerabilities.
The basic motivation for mondex was the float that mondex international got when the next lower level business unit bought one of their "superbricks". Anybody lower than mondex international in the chain was just replacing float lost to the next higher level in the chain. This issue might be considered the primary financial driving force behind the whole mondex movement. It was so significant that mondex international began offering to split the float as an inducement for institutions to sign up (i.e. the organizations that would purchase a superbrick from them).
A spectre hanging over the whole operation was some statement by the EU central banks that basically said that mondex would be given a two year grace period to become profitable, but after that they would have to start paying interest on customer balances held in mondex cards (effectively float disappears from the operation ... and with it much of the interest in institutions participating in the effort).
mondex international did do some other things ... they sponsored a series of meetings (starting in san francisco) on internet standard work ... which eventually morphed into IOTP.
from my rfc index:
https://www.garlic.com/~lynn/rfcietff.htm
select Term (term->RFC#) in RFCs listed by section
and then scroll down to
Internet Open Trading Protocol
3867 3506 3505 3504 3354
Clicking on the individual RFC numbers fetches the RFC summary for that
RFC. Clicking on the "txt=nnn" field, retrieves the actual RFC.
misc past posts mentioning systemic risk:
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/99.html#156 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/aepay2.htm#fed Federal CP model and financial transactions
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aadsm2.htm#risk another characteristic of online validation.
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm3 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech7 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmail.htm#parsim parsimonious
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsmail.htm#vbank Statistical Attack Against Virtual Banks (fwd)
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aepay10.htm#13 Smartcard security (& PKI systemic risk) thread in sci.crypt n.g
https://www.garlic.com/~lynn/aepay10.htm#19 Misc. payment, security, fraud, & authentication GAO reports (long posting)
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2002c.html#7 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#31 You think? TOM
https://www.garlic.com/~lynn/2002l.html#5 What good is RSA when using passwords ?
https://www.garlic.com/~lynn/2003l.html#64 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2003m.html#11 AES-128 good enough for medical data?
https://www.garlic.com/~lynn/2004j.html#2 Authenticated Public Key Exchange without Digital Certificates?
https://www.garlic.com/~lynn/2004j.html#5 Authenticated Public Key Exchange without Digital Certificates?
https://www.garlic.com/~lynn/2004j.html#14 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
From: <lynn@garlic.com> Newsgroups: netscape.public.mozilla.security Subject: Re: Revoking the Root Date: Sat, 21 May 2005 19:15:43 -0700Ian G wrote:
we coined the terms disaster survivability and geographic survivability to differentiate from simple disaster/recovery.
in any case, just having a process allowing copying of a private key and multiple copies increases the vulnerability to diversion of copies for fraudulent purposes
some recent studies claim that at least 77 percent of fraud/exploits involve insiders. from an insider fraud standpoint, diversion of the root private key becomes analogous to embezzlement.
bad things can happen with compromise of PKI root key and resulting fraudulent transactions.
however, the systemic risk of having a single PKI root key revoked and having to put the infrastructure thru restart/recovery from scratch is viewed as possibly being even a worse prospect.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Sun, 22 May 2005 06:50:23 -0600"Anders Rundgren" writes:
in the association payment card world ... the contract problem is slightly simplified. a merchant has a contract with their merchant financial institution (one per merchant). A merchant financial institution has a contract with the association (one per merchant financial institution). A consumer has a contract for each one of their payment cards with the respective card issuing financial institutions. Each card issuing financial institution has a contract with the association. Basically there is a three-level hierarchy of trust & contractual relationship. This mitigates the worst case scenario of having every customer sign a trust/contract with every merchant (say, worst case, with a billion customers and a million merchants ... having a million billion contracts).
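Rough arithmetic behind that comparison (python); the counts of customers and merchants are the worst-case figures used above, and the number of member financial institutions is an assumption added purely for illustration:

consumers = 1_000_000_000
merchants = 1_000_000
issuers = acquirers = 10_000         # assumed number of member financial institutions

bilateral = consumers * merchants    # every customer contracts with every merchant
hierarchy = (consumers               # each consumer with their card issuing institution(s)
             + merchants             # each merchant with their merchant institution
             + issuers + acquirers)  # each institution with the association

print(f"{bilateral:.1e} vs {hierarchy:.1e}")    # ~1e15 contracts vs ~1e9 contracts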
The other characteristic that carries thru via the financial institution contracts with the associations is that the merchant financial institution is liable to the association on behalf of the merchants they sponsor ... and the consumer financial institution is liable to the association on behalf of consumers they issue cards to. The merchant example is that merchant financial institutions both love & hate big ticket merchants like airlines; they love the percent charges they get on the big ticket transactions for accepting the liability ... but it is painful when an airline goes bankrupt and they have to make good on all outstanding charged tickets (which frequently has run to tens of millions).
The typical TTP CA trust model (from a business standpoint) is interesting ... since there is typically an explicit trust/contract between the CA and the key owners that they issue certificates to. However, there frequently is no explicit trust/contractual relationship that traces from relying parties to the CA (as you would find in the financial world that traces a trust/contractual trail between every merchant and every consumer issued a payment card, aka a merchant can trust a payment card via the thread of contracts thru their merchant financial institution, to the association and then to the card issuing consumer financial institution, with a corresponding thread of explicit liability at each step).
In the federal PKI model ... they sort of made up for this by having GSA (as a representative of the federal gov. as a relying party) sign contracts with every approved CA issuing certificates. That way the federal gov. (as a relying party), when accepting a certificate and making some decision or obligation based on the trust in that certificate ... has a legal liability trail to the certificate issuing institution.
In the SSL certificate model, when a client/end-user makes any decision or obligation based on the trust in the SSL certificate, it is rare that every client/end-user has a contractual trail back to the SSL certificate issuing institution. In the payment card world, the merchant accepting a payment card has a contractual trail to the responsible financial institution accepting liability on behalf of the consumer they issued a card to. In a similar way, each consumer presenting a card to a merchant has a contractual trail to the responsible financial institution accepting liability on behalf of the merchant they represent.
When a merchant presents an SSL certificate to a consumer ... the consumer has no contractual trail to the SSL certificate issuing institution.
In the early development of this stuff that came to be called
e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
there was some look at the business issues of payment associations issuing branded SSL certificates ... so that the presentation of such a branded SSL certificate carried with it trust/liability obligations similar to the branded association decals you see on physical store fronts. One problem was that what might be implied by placing trust in an SSL certificate (and therefore the associated possible liabilities) is quite a bit more ambiguous than what is defined for trust between merchant & consumer in a payment card transaction. If all such branded SSL certificates were only to explicitly cover the actual payment transaction and no other activity ... then it was much easier to define the liability issues (aka there are contractual/business issues that tend to be orthogonal to technical protocol issues ... and people working on technical protocol issues may not even have any concept of what they are about).
One of the issues has been that many merchant financial institutions were having a hard time even coming to grips with the reality of signing contracts allowing internet merchants to be credit card enabled. They could see it for brick & mortar institutions that were already MOTO authorized (i.e. internet credit card transactions mapping into the existing non-face-to-face, card-holder-not-present contractual parameters of mail-order/telephone-order transactions).
The issue was for a purely internet merchant that had no inventory, no physical property ... etc. ... no assets that could be forfeited in the event of a bankruptcy that would cover the risk exposure to the financial institution for taking liability of all possible outstanding transactions.
The other characteristic of the CA PKI certificate paradigm being targeted at the early 80s, offline email paradigm ... was that in the payment card scenario every merchant transaction is online and passes thru the (liability accepting) merchant financial institution (or some agent operating on behalf of the merchant financial institution).
The CA PKI certificate paradigm for the early 80s, offline email had the relying-party/recipient dialing their (electronic) postoffice, exchanging email, hanging up, and being faced with first-time communication from a total stranger ... where the relying-party had no other recourse for establishing any attributes regarding the stranger. The PKI certificate sort of filled an analogous role to the "letters of credit" from the sailing ship days. This is an offline push model where the subject is pushing the credential to the relying-party ... and the intended purpose was to address the environment where the relying-party had no real-time method for corroborating the information.
In the 90s when some suggested that the credit card model should be brought into the modern era with certificates, I would comment that it would be regressing the payment card industry to the archaic, ancient non-electronic period of the '50s & '60s (when the merchant, as relying-party, had no recourse to online information and had to rely on the revocation booklets mailed out every month, and then every week).
The payment card industry transitioned to the online model in the 70s and left the old-fashioned offline model (that the CA PKI model uses) mostly in history.
In any case, the merchant financial institution, accepting liability on behalf of the merchant, gets to see every financial transaction in real time, as it is happening. At any point in time, the merchant financial institution has an approximate idea of the aggregate, outstanding financial liability it has per merchant (because it is seeing and aggregating the transactions in real time) and could choose to shut it off at any moment.
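An illustrative sketch of that running aggregation (python); the merchant name, limit, and amounts are made up, and any real acquirer's risk system is obviously far more involved:

from collections import defaultdict

outstanding = defaultdict(float)                 # merchant -> aggregate outstanding liability
exposure_limit = {"merchant-123": 50_000.00}     # hypothetical per-merchant exposure limit

def authorize(merchant, amount):
    if outstanding[merchant] + amount > exposure_limit.get(merchant, 0.0):
        return False                             # "shut it off at any moment"
    outstanding[merchant] += amount              # liability is known as it accrues
    return True

def settle(merchant, amount):
    outstanding[merchant] -= amount              # funds released, exposure drops

print(authorize("merchant-123", 1_200.00))       # True while under the limit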
One of the financial institutions' objections to the CA PKI certificate model ... was that there could be an incremental financial liability every time the merchant presented the certificate ... and there was no provision for an issuing financial institution (that chose to stand behind such a paradigm) to calculate their potential, outstanding risk. The issue of not knowing their potential liability exposure at any moment was somewhat orthogonal to not knowing how to deal with operations that might not have any assets ... and therefore nothing to recover in forfeiture if a bankruptcy occurred.
That was somewhat the idea that CA PKI certificates ... in the modern online risk management world ... were ideally suited for no-value transactions (i.e. since the trust issue involved no value ... it would be easy to always know the outstanding, aggregated risk ... since you knew that summing values of zero ... still came up zero, no matter how many such events there were).
- Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Sun, 22 May 2005 07:19:33 -0600Anne & Lynn Wheeler writes:
basically, in a payment card transaction ... the card issuing financial institution comes back with a real-time promise to pay the merchant. the card issuing financial institution then transfers the promised funds to the merchant financial institution.
the merchant financial institution, in calculating the outstanding run-rate liability for any particular merchant ... can put a delay on actually making such funds available to the merchant ... aka they have some calculation on the risk history of the merchant and an idea (from real-time transactions) of the current outstanding liability. Another of the ways that a merchant financial institution can control the aggregate financial risk exposure they have per merchant ... is by delaying the actual availability of funds (in any default/bankruptcy by the merchant, since the funds haven't actually been released ... the delayed funds can be used by the merchant financial institution to help cover their outstanding financial liability on behalf of the merchant).
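purely as illustration (a sketch, not any actual institution's system ... the merchant name, limit and hold period are invented), here is roughly the kind of real-time exposure bookkeeping being described ... every promise-to-pay is seen as it happens, the aggregate outstanding liability per merchant is known at any instant, and delaying the release of funds is one of the risk-control knobs:

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class MerchantExposure:
    merchant_id: str
    exposure_limit: float          # made-up cut-off threshold (policy knob)
    hold_period: timedelta         # delay before promised funds are released
    pending: list = field(default_factory=list)   # (when, amount) promised, not yet released

    def record_authorization(self, when: datetime, amount: float) -> bool:
        # every transaction is seen in real time; refuse further traffic past the limit
        if self.outstanding() + amount > self.exposure_limit:
            return False           # institution chooses to shut the merchant off
        self.pending.append((when, amount))
        return True

    def outstanding(self) -> float:
        # aggregate liability at this moment ... promised to the merchant, not yet released
        return sum(amount for _, amount in self.pending)

    def release_funds(self, now: datetime) -> float:
        # funds older than the hold period finally become available to the merchant;
        # anything still pending can help cover a merchant default/bankruptcy
        released = sum(a for t, a in self.pending if now - t >= self.hold_period)
        self.pending = [(t, a) for t, a in self.pending if now - t < self.hold_period]
        return released

m = MerchantExposure("merchant-123", exposure_limit=10000.0, hold_period=timedelta(days=3))
m.record_authorization(datetime(2005, 5, 20, 12, 0), 250.0)
print(m.outstanding())             # 250.0 ... known transaction by transaction, at any moment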
In the CA PKI model, unless you are dealing with purely no-value transactions ... there is a double whammy of having the per transaction risk being somewhat ambiguous ... and in the offline certificate push model ... having no idea at all ... how many times a particular certificate has been pushed (basically multiplying an unknown number by another unknown number to come up with some idea of the outstanding liability at any specific moment).
somewhat in the business or value world ... trust frequently is translated into terms of financial liability.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Sun, 22 May 2005 10:39:28 -0600"Anders Rundgren" writes:
had this argument maybe ten years ago about ISPs filtering originating packets (from the ISP customers before hitting the internet) based on things like bogus origin ip-address (various kinds of spoofing attacks ... not totally dissimilar to phishing attacks with bogus origin). even as late as 5-6 years ago, the counter arguments were that ISPs had neither the processing capacity nor the technology capability for recognising incoming packets and filtering packets that had a bogus origin ip-address. However, in this period, ISPs were starting to do all kinds of other packet/traffic filtering & monitoring of their customers for things in violation of the terms & conditions of their service contract (proving that they did have the capacity and technology).
A possible scenario is if ISPs somehow demonstrated that they were doing filtering/censoring on things coming from their customers before they got on the internet ... if something actually got thru and reached a destination victim ... the destination victim might be able to turn around and sue the originator's ISP. I think that ISPs want to avoid being seen as financially liable for bad things that might be done by their customers.
the other counter argument raised was that even if responsible ISPs started censoring activity of their customers ... there were enuf irresponsible ISPs in the world that it wouldn't have any practical effect. However, there is a multi-stage scenario: 1) responsible ISPs might be able to do origin filtering on 90% of the bad traffic, 2) doing origin censoring rather than destination censoring eliminates a lot of infrastructure processing overhead between the origin and the destination, 3) for store & forward traffic, responsible ISPs could still perform entry censorship at the boundaries where traffic crosses from an irresponsible ISP to a responsible ISP.
in many of these situations it isn't whether the receiver can absolutely prove who the originator is ... it is whether the originator is generating traffic with a spoofed address that the originator's ISP would have reason to know isn't consistent with the originator's assigned address.
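for what it's worth, the origin-filtering check itself is nearly trivial ... a toy python sketch (the port names and address blocks are invented) ... the ISP already knows which address block it assigned to each customer port, so a packet whose source address falls outside that block is spoofed and can be dropped before it ever gets on the internet:

import ipaddress

# assumed provisioning data: customer port -> assigned address block
ASSIGNED = {
    "port-17": ipaddress.ip_network("203.0.113.0/29"),
    "port-42": ipaddress.ip_network("198.51.100.64/30"),
}

def permit_outbound(port: str, src_addr: str) -> bool:
    # permit the packet only if its source address is consistent with the
    # address block the ISP assigned to the originating customer port
    prefix = ASSIGNED.get(port)
    if prefix is None:
        return False                               # unknown port ... drop
    return ipaddress.ip_address(src_addr) in prefix

# a spoofed source (outside the customer's assigned block) is rejected at the origin
assert permit_outbound("port-17", "203.0.113.5") is True
assert permit_outbound("port-17", "192.0.2.99") is False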
Not too long after the greencard incident ... we were on a business trip to scottsdale and having dinner at a restaurant in old town. Three people came in and were seated behind us (a man and a couple). The man spent most of the dinner explaining to the couple how to configure their service for commercial purposes ... and how he was going to be able to send out loads of spam on their behalf (if they would sign up with him); the techniques he had for staying ahead of the ISPs that might want to get around to shutting down one or another of his spam producing facilities (we suspected that the man doing the talking might have been involved somehow with the greencard incident).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Now the crackpots are trying to make it their own Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.arch Date: Sun, 22 May 2005 11:19:44 -0600"Tom Linden" writes:
... anyway ... I was asked several times about the problem of low-bandwidth information leakage and what I was going to do about it.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Outsourcing Newsgroups: alt.folklore.computers Date: Sun, 22 May 2005 14:49:19 -0600greymaus writes:
the article basically went into some amount of detail about why india was in a much better position to compete (vis-a-vis the nearby mainland province) in the emerging world-wide outsourcing market ... a primary issue was that india had a significantly better civil servant middle class ("left behind by the british") providing essential infrastructure support (that was needed/required to compete effectively in the emerging world-wide outsourcing market).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Sun, 22 May 2005 15:03:03 -0600Anne & Lynn Wheeler writes:
a couple days ago i ran across a quick comment about a new book called freakonomics ... and just now stopping by a local computer bookstore ... it is the first book you see at the door. it purports to be "a rogue economist explores the hidden side of everything"
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Outsourcing Newsgroups: alt.folklore.computers Date: Mon, 23 May 2005 09:10:16 -0600Nick Spalding writes:
the point of the HK newspaper article specifically was that the internal infrastructure was significantly better operated ... like how many weeks (months, years) it would take for a local business to get electricity, water, permits, phone ... and how reliable the electricity, water, phone, transportation, utility, etc., services were.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Mon, 23 May 2005 09:15:18 -0600Gervase Markham writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: First assembly language encounters--how to get started? Newsgroups: alt.folklore.computers Date: Mon, 23 May 2005 11:55:41 -0600forbin@dev.nul (Colonel Forbin) writes:
gml (& standardization as sgml)
https://www.garlic.com/~lynn/submain.html#sgml
brought to you courtesy of the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Mon, 23 May 2005 14:09:40 -0600Anne & Lynn Wheeler writes:
Taking one billion internet clients as a first order approximation of the estimated SSL domain name certificate relying parties ... then a gross, first order approximation of the required number of such contracts would be 50 billion individually signed contracts (i.e. roughly one contract per relying party for each of the several dozen certificate-issuing CAs in a typical browser trust list).
in the payment card scenario this is mitigated by having the credential (payment cards) issuing institutions sign contracts with the brand associations (hierarchical legal trust rollup on the issuing side). Then the relying-party merchants have contracts with merchant financial institutions which in turn rollup with the merchant financial institutions having (legal trust) contracts with the payment associations.
in this sense, SSL TTP PKI CAs are more efficient than an approximately analogous real business operation (aka payment cards as issued credentials, which require explicit business processes between all the parties) by eliminating conformance to standard business practices (having an explicit legal trust relationship between the relying parties and the credential issuing institutions). An example where this was addressed has been in the Federal PKI ... where the federal gov., as a relying party, signed explicit (legal trust) contracts with each authorized certificate issuing certification authority.
one of the things on the table (when originally pulling together the current e-commerce infrastructure) was that the same financial infrastructure that took liability for merchant transactions would also issue SSL domain name certificates (which in addition to proving domain name ownership would also indicate the liability accepting relationships). However, for whatever reasons, that option was not followed.
The current scenario is that the SSL domain name certificates basically represent some due diligence in checking with the domain name infrastructure as to the true domain name owner. However, there is (nominally) no related, explicit, contractual chain of legal trust that can be followed from relying parties to the certificate issuing operations.
Also, as oft repeated, one of the motivating factors in the perceived need for domain name due diligence (aka SSL domain name certificates by any other name) has been integrity concerns with regard to the domain name infrastructure .... and how can a client really be sure that the server they are talking to is actually the real server related to the domain name they typed in as part of the URL. This becomes somewhat ambiguous when one realizes that the domain name infrastructure is the authoritative agency for domain name ownership ... and the same authoritative agency that certification authorities have to check with regarding true domain name ownership.
misc. past ssl certificate postings:
https://www.garlic.com/~lynn/subpubkey.html#sslcert
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: technical question about fingerprint usbkey Newsgroups: sci.crypt Date: Tue, 24 May 2005 08:31:40 -0600"frozt" writes:
something like 30 percent of debit cards are reputed to have the
PINs written on them. part of this is the serious proliferation
of shared-secrets ... and the difficulty of people being forced
to remember scores of different shared-secrets.
https://www.garlic.com/~lynn/subintegrity.html#secrets
some number of the fingerprint scenarios are targeted at the pin/password market where there is significant, common practice for people writing down their pin/password.
in this case the issue comes down to what is simpler, easier for a crook (having stolen a card)
1) to lift a pin written on the card and fraudulently enter the pin
2) to lift a fingerprint possibly left on the card and fraudulently enter the fingerprint.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Tue, 24 May 2005 08:25:33 -0600"Anders Rundgren" writes:
however, it is evident that the design point for certificates & PKI based infrastructure is for strangers that have never communicated before. in this original mutual authentication deployment that we had specified ... it was between (merchant) webservers and the payment gateway.
however, it quickly became clear that there had to be a prior contract between merchant webservers and their respective payment gateway, and that the use of certificates in the SSL establishment was purely an artificial artifact of the existing SSL implementation.
in actual fact, before the SSL session was ever established ... the
merchant webserver had a preconfigured set of data on what payment
gateway they were going to contact and the payment gateways had
preconfigured information on which merchants they would process for.
Once the SSL session was established ... this preconfigured
authentication was exercised w/o regard for any certificates. The use of certificates as an authentication mechanism was purely a facade and an artificial artifact of the use of the existing SSL implementation ... and in no way represented the real (online) business authentication process.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
relying party, business parties have well established processes for maintaining information about their business relationships (some of these well established business relationship processes have evolved over hundreds of years). passwords are an authentication technology that has been managed using these business relationship management processes. it is possible to use the existing business relationship processes for managing other kinds of authentication material, including public keys.
certificates are not intrinsically a substitute for all of the existing, well established, business relationship processes, nor are they a mandatory requirement as the only means of managing public key authentication material in well-established business relationships.
the design point for PKI and certificates were the offline email paradigm of the early 80s, where a recipient would dial their (electronic) postoffice, exchange email, and then hangup. The recipient (relying party) was then possibly faced with processing first time email with a total stranger. the role of the certificate was analogous to the letters of credit from the sailing ship days, the relying party lacking any prior information of their own regarding the stranger and/or any timely, direct access to a certifying authority.
this is the analogy to the early days of the payment card industry, where the plastic card was the credential and there were weekly mailed booklets to every merchant (relying party) listing the revoked credentials. in the 70s, the payment card industry quickly moved into the modern, online world of real-time transactions (even between relying parties that were strangers that never had any prior contact). in the mid-90s when the suggestions were made that the payment card industry could move into modern times by converting to (offline, stale, static) certificates ... my observation was that moving to certificates would actually represent regressing 30 years to an offline model ... rather than the real, modern, online model that they had been using for over 20 years.
It is perfectly possible to take well established business processes
used for managing relationships ... and "RIP" shared-secret
authentication technology ... by substituting public key registration
in lieu of shared-secret (pin/password) registration. businesses are
not likely to regress to stale, static certificates for the management
of timely and/or aggregated information ... like current account
balance. From there it is a trivial step-by-step process to prove that
stale, static certificates are redundant and superfluous between
relying parties that have existing business relationships.
https://www.garlic.com/~lynn/subpubkey.html#certless
the original pk-init draft standard for kerberos specified only
certificate-less management of public keys, treating public keys as
authentication material in lieu of shared-secrets and leveraging the
existing extensive online management of roles and permissions ... that
are typically implicit once authentication has been performed aka it
is not usual that authentication is performed just for the sake of
performing authentication acts ... authentication is normally
performed within the context of permitting a specific set of
permissions (in the financial world, some of these permissions
can be related to real-time, aggregated information like current
account balance)
https://www.garlic.com/~lynn/subpubkey.html#kerberos
similarly it is possible to take another prevalent relationship
management infrastructure, RADIUS, and substitute digital signatures
and the registration of public keys in lieu of shared-secrets ... and
maximize the real-time, online management and administration of
authentication and permissions within a synergistic whole environment.
https://www.garlic.com/~lynn/subpubkey.html#radius
in any sort of value infrastructure, if it is perceived advantageous to have real-time management, administration and access to permissions, authorization and other kinds of authentication information ... then in such an environment, it would seem not only redundant and superfluous but also extremely archaic to rely on the offline certificate paradigm designed for first-time communication between total strangers (where the stale, static certificate would substitute for direct, real-time access to a trusted authority).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Tue, 24 May 2005 13:52:23 -0600"Anders Rundgren" writes:
in the mid-90s some were complaining ... so what if stale, static certificates were redundant and superfluous in an environment involving a pre-established relationship and an existing relationship administration and management system ... they can't actually hurt anything. However, that doesn't take into account the redundant and superfluous overhead costs of actually doing the redundant and superfluous certificate-oriented processing where there already is an established administrative and management relationship system. The other scenario is that some might get confused and decide to rely on the stale, static, redundant and superfluous certificate data in lieu of actually accessing the real data.
the other scenario would be to leverage certificate-based operations in a no-value scenario ... and eliminate any established relationship administrative and management infrastructure. Say, a membership environment, where any member could "buy" (obtain) any resource possible and there was no need to perform per-member reconciliation. Say a bank ... that would allow any customer to perform as many withdrawals as they wanted ... regardless of their current balance (in fact, eliminating totally the concept of a financial institution even having to keep track of customer balances ... as being no-value and superfluous).
however, the truth is ... with regard to value infrastructure, there tends to be a requirement for a relationship administrative and management infrastructure (some of the methodology has been evolving for hundreds of years) that tracks and accumulates information on individual relationships ... even dynamically and in real time.
for value infrastructures that are managing and administrating relationships with tried & true established methodology ... then certificate-oriented PKIs become redundant and superfluous ... as are the stale static certificates themselves.
the issue then is that in a mature and well established administrative and
management infrastructure it is straight-forward to upgrade any
shared-secret (identity information, SSN#, mother's maiden name,
pin, password) oriented authentication infrastructure
https://www.garlic.com/~lynn/subintegrity.html#secrets
with a digital signature infrastructure where public keys are
registered as authentication information in lieu of shared-secrets and
digital signature validation (using the public key) is used in lieu of
shared-secret matching.
https://www.garlic.com/~lynn/subpubkey.html#certless
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: technical question about fingerprint usbkey Newsgroups: sci.crypt Date: Tue, 24 May 2005 14:32:13 -0600"frozt" writes:
it isn't that there aren't fingerprint vulnerabilities ... but they are more difficult than some common PIN vulnerabilities ... aka a lost/stolen card with the pin written on the card ... or an ATM overlay picking up the PIN from the keys used on the pin-pad.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Tue, 24 May 2005 23:45:41 -0600Ian G writes:
there is a case made that the exploding use of electronic, online
access has created a severe strain on the shared-secret authentication
paradigm ... people having to memorize scores of unique pin/passwords.
https://www.garlic.com/~lynn/subintegrity.html#secrets
asymmetric cryptography created a business solution opportunity.
In the shared-secret paradigm, the same datum is used both to originate as well as to authenticate. Persons having access or gaining access to the authentication information also have the information to fraudulently impersonate and originate.
The business solution applied to asymmetric cryptography was to designate one of the paired keys as "public" and freely available for authentication purposes. The business process then defines the other of the paired keys as "private", to be kept confidential and never divulged. The business process defines that only the private key (which can never be divulged) can be used to originate a digital signature ... and only the public key is used to verify the digital signature.
from the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
the validation of a digital signature with a specific public key
implies something you have authentication ... i.e. the originator
has access and use of the corresponding private key (which has always
been kept confidential and has never been divulged).
Attacks on authentication material files involving public key authentication don't open the avenue of impersonation (as can occur when using shared-secrets).
Therefore registering public keys as authentication material in existing relationship administrative and management infrastructures acts as a countermeasure to individuals compromising those files and being able to use the information for impersonation and fraud.
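a minimal sketch of the idea (python, assuming the third-party "cryptography" package; ed25519 is just an arbitrary example algorithm and the account number is invented) ... the existing account record registers a public key where the shared-secret used to be, so the server-side file holds only verification material and harvesting it doesn't enable origination:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

accounts = {}          # stand-in for the existing account/relationship file

def register(account_no: str, public_key_bytes: bytes) -> None:
    # the public key goes where the shared-secret field used to be
    accounts[account_no] = ed25519.Ed25519PublicKey.from_public_bytes(public_key_bytes)

def authenticate(account_no: str, message: bytes, signature: bytes) -> bool:
    # "something you have" check ... the file holds verification material only
    public_key = accounts.get(account_no)
    if public_key is None:
        return False
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# client side ... the private key never leaves the client
private_key = ed25519.Ed25519PrivateKey.generate()
register("acct-001", private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw))
msg = b"pay merchant 12.34"
assert authenticate("acct-001", msg, private_key.sign(msg))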
The business role of CAs and certificates ... especially TTP CAs, is to provide information for relying parties in situations involving first time contact between strangers where the relying party has no recourse to any resources for determining information about the originator.
In situations where two parties have an established, ongoing relationship and there are well established facilities for administering and managing that relationship, the stale, static offline paradigm certificates are redundant and superfluous.
It is possible that the significant paradigm mismatch between well established relationship administrative and management infrastructures and CA TTPs (targeted at addressing the problem of first time communication between two strangers) is responsible for at least some of the discord.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REPOST: Authentication, Authorization TO Firewall Newsgroups: comp.security.firewalls Date: Wed, 25 May 2005 10:37:11 -0600"Greenhorn" writes:
boxes that clients interact with for authentication/authorization function are frequently referred to as portals.
authorization policy filtering based on origin ip-address, possibly by time-of-day ... could be an administrative function that updated/changed packet filtering router rules at different times of the day. This frequently would be a push operation from the policy and administrative infrastructure ... rather than a pull function from the individual boxes.
authentication tends to be asserting some characteristic (like
an account number or userid) and then providing some information
supporting that assertion ... from the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
using ip-address origin is more akin to identification w/o necessarily
requiring any proof (or additional interaction demanding proof).
authorization frequently tends to be taking some characteristic (either from simple identification or from an authentication process) and looking up the related permissions defined for that characteristic (like which systems an authenticated userid can access).
RADIUS was originally developed by livingston for their modem
concentrator boxes (i.e. provided authentication boundary for
userid/login authentication for dial-up modem pools). It has
since grown into a generalized IETF standard for AAA
https://www.garlic.com/~lynn/subpubkey.html#radius
In the original livingston case ... the modem concentrator provided both the RADIUS boundary authentication/authorization as well as the traffic routing function in the same box. This continues as the dominant technology used world-wide by ISPs to authenticate their dial-in customers.
the boxes that are routing traffic between intranet and internet are frequently not exposed to clients as separate functional boxes ... as is the case with the modem-pool routers that managed the boundary between the ISP intranet and their dial-in customers.
there is a related but different kind of administrative boundary situation for DSL/cable customers. They typically have a uniquely identifiable box or (non-ip) address. DHCP requests come in from these boxes ... if the boxes are associated with a registered, up-to-date account, an administrative policy will return DHCP responses that enable access to generally available ISP services. However, if the box is not associated with a registered, up-to-date account ... the DHCP response can configure them so that all their DNS requests and the resulting ip-address responses go to an in-house sign-up service (regardless of the domain name supplied by your browser ... it would always get back the same ip-address directing it to a webservice associated with administrative signup). You tend to find a similar setup/configuration for hotel high-speed internet service and many of the wireless ISP service providers.
in this scenario ... the dynamic administrative policy isn't based on ip-address (as an identification) but some other lower level hardware box address (enet mac address, cable box mac address, etc).
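a toy sketch of that administrative policy (python ... the mac addresses and resolver addresses are invented, and a real DHCP server obviously hands out much more than this) ... the decision is keyed on the box's hardware address rather than any ip-address, and an unregistered box simply gets a resolver that answers every name with the sign-up server:

REGISTERED_MACS = {"00:11:22:33:44:55"}       # assumed provisioning data
NORMAL_DNS = "198.51.100.53"                  # ISP's regular resolver
SIGNUP_DNS = "198.51.100.80"                  # resolver that answers every query
                                              # with the sign-up portal's address

def dhcp_response(mac_address: str, offered_ip: str) -> dict:
    # the decision is keyed on the hardware address, not on any ip-address
    if mac_address.lower() in REGISTERED_MACS:
        dns = NORMAL_DNS       # registered, up-to-date account ... normal ISP services
    else:
        dns = SIGNUP_DNS       # every domain name resolves to the sign-up webservice
    return {"your_ip": offered_ip, "dns_server": dns}

# an unregistered box is steered to sign-up regardless of what name the browser asks for
print(dhcp_response("aa:bb:cc:dd:ee:ff", "10.0.0.23"))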
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REPOST: Authentication, Authorization TO Firewall Newsgroups: comp.security.firewalls Date: Wed, 25 May 2005 12:04:05 -0600roberson@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
we started out with administratively pushing permitted/allowed ip-addresses (webservers that had valid contracts to use the payment gateway) into routers.
this was also in the early days of haystack labs, wheel group, and some others.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Wed, 25 May 2005 12:14:21 -0600Nelson B writes:
so if the bad guys wanted to do a DOS after having compromised the private key ... then they could, at most, declare the CA no longer valid ... which by definition is what you want to happen anyway when a key has been compromised.
the other thing that they could do ... was hope that the CA went unrevoked as long as possible ... so that they could use the compromised private key to generate fraudulent certificates.
However, specifically with respect to revoking a CA ... you could either do it or not do it ... nobody could ever undo it.
So the bad guys could either say nothing (about the CA) or lie about the CA by using the compromised private key to revoke the CA. However, by definition, if the private key has been compromised then what you want anyway is a revocation of the CA.
The only thing that the valid CA could do is say nothing (about themselves) or revoke themselves. If the real CA has made a decision to revoke itself ... then there isn't much else you can do about it.
In any case, self-revocation is a special case of "everything else I've said is a lie". Once it asserts that special case ... then it is no longer able to assert anything more (and it is somewhat immaterial whether that special case was a lie or not).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Status of Software Reuse? Newsgroups: alt.folklore.computers Date: Wed, 25 May 2005 13:06:38 -0600for some topic drift ... cp67/cms had an update command, it basically was used to merge an "update" deck with base software source and produce a temporary file that was then assembled/compiled.
it was oriented towards 80 column "card" records and sequence numbers in cols. 73-80. The cp67/cms assembler source had a convention of the ISEQ assembler statement ... to indicate that the sequence numbers in the sequence number field should be checked (from physical card deck days when decks could be dropped and shuffled).
the control commands were of the form
./ d nnnnnn <mmmmmm>   (delete records from <to>)
./ r nnnnnn <mmmmmm>   (replace records from <to> with following source)
./ i nnnnnn            (insert new source after record nnnnnn)

it started out essentially being a single change file application.
as an undergraduate ... i was making an enormous amount of source code changes to cp67 and cms ... and the default process required you to manually type in the sequence number field (cols. 73-80) for all new source records.
I got tired of this and created an update preprocessor that supported "$" for replace & insert commands
./ r nnnnnn <mmmmmm> <$ <aaaaaaa <bb>>>
./ i nnnnnn <$ <aaaaaaa <bbb>>>

where it would generate a temporary update file (for feeding into the update command) that had the sequence number field automatically generated. It could default to choosing the number & increment based on the previous/following cards ... or you could specify a starting number and any increment.
i believe it was the virtual 370 project that really kicked off the multi-level update effort. The "H" modifications were to cp67 running on a real 360/67, adding support for virtual 370 machines ... which had some number of new/different control operations and instructions. The "I" modifications were applied after the "H" modifications and produced a cp67 kernel that ran on (real) 370 architecture (rather than real 360/67 hardware).
this was a set of execs (command processor files) that used a "cntl" file to select updates and their sequence for incrementally applying a hierarchical set of update files. This would iteratively process "$" update files ... generating a temporary update file and applying the temporary update file to the source file ... creating a temporary source file. The first iteration involved updating the base source file ... additional iterations would update the previously generated temporary source file.
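a rough sketch (python, obviously nothing like the original implementation) of the single-level merge being described ... apply "./ d", "./ r" and "./ i" control cards against a source deck keyed by sequence number, producing the temporary file that then got assembled; iterating the same function over a list of update files (one per level) gives the multi-level behavior:

def apply_update(source, update_cards):
    # source: list of (seqno, text) records in ascending seqno order.
    # update_cards: control cards ("./ d|r|i ...") plus any new source lines
    # following an r/i card. new records carry seqno None in this sketch; in the
    # real system the "$" preprocessor generated sequence numbers for them, so
    # this sketch doesn't try to handle later cards that touch unnumbered lines.
    out = list(source)
    i = 0
    while i < len(update_cards):
        parts = update_cards[i].split()
        op, frm = parts[1], int(parts[2])
        to = int(parts[3]) if len(parts) > 3 else frm
        i += 1
        body = []
        while op in ("r", "i") and i < len(update_cards) and not update_cards[i].startswith("./"):
            body.append((None, update_cards[i]))
            i += 1
        if op == "i":
            frm = frm + 1                  # "insert after record frm" ... nothing removed
        head, j = [], 0
        while j < len(out) and (out[j][0] is None or out[j][0] < frm):
            head.append(out[j]); j += 1
        if op != "i":                      # d and r remove records frm..to
            while j < len(out) and out[j][0] is not None and out[j][0] <= to:
                j += 1
        out = head + body + out[j:]
    return out

deck = [(10, "LA   1,COUNT"), (20, "A    1,=F'1'"), (30, "ST   1,COUNT")]
cards = ["./ r 20 20", "AL   1,=F'1'", "./ i 30", "BNZ  OVERFLOW"]
for seq, text in apply_update(deck, cards):
    print(seq, text)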
I had replicated/archived an H/I system (all the source and all the processes and files needed to generate running systems) on multiple tapes. Unfortunately the datacenter i was using in the mid-80s had an internal operational problem ... they had a rash of operators mounting valid tapes as scratch tapes and destroying data. The H/I archive tapes were wiped out in this period.
As a small reprieve ... not too long earlier ... Melinda
https://www.leeandmelindavarian.com/Melinda#VMHist
was looking for early examples of the multi-level source update process. i managed to pull a complete package (execs, control files, executables, etc) from the h/i archive tapes (prior to their getting wiped).
In the early time-frame, an MIT student (who has since become quite well known for work he has done on the internet) was given the task of building an application that would attempt to merge multiple independent update hierarchies. This is sort of software re-use ... in the sense that the same common source was possibly used by different organizations for developing different target solutions.
As the use of the multi-level update feature became more widespread ... the "$" preprocessing support and the iterative application were merged into the base update command. Now rather than creating multiple iterative temporary files ... it would manage everything in memory, applying things as it went along ... and not producing a temporary source file until after the last update had been applied.
misc. past posts about the cp67 h/i operating system work
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
a few past posts on cms source update
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003.html#62 Card Columns
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Wed, 25 May 2005 13:11:42 -0600Ian G writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Wed, 25 May 2005 13:46:03 -0600I thought the discussion might have been pkix &/or x9f related ... as an easier step than starting to search my own archives ... i've done a quick web search ...
one entry in pkix thread
http://www.imc.org/ietf-pkix/old-archive-01/msg01776.html
here is a recent m'soft article mentioning the subject:
http://www.microsoft.com/technet/itsolutions/wssra/raguide/CertificateServices/CrtSevcBP_2.mspx
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/security/mngpki.mspx
i also believe that it showed up in x9f5 work on PKI CPS ... but i
would have to check my archives ... however here is a pointer to a
verisign cps ... that the search engine claims contains words on
revoking a CA (ra, etc):
http://www4.ncsu.edu/~baumerdl/Verisign.Certification.Practice.Word.doc
another verisign related reference:
http://www.verisign.com/repository/cis/CIS_VTN_CP_Supplement.pdf
also, i remember OCSP coming on the scene sometime after I had been going on for awhile about how CRLs were 1960s technology (at least in the payment card business) ... before the payment card industry moved into the modern online world with online authentication & authorization (moving away from having to manage credentials/certificates that had been designed for an offline paradigm).
one might assert that OCSP is a rube-goldberg solution trying to preserve some facade of the usefulness of certificates (designed to solve real-world offline paradigm issues) in an online world (somehow avoiding having to make a transition to a straight online paradigm and preserving the appearance that stale, static, redundant and superfluous certificates serve some useful purpose).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Wed, 25 May 2005 14:25:13 -0600Anne & Lynn Wheeler writes:
several years ago, we did a survey of corporate databases for security issues ... one was a field by field analysis of types of information and the vulnerability. for instance ... any information where we could find a business process that made use of that kind of information for authentication ... was labeled as having an "id theft" vulnerability. several business processes made use of knowledge about date-of-birth for authentication purposes ... and therefore date-of-birth was given an id-theft attribute (we made claims at the time about doing semantic analysis rather than purely syntactic security analysis).
these kinds of information were the types of things being looked at in the early 90s to grossly overload x.509 identity certificates ... in the anticipation that some random relying-parties in the future (after the certificate was issued) might find the information to be of some use. it was issues like these that prompted some institutions in the mid-90s to retrench to relying-party-only certificates ... effectively containing only a pointer to the real information in some accessible database. however, these databases were typically part of an overall relationship management and administrative function ... which made the function of stale, static certificate redundant and superfluous.
Now in the late 90s, fstc
http://www.fstc.org/
was looking at the fast protocol, which would respond to certain kinds of questions with yes/no. they would utilize the existing 8583 interconnect network structure ... but extend 8583 transactions to include non-payment questions ... like whether the person is an adult or not. The real-world, online authoritative agency could simply respond yes/no to the adult question w/o divulging a birth date ... which represents an identity theft vulnerability.
part of the issue prompting the fast protocol was the appearance of a number of online services selling transactions on whether a person was an adult or not. these online services were having people register and supply credit card information for doing a "$1 auth" transaction that would never clear. Their claim was that since people that had credit cards had to sign a legal contract, and to sign a legal contract, you had to be an adult ... then anybody that performed a valid credit card transaction must be a valid adult. As a pseudo credit card merchant they paid maybe 25 cents to perform the "$1 auth" to get back a valid authorization response. Since they never cleared the transaction ... it never actually showed up as a transaction on the statement (although there was a $1 reduction in the subject's open-to-buy for a period).
An issue was that the adult verification services were making quite a bit of money off of being able to perform transactions that brought in only 25 cents to the financial institution ... and the whole thing involved online, real-time responses ... no stale, static, redundant and superfluous certificates designed to address real-world offline paradigm issues (and which also tended to unnecessarily expose privacy and even identity theft related information).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Wed, 25 May 2005 14:50:43 -0600"Anders Rundgren" writes:
now for a small topic drift ... slightly related posting
https://www.garlic.com/~lynn/2005i.html#33 Improving Authentication on the Internet
in the above ... fast could have certificate-less, digitally signed
transactions approving the operation. in much the same way that
x9.59 transactions
https://www.garlic.com/~lynn/x959.html#x959
could be certificate-less and digitally signed ... fast transactions could involve matters other than approving a specific amount of money (i.e. standard payment transaction getting back approval that the issuing institution stood behind the amount of the transaction). in much the same way that an x9.59 transaction wouldn't be viewed valid unless the corresponding digital signature correctly verified ... the requirement to have the subject's digital signature on other types of requests would also serve to help protect their privacy.
the fast age thing was of interest ... because it eliminated having to divulge a birthdate (an identity theft issue) while still confirming whether or not a person was an adult. There was also some fast look at zip-code verification in addition to age verification. Some number of people were proposing that certificates could follow the driver's license offline credential model ... and that anything that might be on a driver's license (and more) would be fair game to put into a certificate. This overlooked the fact that driver's licenses were really offline paradigm credentials ... and as the various relying parties acquired online connectivity ... there was less & less of a requirement for information content on the driver's license itself (it could migrate more to the relying-party-only certificate model with little more than an account number pointing to the information in an online repository ... little things like aggregated information ... number of outstanding parking tickets ... etc).
the "fast" issue (especially age verification, not actually age ... just yes/no as to being an adult) for the financial institutions was that while quite a bit of money is being made by the online age verification services (... and there is almost no incrmental costs needed to add such an option to the existing 8583 infrastructure and providing internet access) most of the money flow into the age verification operations comes from a segment of the internet market that many find embarrassing ... and as a result many financial institutions are ambivalent about being involved.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Worth of Verisign's Brand Newsgroups: netscape.public.mozilla.crypto Date: Wed, 25 May 2005 15:11:55 -0600Anne & Lynn Wheeler writes:
and some number of other kinds of data organizations ... i've made a couple posts in comp.database.theory over the years about 3-value logic ... which is a difficult problem for many relational paradigms.
some specific postings on 3-value logic
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004f.html#2 Quote of the Week
https://www.garlic.com/~lynn/2004l.html#75 NULL
https://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#15 Amusing acronym
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Improving Authentication on the Internet Newsgroups: netscape.public.mozilla.security Date: Thu, 26 May 2005 13:10:45 -0600Anne & Lynn Wheeler writes:
1) authentication .... can you supply the matching value
2) grouping ... adult, child, senior citizen
3) current age ... say for life insurance
in cases #2 & #3, we claimed that answers could be returned w/o returning the actual birth date.
the problem is the fundamental shared-secret issues
https://www.garlic.com/~lynn/subintegrity.html#secrets
security guidelines tend to mandate that a unique shared-secret is required for every unique security domain. The vulnerability is that somebody with access to the shared-secret in one security domain can perform authentication impersonation fraud in another security domain (say your local neighborhood garage isp and your online banking operation).
for some people this has led to them having to manage and administer scores of unique shared-secrets.
at the same time, it has been recognized that people have a hard time remembering even one or two such pieces of data ... so many infrastructures (for a long time) have used personal information as authentication shared-secrets (birth date, mother's maiden name, SSN#, place of birth, etc).
this puts the infrastructures that expect people to manage and administer scores of unique shared-secrets quite at odds with the infrastructures that recognize such a task is fundamentally out-of-synch with long years of human nature experience ... and have chosen instead to use easier-to-remember personal information for authentication shared-secrets.
the problem with personal information authentication shared-secrets is that the same information tends to crop up in lots of different security domains. as a result, any personal information that might be used in any domain as personal information authentication shared-secret ... becomes identity theft vulnerable.
we had been asked to come in and consult on something (that was going
to be called e-commerce) with this small client/server company in
silicon valley that had this technology called ssl
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
among other things we eventually went around and did some end-to-end
business walkthrus with these organizations called certification
authorities about something called an ssl domain name certificate
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
now they were leveraging this basic business process called public keys. basically there is this cryptography stuff called asymmetric cryptography ... where the keys used to decode stuff are different from the keys used to encode stuff. A business process is defined using this technology that is frequently called public keys. Basically the business process defines one of a key-pair as "public", freely available, and the other of the key-pair as "private", kept confidential and never divulged. There is an adjunct authentication business process called digital signature authentication ... where the private key is used to encode a hash of a message. The relying party can calculate a hash of the same message and use the corresponding public key to decode the digital signature to come up with the originally calculated hash. If the two hashes are the same, it demonstrates that the message hasn't been altered and authenticates the originator.
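to make that flow concrete ... a toy walk-through (python, textbook RSA with tiny, utterly insecure demonstration numbers, p=61 & q=53 ... nothing like a production implementation): hash the message, encode the hash with the private key, then have the relying party recompute the hash, decode the signature with the public key and compare:

import hashlib

n, e, d = 3233, 17, 2753     # toy public modulus/exponent and private exponent (p=61, q=53)

def _hash(message: bytes) -> int:
    # real systems use the full digest; it is reduced mod n here only because the toy key is tiny
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(_hash(message), d, n)                 # encode the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == _hash(message)    # decode with the public key, compare hashes

msg = b"transfer 12.34 to account 42"
sig = sign(msg)
assert verify(msg, sig)                              # unaltered message, correct originator
print(verify(b"transfer 9999.00 to account 666", sig))   # almost certainly False with this toy modulus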
from three factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
digital signature verification is a form of something you have
authentication, demonstrating that the originator has access to and
use of a specific private key.
possibly somewhat because of early work on SSL and digital signatures,
we were brought in to work on both the cal. and fed. digital signature
legislation ... minor refs
https://www.garlic.com/~lynn/aepay11.htm#61 HIPAA, privacy, identity theft
https://www.garlic.com/~lynn/aepay12.htm#4 Confusing business process, payment, authentication and identification
https://www.garlic.com/~lynn/aadsm17.htm#23 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#24 Privacy, personally identifiable information, identity theft
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda
there were these other business operations, frequently referred to as trusted-third-party certification authorities (TTP CAs), which were interested in a business case that involves a $100/annum certificate for every person in the US (basically a $20b/annum revenue flow).
Now possibly the most prevalent internet authentication platform is
RADIUS ... used by majority of ISPs for authenticating client service.
This was originally developed by Livingston for their modem pool
processors (long ago and far away, I once was involved in putting
together a real RADIUS installation on a real Livingston box).
https://www.garlic.com/~lynn/subpubkey.html#radius
a couple recent radius related postings
https://www.garlic.com/~lynn/2005i.html#2 Certificate Services
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#4 Authentication - Server Challenge
https://www.garlic.com/~lynn/2005i.html#23 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005i.html#27 REPOST: Authentication, Authorization TO Firewall
https://www.garlic.com/~lynn/2005i.html#28 REPOST: Authentication, Authorization TO Firewall
RADIUS has since become an ietf standard and there are some number of freely available RADIUS implementations. The default RADIUS primarily uses shared-secret based authentication. However, there are implementations that simply upgrade the registration of a shared-secret with the registration of a public key ... and perform digital signature verification (something you have authentication) instead of shared-secret matching.
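a rough sketch of that registration swap (python ... this is not the RADIUS wire protocol, and the user names, secrets and toy verification function are all invented) ... the same per-user record in the existing relationship database carries either a shared-secret or a registered public key, only the credential check changes, and the rest of the permissions/accounting machinery is untouched:

import hashlib, hmac

def verify_signature(public_key, message: bytes, signature: int) -> bool:
    # stand-in digital-signature check so the sketch is self-contained: a toy
    # textbook-RSA verify; public_key is an (n, e) pair with insecure demo numbers
    n, e = public_key
    return pow(signature, e, n) == int.from_bytes(hashlib.sha256(message).digest(), "big") % n

users = {
    # legacy record: shared-secret still registered
    "dialup-alice": {"shared_secret": "s3cret", "permissions": ["ppp"]},
    # upgraded record: a public key registered in lieu of the shared-secret
    "dialup-bob": {"public_key": (3233, 17), "permissions": ["ppp", "shell"]},
}

def authenticate(user: str, message: bytes, credential) -> bool:
    record = users.get(user)
    if record is None:
        return False
    if "public_key" in record:              # digital signature verification
        return verify_signature(record["public_key"], message, credential)
    return hmac.compare_digest(record["shared_secret"], credential)   # legacy shared-secret match

def authorize(user: str) -> list:
    # the existing permissions/relationship management is untouched by the swap
    return users[user]["permissions"]

# bob signs with the matching toy private exponent (d=2753); alice still uses a password
h = int.from_bytes(hashlib.sha256(b"start ppp session").digest(), "big") % 3233
assert authenticate("dialup-bob", b"start ppp session", pow(h, 2753, 3233))
assert authenticate("dialup-alice", b"start ppp session", "s3cret")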
There are lots of infrastructures where it would be possible to
preserve and leverage the existing relationship management and
administrative infrastructure (handling both authentication and
authorization information) by simply replacing shared-secret
registration with public key registration.
https://www.garlic.com/~lynn/subpubkey.html#certless
However, there is lots of publicity about TTP CA-based PKI infrastructures, possibly because it has represented a $20b/annum revenue flow to the TTP-CA PKI industry.
By comparison, upgrading an existing relationship administrative and management infrastructure represents a cost factor for relying party infrastructures (as opposed to the motivation of a $20b/annum revenue flow for the TTP CA-based PKI industry). In the case of freely available RADIUS implementations there is little or no revenue flow to motivate stake-holders for widely deployed digital signature authentication (leveraging existing business practices and relationship management and administration infrastructures) ... other than that it represents a big impact on the harvesting of shared-secrets for the authentication fraud flavor of identity theft.
The issue is especially significant for infrastructures that rely
heavily on widely available shared-secret information as a means of
authentication (in many cases common personal information). Another
flavor is transaction-based operations where the authentication
information is part of the transactions and therefore part of
long-term transaction logs/files ... an example from security
proportional to risk posting:
https://www.garlic.com/~lynn/2001h.html#61
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Secure FTP on the Mainframe Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 27 May 2005 14:18:07 -0600Sabireley@ibm-main.lst (Steve Bireley) writes:
Arpanet had a lot of similarities to JES2 networking ... homogeneous networking, host to front end processor (in arpanet case, called an IMP), limited number of nodes, no gateway functionality, etc.
some minor references (the following NCP references is not to
the mainframe variety):
https://www.garlic.com/~lynn/internet.htm#27 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/internet.htm#28 Difference between NCP and TCP/IP protocols
the above makes reference to RFC 721 ... out-of-band control signals in a host-to-host protocol ... and some of the difficulties of converting applications like FTP to TCP.
from my RFC index
https://www.garlic.com/~lynn/rfcietff.htm
summary for RFC 721
https://www.garlic.com/~lynn/rfcidx2.htm#721
721
Out-of-band control signals in a Host-to-Host Protocol, Garlick
L., 1976/09/01 (7pp) (.txt=13566) (Refs 675)
as always ... clicking on the ".txt=nnnn" field in an rfc summary
fetches the actual RFC.
now, the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
originated at science center
https://www.garlic.com/~lynn/subtopic.html#545tech
in many ways was a lot more robust ... it didn't have the network size limitation and effectively had a type of gateway function in every node. this contributed to the internal network being larger than the arpanet/internet until approx. the summer of 1985. the big change in the arpanet was the great changeover on 1/1/83 to internetworking protocol ... and getting gateway functionality. arpanet had almost its limit of 255 nodes at the switchover.
jes2 sort of had the 255 node limitation ... jes2 networking had come from hasp ... quite a bit having originated at TUCC. They used the hasp pseudo device table to define nodes. A typical hasp/jes2 installation might have 60-80 pseudo devices defined ... so that actually only left maybe 170-190 positions for defining network nodes.
jes2 had (at least) two problems ... one was that network control information was all jumbled up with other control information in the header and it would trash any incoming traffic if either the origin node or the destination node were not defined in the limited table.
shortly after the 1/1/83 conversion to internetworking protocol,
the internal network passed 1000 nodes
https://www.garlic.com/~lynn/internet.htm#22
and was way beyond any JES2 network addressing capability at the time.
because of the tendency for trashing traffic where it didn't have the origin and/or destination nodes defined ... and its inability to even come close to defining all the nodes in the internal network ... jes2 nodes were pretty much relegated to boundary nodes.
the other problem was the jumbling of information in jes2 control headers. a jes2 system at a different version or release level could have slightly different header definition and result in crashing other jes2 systems and bringing the whole mvs systems crashing down.
the standard internal networking software developed an extensive library of gateway code for different hasp and jes2 releases and versions. When a jes2 system would crash, bringing down the mvs system, it would frequently be blamed on the standard internal networking software not correctly protecting one jes2 from another jes2. It quickly became the responsibility of the gateway code in the standard internal networking nodes to correctly re-arrange jes2 header information to correspond to the version/level of the jes2 system it was directly communicating with (regardless of what jes2 system such traffic might have originated from).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: More Phishing scams, still no SSL being used... Newsgroups: netscape.public.mozilla.crypto Date: Fri, 27 May 2005 15:38:26 -0600pgut001@cs.auckland.ac.nz (Peter Gutmann) writes:
a large institution was looking at converting their customer base from shared-secret authentication to public key authentication. what they were to do was to upgrade their software to handle public keys and register public keys for all of their clients.
then they were to ship their master client account file off to a TTP CA, which would munge and reformat the bits in the account records and generate a digital certificate for each account record, appropriately digitally signed (selectively leaving out many bits and fields because of privacy concerns). for this re-formatting of each account record and the CA's digital signature ... the institution would only be charged $100/annum for every account record processed (well in excess of $1b US).
the institution would then distribute the resulting certificates to each of their clients so that in the future ... the clients could create an electronic message and digitally sign it. The client would package the electronic message, the digital signature and the ($100/annum) digital certificate and send it off to the institution. The institution would receive the transmission, pull the account number from the message, and retrieve the appropriate account record, validating the digital signature with the on-file public key (from the account record). They then could disregard the stale, static, stunted, abbreviated, redundant and superfluous ($100/annum) digital certificate and continue processing the message.
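purely as illustration (not the institution's actual system; the key type, account number and python library are my own choices), a minimal sketch of that certificate-less flow ... register the public key in the account record, sign the message, verify against the on-file key, and never consult the certificate riding along with the message:

# sketch only: client key registered with the institution; the certificate
# that would accompany each message is represented as opaque bytes and is
# never consulted.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

client_key = ec.generate_private_key(ec.SECP256R1())
account_file = {"acct-1234": client_key.public_key()}   # on-file public key

message = b"acct-1234: pay merchant 59.95"
signature = client_key.sign(message, ec.ECDSA(hashes.SHA256()))
redundant_certificate = b"...stale, static, superfluous..."   # ignored below

acct = message.split(b":")[0].decode()                   # pull account number
on_file_key = account_file[acct]                         # retrieve account record
on_file_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises if bad
print("verified against the on-file public key; certificate never consulted")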
executives eventually scrapped the project before they actually got into sending off the master account file.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Behavior in undefined areas? Newsgroups: comp.arch,alt.folklore.computers Date: Sat, 28 May 2005 09:53:05 -0600"Del Cecchi" writes:
endicott project for the cp67 H&I kernels. Basically the "H" cp67 updates were to create virtual machines that conformed to the 370 architecture definition (instead of 360/67 definition). The "I" cp67 updates were for the cp67 kernel to run on a 370 architecture.
In regular use was:
CP67-L on real 360/67 providing 360/67 virtual machines
CP67-H in 360/67 virtual machine providing 370 virtual machines
CP67-I in 370 virtual machine providing 370 virtual machines
CMS in 370 virtual machine
a year before the first 370/145 engineering machine was running.
when endicott finally got a 370/145 engineering machine with virtual memory hardware support running ... they wanted to have a copy of CP67-I kernel to validate the hardware operation.
So the kernel was booted on an engineering machine that had something like a knife switch in lieu of a real "IPL" button ... and it failed. After some diagnostics ... it turned out that the engineers had implemented two of the new extended "B2" opcodes reversed.
Normal 360 instruction opcodes are one byte ... 370 introduced new "B2" opcodes ... where the 2nd byte is actually the instruction opcode. Some vague recollection was that the reversed "B2" opcodes were RRB (reset reference bit) and PTLB (purge table lookaside buffer). The kernel was then quickly patched to correspond to the (incorrect) engineering implementation and the rest of the tests ran fine.
the "L", "H", "I" designations come somewhat from the initial pass at
doing multi-level source update system was being built to support the
effort. this initial pass was hierarchical ... so "L" updates were
applied to the base source, then the "H" updates could be applied, and
then "L" source updates. recent multi-level source update posting
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
in the vm/370 release 3 time-frame, one of the vm370 product test
people had developed an instruction execution regression program to
test vm370 compliance. however, running the full test suite would
actually cause vm370 to crash. in that time frame ... I got pegged to
release a bunch of my system enhancements as the "resource manager"
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
we developed an automated benchmarking process where we could define
arbitrary workloads and configurations for validating and calibrating
the resource manager ... eventually we did a series of 2000 benchmarks
that took over 3 months elapsed time
https://www.garlic.com/~lynn/submain.html#bench
however ... leading up to that effort ... we found a number of defined workloads (extreme conditions) that also would reliably (predictably) crash the kernel. so i undertook an effort to rewrite sections of the kernel to eliminate all the kernel failures caused by either our benchmark tests or the instruction execution regression program that the guy in product test had. these changes were also included as part of the released "resource manager" changes.
misc. past h/i postings:
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#58 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
https://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Friday question: How far back is PLO instruction supported? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 28 May 2005 13:18:45 -0600edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
the first thing we had to come up with was a mnemonic that was charlie's initials.
charlie had been doing a lot of fine-grain multiprocessor locking on
cp67 ... when he invented compare&swap. initially trying to
get it in the 370 architecture ... the pok owners of the 370
architecture redbook ... aka ... drift warning ... a superset of the
principle of operations ... basically done in cms script with
conditionals ... one conditional got you the full redbook ... the
other conditional got the subset published as the principle of
operation ... this redbook is named for the color of the 3-ring binder
it was distributed in ... different than the commonly known
"redbooks" available to customers ...
http://www.redbooks.ibm.com/
... now returning to regularly scheduled programming ...
... the pok owners of the 370 architecture redbook .... said that there wasn't any justification for an smp-specific (locking) instruction ... that the POK operating system people were more than happy with the global kernel spinlock using test-and-set. In any case, to get compare&swap justified for 370 architecture ... we had to come up with a justification for compare&swap use that wasn't smp specific; thus were born the descriptions for compare&swap in multi-threaded (interruptable) code (whether running on a uniprocessor or multiprocessor).
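as a rough illustration of that non-smp-specific usage pattern (my own sketch, not the principle-of-operations example; python has no hardware compare&swap, so the tiny lock below just stands in for the atomic instruction) ... multi-threaded code updating a shared value with the classic fetch/compute/compare&swap retry loop instead of holding a global spinlock across the update:

import threading

class Cell:
    def __init__(self, value=0):
        self.value = value
        self._cas_lock = threading.Lock()   # stand-in for the atomic CAS instruction

    def compare_and_swap(self, expected, new):
        """atomically: if value == expected, store new and return True."""
        with self._cas_lock:
            if self.value == expected:
                self.value = new
                return True
            return False

counter = Cell(0)

def add(n):
    for _ in range(10000):
        while True:                      # the classic CAS retry loop
            old = counter.value          # fetch
            if counter.compare_and_swap(old, old + n):
                break                    # nobody got in between; done
            # another thread got in first; retry with the fresh value

threads = [threading.Thread(target=add, args=(1,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)   # 40000 ... no lost updates, no global spinlock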
misc. smp, multiprocessor, compare&swap, etc postings
https://www.garlic.com/~lynn/subtopic.html#smp
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The 8008 Newsgroups: alt.folklore.urban,alt.folklore.computers Followup-To: y Date: Sat, 28 May 2005 13:27:02 -0600stanb45@dial.pipex.com (Stan Barr) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Development as Configuration Newsgroups: comp.databases.theory Date: Sat, 28 May 2005 14:05:37 -0600"dawn" writes:
they had two people that had been at oracle who we had worked with on
parallel oracle
https://www.garlic.com/~lynn/95.html#13
for ha/cmp and scalable distributed lock manager
https://www.garlic.com/~lynn/subtopic.html#hacmp
and were now at this startup responsible for something called the commerce server (this startup also had this technology called SSL). we worked with them on doing payment transactions and something called a payment gateway (the collection now somewhat referred to as e-commerce).
we frequently advised (possibly harangued) many people at the startup that taking a typical straightforward, well designed and well tested straight-line application and turning it into a service could typically require 4-10 times the code (of the straight-line application) and ten times the effort.
a simple example was that after the straight-line application was built and tested .... we built something like a five-state by 20-30 possible failure mode matrix ... and required that each possible condition could be identified/diagnosed and recovered from (well documented remediation processes and possibly automated recovery). eventually had something like a 40 page diagnostic document for webserver operations and payment gateway operations (and a lot more code).
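a minimal sketch of that kind of matrix-driven coverage check (the state names, failure modes and counts here are invented purely for illustration ... the real exercise produced the 40 page diagnostic document):

# every (state, failure) combination must map to a documented
# diagnosis/remediation before the service is considered deployable.
states = ["connecting", "authenticating", "authorizing", "settling", "reporting"]
failures = ["timeout", "dropped-connection", "garbled-response",
            "duplicate-reply", "gateway-unreachable"]   # real matrix had 20-30

remediation = {
    ("connecting", "timeout"): "retry with backoff; alert ops after 3 failures",
    ("settling", "duplicate-reply"): "idempotency check against transaction log",
    # ... one entry per cell in the full matrix ...
}

uncovered = [(s, f) for s in states for f in failures if (s, f) not in remediation]
if uncovered:
    print(f"{len(uncovered)} state/failure combinations still lack a documented remediation")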
this was in somewhat the timeframe of lots of object development platforms ... many primarily oriented towards quickly turning out fancy gui toy demos. one that was around in this era was taligent. we spent a one-week JAD with taligent exploring what would be required to take their existing infrastructure and transform it into a business critical platform for delivering service oriented applications (moving a lot of the traditional service oriented operations out of having to be repeatedly reimplemented in every application ... and into the underlying platform). The net was an estimate of approximately a 30 percent hit to their existing code base ... and approximately 1/3rd additional new frameworks (specifically oriented towards service oriented operations ... as opposed to the more commonly found frameworks involved in screen graphics).
we even took a pass at trying to drop some number of 2167a certification requirements into the infrastructure .... attempting to significantly reduce the repeated effort involved in building and deploying service oriented applications.
misc. past taligent refs:
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#60 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#76 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002m.html#60 The next big things that weren't
https://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003e.html#28 A Speculative question
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
misc. past business critical &/or service related postings
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#32 Mainframes & Unix
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/98.html#18 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2001b.html#25 what is interrupt mask register?
https://www.garlic.com/~lynn/2001c.html#16 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#56 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2003.html#38 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004m.html#51 stop worrying about it offshoring - it's doing fine
https://www.garlic.com/~lynn/2004m.html#56 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005.html#18 IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#42 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#16 Today's mainframe--anything to new?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Development as Configuration Newsgroups: comp.databases.theory,alt.folklore.computers Date: Sat, 28 May 2005 14:39:54 -0600re:
a little more SOA topic drift ... but one of the other SOA
characteristics frequently is multi-tier architecture. Somewhat prior
to starting ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
my wife had co-authored and presented the response to a gov. RFI for a large campus-like distributed environment. In the RFI response ... she had formulated the principles for multi-tier architecture. We then expanded on those principles and started presenting them in customer executive briefings as 3-tier architecture.
unfortunately this was in the SAA period ... which could be
characterized as the company attempting to put the client/server
(2-tier) genie back into the bottle ... which frequently put us at
direct odds with the SAA crowd. We were also heavily pushing enet for
connectivity. The SAA crowd were heavily in with the token-ring people
who were advocating corporate environments with something like 300
stations on a single lan (which aided in pushing the idea of a PC as
an extremely thin client to the corporate mainframe). this is somewhat
related to some terminal emulation postings:
https://www.garlic.com/~lynn/subnetwork.html#emulation
Somebody in the T/R crowd turned out a comparison of enet & T/R, claiming that enet typically degraded to 1mbit/sec (or less) effective thruput. This was about the time of an acm sigcomm paper about typical enet degrading to 8.5mbit/sec effective thruput under a worst case scenario with all stations in a low-level device driver loop constantly transmitting minimum size packets.
various past postings on saa, t/r, and coming up with 3-tier
and middle layer architectures
https://www.garlic.com/~lynn/subnetwork.html#3tier
T/R disclaimer ... my wife is listed as co-inventor on one of the token passing patents from the 70s.
in this time-frame she was in pre-school and con'ed into going to POK
to be in charge of loosely-coupled architecture (loosely-coupled
being one of the SOA buzzwords). while there she authored
Peer-Coupled Shared Data architecture which took years to show
up in places like parallel sysplex
https://www.garlic.com/~lynn/submain.html#shareddata
From: <lynn@garlic.com> Newsgroups: microsoft.public.sqlserver.ce Subject: Re: SqlServerCE and SOA - an architecture question Date: Sat, 28 May 2005 13:23:28 -0700Darren Shaffer wrote:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The 8008 Newsgroups: alt.folklore.urban,alt.folklore.computers Date: Sat, 28 May 2005 17:00:44 -0600a primary reason for collecting rain water for wash day ... was that the well water was extremely hard (had lots of minerals in it) ... a rain barrel was an effective way of getting "soft" water for wash day.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Friday question: How far back is PLO instruction supported? Newsgroups: bit.listserv.ibm-main Date: Sat, 28 May 2005 18:12:25 -0600Chris_Craddock@ibm-main.lst (Craddock, Chris) writes:
more recently there has been some defection from the mainframe to other platforms. there is some potential future business in keeping customers at least on some mainframe platform ... as opposed to the possibility that they will drift away to other platforms.
some recent postings on software pricing theme
https://www.garlic.com/~lynn/2005g.html#51 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#53 "Best practices" or "Best implementations"?
https://www.garlic.com/~lynn/2005g.html#54 Security via hardware?
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Listserver for DFSMS/HSM Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Mon, 30 May 2005 09:30:45 -0600Raymond.Noal@ibm-main.lst (Raymond Noal) writes:
history of listserv
http://www.lsoft.com/products/listserv-history.asp
listserv somewhat grew up on bitnet ... from an internal corporate precursor.
the internal network was larger than arpanet/internet from
just about the start until possibly mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (and earn) was an application of some of the internal network
technology (but bitnet nodes weren't included in the calculation of
internal network nodes)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
an old posting referencing startup of earn
https://www.garlic.com/~lynn/2001h.html#65
recent posting discussing some internal network characteristics
https://www.garlic.com/~lynn/2005i.html#37 Secure FTP on the Mainframe
some of the internal listserv evolution has been blamed on stuff i
was doing. there were some number of studies in the early 80s about
what was going on. one involved assigning a researcher to study what i
was doing. for 9 months they sat in the back of my office taking notes
on how i communicated. they also had access to all my incoming &
outgoing email and logs of all my instant messages for analysis. the
study also turned into a stanford phd thesis (joint between computer
ai and language depts) and material for subsequent papers and books.
some discussion in collection of postings on cmc (computer mediated
communication)
https://www.garlic.com/~lynn/subnetwork.html#cmc
the listserv search page
http://www.lsoft.com/lists/list_q.html
doesn't list either dfsms or hsm. also the bit.listserv usenet groups (bitnet listserv groups gatewayed to usenet) don't indicate anything obviously dfsms/hsm.
does have some listings for ADSM (now TSM) ... which grew up out of an
internal backup/archive tool that I originally wrote
https://www.garlic.com/~lynn/submain.html#backup
doing dfsms search in google groups shows up references primarily
in bit.listserv.ibm-main (ibm-main listserv group gatewayed to
usenet)
http://groups-beta.google.com/groups?as_q=dfsms&num=100&scoring=d&hl=en&as_epq=&as_oq=&as_eq=&as_ugroup=&as_usubject=&as_uauthors=&lr=&as_drrb=q&as_qdr=&as_mind=1&as_minm=1&as_miny=1981&as_maxd=30&as_maxm=5&as_maxy=2005&safe=off
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: defeating firewalls made easy Newsgroups: comp.security.firewalls Date: Mon, 30 May 2005 10:04:36 -0600ibuprofin@painkiller.example.tld (Moe Trin) writes:
another aspect from posting in this n.g.
https://www.garlic.com/~lynn/2005i.html#27
https://www.garlic.com/~lynn/2005i.html#28
slashdot ... History of Netscape and Mozilla (browser)
http://slashdot.org/articles/05/05/29/1314243.shtml?tid=114&tid=1
some from the commerce server side
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
and from SOA view point
https://www.garlic.com/~lynn/2005i.html#42
https://www.garlic.com/~lynn/2005i.html#43
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Where should the type information be? Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers Date: Mon, 30 May 2005 11:29:30 -0600small postscript to earlier reply
stu had created cms script command at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
for document formatting. later at the science center, in '69, G, L, and
M invented GML (which has since morphed into SGML, HTML, XML, FSML,
etc)
https://www.garlic.com/~lynn/submain.html#sgml
and gml tag support was added to script document formatting. however, it was quickly realized that gml tags were useful for more than just specifying document formatting.
however, even before the invention of gml, bob adair was a strong advocate of self-describing data. the performance statistics gathering process ... which ran on the cambridge machine continuously ... wrote files to tape and there was always a header that described the format and fields in the data ... so that even years later the data could be retrieved and analyzed.
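a minimal, modern-day sketch of that self-describing-data idea (the field names, formats and file layout here are invented for illustration ... the original tapes used their own header conventions): every file starts with a header naming the fields and their formats, so a reader years later can decode the records without the original program.

import json, struct

FIELDS = [("timestamp", "I"), ("cpu_busy_pct", "H"), ("page_rate", "H")]
record_fmt = "".join(fmt for _, fmt in FIELDS)

def write_samples(path, samples):
    with open(path, "wb") as f:
        header = json.dumps({"fields": FIELDS, "struct": record_fmt}).encode()
        f.write(len(header).to_bytes(4, "big") + header)      # self-describing header
        for sample in samples:
            f.write(struct.pack(record_fmt, *sample))

def read_samples(path):
    with open(path, "rb") as f:
        hlen = int.from_bytes(f.read(4), "big")
        header = json.loads(f.read(hlen))                     # recover the layout
        fmt, names = header["struct"], [n for n, _ in header["fields"]]
        size = struct.calcsize(fmt)
        while chunk := f.read(size):
            yield dict(zip(names, struct.unpack(fmt, chunk)))

write_samples("perf.dat", [(1000, 73, 12), (1060, 81, 15)])
print(list(read_samples("perf.dat")))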
some of this legacy from the mid-60s was still around over ten years
later when i was getting the resource manager ready to ship. we had
over ten years of system performance statistics that could be used for
workload and thruput profiling (not only from the cambridge system but
also from a number of other internal systems as the methodology
migrated into general use). some past posts about workload and
performance profiling that went into calibrating the resource manager
... as well as the related technology evolving into things like
capacity planning
https://www.garlic.com/~lynn/submain.html#bench
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: XOR passphrase with a constant Newsgroups: sci.crypt Date: Mon, 30 May 2005 14:51:43 -0600"Andrew" writes:
the basic idea was repeated hashing of the passphrase ... the server would record N and the Nth hash of the passphrase. when the user connected, the server would send back N-1. the user would hash the passphrase N-1 times and send the result to the server. The server would hash it one more time and compare it with the previously recorded hash. If it compared, there was authentication ... the number would be decremented by 1 and the most recent hash recorded.
this was improved by having the server provide a salt & the number for the initialization ... and all subsequent iterations. the idea is that different servers would provide different salts ... so that the end users would be able to use the same passphrase for all environments.
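a minimal sketch of the scheme as described (hash function, salt value and N below are my own illustrative choices):

import hashlib

def iterate_hash(data: bytes, rounds: int) -> bytes:
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data

# initialization: server picks a salt and N, records N and the Nth hash
salt, N = b"server-A-salt", 1000
passphrase = b"correct horse battery staple"
server_state = {"n": N, "hash": iterate_hash(salt + passphrase, N)}

# one authentication round: server challenges with N-1
challenge = server_state["n"] - 1
response = iterate_hash(salt + passphrase, challenge)          # computed by the user
if hashlib.sha256(response).digest() == server_state["hash"]:  # server hashes once more
    server_state = {"n": challenge, "hash": response}          # roll the chain forward
    print("authenticated; next challenge will be", challenge - 1)

# the attack described below: a MITM substituting a tiny challenge (say 1)
# captures an early hash in the chain and can iterate it forward itself to
# forge responses for the next several hundred genuine challenges.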
supposedly this is resistant to MITM attacks w/o the end user having to carry anything (other than the memory of the passphrase ... which hopefully won't be entered on a compromised device).
the attack is for the MITM to intercept the salt and number and substitute a one (or other sufficiently small value). The MITM gets back the hash for the first round ... and then can iterate the hash for the additional rounds for the correct number. MITM now has information to generate correct authentication values for potentially several hundred rounds (for a specific server).
a possible countermeasure is for the end-user to carry some baggage to
track what is going on ... like recording the most recent N that they
had seen ... to recognize unauthorized uses (N having been decremented
by more than it should have been). however, this violates the original
design point justifying the implementation. if the end-user is going
to be carrying some baggage to track things like previous hash
iterations ... then they might as well go with something like a digital
signature and have the public key recorded (at the server) rather than
the Nth hash iteration of a passphrase.
https://www.garlic.com/~lynn/subpubkey.html#certless
couple past posts:
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003o.html#46 What 'NSA'?
other mitm posts
https://www.garlic.com/~lynn/subintegrity.html#mitmattack
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Regarding interrupt sharing architectures! Newsgroups: comp.arch.embedded,comp.arch,alt.folklore.computers Date: Tue, 31 May 2005 09:45:32 -0600"ssubbarayan" writes:
an i/o interrupt would have the processor load a new PSW (program status word; contains instruction address, interrupt masking bits, and a bunch of other stuff) from the i/o new psw location (this new psw normally specified masking all interrupts) ... and store the current PSW into the I/O old psw field.
OS interrupt routines were frequently referred to as FLIHs (first level interrupt handlers); a FLIH would identify the running task, save the current registers, copy in the old PSW information saving the instruction address, etc. There was typically a FLIH specific to each interrupt type (i/o, program, machine, supervisor call, external). The I/O FLIH would pick up the device address from the i/o old PSW field and locate the related control block structures.
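a toy model of that new/old PSW swap and FLIH dispatch (the structures below are invented for illustration and are not real 360 PSW layouts):

new_psw = {"addr": "io_flih", "masked": True}      # i/o new psw location
old_psw = None                                     # i/o old psw location

current_psw = {"addr": "user_program", "masked": False}
tasks = {"user_program": {"regs": None}}

def io_interrupt(device_address, regs):
    global current_psw, old_psw
    old_psw = dict(current_psw, device=device_address)  # store current PSW (+ device info)
    current_psw = dict(new_psw)                         # load new PSW, interrupts masked
    io_flih(regs)

def io_flih(regs):
    running = old_psw["addr"]                # identify the interrupted task
    tasks[running]["regs"] = regs            # save its registers
    device = old_psw["device"]               # device address from the old psw field
    print(f"FLIH: task {running} interrupted, device {device:03x} needs service")

io_interrupt(0x00E, regs=[0] * 16)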
the low-end and mid-range 360s typically had integrated channels ... i.e. they had a microprocessing engine that was shared between the microcode that implemented the 360 instruction set and the microcode that implemented the channel logic.
an identified thruput issue in 360 ... was that the SIO (start i/o) instruction would interrogate the channel, control unit, and device as part of its operation. channel distances to a control unit could be up to 200'. The various propagation and processing delays could mean that the SIO instruction could take a very long time.
for 370, they introduced a new instruction SIOF (start i/o fast) ... which would basically interrogate the channel and pass off the information and not wait to hear back from the control unit and device. a new type of I/O interrupt was also defined ... if there was unusual status in the control unit or the device ... in the 360, it would be indicated as part of the completion of the SIO instruction. With 370, the SIOF instruction had already completed ... so any unusual selection status back from the control unit or the device now had to be presented as a new i/o interrupt flavor.
however, interrupts themselves had a couple of thruput issues. first, in the higher performance cache machines ... asynchronous interrupts could have all sorts of bad effects on cache hit ratios. also, on large systems there was an issue with device i/o redrive latency. in a big system there might be a queue of requests from different sources for the same device. a device would complete some operation and queue an interrupt. from the time the interrupt was queued, until the operating system took the interrupt, processed the interrupt, discovered there was some queued i/o for the device ... and redrove i/o to the device, could represent quite a bit of device idle time. compound this with systems that might have several hundred devices ... and this could represent some amount of inefficiency.
370-XA added bump storage and expanded channel function with some high-speed dedicated asynchronous processors. a new type of processor instruction could add stuff to a device i/o queue managed by the channel processor. the processor could also specify that i/o completion was to be placed on a queue of pending requests ... also available to the processor. the channel processor could now do real-time queuing of device i/o completion and immediately restart the device with the next request in the queue.
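a toy back-of-envelope comparison of that redrive-latency point (all timings below are invented purely for illustration, not measurements): with interrupt-driven redrive the device sits idle for the OS interrupt/redrive pathlength between operations; with a channel-side work queue the next queued request starts almost immediately.

def total_time(requests, op_time, redrive_latency):
    # device busy op_time per request, idle redrive_latency between requests
    return requests * op_time + (requests - 1) * redrive_latency

requests, op_time = 1000, 3.0          # e.g. ~3ms per disk operation (illustrative)
os_pathlength = 1.0                    # few-10k-instruction interrupt/redrive path
channel_queue = 0.05                   # channel restarts device from its own queue

print("interrupt-driven redrive:", total_time(requests, op_time, os_pathlength), "ms")
print("channel-side queuing:    ", total_time(requests, op_time, channel_queue), "ms")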
this was also frequently referred to as i/o handling offload. the issue here was that some of the operating systems had a pathlength of something like a few tens of thousands of instructions to take an i/o interrupt and get around to doing an i/o device redrive.
in the late 70s ... just prior to introduction of 370-XA ... i was
wandering around the disk engineering lab ... and they had this
problem with regression testing of disk technology ("testcells") in
engineering development. they had tried doing some of this in a
traditional operating system environment ... and found that the
operating system MTBF was something like 15 minutes. As a result they
were scheduling stand-alone machine time between all the various
testcells contending for regression time. So i thought I would rewrite an
operating system i/o subsystem to be bullet proof against failures so they
could work with all testcells concurrently ... not having
to wait for stand-alone test time.
https://www.garlic.com/~lynn/subtopic.html#disk
another thing i did was to clean up the total pathlength so device i/o redrive time was a few hundred instructions instead of a few-10k-instruction pathlength. i claimed that this would significantly mitigate the requirement for doing i/o offload in 370-xa.
this was just a hypothetical argument (somewhat to highlight how
inefficient some kernel implementation pathlengths were). a number of
years earlier, I had done VAMPS ... a multiprocessor system (that
never shipped to customers)
https://www.garlic.com/~lynn/submain.html#bounce
that had multiple close-in microcoded engines (in addition to the processor microcode engines) all on the same memory bus. i had designed a queued i/o offload interface for VAMPS.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Single Password - Linux & Windows Newsgroups: alt.linux.redhat,alt.os.linux.redhat,comp.os.linux.redhat,comp.os.linux.security,linux.redhat,microsoft.linux,microsoft.public.windows.server.active_directory,microsoft.public.windows.server.general,microsoft.public.windows.server.setup,redhat.se Date: Tue, 31 May 2005 14:24:18 -0600"Jason Williard" writes:
(windows) kerberos interoperability
http://www.microsoft.com/resources/documentation/Windows/XP/all/reskit/en-us/Default.asp?url=/resources/documentation/Windows/XP/all/reskit/en-us/prdp_log_tjil.asp
windows kerberos security tutorial
http://www.mcmcse.com/win2k/guides/kerberos.shtml
from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm
select Term (term->RFC#) in the RFCs listed by section
and scroll down to kerberos:
kerberos
see also authentication , security
3962 3961 3244 3129 2942 2712 2623 1964 1510 1411
...
selecting any of the RFC numbers then brings up the summary for that RFC. in the summary field, selecting the ".txt=nnnn" field retrieves the actual RFC.
some past kerberos related postings:
https://www.garlic.com/~lynn/subpubkey.html#kerberos
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Single Password - Linux & Windows Newsgroups: alt.linux.redhat,alt.os.linux.redhat,comp.os.linux.redhat,comp.os.linux.security,linux.redhat,microsoft.linux,microsoft.public.windows.server.active_directory,microsoft.public.windows.server.general,microsoft.public.windows.server.setup,redhat.se Date: Tue, 31 May 2005 17:27:54 -0600disclaimer .... kerberos was a project athena activity at MIT. DEC and IBM equally funded athena for $50m total (unrelated drift, ibm funded cmu for mach/andrew stuff alone for $50m). in any case, in previous life, my wife and I got to periodically visit project athena for reviews ... including kerberos.
not too long ago ... i was at a SAML-based product description and they were describing cross-domain support. it looked to me like the exact same flows that had been presented for cross-domain kerberos (we happened to be visiting athena right in the middle of the cross-domain invention) ... except with saml messages instead of kerberos tickets.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/