From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: simple question about certificate chains
Newsgroups: alt.computer.security,comp.security.ssh,comp.lang.java.security
Date: 05 Jul 2005 11:28:49 -0600

"Richard E. Silverman" writes:
slightly related posting in sci.crypt
https://www.garlic.com/~lynn/2005l.html#19
what you typed in is matched against the site you are dealing with.
the original major application was e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
the problem was that a lot of the e-commerce sites found that SSL reduced their capacity by 80-90 percent. as a result, most sites went to not invoking https/ssl until you hit the checkout/pay button.
the vulnerability is that one of the SSL objectives was as a countermeasure against man-in-the-middle &/or impersonation attacks. if you happen to be dealing with a bogus site w/o SSL (because nobody uses SSL until the checkout/pay phase) ... and then you get to the checkout/pay button ... it is highly probable that any bogus site will supply a URL as part of the pay button (which you haven't typed in) that corresponds to some valid SSL domain name certificate that they actually possess.
there is actually a funny catch-22.
one of the SSL justifications was as a countermeasure to perceived
integrity problems in the domain name infrastructure.
https://www.garlic.com/~lynn/subpubkey.html#sslcert
however, when somebody applies for an SSL domain name server certificate, the certification authority must check with the authoritative agency for domain name ownership. this turns out to be the same domain name infrastructure that has the integrity issues giving rise to the requirement for SSL domain name certificates.
basically the certification authority asks for a lot of identification information so that it can go through the complex, expensive and error-prone process of matching the applicant's supplied identification information with the identification information on file for the domain name owner at the domain name infrastructure.
so somewhat with the backing of the certification authority industry, there has been a proposal to improve the integrity of the domain name infrastructure by having domain name owners register a public key with the domain name infrastructure. An objective is improving the integrity of the domain name infrastructure by having all communication from the domain name owner be digitally signed ... which the domain name infrastructure can authenticate with the on-file public key (having all communication authenticated improves the integrity of the domain name infrastructure, which in turn improves the integrity of the checking done by the certification authorities).
As an aside observation ... this on-file public key
results in a certificate-less, digital signature operation.
https://www.garlic.com/~lynn/subpubkey.html#certless
The other issue for the certification authority industry, is that they can now require that SSL domain name certificate applications also be digitally signed. Then the certification authority can retrieve the on-file public key from the domain name infrastructure to authenticate the digital signature on the application. This turns an expensive, complex, and error-prone identification process into a much less expensive, straight-forward and more reliable authentication process.
The problem (for the CA industry) is that the real trust root for the SSL domain name certificates is the integrity of the ownership information on file with the domain name infrastructure. Improving this trust root, in support of certifying SSL domain name certificates ... also improves the overall integrity of the domain name infrastructure. This, in turn, minimizes the original integrity concerns which gave rise to needing SSL domain name certificates.
The other problem (for the CA industry) is that if they can retrieve on-file trusted public keys from the domain name infrastructure, then everybody in the world could also retrieve on-file public keys from the domain name infrastructure. One could then imagine a variation on the SSL protocol ... where rather than using SSL domain name certificates (for obtaining a website's public key), the digital certificate was eliminated and the website's public key was retrieved directly.
In fact, a highly optimized transaction might obtain the website ip-address piggybacked with the website public key in a single message exchange (eliminating much of the SSL protocol chatter gorp that goes on).
In that sense, such an SSL implementation, using on-file public keys, starts to look a lot more like an SSH implementation that uses on-file public keys (eliminating the need for digital certificates, PKIs and certification authorities altogether).
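purely as a hypothetical sketch (python, requiring the third-party dnspython package) of what "retrieve the on-file public key directly from the domain name infrastructure" might look like -- the record name, the use of a TXT record, and the base64 encoding are all my assumptions for illustration, nothing like this is specified above:

import base64
import dns.resolver   # third-party "dnspython" package

def fetch_onfile_public_key(domain: str) -> bytes:
    # hypothetical convention: public key published base64-encoded in a
    # TXT record at _pubkey.<domain>
    answers = dns.resolver.resolve("_pubkey." + domain, "TXT")
    txt = b"".join(answers[0].strings)   # TXT rdata arrives as byte strings
    return base64.b64decode(txt)

# usage (would only succeed if such a record actually existed):
# key_bytes = fetch_onfile_public_key("example.com")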
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: <lynn@garlic.com>
Newsgroups: mailing.openssl.users
Subject: Re: Creating certs for others (without their private keys)
Date: Tue, 05 Jul 2005 15:26:37 -0700

Uri wrote:
basically a digital signature is the private key encoding of a hash of some message or data. the recipient rehashes the same message/data, decodes the digital signature with the indicated public key (giving the original hash) and compares the two hashes. if they are equal, the recipient can assume
1) the message hasn't been modified since signing
2) something you have authentication
aka, in 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
... where the digital signature verification implies that the
originator had access and use of the corresponding private key (w/o
having to supply the private key).
the technology is asymmetrical cryptography where there is a pair of keys, and what one key encodes, the other key decodes.
there is a business process called public key where one of the key pair is labeled public and made freely available. the other of the keypair is labeled private, kept confidential and never divulged.
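as a minimal illustration of the sign/verify business process just described, here is a sketch using the python "cryptography" package; the key size, padding and hash choices are assumptions for the example, not anything from the above:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# key pair: one half labeled private (kept confidential, never divulged),
# the other labeled public (freely distributed)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"some message or data"

# digital signature: hash of the message encoded with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: rehash the message and check it against the digital signature;
# success implies (1) message unmodified and (2) something you have
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies")
except InvalidSignature:
    print("signature does NOT verify")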
acquiring a certificate frequently involves filling out a form that looks similar to a real certificate and digitally signing it (with your private key) and sending it off. the certification authority then verifies the digital signature with the public key included in the application (this should at least indicate that the applicant has the corresponding private key). the certification authority then verifies (certifies) the provided information ... generates a digital certificate (possibly in the identical format as the application) but digitally signs it with their private key.
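a small sketch of that "form that looks similar to a real certificate, digitally signed with your private key" step, i.e. a PKCS#10 certificate signing request built with python's "cryptography" package -- the subject name is a made-up example; a real certification authority would then certify the supplied information and sign the actual certificate with its own private key:

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

applicant_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# the "application form": subject information plus the public key,
# digitally signed with the applicant's private key
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(applicant_key, hashes.SHA256())
)

# the certification authority's first check: the applicant's own signature,
# which at least indicates possession of the corresponding private key
assert csr.is_signature_valid
print(csr.public_bytes(serialization.Encoding.PEM).decode())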
now once the key owner has the digital certificate, the owner (and/or others) may be able to distribute the digital certificate all over the world.
one of the typical problems with the PKI & certification authority business model .... is that the party benefiting is the relying party who uses the digital certificate to obtain a public key to verify somebody's digital signature. Typically the person purchasing/buying the digital certificate from a certification authority is the key owner ... not the benefiting/relying party.
in typical business process operation, the benefiting/relying party is buying/paying for the certified information ... which tends to form a contractual relationship between the party responsible for the certification and the party benefiting from the certification. This has led to situations like the federal GSA PKI project .... where GSA has direct contracts with the certification authorities ... creating some sort of legal obligation between the federal gov. (as a relying/benefiting party) and the certification authorities (even when the certification authorities are selling their digital certificates to the key owners ... not to the benefiting/relying parties).
note that there is no actual requirement that a certification authority have evidence of the key owner's possession of the private key (aka implied by verifying a digital signature) .... it is just part of some certification authorities' business practices statements.
There was a possible opening here in the mid-90s. Certification authorities in the early 90s had been looking at issuing x.509 identity certificates grossly overloaded with personal information. One of the things defined for these digital certificates was a bit called a non-repudiation bit. In theory, this magically turned any documents (with an attached digital signature which could be verified with a public key from a non-repudiation digital certificate) into a human signature.
This is possibly because of some semantic ambiguity, since human signature and digital signature both contain the word signature. The definition of a digital signature is that the associated verification can imply
• message hasn't been modified
• something you have authentication
while a human signature typically implies read, understood, agrees, approves, and/or authorizes. The supposed logic was if a relying party could produce a message that had a digital signature and a digital certificate w/o the non-repudiation bit ... then it was a pure authentication operation. However, if the relying party could find and produce a digital certificate for the same public key that happened to have the non-repudiation bit turned on, then the digital signature took on the additional magical properties of read, understood, agrees, approves, and/or authorizes the contents of the message.
this logic somewhat gave rise to my observation about a dual-use attack on PKI infrastructures. A lot of public key authentication operations involve the server sending some random data ... and the recipient digitally signing the random data (w/o ever looking at the contents) and returning the digital signature. The server can then authenticate the entity by verifying the digital signature. However, there is no implication that the entity has read, understood, agrees, approves, and/or authorizes the random data.
An attacker just sends some valid document in lieu of random data and is also able to produce any valid digital certificate for the associated public key that happens to have the non-repudiation bit set (whether or not the signing entity happened to include such a certificate for that particular operation). The victim digitally signs the supposed random data (w/o ever looking at it) and returns the digital signature (along with a digital certificate w/o the non-repudiation bit set). The attacker, however, now has a valid document, a valid digital signature and a valid digital certificate with the non-repudiation bit set (obtained from any source).
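a minimal sketch of that dual-use exposure (python; the key and padding choices are illustrative assumptions): the victim signs whatever "challenge" arrives without examining it, so an attacker can substitute a meaningful document for the random data:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

victim_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def victim_signs_challenge(challenge: bytes) -> bytes:
    # pure authentication: the victim never examines the bytes being signed
    return victim_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# honest server: random challenge, signature proves "something you have"
sig1 = victim_signs_challenge(os.urandom(32))

# attacker: sends a valid document in lieu of random data
document = b"I agree to pay the attacker $1,000,000"
sig2 = victim_signs_challenge(document)
# the attacker now holds a valid document plus a valid digital signature,
# and (per the above) can pair them with any certificate for the victim's
# public key that has the non-repudiation bit set, obtained from any source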
Somewhat because of the pure fantasy that being able to produce any valid digital certificate (for a valid public key that correctly validates the associated digital signature), with the non-repudiation bit set, magically guarantees read, understood, agrees, approves, and/or authorizes .... the standards definition for the non-repudiation bit has since been significantly deprecated.
slightly related recent posts about SSL domain name certificates
https://www.garlic.com/~lynn/2005m.html#0
misc. past posts on dual-use attack
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#43 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#21 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#14 Using smart cards for signing and authorization in applets
https://www.garlic.com/~lynn/2005b.html#56 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005g.html#46 Maximum RAM and ROM for smartcards
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 luggable computer with APL
Newsgroups: alt.folklore.computers
Date: 05 Jul 2005 17:46:32 -0600

"Phil Weldon" writes:
i was at cambridge (science center)
https://www.garlic.com/~lynn/subtopic.html#545tech
and then sjr, but did periodically get to work with some palo alto people
here is specific reference:
http://www.cedmagic.com/history/ibm-pc-5100.html
i did do some work with apl &/or hone ... hone was an internal cp/cms
time-sharing service that provided world-wide support to all the
field, marketing and sales people ... primarily apl applications on
cms. starting sometime in the early to mid 70s, salesmen couldn't
submit a mainframe related order w/o it first having been run thru a
HONE application. for a time, hone had a datacenter across the back
parking lot from pasc. misc. apl and/or hone posts:
https://www.garlic.com/~lynn/subtopic.html#hone
misc past 5100 posts
https://www.garlic.com/~lynn/2000d.html#15 APL version in IBM 5100 (Was: Resurrecting the IBM 1130)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#46 A new "Remember when?" period happening right now
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2001b.html#45 First OS?
https://www.garlic.com/~lynn/2001b.html#56 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#47 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format
https://www.garlic.com/~lynn/2003i.html#79 IBM 5100
https://www.garlic.com/~lynn/2003i.html#82 IBM 5100
https://www.garlic.com/~lynn/2003i.html#84 IBM 5100
https://www.garlic.com/~lynn/2003j.html#0 IBM 5100
https://www.garlic.com/~lynn/2003n.html#6 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2004c.html#8 IBM operating systems and APL
https://www.garlic.com/~lynn/2004l.html#32 Shipwrecks
https://www.garlic.com/~lynn/2005g.html#12 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 5100 luggable computer with APL
Newsgroups: alt.folklore.computers
Date: 06 Jul 2005 11:48:30 -0600

CBFalconer writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [newbie] Ancient version of Unix under vm/370
Newsgroups: alt.folklore.computers
Date: 07 Jul 2005 11:40:04 -0600

Renaissance writes:
there was a port of unix done to a stripped down tss/370 kernel (it wasn't a virtual machine interface; it was to higher level tss/370 kernel functions), done specifically for at&t.
some of the other 370 ports (gold/au, aix/370, etc.) tended to be deployed under vm370 ... not so much because of any lack in the straight-line 370 hardware support but because most shops were expecting normal vendor RAS support for their mainframes. the vendor RAS support was in large part based on extensive error recording and logging support. It was available w/vm370 and so guest operating systems could get by w/o having to also implement all the extensive hardware error recording and logging (aka a typical unix port could be mapped to the 370 hardware facilities ... but it would have been a much larger undertaking to add in all the RAS stuff ... than the straight-forward port had been).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: <lynn@garlic.com>
Newsgroups: comp.protocols.kerberos
Subject: Re: Globus/GSI versus Kerberos
Date: Thu, 07 Jul 2005 11:19:07 -0700

Ken Hornstein wrote:
you basically registered a public key with kerberos in lieu of a
password and then used digital signature authentication with the
onfile public key (no PKI and/or digital certificates required).
https://www.garlic.com/~lynn/subpubkey.html#kerberos
this was basically an authentication technology upgrade w/o having to introduce any new business processes and extraneous infrastructure operations.
it was later that certificate-based operation was added to the kerberos pk-init draft.
i gave a talk on this at the global grid forum #11
https://www.garlic.com/~lynn/index.html#presentation
at the meeting there was some debate on kerberos vis-a-vis radius as an authentication & authorization business process infrastructure.
note that in addition to there having been a non-PKI, certificate-less
authentication upgrade for kerberos (using onfile public keys), there
has been a similar proposal for RADIUS; basically registering public
keys in lieu of passwords and performing digital signature
authentication with the onfile public keys.
https://www.garlic.com/~lynn/subpubkey.html#radius
Straight forward upgrade of the authentication technology w/o having to layer on a separate cumbersome PKI business process.
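a hedged sketch of the shape of such a certificate-less upgrade -- not the actual kerberos pk-init or RADIUS wire protocols, just registering a public key where a password would otherwise go and authenticating a digital signature against the on-file key; the names and flow are my assumptions for illustration (python, "cryptography" package):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

registry = {}   # principal -> on-file public key (where a password would go)

def register(principal, public_key):
    registry[principal] = public_key

def authenticate(principal, nonce, signature):
    # server verifies the client's digital signature on its nonce against
    # the on-file public key -- no digital certificate or PKI involved
    try:
        registry[principal].verify(signature, nonce)
        return True
    except (KeyError, InvalidSignature):
        return False

# usage
client_key = ed25519.Ed25519PrivateKey.generate()
register("lynn@EXAMPLE.REALM", client_key.public_key())
nonce = os.urandom(32)
assert authenticate("lynn@EXAMPLE.REALM", nonce, client_key.sign(nonce))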
From: lynn@garlic.com
Newsgroups: mailing.openssl.users
Subject: Re: Creating certs for others (without their private keys)
Date: Thu, 07 Jul 2005 11:35:27 -0700

lynn@garlic.com wrote:
somewhat at issue is that the standard PKI protocol involves the
originator digitally signing a message (with their private key) and
then packaging the three pieces:
• message
• digital signature
• digital certificate
in the basic authentication scenarios ... the originator never even
examines the contents of the message that is being signed (precluding
any sense of human signature, i.e. read, understood, agrees, approves,
and/or authorizes).
the other part of the problem (before the non-repudiation bit was severely deprecated in the PKI certificate definition) is that there is no validation of what certificate the originator actually appended.
even if the originator had appended a digital certificate w/o the non-repudiation bit set ... they had no later proof as to what certificate they had appended. all the attacker needs to do is obtain, from anyplace in the world, a digital certificate for the same public key that happens to have the non-repudiation bit set.
in some of the pki-oriented payment protocols from the mid-90s ... there were even suggestions that if the relying party (or attacker, or say a merchant in an e-commerce scenario) could produce any digital certificate for the associated public key (effectively from any source) with the non-repudiation bit set ... then the burden of proof (in any dispute) would be shifted from the merchant to the consumer.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [newbie] Ancient version of Unix under vm/370
Newsgroups: alt.folklore.computers
Date: 07 Jul 2005 18:33:26 -0600

Rich Alderson writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:13:59 -0600

hancock4 writes:
prior to the ibm/pc, the instrumentation division did turn out a 68k-based machine.
in the early/mid-70s, peachtree became the s/1 and found wide deployment in instrumentation, control systems, as well as the telecom world.
there was an effort from some sectors to try and get "peachtree" to be the core of the mainframe 3705 telecommunication unit (rather than some flavor of a UC ... universal controller microprocessor).
there was the joke about the (os/360) mft people from kingston moving to boca and trying to re-invent mft for the (16bit) s/1 (calling it rps) .... supposedly some of them went on to work on os/2.
the rps alternative was edx that had been done by some physicists at sjr (for lab instrumentation).
i don't have any ship numbers for these ... but as i've noted in
the past with regard to time-sharing systems
https://www.garlic.com/~lynn/submain.html#timeshare
cp67 and vm370 saw much larger deployment numbers than many other time-sharing systems that show up widely in the academic literature. the possible conjecture is that while cp67 & vm370 had much wider deployment than better known systems from the academic literature ... the cp67 & vm370 deployments tended to be dwarfed by the mainframe batch system deployment numbers. however, the claim is that vm370 on 4341 saw wider deployment than equivalent vax machines (it was just that the ibm press was dominated by the batch system activity).
misc. past s/1, peachtree, edx, etc posts
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000b.html#66 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#87 Motorola/Intel Wars
https://www.garlic.com/~lynn/2000c.html#43 Any Series/1 fans?
https://www.garlic.com/~lynn/2000c.html#51 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000.html#71 Mainframe operating systems
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001.html#62 California DMV
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2001n.html#52 9-track tapes (by the armful)
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#54 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002h.html#65 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#16 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#53 MVS History
https://www.garlic.com/~lynn/2003b.html#5 Card Columns
https://www.garlic.com/~lynn/2003b.html#11 Card Columns
https://www.garlic.com/~lynn/2003b.html#16 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#28 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#76 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003e.html#4 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003.html#67 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003m.html#28 SR 15,15
https://www.garlic.com/~lynn/2003o.html#16 When nerds were nerds
https://www.garlic.com/~lynn/2004g.html#37 network history
https://www.garlic.com/~lynn/2004p.html#27 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005f.html#56 1401-S, 1470 "last gasp" computers?
https://www.garlic.com/~lynn/2005.html#17 Amusing acronym
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:30:46 -0600

Mike Ross writes:
there was the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
story about trying to get a spare 360/50 to modify for virtual memory ... but all the 360/50s were going to the FAA and cambridge had to settle for a 360/40 to modify for virtual memory.
quote from melinda's paper
https://www.garlic.com/~lynn/2002b.html#7 Microcode?
random past 9020 postings:
https://www.garlic.com/~lynn/99.html#102 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2001e.html#13 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#15 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#17 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001i.html#14 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#15 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001k.html#65 SMP idea for the future
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#53 A request for historical information for a computer education project
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#32 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
https://www.garlic.com/~lynn/2002f.html#29 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#10 What is microcode?
https://www.garlic.com/~lynn/2002l.html#39 Moore law
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#30 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#7 Low-end processors (again)
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003m.html#4 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2003p.html#40 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004l.html#42 Acient FAA computers???
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005c.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Cost: Teletype 33 vs. IBM Selectric Terminal (2741?)
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 10:39:52 -0600

hancock4 writes:
quicky web search for 1052, 2740, 2741, ...
https://web.archive.org/web/20060325095540/http://www.yelavich.com/history/ev197001.htm
http://www.beagle-ears.com/lars/engineer/comphist/ibm_nos.htm
http://portal.acm.org/citation.cfm?id=356563
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question about authentication protocols
Newsgroups: sci.crypt
Date: 08 Jul 2005 12:25:30 -0600

Peter Seibel writes:
so the standard digital signature definition is using the private key to encode the hash of a message. the recipient then calculates the hash of the string, decodes the digital signature with the public key and compares the two hashes. if they are equal, the recipient assumes
1) message hasn't been changed in transit
2) something you have authentication (aka originator has access
and use of the corresponding "private" key).
discussion of the digital signature standard:
http://csrc.nist.gov/cryptval/dss.htm
lots of posts on the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
and other posts on certificate-less public key operations
https://www.garlic.com/~lynn/subpubkey.html#certless
there can be an issue of the dual-use attack ... there has sometimes been some semantic confusion between the terms digital signature and human signature because they both contain the word signature.
human signature usually includes the connotation of read, understood, agrees, approves, and/or authorizes.
as in your description, the party, being authenticated, assumes that they are getting a random string and rarely, if ever, examines what is being digitally signed.
in various scenarios, there have been efforts to promote digital signatures to the status of human signatures. a dual-use attack on a private key used for both authentication infrastructures as well as such human signature operations ... is for the attacker to substitute a valid document in lieu of random bit string.
this was exacerbated in the pki/certificate standards world by the introduction of the non-repudiation bit as part of the certification standard. if a relying party could find any certificate, anyplace in the world (for the signer's public key) containing the non-repudiation bit ... then they could claim that the existence of that digital certificate (for the originator's public key containing the non-repudiation bit) was proof that the originator had read, understood, agrees, approves, and/or authorizes what had been digitally signed. in some of the PKI-related payment protocols from the 90s, this implied that if a relying-party could produce a digital certificate containing the signer's public key & the non-repudiation bit ... then in any dispute, it would shift the burden of proof from the relying party to the digitally signing party.
some recent posts on the subject
https://www.garlic.com/~lynn/2005l.html#18 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#29 Importing CA certificate to smartcard
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#36 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#5 Globus/GSI versus Kerberos
https://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 08 Jul 2005 16:54:02 -0600

Peter Flass writes:
part of this was that endicott's 4341 was also a very strong competitor to pok's 3031 ... and as such there was some internal corporate political maneuvering.
rs6000 was much later.
risc/801/romp was going to be a display writer follow-on in the early 80s by the office products division. it was going to be CPr based with lots of implementation in pl.8. when that was cancelled, it was decided to quickly retarget the platform to the unix workstation market. somewhat to conserve skills .... a pl.8-based project was put together called the virtual resource manager .... that sort of provided a high-level abstract virtual machine interface (and was implemented in pl.8). Then the vendor that had done the at&t port to ibm/pc for pc/ix was hired to do a similar port to the vrm interface. this became pc/rt and aix.
follow-on to pcrt/romp was rs6000/rios/power; the vrm was mostly
eliminated and aixv3 was built to the rios chip interface. there is
a paper weight on my desk that has six chips with the legend: POWER
architecture, 150 million OPS, 60 million FLOPS, 7 million transistors.
misc. 801/romp/rios postings
https://www.garlic.com/~lynn/subtopic.html#801
the executive that we reported to while we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
sample, specific post
https://www.garlic.com/~lynn/95.html#13
left to head-up somerset ... the joint ibm, motorola, apple, et al effort for power/pc. differences between rios/power and power/pc .... rios/power was designed for flat out single processor thruput ... tending to a multi-chip implementation and no provisions for cache coherency and/or multiprocessor support. the power/pc was going to a single-chip implementation with support for multiprocessor cache coherency. it was the power/pc line of chips that showed up in apple, as/400, and some flavors of rs/6000 ... while other flavors of rs/6000 were pure rios/power implementations (including the original rs/6000).
i have some old memories of arguments going on between austin and rochester over doing a power/pc 65th bit implementation. the unix/apple world just needed 64bit addressing. rochester/as400 was looking for a 65th tag bit ... helping support their memory architecture.
past posting with some vax (us & worldwide) ship numbers
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
part of the issue was that the price/performance during the vax & 4341 time-frame seemed to have broken some threshold. You started to see customer orders for 4341s that were in the hundreds ... quite a few that were single orders for large hundreds of 4341s. this really didn't carry-over to the 4381 (4341 follow-on), since by that time, you started to see that market being taken over by large PCs and workstations.
specific post:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
other past posts regarding the departmental computing/server market
https://www.garlic.com/~lynn/94.html#6 link indexes first
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#0 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002c.html#27 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#2 IBM's "old" boss speaks (was "new")
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002e.html#61 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#74 Computers in Science Fiction
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#48 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002i.html#29 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#4 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#7 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#34 ...killer PC's
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003c.html#14 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#71 Tubes in IBM 1620?
https://www.garlic.com/~lynn/2003d.html#37 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#61 Another light on the map going out
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#46 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003.html#10 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003j.html#60 Big Ideas, where are they now?
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#13 When nerds were nerds
https://www.garlic.com/~lynn/2003o.html#24 Tools -vs- Utility
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004g.html#23 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#49 Secure design
https://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#35 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005m.html#9 IBM's mini computers--lack thereof
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 09 Jul 2005 09:46:13 -0600

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's mini computers--lack thereof
Newsgroups: alt.folklore.computers
Date: 10 Jul 2005 14:36:19 -0600

"David Wade" writes:
there were some people that attempted to get peachtree for the core that became the 3705 mainframe telecommunication controller.
i had some interest in this area ... having worked on a clone
mainframe telecommunication controller as an undergraduate (and
some write-up blaming the project for starting clone controller
business)
https://www.garlic.com/~lynn/submain.html#360pcm
and then later tried to expand and productize a s/1-based implementation
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: <lynn@garlic.com>
Newsgroups: microsoft.public.certification.networking
Subject: Re: Course 2821; how this will help for CISSP exam ?
Date: Tue, 12 Jul 2005 19:28:39 -0700

davidxq wrote:
there is a business process called public key .... where one key of a keypair is identified as public and freely distributed, and the other of the keypair is identified as private, kept confidential and never divulged.
there is a business process called digital signature .... where the originator calculates the hash of a message/document and then encodes the hash with the private key ... and transmits the message/document with the appended digital signature. the recipient recalculates the hash of the message/document, decodes the digital signature with the public key and compares the two hashes. if they are equal, then it is assumed:
1) the message/document hasn't changed since the digital signature was applied
2) something you have authentication, aka the originator has access and use of the corresponding private key.
This can have a tremendous advantage over shared-secrets like pins/passwords and/or other something you know static data authentication.
In the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
the typical shared-secret
https://www.garlic.com/~lynn/subintegrity.html#secret
has the short-coming that it can be used to both originate and authenticate an operation ... i.e. entities with access to the authentication information can also use it to impersonate. partially as a result, the standard security recommendation is that a unique shared-secret is required for every security domain (so individuals in one security domain can't take the information and impersonate you in a different security domain). There is also the threat/vulnerability of eavesdropping on the entry of the shared-secret information for impersonation and fraud.
It is possible to substitute the registration of public keys in lieu
of shared-secrets. Public keys have the advantage that they can only
be used to authenticate, they can't be used to impersonate. Also,
eavesdropping on digital signatures doesn't provide much benefit since
it is the private key (that is never divulged) that is used to
originate the authentication information.
https://www.garlic.com/~lynn/subpubkey.html#certless
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos
Basically, you build up a repository of trusted public keys of entities that you have dealings with for authentication purposes.
There is something called PKI, certification authorities, and digital certificates, designed to meet the offline email paradigm of the early 80s (somewhat analogous to the "letters of credit" from the sailing ship days). The recipient dials their local (electronic) post office, exchanges email and then hangs up. They now may be faced with first-time communication with a total stranger and have no local and/or online capability of determining any information regarding the stranger.
For this first time communication with a total stranger, the trusted public key repository is extended. There are certification authorities that certify information and create digitally signed digital certificates containing an entity's public key and some information. The recipient now gets an email, the digital signature of the email, and a digital certificate. They have preloaded their trusted public key repository with some number of public keys belonging to certification authorities. Rather than directly validating a sender's digital signature, they first validate the certification authority's digital signature (using a public key from their local trusted public key repository) on the digital certificate. If that digital signature validates, then they use the public key from the digital certificate to validate the actual digital signature on the message.
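a sketch, under stated assumptions (RSA certificates, PKCS#1 v1.5 padding, python "cryptography" package), of that two-step check -- first validate the certification authority's signature on the certificate with a key already in the local trusted repository, then validate the message signature with the public key carried inside the certificate:

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_signed_email(email, email_sig, cert_pem, trusted_ca_keys):
    # raises InvalidSignature (or KeyError for an unknown/"external" CA)
    cert = x509.load_pem_x509_certificate(cert_pem)
    ca_key = trusted_ca_keys[cert.issuer.rfc4514_string()]
    # step 1: the certification authority's digital signature on the
    # digital certificate, checked with a key from the local repository
    ca_key.verify(cert.signature, cert.tbs_certificate_bytes,
                  padding.PKCS1v15(), cert.signature_hash_algorithm)
    # step 2: the sender's digital signature on the message, checked with
    # the public key carried inside the (now-validated) certificate
    cert.public_key().verify(email_sig, email,
                             padding.PKCS1v15(), hashes.SHA256())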
In the early 90s, there were these things, x.509 identity certificates, that were starting to be overloaded with personal information (the idea being that a recipient would find at least one piece of personal information useful when first time communication with a total stranger is involved, and therefore the certificate would serve a useful purpose). The business model was sort of to do away with all established business relationships and substitute spontaneous interaction with total strangers. For instance, rather than depositing large sums of money in a financial institution with which you have an established account ... you pick out a total stranger to give large sums of money to. The exchange of x.509 identity certificates would supposedly be sufficient to provide safety and security for your money. This also had the characteristic that all transactions (even the simplest of authentication operations) were being turned into heavy duty identification operations.
In the mid-90s, some institutions were coming to the realization that
x.509 identity certificates, overloaded with excessive personal
information, represented significant liability and privacy issues. As
a result, you saw some financial institutions retrenching to
relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
which basically contained a public key and some form of database index where the actual information was stored. However, it is trivial to demonstrate that such relying-party-only certificates are redundant and superfluous:
1) first they violate the design point for certificates ... providing information that can be used in first time communication with total stranger
2) if the relying party already has all the information about an entity, then they have no need for a stale, static digital certificate that contains even less information.
This was exacerbated in the mid-90s by trying to apply stale, static, redundant and superfluous relying-party-only digital certificates to payment protocols. The typical iso8583 payment message is on the order of 60-80 bytes. The PKI overhead of even relying-party-only stale, static, redundant and superfluous digital certificates was on the order of 4k-12k bytes. The stale, static, redundant and superfluous digital certificate attached to every payment message would have represented a payload bloat of one hundred times.
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CPU time and system load
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: 13 Jul 2005 10:06:09 -0600

mike.meiring@ibm-main.lst (Mike Meiring) writes:
the univ. that i was at ... got cp67 installed the last week in jan. 68 ... and i got to attend the march '68 share meeting in houston for the cp67 announcement. then at the fall '68 share meeting in Atlantic City ... i got to present a talk on mft14 enhancements as well as cp67 enhancements.
the workload at the univ. was lots of short jobs (before watfor) and the time was primarily job scheduler overhead. i had done a lot of i/o tuning on mft14 to make thruput of this job mix nearly three times faster (12.7secs per 3-step job vis-a-vis over 30 seconds elapsed time for out-of-the-box mft14) ... essentially the same number of instructions executing in close to 1/3rd the time, because of drastically optimized disk and i/o performance.
I had also rewritten a lot of the cp67 kernel between jan. and the Atlantic City share meeting to drastically reduce hypervisor overhead for high overhead instructions/operations.
In the Share talk, i have the ratio of elapsed time w/hypervisor to elapsed time w/o hypervisor for the mft14 job stream. Using these statistics, a normal, out-of-the-box mft14 looked much better running in a hypervisor environment. improving basic elapsed time by a factor of nearly 3 times by optimizing i/o made the hypervisor ratio much worse. Basically there was no increase in hypervisor overhead time for i/o wait. Drastically cutting i/o wait made the hypervisor overhead time ratio much worse (the amount of overhead stayed the same but occurred in much shorter elapsed time).
The optimized MFT14 jobstream ran in 322sec elapsed time on the bare machine and in 856sec elapsed time under unmodified cp67 (534secs of cp67 hypervisor cpu overhead). With a little bit of work part time (I was still an undergraduate and also responsible for the MFT14 production system), I got this reduced to 435secs elapsed time (113secs of cp67 hypervisor cpu overhead vis-a-vis the original 534 seconds of cp67 cpu overhead).
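for reference, the ratios quoted in the overheads below can be recomputed (approximately, since the published ratios were presumably taken from unrounded timings) from the rounded elapsed times:

# quick check against the 322-second bare-machine run quoted above;
# agrees with the quoted ratios only to within rounding
bare = 322
print(round(856 / bare, 2))   # unmodified cp67: ~2.66 (quoted as 2.65)
print(round(435 / bare, 2))   # modified cp67, 104 pages: 1.35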
part of talk from Atlantic City '68 share presentation
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
a couple of overheads from above:

OS Performance Studies With CP/67

OS          MFT 14, OS nucleus with 100 entry trace table, 105 record
            in-core job queue, default IBM in-core modules, nucleus total
            size 82k, job scheduler 100k.
HASP        118k Hasp with 1/3 2314 track buffering
Job Stream  25 FORTG compiles

Bare machine times
    Time to run:                    322 sec. (12.9 sec/job)
    Time to run just JCL for above: 292 sec. (11.7 sec/job)
Orig. CP/67 times
    Time to run:                    856 sec. (34.2 sec/job)
    Time to run just JCL for above: 787 sec. (31.5 sec/job)

Ratio CP/67 to bare machine
    2.65  Run FORTG compiles
    2.7   to run just JCL
    2.2   Total time less JCL time

... footnote for above overhead
MODIFIED CP/67

OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.

CORE (pages)    OS with Hasp        OS w/o HASP
104             1.35 (435 sec)
 94             1.37 (445 sec)
 74             1.38 (450 sec)      1.49 (480 sec)
 64             1.89 (610 sec)      1.49 (480 sec)
 54             2.32 (750 sec)      1.81 (585 sec)
 44             4.53 (1450 sec)     1.96 (630 sec)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Another - Another One Bites the Dust
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: 13 Jul 2005 10:18:40 -0600

wball@ibm-main.lst (William Ball) writes:
they had tried running a processor with MVS and a single testcell ... but at the time, MVS had a 15min mean-time-between-failure trying to run a single testcell.
I undertook to rewrite IOS (making it bullet proof) so that 6-12 testcells could be operated concurrently in an operating system environment. I then wrote an internal corporate only report about the effort ... unfortunately I happened to mention the base MVS case of 15min MTBF ... and the POK RAS guys attempted to really bust my chops over the mention.
That was not too long after my wife served her stint in POK in
charge of loosely-coupled architecture ... while there she had
come up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
but was meeting with little success in seeing it adopted (until much later in sysplex time-frame ... except for some work by the IMS hot-standby people).
somewhat based on experience ... we started the ha/cmp project
in the late '80s
https://www.garlic.com/~lynn/subtopic.html#hacmp
one specific mention
https://www.garlic.com/~lynn/95.html#13
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: <lynn@garlic.com> Newsgroups: microsoft.public.biztalk.general,microsoft.public.windows.server.security Subject: Re: S/MIME Certificates from External CA Date: Wed, 13 Jul 2005 18:18:57 -0700Jeff Lynch wrote:
there is a business process called public key ... where one key (of a keypair) is labeled public and is freely distributed. the other key (of the keypair) is labeled private and is kept confidential and never divulged.
there is a business process called digital signatures for doing something you have authentication; basically the hash of a message/document is computed and encoded with the private key. the message/document and the digital signature are transmitted. the recipient recalculates the hash on the message/document, decodes the digital signature with the public key and compares the two hashes. if the two hashes are the same, then the recipient assumes
1) the message/document has not changed since being signed
2) something you have authentication; the originator has access to and use of the corresponding private key.
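as a concrete illustration of that hash/sign/verify flow ... a minimal sketch using the python "cryptography" package (RSA-2048 with SHA-256 is an illustrative choice for the example, not something required by the description):

# minimal sketch of the hash/sign/verify flow described above, using the
# python "cryptography" package; RSA-2048 with SHA-256 is an illustrative
# choice, not something required by the description
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()     # freely distributed; the private key never is

message = b"some message/document"

# originator: hash the message and encode the hash with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: recompute the hash and compare against the decoded signature
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("hashes match: unchanged since signing, and signer holds the private key")
except InvalidSignature:
    print("mismatch: altered message or wrong public key")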
public keys can be registered in lieu of pins, passwords,
shared-secrets, etc as part of authentication protocols ... aka
https://www.garlic.com/~lynn/subpubkey.html#certless
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos
PKIs, certification authorities, and digital certificates were created to address the offline email type scenario from the early 80s (and somewhat analogous to the "letters of credit" paradigm from the sailing ship days). The recipient dials their local (electronic) post office, exchanges email, and hangs up. at this point they may be faced with first-time communication from a total stranger and they have no local &/or other means of establishing any information about the email sender.
The infrastructure adds the public keys of trusted certification authorities to the recipient's repository of trusted public keys. Individuals supply their public key and other information to a certification authority and get back a digital certificate that includes the public key and the supplied information, digitally signed by the certification authority. Now, when sending first-time email to a total stranger, the originator digitally signs the email and transmits the 1) email, 2) the digital signature, and 3) the digital certificate. The recipient validates the certification authority's digital signature (on the digital certificate) using the corresponding public key from their repository of trusted public keys. If the digital certificate appears to be valid, then they validate the digital signature on the mail using the public key from the digital certificate. They now can interpret the email using whatever useful certified information from the digital certificate.
Basically, one might consider an "external certification authority" as an authority whose public key you haven't yet loaded into your repository of trusted public keys.
SSL/HTTPS domain name server certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
were designed for
1) encrypted channel as a countermeasure to eavesdropping at any point in the communication
2) is the webserver you are talking to really the webserver you think you are talking to
Part of this was because of perceived integrity weaknesses in the domain name infrastructure. A webserver address is typed into the browser. the webserver sends back a digital certificate .... that has been signed by a certification authority whose public key has been preloaded into the browser's repository of trusted public keys. The browser validates the digital signature on the digital certificate. It then compares the domain name in the digital certificate with the typed-in domain name. Supposedly if they are the same ... you may be talking to the webserver you think you are talking to.
The browser now can generate a random session key and encode it with the server's public key (from the digital certificate) and send it back to the server. If this is the correct server, then the server will have the corresponding private key and can decode the encrypted random session key from the browser. From then on, the server and the browser can exchange encrypted information using the random session key (if it isn't the correct server, the server won't have access to the correct private key and won't be able to decode the random session key sent by the browser).
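a minimal sketch of just that session-key step (an illustration of the idea, not the actual SSL/TLS wire protocol; the RSA-OAEP and AES-GCM choices are assumptions made for the example):

# sketch of just the session-key step described above (illustration of the
# idea, not the actual SSL/TLS wire protocol); RSA-OAEP and AES-GCM are
# assumptions made for the example
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()    # browser would get this from the certificate

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# browser side: random session key, encoded under the server's public key
session_key = AESGCM.generate_key(bit_length=128)
wrapped_key = server_public.encrypt(session_key, oaep)

# server side: only the holder of the matching private key recovers the key
recovered_key = server_private.decrypt(wrapped_key, oaep)

# from here on, both sides encrypt traffic under the shared session key
nonce = os.urandom(12)
ciphertext = AESGCM(recovered_key).encrypt(nonce, b"order + payment details", None)
print(AESGCM(session_key).decrypt(nonce, ciphertext, None))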
when this was originally being worked out
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
the objective was that the URL supplied by the end-user started out using HTTPS. In the e-commerce world ... some number of servers found that using HTTPS cut thruput by something like 80-90 percent compared to plain HTTP. So you started seeing ecommerce sites using simple HTTP for all of the shopping experience and saving HTTPS for when the user went to hit the pay/checkout button. The problem is that if the user was at a spoofed site during the HTTP portion ... then any fraudulent site would likely supply a URL with the pay/checkout button that corresponded to a URL in some valid digital certificate that they had (defeating the objective of making sure the server you thought you were talking to was actually the server you were talking to).
there is something of a catch-22 in this. A lot of certification authorities are purely in the business of checking on the validity of the information they are certifying ... they aren't actually the authoritative agency for the actual information. In the SSL domain name certificate scenario, the certification authorities ask for some amount of identification information from the certificate applicant. They then contact the authoritative agency for domain name ownership and cross-check the applicant's supplied identification information with the identification information on file with the domain name infrastructure as to the domain name ownership. Note however, this domain name infrastructure, which is the trust-root for things related to domain names, is the same domain name infrastructure which is believed to have integrity issues that give rise to the requirement for SSL domain name certificates.
So a proposal, somewhat supported by the SSL domain name certification authority industry ... is that domain name owners register their public key with the domain name infrastructure. Then all future communication with the domain name infrastructure is digitally signed ... which the domain name infrastructure can validate with the on-file public key (note: a certificate-less operation). This communication validation is thought to help eliminate some integrity issues.
For the certification authority industry, they now can also require that SSL domain name certificate applications be digitally signed. They now can change from an expensive, error-prone, and complex identification process to a much simpler and cheaper authentication process (by retrieving the on-file public key from the domain name infrastructure and validating the digital signature).
The catch-22s are 1) improving the integrity of the trust-root for domain name ownership also lowers the requirement for SSL domain name certificates because of concerns about domain name infrastructure integrity and 2) if the certification authority industry can retrieve onfile public keys from the domain name infrastructure to validate who they are communicating with ... it is possible that the rest of the world could also ... eliminating any need for having SSL domain name server certificates.
One could imagine a simplified and optimized SSL protocol, where the client retrieves the ip-address and the associated public key from the domain name infrastructure in a single, existing exchange. They could then piggyback the randomly generated session key encoded with the servers public key on the initial contact with the server.
Another issue was some trend in the early 90s to overload the x.509 identity certificates with large amounts of personal information ... in hopes that future "strangers" (relying parties) would find something useful/meaningful when receiving first time communication (since CAs couldn't necessarily predict the context that future relying parties might be interested in).
In the mid-90s, there started to be some realization that x.509 identity certificates, grossly overloaded with personal information represented significant privacy and liability issues. As a result, you found some institutions retrenching to relying-party-only certificates ... basically a public key and some sort of database lookup index (where all the real information about an individual was stored). However, it was trivial to show that such relying-party-only certificates were redundant and superfluous ... aka 1) they violated the premise of supplying information for first-time communication between strangers and 2) if the relying party (recipient) already had a superset of the information found in a digital certificate (including the originator's public key) ... then it was redundant and superfluous for the originator to be constantly sending a copy of the certificate back to the relying party on every communication.
The other issue was that there were attempts to try and have x.509 identity certificates attached to all digitally signed documents and messages. This basically resulted in causing a large amount of confusion about the differences between authentication and identification ... and would have effectively turned all electronic operations ... even the most trivial authentication operations .... into heavyweight identification operations.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Implementation of boolean types. Newsgroups: comp.databases.theory Date: 14 Jul 2005 07:38:41 -0600Jonathan Leffler writes:
reference to Dec. '92, article by CJ Date
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: simd for 390(or z990)? Newsgroups: comp.arch Date: 15 Jul 2005 12:56:50 -0600"Michel Hack" writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old Computers and Moisture don't mix - fairly OT Newsgroups: alt.folklore.computers Date: 16 Jul 2005 09:20:34 -0600forbin@dev.nul (Colonel Forbin) writes:
... topic drift ... santa teresa labs was originally going to be called coyote labs ... using a convention of naming after the nearest post office. the week before coyote labs was to open (I think the Smithsonian air&space museum and coyote labs were opening the same week), I happened to be in DC. That week, there were demonstrations on the capitol steps (that made the national press) by an organisation of working ladies from san francisco ... which is believed to have led to the decision to quickly change the name of the lab from coyote to santa teresa (closest cross street).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old Computers and Moisture don't mix - fairly OT Newsgroups: alt.folklore.computers Date: 16 Jul 2005 09:47:59 -0600Anne & Lynn Wheeler writes:
misc. past boyd refs:
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old Computers and Moisture don't mix - fairly OT Newsgroups: alt.folklore.computers Date: 16 Jul 2005 10:03:09 -0600Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Old Computers and Moisture don't mix - fairly OT Newsgroups: alt.folklore.computers Date: 16 Jul 2005 11:20:09 -0600Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: 17 Jul 2005 08:05:10 -0600"Rupert Pigott" writes:
some linpack numbers
https://www.garlic.com/~lynn/2002i.html#12 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
some past threads mentioning 158, 4341, 3031 rain/rain4 comparison
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001e.html#9 MIP rating on old S/370s
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
3031 was announced 10/77 and first shipped 3/78.
basically 3031 and 158 used the same processor engine. in the 158 ... the engine was shared between running the 370 microcode and the integrated channel microcode. for the 303x line of computers, a "channel director" was added ... basically a dedicated 158 processor engine running just the integrated channel microcode.
a single processor 3031 configuration was then really two 158 processor engines sharing the same memory ... one processor engine dedicated to running the 370 microcode and one processor engine dedicated to running the integrated channel microcode.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Code density and performance? Newsgroups: comp.arch Date: Mon, 18 Jul 2005 07:46:12 -0600Nick Maclaren wrote:
I've periodically commented that 801/RISC was in large part a reaction to
the future system failure ... swinging in the exact opposite direction.
There were periodic comments in the mid-70s about 801/RISC consistently
trading off software (& compiler) complexity for hardware simplicity.
https://www.garlic.com/~lynn/subtopic.html#801
From: lynn@garlic.com Newsgroups: microsoft.public.outlook.installation Subject: Re: how do i encrypt outgoing email Date: Mon, 18 Jul 2005 10:26:16 -0700Perry wrote:
there is a business process, public key ... where one of the keypair is designated "public" and freely distributed; the other of the keypair is designated private, kept confidential and is *never* divulged.
the standard method of sending encrypted email is to obtain the recipient's public key .... this can be done in a number of ways; most infrastructures provide ways of either dynamically obtaining the recipient's key ... or having it already stored in your local trusted public key repository.
the simple mechanism is to encode the data with the recipient's public key and then only the recipient's private key is able to decode it.
because of asymmetric cryptography performance issues ... many implementations will generate a random symmetric key, encrypt the data with the symmetric key, and then encode the symmetric key with the recipient's public key ... and transmit both the encrypted data and the encoded key. only the recipient's private key can decode and recover the symmetric key ... and only by recovering the symmetric key can the body of the message be decrypted.
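a minimal sketch of that two-step (hybrid) approach, using the python "cryptography" package; RSA-OAEP for the key and Fernet for the body are illustrative choices for the example, not what any particular mail product actually uses:

# sketch of the two-step approach described above; RSA-OAEP for the key and
# Fernet for the body are illustrative choices, not any mail product's scheme
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# recipient generated this keypair beforehand and published the public key
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# sender: random symmetric key, encrypt the body, encode the key for the recipient
message_key = Fernet.generate_key()
encrypted_body = Fernet(message_key).encrypt(b"the actual mail body")
encoded_key = recipient_public.encrypt(message_key, oaep)
# ... transmit (encrypted_body, encoded_key) ...

# recipient: only their private key recovers the symmetric key, then the body
recovered_key = recipient_private.decrypt(encoded_key, oaep)
print(Fernet(recovered_key).decrypt(encrypted_body))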
for somebody to send you encrypted mail ... you will need to have generated a public/private key pair and transmitted your public key to the other party. for you to send another party encrypted mail ... they will have needed to have generated a public/private key pair ... and you will need to have obtained their public key.
PGP/GPG have individuals exchanging keys directly and storing them in their local trusted public key storage. PGP/GPG infrastructures also support real-time, online public key registries.
there is a business process, digital signatures. here the hash of the message is computed and encoded with the private key ... the message and the digital signature are transmitted. the recipient recomputes the hash of the message, decodes the digital signature (resulting in the original hash) and compares the two hash values. if they are the same, then the recipient can assume:
1) the message hasn't been altered since signing
2) something you have authentication ... aka the signer has access to
and use of the corresponding private key
There is also a PKI, certificate-based infrastructure that is targeted at the offline email environment from the early 80s. Somebody dials their local (electronic) post office, exchanges email, hangs up and is now possibly faced with first-time communication. This is somewhat the letters of credit environment from the old offline sailing ship days, where the recipient had no provisions for authenticating first-time communication with complete strangers.
An infrastructure is defined where people load up their trusted public key repositories with public keys belonging to *certification authorities*. When somebody has generated a public/private key pair ... they go to a certification authority and register the public key and other information. The certification authority generates a digital certificate containing the applicant's public key and other information, which is digitally signed by the certification authority's private key (the public can verify the digital signature using the certification authority's public key from their trusted public key repository). This provides a recipient a way of determining some information about a stranger in first-time communication ... aka the stranger has digitally signed a message and transmitted the combination of the message, their digital signature and their digital certificate. The recipient 1) verifies the certification authority's digital signature on the digital certificate, 2) takes the public key from the digital certificate and verifies the digital signature on the message, 3) uses the other information in the digital certificate in determining basic information about the total stranger (first-time communication).
You can push a message and your digital signature to a stranger (possibly along with your digital certificate) ... but you can't actually encrypt the message for the stranger ... w/o first obtaining their public key.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Tue, 19 Jul 2005 09:17:22 -0600Charles Richmond writes:
sheriff had some early marketing material that claimed a 90 percent hit rate reading records. the case was sequentially reading a file that was formatted ten 4k records per 3380 track; a read to the first record on the track was a miss ... but it brought the full track into the cache ... so the next nine reads were "hits". you could achieve similar efficiency by changing the DD statement to do full-track buffering ... in which case the controller cache dropped to a zero percent hit rate (the full track read would miss ... and then it would all be in processor memory).
ironwood was oriented towards paging cache ... but the typical processor had 16mbytes to 32mbytes of real storage. a 4k page read brought the page first into the ironwood cache and then into processor memory. since a paging environment is effectively a form of caching and both ironwood and the processor were using LRU to manage replacement ... they tended to follow similar replacement patterns. since real storage tended to be a larger effective cache than the controller's ... pages tended to age out of the controller cache before they aged out of the processor's memory.
in the mid-70s ... i had done a dup/no-dup algorithm for fixed-head paging devices (which also tended to have relatively small size). in the "dup" case ... when there was relatively low pressure on the fixed-head paging device ... a page that was brought into processor memory also remained allocated on the paging device (aka "duplicate"; if the page was later replaced in real memory and hadn't been modified, then a page-write operation could be avoided since the copy on the paging device was still good). As contention for the fixed-head paging device went up, the algorithm would change to "no-dup" ... i.e. when a page was brought into real storage ... it was de-allocated from the fixed-head device (aka "no-duplicate"). this increased the effective space on high-speed fixed-head paging devices ... but required a page-write on every page replacement (whether the page had been modified or not).
So adapting this to ironwood ... all page reads were "destructive" (special bit in the i/o command). "destructive" reads that hit in the controller cache ... would de-allocate from the controller cache after the read ... and wouldn't allocate in the cache on a read from disk. The only way that pages get into the cache is when they are being written from processor storage (presumably as part of a page replacement strategy ... aka they no longer exist in processor storage). A "dup" strategy in a configuration with 32mbytes of processor storage and four ironwoods (32mbytes of controller cache) ... would result in total electronic caching of 32mbytes in processor storage (since most of the pages in ironwood would effectively be duplicated in real storage). A "no-dup" strategy with the same configuration could result in total electronic caching of 64mbytes (32mbytes in processor storage and 32mbytes in ironwood).
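a toy model of the dup/no-dup decision ... purely illustrative; the threshold and data structures are invented for the example, this isn't the actual cp67/vm370 code:

# toy model of the dup/no-dup decision described above; threshold and data
# structures are invented for the example (not the actual cp67/vm370 code)
class PagingDevice:
    def __init__(self, slots, nodup_threshold=0.9):
        self.slots = slots                    # page slots on the paging device
        self.allocated = set()                # pages currently allocated on it
        self.nodup_threshold = nodup_threshold

    def pressure(self):
        return len(self.allocated) / self.slots

    def page_in(self, page):
        """bring a page from the paging device into processor storage"""
        if self.pressure() >= self.nodup_threshold:
            # "no-dup" (or the ironwood destructive read): give the slot back,
            # gaining effective device/cache space; any later replacement of
            # this page needs a page-write whether it was modified or not
            self.allocated.discard(page)
            return "no-dup"
        # "dup": leave the copy allocated; if the page is later replaced
        # unmodified, the write can be skipped since the device copy is still good
        return "dup"

    def page_out(self, page, modified, mode):
        """replace a page in processor storage"""
        if mode == "no-dup" or modified:
            self.allocated.add(page)          # page-write required
            return "write"
        return "no-write"                     # device copy still valid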
past postings attached below
about the time of ironwood/sheriff ... we also did a project at sjr to instrument operating systems for tracing disk activity. this was targeted at being super-optimized so that it could run continuously as part of standard production operation. tracing was installed on a number of internal corporate machines in the bay area ... spanning a range from commercial dataprocessing to engineering and scientific.
a simulator was built for the trace information. the simulator found
that (except for a couple edge cases), the most effective place for
electronic cache was at the system level; aka given fixed amount of
electronic cache ... it was most effective as a global system cache
rather than partitioned into pieces at the channel, controller, or disk
drive level.
this corresponds to my findings as an undergraduate in the
60s that global LRU outperformed local LRU.
One of the edge cases involved using electronic memory on a drive for doing some rotational latency compensation (not directly for caching per se); basically data would start transferring to cache as soon as the head was able ... regardless of the position on the track.
the other thing that we started to identify was macro data usage ... as opposed to micro data usage patterns. A lot of data was used in somewhat bursty patterns ... and during a burst there might be a collection of data from possibly multiple files being used. At the macro level ... you could do things for improvements by load-balancing (the different data aggregates that tended to be used in a common burst) across multiple drives. The analogy for single drive operation is attempting to cluster data that tended to be used together.
some of the disk activity clustering work was similar to some early stuff at the science center in the early 70s ... taking detailed page traces of an application and feeding them into a program optimization application (that was eventually released as a product called "VS/Repack"). VS/Repack would attempt to re-organize a program for minimum real-storage footprint (attempting to cluster instructions and data used together in a minimum number of virtual pages). see past postings on vs/repack attached at the bottom of this posting.
past postings on dup/no-dup, ironwood, sheriff:
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003i.html#72 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
past postings on global/local LRU replacement:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#0a Cache
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/96.html#10 Caches, (Random and LRU strategies)
https://www.garlic.com/~lynn/98.html#54 qn on virtual page replacement
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2003f.html#55 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#55 Advantages of multiple cores on single chip
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#77 Athlon cache question
https://www.garlic.com/~lynn/2005c.html#53 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
past postings on vs/repack:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: using ssl news servers Newsgroups: gnu.emacs.gnus Date: Tue, 19 Jul 2005 09:23:20 -0600Anne & Lynn Wheeler wrote:
gnus does nntp posting, which works (on this service) with standard nntp/119 ... but apparently isn't supported for nntps/563.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Massive i/o Newsgroups: alt.folklore.computers Date: Tue, 19 Jul 2005 09:40:45 -0600CBFalconer writes:
the result was that an in-progress write during a power-failure might be completed with the interface supplying all zeros. the disk would then dutifully write correct error-correcting-codes (for the propagated zeros record)... so there wouldn't be an i/o error on subsequent reads.
for the cache-specific case, it wasn't viewed as a problem for the 3880-11/ironwood because it was presumed to be used with transient page data ... which wasn't assumed to survive a power failure.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: using ssl news servers Newsgroups: gnu.emacs.gnus Date: Tue, 19 Jul 2005 09:35:38 -0600Anne & Lynn Wheeler wrote:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Tue, 19 Jul 2005 09:54:20 -0600sorry for the incomplete post ... i'm trying to get gnus working with nntps/563; it reads news fine but is hanging on posts ... i can't tell for sure whether it hangs before the post was actually done or after the post was sent off and is hanging waiting for some sort of completion.
aka
https://www.garlic.com/~lynn/2005m.html#29 using ssl news servers
https://www.garlic.com/~lynn/2005m.html#31 using ssl news servers
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Massive i/o Newsgroups: alt.folklore.computers Date: Tue, 19 Jul 2005 11:46:45 -0600Eric Sosman wrote:
in the raid5 scenario ... you had to read the original record (being updated) along with the parity record. you would subtract out the original record (being changed) and then update the parity record with the new contents. then you had to rewrite both the actual record and the parity record (a simpler approach sometimes was to read the whole raid5 stripe, including parity, change the updated record, recalculate the parity record from the complete stripe ... and then write both). In several cases, they didn't provide for independent power and an electronic copy of the data & parity records that were needed during the write process.
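a sketch of that small-write sequence (illustrative only; read_block/write_block are placeholders for whatever the controller actually does):

# sketch of the raid5 small-write sequence described above: read the old data
# and old parity, xor the old data out and the new data in, then rewrite both
# the data block and the parity block
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(read_block, write_block, data_disk, parity_disk, lba, new_data):
    old_data = read_block(data_disk, lba)
    old_parity = read_block(parity_disk, lba)
    # subtract the old record out of the parity, then add the new record in
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    # both writes have to land; a failure between them leaves the stripe
    # inconsistent unless the controller kept an independent copy of both
    write_block(data_disk, lba, new_data)
    write_block(parity_disk, lba, new_parity)
    return new_parity

# quick check with in-memory "disks"
disks = {0: {0: bytes([5] * 4)}, 1: {0: bytes([9] * 4)}, "p": {0: bytes([5 ^ 9] * 4)}}
rd = lambda d, lba: disks[d][lba]
wr = lambda d, lba, data: disks[d].__setitem__(lba, data)
raid5_small_write(rd, wr, 0, "p", 0, bytes([7] * 4))
assert disks["p"][0] == xor_blocks(disks[0][0], disks[1][0])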
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Tue, 19 Jul 2005 11:57:26 -0600Anne & Lynn Wheeler wrote:
there was a hack done in the mid-to-late 70s to address this performance
issue w/o having intermediate electronic storage at the drive. it was
originally done for database logs on CKD dasd
https://www.garlic.com/~lynn/submain.html#dasd
basically the database log was being written a full-track of data at a time (and commit couldn't complete until the related log records had actually landed on the disk).
the scenario is that CKD dasd allows quite a bit of freedom in formatting the records on the track. the standard procedure is to sequentially increment the "ID" portion of the records ... and then when reading (or updating) ... use "search id equal" to locate the specific record to be read or written. The log hack was to format a track with something like 1k byte records ... and sequentially increment the ID field.
However, when going to write the log ... use something like "search id high" to begin writing ... and have one channel I/O program that consecutively wrote as many 1k byte records as had been formatted for the track. The "search id high" would be successful for whatever record was the first to rotate under the head ... and then it would consecutively write a full track of records from that position (w/o having to rotate around to a specific track location to start writing a full track of data).
On log recovery ... the records had to have some minimal sequence number embedded in the record itself ... since on a full-track read of a track ... you wouldn't otherwise know the starting write sequence of the records.
This approach basically allowed the equivalent of local drive full-track storage for rotational latency compensation ... w/o actually requiring any local memory on the drive.
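a sketch of the recovery-side ordering (the record format and the wrap-around behavior here are assumptions for illustration, not the actual implementation):

# sketch of the recovery-side ordering described above: a full-track read
# returns the 1k log records in track-slot order, but the write began at
# whatever slot happened to rotate under the head first, so each record
# carries an embedded sequence number (the record format here is invented)
def recover_log_track(records):
    written = [r for r in records if r.get("seq") is not None]
    return sorted(written, key=lambda r: r["seq"])   # re-establish write order

track = [
    {"slot": 0, "seq": 7, "data": "..."},   # write wrapped around the track
    {"slot": 1, "seq": 8, "data": "..."},
    {"slot": 2, "seq": None},               # stale / not written this pass
    {"slot": 3, "seq": 5, "data": "..."},   # first record written this pass
    {"slot": 4, "seq": 6, "data": "..."},
]
print([r["slot"] for r in recover_log_track(track)])   # -> [3, 4, 0, 1]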
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Wed, 20 Jul 2005 09:26:50 -0600Morten Reistad writes:
we were trying to get it converged with FCS ... so that we could get signal interoperability with serial copper and FCS ... however it eventually went with its own independent standard as SSA
minor SSA reference
https://www.garlic.com/~lynn/95.html#13
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Massive i/o Newsgroups: alt.folklore.computers Date: Wed, 20 Jul 2005 09:58:08 -0600jmfbahciv@aol.com wrote:
the original cms filesystem (circa '66, cp67/cms then morphing to vm370/cms) had a sequence where it wrote all the changed filesystem structure data to new disk record locations ... and then carefully replaced the MFD (master file directory) record. This worked for all cases except the situation where a power failure occurred while the MFD record was actually being written ... resulting in zeros being propagated thru the end of the record (and there would be no error indication). The nominal logic (modulo the partially zero-filled MFD record) was that the MFD pointed either to the old copies of all the file structure or to the changed/updated records written to new disk locations (rewriting the MFD was effectively a commit-like operation ... but was vulnerable to the power failure zero-fill problem).
the "EDF" cms filesystem, introduced in the mid-70s, created a pair of MFD records ... and recovery/startup would read both MFD records and determine which was the most current valid MFD. on filesystem updates ... it would write updated/changed records to new disk locations (just like the original cms filesystem) but alternate writing the MFD record. This caught various kinds of failure modes occurring during the updating of the MFD record.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: public key authentication Newsgroups: comp.security.ssh Date: Wed, 20 Jul 2005 12:15:49 -0600"Richard E. Silverman" writes:
... there is also the issue of shared-secrets being required to be unique for every unique security domain. in the past, when a person was involved in one or two different security domains ... they had only one or two shared-secrets to memorize. now, it is not uncommon for people to have scores of unique shared-secrets that they have to memorize. taking the person-centric view ... this also has resulted in reaching human factor limitations when humans now have to make some sort of record of their scores of shared-secrets (because most humans don't have the capacity to otherwise deal with the situation). The necessity for resorting to some sort of recording infrastructure for tracking the scores of shared-secrets opens up additional threats and vulnerabilities.
the other compromise ... is some number of infrastructures, finding that humans have a difficult time keeping track of unique, infrastructure shared-secrets ... are resorting to common information that is known by the individual, like date-of-birth, mother's maiden name, social security number, etc. this violates fundamental security guidelines (but recognizes that there are common human limitations) ... and has led to a lot of the current identity theft situations.
the institution-centric model doesn't allow for the human limitations of having to deal with scores of different security domains, each requiring its own unique shared-secret for authentication. the person-centric model recognizes that requiring individuals to deal with scores of unique security domains, each with its own unique shared-secret, isn't a practical paradigm for people.
the basic asymmetric key technology allows for one key (of a key-pair) encoding information and the other key decoding the information (as opposed to symmetric key technology where the same key is used for both encoding and decoding).
there is a business process called public key ... where one key (of a key pair) is identified as public and freely distributed. The other key (of the key pair) is identified as private, kept confidential and never divulged.
there is a business process called digital signature ... where the hash of a message (or document) is calculated and then encoded with the private key producing a "digital signature". the recipient then recalculates the hash of the message, decodes the digital signature (with the correct public key, producing the original hash), and compares the two hash values. If the two hash values are the same, then the recipient can assume
1) the message/text hasn't been modified since being digitally signed
2) something you have authentication ... aka the originator has access to and use of the corresponding private key.
From 3-factor authentication:
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
... shared-secrets can be considered a form of something you know
authentication and digital signatures a form of something you have
authentication.
The integrity of digital signature authentication can be improved by using certified hardware tokens where the key pair is generated on the token and the private key is protected from ever leaving the token. Where a private key is certified as only existing in a specific kind of hardware token ... then digital signature verification can somewhat equate the access and use of the private key with the access and use of the hardware token (of known integrity characteristics).
There has been some amount of stuff in the press about the benefits of two-factor authentication (possibly requiring both something you have and something you know). The issue really comes down to whether the two factors are resistant to common vulnerabilities and threats. An example is PIN-debit payment cards, which are considered two-factor authentication ... i.e. the something you have magstripe card and the something you know PIN (shared-secret).
The issue is that some of the ATM-overlay exploits can record both the magstripe and the PIN ... resulting in a common vulnerability that allows production of a counterfeit card and fraudulent transactions. The supposed scenario for two-factor authentication is that the different factors have different vulnerabilities (don't have common threats and vulnerabilities). Supposedly, the original PIN concept was that if the card was lost or stolen (a something you have vulnerability), then the crook wouldn't also have access to the required PIN. Note, however, because of human memory limitations, it is estimated that 30 percent of PIN-debit cards have the PIN written on them ... also creating a common threat/vulnerability.
public key hardware tokens can also require a PIN to operate. However, there can be significant operational and human factors differences between public key hardware tokens with PINs and a PIN-debit magstripe cards:
1) the PIN is transferred to the hardware token for correct operation, in the sense that you own the hardware token ... and the PIN is never required by the rest of the infrastructure, it becomes a "secret" rather than a shared-secret
2) in a person-centric environment, it would be possible to register the same publickey/hardware token with multiple different infrastructures (in part because, the public key can only be used to verify, it can't be used to impersonate). this could drastically minimize the number of unique hardware tokens an individual would have to deal with (and correspondingly the number of PINs needed for each unique token), possibly to one or two.
An institution-centric environment would issue a unique hardware token to every individual and require that the individual choose a unique (secret) PIN to activate each token ... leading to a large number of PINs to be remembered and increasing the probability that people would write the PIN on the token. A very small number of tokens would mean that there would be a very small number of PINs to remember (less taxing on human memory limitations) as well as increase the frequency with which the limited number of token/PINs were used (reinforcing the human memory for the specific PIN).
Substituting such a hardware token in a PIN-debit environment ... would still leave the PIN vulnerable to ATM-overlays that skim static data; but the hardware token wouldn't be subject to counterfeiting ... since the private key is never actually exposed. In this case, the two factors are vulnerable to different threats .... so a single common attack wouldn't leave the individual exposed to fraudulent transactions. The PIN makes the hardware token resistant to common lost/stolen vulnerabilities and the hardware token makes the PIN resistant to common skimming/recording vulnerabilities.
Encrypted software-file private key implementations have some number of additional vulnerabilities vis-a-vis a hardware token private key implementation ... aka the compromise of your personal computer. Normally the software-file private key implementation requires a PIN/password to decrypt the software file ... making the private key available. A compromised personal computer can expose both the PIN entry (key logging) and the encrypted private key file (allowing a remote attacker to obtain the encrypted file and use the pin/password to decrypt it).
Note that the original pk-init draft for kerberos specified the simple
registration of public key in lieu of passwords and digital signature
authentication ... in much the same way that common SSH operates ...
https://www.garlic.com/~lynn/subpubkey.html#certless
and w/o requiring the expense and complexity of deploying a PKI
certificate-based operation
https://www.garlic.com/~lynn/subpubkey.html#kerberos
similar kind of implementations have been done for radius ... where
public key is registered in lieu of password ... and straight-forward
digital signature verification performed ... again w/o the complexity
and expense of deploying a PKI certificate-based operation
https://www.garlic.com/~lynn/subpubkey.html#radius
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Massive i/o Newsgroups: alt.folklore.computers Date: Wed, 20 Jul 2005 12:45:31 -0600... note that the journal file system for aixv3 (started in the late 80s) took a database logging approach to filesystem metadata.
in the cms filesystem case ... the records containing changed filesystem metadata were written to new disk record locations ... and then the MFD was rewritten as a kind of commit to indicate the new metadata state ... as opposed to the old metadata state. the EDF filesystem in the mid-70s updated the original cms filesystem (from the mid-60s) to have two MFD records ... to take care of the case where a power failure and a write error of the MFD record happened concurrently.
https://www.garlic.com/~lynn/2005m.html#36 Massive i/o
the aixv3 filesystem took a standard unix filesystem ... where metadata information had very lazy write operations and even with fsync there were still numerous kinds of failure modes ... and captured all metadata changes as they occurred and wrote them to log records ... with periodic specific commit operations. restart after an outage was very fast (compared to other unix filesystems of the period) because it could just replay the log records to bring the filesystem metadata into a consistent state.
there was still an issue with incomplete writes on the disks of the period. the disks tended to have 512-byte records and were defined to either perform a whole write or not do the write at all (even in the face of power failure). The problem was that a lot of the filesystems were 4kbyte page oriented ... and a consistent "record" write meant a full 4k ... involving eight 512-byte records ... on devices that only guaranteed the consistency of a single 512-byte record write (so there could be an inconsistency where some of the eight 512-byte records of a 4k "page" were written and some were not).
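one generic way such torn 4k writes get caught is a checksum stored with each page, so a page that only got some of its eight 512-byte records to disk fails validation on read/replay ... a sketch (a general technique shown for illustration, not a claim about what the aixv3 jfs actually did):

# sketch: store a checksum over the whole 4k page so a torn write (only some
# of the eight 512-byte records made it to disk) is detected on read/replay
import hashlib, os

SECTOR, PAGE = 512, 4096

def seal_page(payload):                      # payload is the page minus 16 checksum bytes
    return payload + hashlib.sha256(payload).digest()[:16]

def page_ok(page):
    payload, digest = page[:-16], page[-16:]
    return hashlib.sha256(payload).digest()[:16] == digest

page = seal_page(os.urandom(PAGE - 16))
torn = page[:4 * SECTOR] + b"\x00" * (4 * SECTOR)   # only half the records landed
print(page_ok(page), page_ok(torn))                 # -> True False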
a lot of ha/cmp was predicated on having fast restart
https://www.garlic.com/~lynn/subtopic.html#hacmp
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Massive i/o Newsgroups: alt.folklore.computers Date: Wed, 20 Jul 2005 13:11:27 -0600Anne & Lynn Wheeler wrote:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: capacity of largest drive Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 20 Jul 2005 15:02:42 -0600gilmap writes:
the business case issue at the time was that there wasn't any demonstration that additional disk sales would happen (i.e. the prevailing judgement was that it might just convert some ckd sales to fba sales).
the argument: that over the years ... the costs of not having shipped fba support would be far greater than the cost of shipping fba support in the early 80s, and that fba support would likely eventually have to be shipped anyway ... didn't appear to carry any weight.
random past dasd posts:
https://www.garlic.com/~lynn/submain.html#dasd
random past posts related to working with bldg 14 (dasd
engineering) and bldg. 15 (dasd product test lab)
https://www.garlic.com/~lynn/subtopic.html#disk
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Thu, 21 Jul 2005 07:29:15 -0600Joe Morris wrote:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: public key authentication Newsgroups: comp.security.ssh Date: Thu, 21 Jul 2005 09:34:42 -0600Darren Tucker writes:
I would contend that the operational deployment and use of private keys comes closer to approximating the something you have paradigm than the something you know paradigm ... even tho they are purely electronic bits. Even w/o a real hardware token container ... using only a software container ... the mechanics of the software container tend to approximate the operational characteristics of a physical object ...
Similarly the magstripe can be analyzed and copied ... generating counterfeit cards & magstripes. However, the account number from the magstripe can be extracted and used in fraudulent MOTO transactions. I know of no private key operational deployments providing for a mechanism for human communication of the private key ... all the deployments make use of the private key in some sort of container ... even if it is only the software simulation of a physical object.
The big difference between public key deployments and lots of the account fraud that has been in the press ... is that in the case of credit payment cards ... communicating the account number is sufficient to initiate fraudulent MOTO transactions ... and the account number is also required in lots of other merchant and processing business processes. The account number is required to be readily available for lots of business processes (other than originating transactions) ... and at the same time it is the basis for authenticating a transaction origination.
From the security PAIN acronym
P ... privacy (or sometimes CAIN & confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation
the multitude of business processes (other than transaction
origination) that require access to the account number ... result in a strong
security integrity requirement ... but a relatively weak
privacy requirement (since the account number needs to be
readily available).
the conflict comes when knowledge of the account number is also essentially the minimum necessary authentication mechanism for originating a transaction ... which then leads to a strong security privacy requirement.
the result is somewhat diametrically opposing requirements ... requiring both weak and strong confidentiality, simultaneously.
By contrast, in a public key infrastructure, a digital signature may be carried as the authentication mechanism for a transaction and a public key is onfile someplace for validating the digital signature. Neither the digital signature nor the public key can be used for originating a new transaction.
In the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
when mapped to iso8583 (credit, debit, stored value, etc) transaction,
a digital signature is used for authentication.
https://www.garlic.com/~lynn/8583flow.htm
furthermore, the standard defines that account numbers used in x9.59 transactions are not valid for use in non-authenticated (non-x9.59) transactions.
a public key onfile with a financial institution can be used for validating a x9.59 digital signature ... but can't be used for originating a transaction ... resulting in a security integrity requirement but not a security confidentiality requirement.
the transaction itself carries both the account number and the digital signature of the transaction. the digital signature is used in the authentication process ... but can't be used for originating a new transaction ... and therefore there is no security *confidentiality* requirement for the digital signature.
The account number is needed for x9.59 transaction for a multitude of business processes, but no longer can be used, by itself, for origination of a fraudulent transaction ... eliminating any security confidentiality requirement for the account number.
Another analogy is that in many of the existing deployments, the account number serves the function of both userid and password, leading to the conflicting requirements of making the userid generally available for userid related business process ... and at the same time, the same value is used as a form of authentication password, resulting in the confidentiality and privacy requirements.
X9.59 forces a clear separation between the account number as a "userid" function and the digital signature as a "password" (or authentication) function.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Code density and performance? Newsgroups: comp.arch Date: Thu, 21 Jul 2005 14:05:29 -0600glen herrmannsfeldt writes:
and invented compare&swap (chosen because CAS are his initials).
there was a push to get the instruction into 370 ... but there was push back from the 370 architecture owners in pok ... saying that a new instruction purely for SMP use couldn't be justified. To justify getting CAS into 370 ... we had to come up with justifications that were applicable to non-SMP environments ... thus were born the examples of serializing/coordinating multi-threaded applications (whether they ran in non-SMP or SMP environments ... the issue was having an atomic storage-update instruction where one thread might interrupt another thread operating in the same memory space). thus were born the CAS programming notes. As part of incorporating CAS into 370 ... full-word (CS) and double-word (CDS) versions were defined.
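for illustration, the usage pattern those programming notes describe is a fetch / compute / conditional-store retry loop ... a sketch in python (python has no user-level compare-and-swap, so the atomic primitive is simulated here with a lock; the point is the retry loop, not the primitive):

# model of the compare-and-swap usage pattern described above; the
# "atomicity" is simulated with a lock since python has no user-level CAS
import threading

class Word:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """atomically: if the value still equals expected, store new and succeed"""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False            # another thread got in first; caller retries

counter = Word(0)

def add_one():
    for _ in range(10000):
        while True:                 # classic CAS retry loop
            old = counter.load()
            if counter.compare_and_swap(old, old + 1):
                break

threads = [threading.Thread(target=add_one) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())               # -> 40000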
In later versions of the principles of operation, the CAS programming notes were moved to appendix (nominally POP programming notes were part of the detailed instruction operation notes).
current descriptions:
compare and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.28?SHELF=DZ9ZBK03&DT=20040504121320
compare double and swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.29?SHELF=DZ9ZBK03&DT=20040504121320
appendix a.6: multiprogramming (multi-thread by any other name) and
multiprocessing examples (old programming notes):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/A.6?SHELF=DZ9ZBK03&DT=20040504121320
the (newer) perform locked operation (PLO) instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/7.5.107?SHELF=DZ9ZBK03&DT=20040504121320
... misc. other compare&swap and smp postings:
https://www.garlic.com/~lynn/subtopic.html#smp
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: TLAs - was summit else entirely Newsgroups: bit.listserv.ibm-main Date: Thu, 21 Jul 2005 14:22:09 -0600Chase, John wrote:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: lynn@garlic.com Newsgroups: microsoft.public.exchange.admin Subject: Re: Digital ID Date: Thu, 21 Jul 2005 20:05:52 -0700Emyeu wrote:
there is a business process called public key ... where one of the key-pair is identified as "public" and made widely available. The other of the key-pair is identified as "private" and kept confidential and never divulged.
there is a business process called digital signature ... where the originator calculates the hash of a message, encodes it with the private key producing a digital signature, and transmits both the message and the digital signature. the recipient recalculates the hash on the message, decodes the digital signature with the public key (producing the original hash) and compares the two hashes. If they are equal, then the recipient can assume that
1) the contents haven't changed since the digital signature was originally produced
2) something you have authentication, i.e. the originator has access to and use of the corresponding private key.
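a minimal sketch of that flow using the pyca/cryptography package (the RSA-PSS and SHA-256 choices here are my assumptions, not something implied above):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

originator_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
originator_public = originator_private.public_key()   # the half made widely available

message = b"wire 100 to account 42"

# originator: hash the message and encode the hash with the private key
signature = originator_private.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# recipient: recompute the hash and compare it against the decoded signature;
# verify() raises InvalidSignature if the contents changed or the key doesn't match
originator_public.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)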
PGP-type implementations involve the senders and receivers having trusted repositories of public keys. The senders can use their private key to digitally sign messages and transmit them to the recipients. The recipients can authenticate the sender by verifying the digital signature with the corresponding public key. Senders can also use the on-file public key for a recipient to encode the message being sent (so only the addressed recipient can decrypt the message with the specific private key). Some actual message encryption implementations may be a two-step process where a random symmetric key is generated, the message is encrypted with the random symmetric key, and the random symmetric key is then encoded with the recipient's public key. The recipient then uses their private key to decode the random symmetric key, and then uses the decoded random symmetric key to decrypt the actual message.
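and a minimal sketch of that two-step message encryption (again pyca/cryptography; the AES-GCM and OAEP choices are assumptions on my part):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()    # the sender's on-file copy

message = b"meet at the usual place"

# sender: generate a random symmetric key, encrypt the message with it,
# then encode the random symmetric key with the recipient's public key
symmetric_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(symmetric_key).encrypt(nonce, message, None)
wrapped_key = recipient_public.encrypt(
    symmetric_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# recipient: decode the random symmetric key with the private key,
# then decrypt the actual message with the decoded symmetric key
recovered_key = recipient_private.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == message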
In the SSL implementation used by browsers for encrypted
communication, digital certificates are introduced.
https://www.garlic.com/~lynn/subpubkey.html#sslcert
These are special messages containing the public key of the server and their domain name, digitally signed by a certification authority. Users have their trusted repositories of public keys loaded with the public keys of some number of certification authorities (in the case of many browsers, these certification authority public keys have been preloaded as part of the browser build). A server has registered their public key and domain name with some certification authority and gotten back a digital certificate (signed by the certification authority).
The client browser contacts the server with some data. The server digitally signs the data and returns the digital signature and their domain name digital certificate. The client browser finds the correct public key in its local repository and verifies the certification authority's digital signature. If the certification authority's digital signature verifies, then the client assumes that the contents of the digital certificate are correct. The client browser then checks the domain name in the digital certificate against the domain name used in the URL to contact the server (if they are the same, then the client assumes that the server they think they are talking to might actually be the server they are talking to). The client browser can now use the server's public key (also contained in the digital certificate) to validate the returned server's digital signature. If that validates, then the client has high confidence that the server they think they are talking to is probably the server they are talking to. The browser now generates a random symmetric key and encodes it with the server's public key (taken from the digital certificate) and sends it to the server. When the server decodes the random symmetric key with their private key ... then both the client and server have the same random symmetric key, and all further communication between the two is encrypted using that random symmetric key.
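a minimal sketch of the certificate-related checks in that description (pyca/cryptography again; the function and variable names are mine, the certificate is assumed RSA-signed, and real browsers check considerably more ... certificate chains, validity dates, revocation, etc):

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives.asymmetric import padding

def check_server_certificate(cert_pem, ca_public_key, url_hostname):
    cert = x509.load_pem_x509_certificate(cert_pem)

    # 1) verify the certification authority's digital signature on the certificate,
    #    using the CA public key already in the browser's trusted repository
    ca_public_key.verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )

    # 2) compare the certified domain name against the domain name from the URL
    certified_name = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
    if certified_name != url_hostname:
        raise ValueError("certificate domain name doesn't match the URL")

    # 3) hand back the server's public key ... used to validate the server's
    #    digital signature and to encode the random symmetric session key
    return cert.public_key()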
So the basic starting point is that the sender has to already have the recipient's public key in some locally accessible place. In the normal email scenario this tends to be a long-term repository where the sender may collect, beforehand, the public keys of recipients that they wish to securely communicate with. There are also a number of public-key server implementations ... where senders can obtain recipient public keys in real time.
In the SSL dynamic session scenario ... the server's public key is provided as part of the two-way session initiation (although the client browser still needs a trusted repository of public keys ... in this case at least for some number of certification authorities ... so that the dynamically obtained digital certificate containing the server's public key can be verified).
In a number of implementations ... the term "digital IDs" is used interchangeably with digital certificates ... and digital certificates can represent one source for obtaining a recipient's public key.
However, when encrypting messages ... the sender isn't encoding with either their own public or private keys ... they are encoding with the recipient's public key. If the sender doesn't already have the recipient's public key on file ... it is possible that the recipient has registered their public key with some public key repository server ... and the sender can obtain the recipient's public key, in real time, from such a server.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Thu, 21 Jul 2005 23:01:14 -0600Jack Peacock wrote:
similar asynchronous operation was defined for SCSI commands over FCS
(fiber channel standard) and SCI (scalable coherent interface).
minor past reference
https://www.garlic.com/~lynn/95.html#13
somewhere long ago and far away, I did a number of A/B 9333/9334 comparisons with large numbers of concurrent operations to multiple drives.
we were advocating that 9333 serial copper become interoperable with
FCS ... instead it was turned into SSA. SSA reference
http://www.matilda.com/hacmp/ssa_basics.html
and lots of HA/CMP references
https://www.garlic.com/~lynn/subtopic.html#hacmp
a paper on SSA performance:
http://citeseer.ist.psu.edu/690893.html
a couple SSA redbook references:
http://www.redbooks.ibm.com/abstracts/sg245083.html
http://www.redbooks.ibm.com/redbooks.nsf/0/8b81ae28433f2cba852566630060b942?OpenDocument
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Code density and performance? Newsgroups: comp.arch Date: Thu, 21 Jul 2005 23:53:42 -0600glen herrmannsfeldt wrote:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Code density and performance? Newsgroups: comp.arch Date: Sun, 24 Jul 2005 10:53:55 -0600forbin@dev.nul (Colonel Forbin) writes:
the shared segment paradigm also made trade-offs on the basis of hardware simplicity.
there was this advanced technology conference in pok ... and we were presenting 16-way (370) smp and the 801 group was presenting 801/risc. somebody from the 801 group started criticizing the 16-way smp presentation because they had looked at the vm370 code and said that the vm370 code they had looked at contained no smp support ... and therefore couldn't be used to support a 16-way smp implementation. the counter claim was (effectively) that the basic support was going to be something like 6000 lines of code ... i had already done the VAMPS 5-way design in '75 based on modifications moving most of the affected code into the microcode of the hardware
https://www.garlic.com/~lynn/submain.html#bounce
when VAMPS got killed, i had done a design that moved the affected
code back from microcode into low-level software.
https://www.garlic.com/~lynn/subtopic.html#smp
so when the 801 group started presenting, i pointed out that the virtual memory segment architecture had been moved from tables into 16 registers ... which severely limited the number of concurrent shared objects that could be defined in a virtual memory space at any moment.
the counter-argument was that 801/risc represented a software/hardware trade-off where significant hardware simplicity was compensated for by a significant increase in software complexity. there were no protection domains in 801/risc, and an application program was going to be able to change segment register values as easily as it could change general purpose address registers. program correctness would be enforced by the compiler (generating non-violating code) and by the link/loader, which would only enable correctly compiled code for execution. basically this came down to the cp.r operating system and the pl.8 compiler.
so my counter-argument was that while they effectively argued that it was going to be impossible for us to make a 6000 line code change to the vm370 kernel ... it appeared they were going to have to write a heck of a lot more than 6000 lines of code to achieve the stated cp.r and pl.8 objectives.
later in the early 80s ... ROMP was going to be used with cp.r and pl.8 by the office products division for a displaywriter follow-on product. when that product was killed ... it was decided to retarget the ROMP displaywriter follow-on to the unix workstation market. something called the virtual resource manager was defined (somewhat to retain the pl.8 skills) and an at&t unix port was contracted out to the company that had done the pc/ix port for the ibm/pc (with the unix being ported to an abstract virtual machine interface supplied by the virtual resource manager). The issue here was that hardware protection domains had to be re-introduced for the unix operating system paradigm. this was eventually announced as the pc/rt with aix.
however, the virtual memory segment register architecture wasn't reworked ... which then required kernel calls to change segment register values for different virtual memory objects (inline application code could no longer change segment register values as easily as it could change general purpose address register pointers) ... and the limited number of segment registers then again became an issue regarding the number of virtual memory objects that could be specified concurrently.
To somewhat compensate for this limitation, there was later work on virtual memory shared library objects ... where aggregations of virtual memory objects could be defined and a virtual memory segment register could point at an aggregated object (containing a large number of individual virtual memory objects).
the original 801 architecture ... because of the ease with which inline application programs could change segment register values ... was frequently described as having a much larger virtual address space than 32 bits. the concept was that while the original 370 was only 24-bit addressing ... there were only 15 general purpose registers actually usable as address pointers ... and any one address pointer could only address up to 4k of memory ... so actual addressability by a program at any one moment (w/o changing a general purpose register pointer) was 15*4k = 60k. However, an application program could change a pointer to be any value within 24-bit addressing.
so while romp was 32-bit virtual addressing ... with 16 virtual memory segment registers, each capable of addressing 28 bits (28 bits * 16 also equals 32 bits) ... the original 801/romp design allowed an inline application program to change segment register values to point to any one of 4096 (12-bit) segments. the result was that 801/romp was described as having 28-bit * 4096 addressing ... or 40-bit addressing.
The later RIOS/power chip upped the number of segment register values to 16meg (24-bit) ... and even tho hardware protection domains (in support of the unix programming paradigm) no longer allowed inline application code to change virtual memory segment register values ... you still saw some descriptions of rios/power having 28-bit * 16meg or 52-bit virtual addressing.
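the arithmetic behind those descriptions (just a quick sanity check in python, not anything out of the hardware manuals):

# 370: 15 usable general purpose registers, each covering a 4k displacement
print(15 * 4 * 1024)              # 61440 bytes, i.e. the "60k" figure

# romp: 16 segment registers x 28-bit (256mbyte) segments = a 32-bit address space
print(16 * 2**28 == 2**32)        # True

# but 4096 possible segment values x 28-bit segments -> "40-bit addressing"
print(4096 * 2**28 == 2**40)      # True

# rios/power: 16meg (2**24) possible segment values x 28-bit segments -> "52-bit"
print(2**24 * 2**28 == 2**52)     # True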
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM's mini computers--lack thereof Newsgroups: alt.folklore.computers Date: Mon, 25 Jul 2005 12:11:30 -0600prep writes:
some amount of that was eventually rolled into the high speed
data transport project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
later there was tcp/ip router support specified by rfc1044:
https://www.garlic.com/~lynn/subnetwork.html#1044
network systems has since been acquired by stk .... however they still retain the domain name: network.com.
related posts on serial interfaces
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005m.html#34 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Cluster computing drawbacks Newsgroups: comp.arch Date: Mon, 25 Jul 2005 12:16:46 -0600"Emidio S." writes:
clusters can be used for scalable computing (thruput) and/or
redundant computing (availability). we did some of both when we were
doing the ha/cmp project
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Cluster computing drawbacks Newsgroups: comp.arch Date: Tue, 26 Jul 2005 09:39:41 -0600Ketil Malde wrote:
the interactive stuff frequently tended to punt on such issues ... pushing the conditions out externally, assuming that there was a responsible human on the other side of the display who could decide how to handle the condition.
while there is a huge amount of stuff that involves people interacting with a keyboard and display (say an internet browser) .... the operational characteristics of servers are much more in line with the batch systems paradigm ... aka the person at the browser doesn't also tend to have direct control over the server. so the claim is that while lots of people have direct contact with systems that evolved from the interactive paradigm ... most of the world still turns on environments where it isn't expected that a responsible human is present and directly in control.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Cluster computing drawbacks Newsgroups: comp.arch Date: Tue, 26 Jul 2005 14:06:12 -0600glen herrmannsfeldt writes:
several years ago, we were talking to one of the major financial
transaction systems ... which commented that they attributed their one
hundred percent availability over the previous several years primarily
to
• ims hot-standby
• automated operator
when my wife did her stint in pok (batch mainframe land) responsible
for loosely-coupled (i.e. cluster by any other name) architecture ...
she came up with Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata
the first organization that really used it was the ims group for ims hot-standby.
batch systems tended to have some residual direct human involvement ... in the early days for tending printers, card readers, tape drives, etc. (i.e. people called operators).
during the early 70s, i started developing automated processes for performing many of the tasks that the operating system nominally required of operators.
starting in the early 80s ... you started to see the shift from hardware being the primary source of failures to software and people being the primary source of failures. automated operator went a long way to reducing many of the human mistake related failures.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Barcode Email Newsgroups: sci.crypt,alt.privacy,comp.security.misc Date: Tue, 26 Jul 2005 13:48:41 -0600Jean-Luc Cooke writes:
typically encryption is nominally considered a confidentiality or privacy tool.
the x9a10 financial standards working group was given the task of
preserving the integrity of the financial infrastructure for all
retail payments ... which resulted in X9.59 standard ... applicable to
credit, debit, stored-value, internet, point-of-sale, atm, etc.
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
basically, x9.59 defines a light-weight message (a couple of additional fields over what might be found in a standard iso 8583 message used for either credit or debit) that is digitally signed. the digital signature provides for integrity and authentication w/o actually requiring encryption.
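a minimal sketch of the shape of such a transaction (the field names and the ECDSA/SHA-256 choices here are purely illustrative ... they are not taken from the x9.59 standard text):

import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

consumer_private = ec.generate_private_key(ec.SECP256R1())
consumer_public = consumer_private.public_key()     # on file with the consumer's bank

transaction = json.dumps({
    "account": "4111111111111111",    # travels in the clear ... the "userid" function
    "amount": "49.95",
    "currency": "USD",
    "merchant": "example-merchant",
}, sort_keys=True).encode()

# the digital signature supplies integrity and authentication w/o encrypting anything
signature = consumer_private.sign(transaction, ec.ECDSA(hashes.SHA256()))

# the consumer's financial institution verifies against the on-file public key;
# raises InvalidSignature if the contents were altered or the signer is wrong
consumer_public.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))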
one of the big issues in non-x9.59 transactions has been that the
transaction can be eavesdropped and the information is sufficient to
originate a fraudulent transaction (giving rise to the enormous
confidentiality requirement as a countermeasure to eavesdropping).
https://www.garlic.com/~lynn/subintegrity.html#harvest
part of the x9.59 standard is a business rule that information from x9.59 transactions isn't valid in non-x9.59 &/or non-authenticated transactions. the business rule is a sufficient countermeasure to the eavesdropping vulnerability that results in fraudulent transactions ... i.e. the prime motivation for encryption has been reducing the eavesdropping vulnerabilities that can lead to fraudulent transactions, which x9.59 addresses with
1) digital signatures for integrity and authentication
2) a business rule that eliminates eavesdropping use of the information for fraudulent transactions.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Barcode Email Newsgroups: sci.crypt,alt.privacy,comp.security.misc Date: Tue, 26 Jul 2005 21:27:33 -0600"Luc The Perverse" writes:
pins/passwords can be single-factor (something you know) authentication ... but they are also used as a countermeasure for the lost/stolen vulnerability involving something you have authentication (aka the infrastructure around a private key tends to approximate something you have ... involving some sort of software or hardware container for the private key).
some of the more recent tv advertisements have either biometrics or pin/password (protecting the whole machine) as countermeasure to laptop lost/stolen vulnerability.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 54 Processors? Newsgroups: bit.listserv.ibm-main Date: Wed, 27 Jul 2005 00:18:03 -0600edgould@ibm-main.lst (Ed Gould) writes:
we actually had a 16-way 370/158 design on the drawing boards (with
some cache consistency sleight of hand) that never shipped ... minor
posting reference:
https://www.garlic.com/~lynn/2005m.html#48 Code density and performance?
3081 was supposed to be a native two-processor machine ... and originally there was never going to be a single processor version of the 3081. eventually a single processor 3083 was produced (in large part because TPF didn't have smp software support and a lot of TPF installations were saturating their machines ... some TPF installations had used vm370 on 3081 with a pair of virtual machines ... each running a TPF guest). the 3083 processor was rated at something like 1.15 times the hardware thruput of one 3081 processor (because they could eliminate the slow-down for cross-cache chatter).
a 4-way 3084 was much worse ... because each cache had to listen for chatter from three other processors ... rather than just one other processor.
this was the time-frame when the vm370 and mvs kernels went thru restructuring to align kernel dynamic and static data on cache-line boundaries and in multiples of cache-line allocations (minimizing a lot of cross-cache invalidation thrashing). supposedly this restructuring got something over a five percent increase in total system thruput.
later machines went to things like using a cache cycle time that was much faster than the rest of the processor (for handling all the cross-cache chatter) and/or using more complex memory consistency operations ... to relax the cross-cache protocol chatter bottleneck.
around 1990, SCI (scalable coherent interface) defined a
memory consistency model that supported 64 memory "ports".
http://www.scizzl.com/
Convex produced the Exemplar using 64 two-processor boards where the two processors on the same board shared the same L2 cache ... and the common L2 cache then interfaced to the SCI memory access port. This provided for a shared-memory 128-processor (HP RISC) configuration.
in the same time-frame, both DG and Sequent produced four-processor boards (using intel processors) with shared L2 cache ... with 64 boards in a SCI memory system ... supporting a shared-memory 256-processor (intel) configuration. Sequent was subsequently bought by IBM.
part of SCI was a dual-simplex fiber optic asynchronous interface ... rather than a single, shared synchronous bus .... SCI defined bus operation with essentially asynchronous (almost message-like) operations being performed (somewhat as latency and thruput compensation compared to a single, shared synchronous bus).
SCI had a definition for asynchronous memory bus operation. SCI also had a definition for I/O bus operation ... doing things like SCSI operations asynchronously.
IBM 9333 from hursley had done something similar with serial copper ... effectively encapsulating scsi synchronous bus operations into asynchronous message operations. Fiber channel standard (FCS, started in the late 80s) also defined something similar for I/O protocols.
we had wanted 9333 to evolve into an FCS-compatible infrastructure
https://www.garlic.com/~lynn/95.html#13
but the 9333 stuff instead evolved into SSA.
ibm mainframe eventually adopted a form of FCS as FICON.
SCI, FCS, and 9333 ... were all looking at pairs of dual-simplex, unidirectional serial transmission using asynchronous message flows partially as latency compensation (not requiring end-to-end synchronous operation).
a few recent postings mentioning 9333/ssa:
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
a few recent postings mentioning SCI
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/