List of Archived Posts

2001 Newsgroup Postings (02/20 - 03/21)

Z/90, S/390, 370/ESA (slightly off topic)
Z/90, S/390, 370/ESA (slightly off topic)
Z/90, S/390, 370/ESA (slightly off topic)
Z/90, S/390, 370/ESA (slightly off topic)
what makes a cpu fast
what makes a cpu fast
OS/360 (was LINUS for S/390)
LINUS for S/390
Server authentication
Server authentication
Memory management - Page replacement
Memory management - Page replacement
What does tempest stand for.
LINUS for S/390
Something wrong with "re-inventing the wheel".?
OS/360 (was LINUS for S/390)
database (or b-tree) page sizes
database (or b-tree) page sizes
On RC4 in C
What is "IBM-MAIN"
Something wrong with "re-inventing the wheel".?
What is "IBM-MAIN"
OS for IBM 370
Use of ICM
Use of ICM
Use of ICM
The Foolish Dozen or so in This News Group
Massive windows waisting time (was Re: StarOffice for free)
The Foolish Dozen or so in This News Group
mini-DTR tapes? (was Re: The longest thread ever ...)
PKI and Non-repudiation practicalities
database (or b-tree) page sizes
How Commercial-Off-The-Shelf Systems make society vulnerable
database (or b-tree) page sizes
PKI and Non-repudiation practicalities
How Commercial-Off-The-Shelf Systems make society vulnerable
How Commercial-Off-The-Shelf Systems make society vulnerable
database (or b-tree) page sizes
How Commercial-Off-The-Shelf Systems make society vulnerable
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
database (or b-tree) page sizes
database (or b-tree) page sizes
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
Varian (was Re: UNIVAC - Help ??)
PKI and Non-repudiation practicalities
How Many Mainframes Are Out There
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
Risk management vs security policy
SSL weaknesses
SSL weaknesses
confused about active scripting and security
Key Recovery System/Product
KI-10 vs. IBM at Rutgers
What ever happened to WAIS?
IBM Glossary
Wheeler and Wheeler
Wheeler and Wheeler
Wheeler and Wheeler
PKI and Non-repudiation practicalities
PKI and Non-repudiation practicalities
database (or b-tree) page sizes
CNN reports...
Unix hard links
CNN reports...
Unix hard links
Q: ANSI X9.68 certificate format standard
Unix hard links
Unix hard links
ARP timeout?
CNN reports...
database (or b-tree) page sizes
Unix hard links
A future supercomputer
"Bootstrap"
Unix hard links
database (or b-tree) page sizes

Z/90, S/390, 370/ESA (slightly off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Tue, 20 Feb 2001 23:26:48 GMT
richgr@panix.com (Rich Greenberg) writes:
Complete with occasional CP abends on the service consoles.

but dumprx was available on those machines ....

random refs:
https://www.garlic.com/~lynn/94.html#11

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 13:15:14 GMT
Randy Hudson writes:
In Tanenbaum's _Structured Computer Organization_, I believe he cites a 30-1 speedup for microcoded FORTRAN programs, over simply compiling them, on a 370/158. But that probably refers to writing the application program directly in microcode, rather than just microcode-assisted FORTRAN.

-- Randy Hudson <ime@netcom.com>


the 158 machines were vertical microcoded machines ... i.e. the 360/370 sequences of instructions tended to have the same characteristics as the microcode. 165/168 were horizontal microcode machines ... the programming nature was quite different.

also in the 158 time-frame ... there were people that were taking some extremely performance sensitive fortran programs running on 195s and hand-coding selected performance sensitive sequences directly in 370 assembler and getting 10:1 speedups (or better, i.e. comparing hand coded machine language vis-a-vis fortran compiler generated machine language). I believe that a lot of the work on assemblers better than assembler F came from people involved in high performance computing optimization (a lot of the stuff that went into assembler H and some of the other varieties).

there was some work showing that on 165/168 ... if the machine didn't have to check for i-stream code modifications, you might get a 2:1 speedup ... the i-stream fetched a double word of instructions and started some overlap of decode and execution. however, 360/370 allowed that the previous instruction could do a store and totally overlay the following instruction ... which introduced some stall in the amount of overlap that could be done (which goes away with a direct microcode implementation).

in the vertical microcoded machines things were frequently expressed in the number of native engine instructions needed to implement the 360/370 instruction set. because of the totally different nature of horizontal microcode, things were typically expressed in terms of machine cycles. I remember that one of the differences going from 165 to 168 was the reduction of avg. machine cycles per instruction from 2.1 to 1.6 (while the cycle time didn't change). The 3033 further reduced that to about 1.0 (and simple straight-forward translation of 370 instruction sequences to microcode showed little or no performance improvement).

sometimes it was possible to get 30:1 speedup with hand-generated machine code compared to fortran generated machine code.

in any case, on the 158, going from fortran to microcode involved a larger number of factors than simulating one machine language instruction set in the same or similar machine language instruction set.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 14:15:42 GMT
Randy Hudson writes:
Anne & Lynn Wheeler wrote:

370/145/148 were one of the most frequently microcoded engines. the "APL" microcode ran APL at 10 times faster than native(?) 370 (APL on 145 w/microcode assist ran about same thruput as APL on 168 w/o microcode assist).

In Tanenbaum's _Structured Computer Organization_, I believe he cites a 30-1 speedup for microcoded FORTRAN programs, over simply compiling them, on a 370/158. But that probably refers to writing the application program directly in microcode, rather than just microcode-assisted FORTRAN.


just to further confuse the issues ... the APL comparison wasn't comparing APL compiler generated 360 machine language vis-a-vis hand-generated horizontal microcode ... APL was an interpreter ... effectively when talking about running an APL program ... it is the execution of the APL interpreter against different kinds of data. It was pieces of the APL interpreter that were dropped into the 145 microcode. Going to Fortran, it is more comparable to talk about dropping pieces of the Fortran compiler into microcode and comparing the straight machine language compiler vis-a-vis the microcode version.

Going from running a non-microcoded version of the APL interpreter against a particular APL program, compared to directly implementing the function of an APL program in microcode, might show a 100:1 improvement (or better).

In the early 70s, the internal HONE system (all field, branch and many hdqtrs functions) was totally a CMS/APL and then APL/CMS environment with some amount of shared segments for both CMS and APL. The CP/67 version of HONE and the early VM/370 versions didn't provide for any graceful way of transitioning between the APL environment and a non-APL environment (because of the way that shared segments were implemented).

Some amount of the HONE delivered applications were APL simulation models. To a large extent these served the business functions that today are implemented in spreadsheets, allowing business people to pose what-if questions. Other APL applications were configurators ... which walked the salesman thru a sequence of Q&A for specifying a machine (starting with the 370/125 it was no longer possible for a salesman to order/specify a machine w/o the help of a HONE configurator; again a type of function that today would frequently be implemented in a spreadsheet).

HONE also delivered some number of very sophisticated performance modeling tools, including allowing an SE to take detailed customer performance data, feed it to the modeling tools, and be able to provide reasonable guestimates for the customer as to the effect a faster processor and/or more memory would have on the customer's workload thruput.

For some of these more sophisticated application models, it was possible to recode the APL implementation in Fortran, compile the Fortran and show that the compiled Fortran executed possibly 50 times faster than the interpreted APL version.

In any case, I had done an enhancement to CP/67 for generalized shared memory support late in the CP/67 cycle (release 1 of VM/370 was already available to customers). For HONE, I converted that to VM/370 r2plc15 base. The use of this at HONE was so that they could transparently slip back&forth between the generalized end-user environment (implemented in APL/CMS) and the growing number of performance sensitive applications that had been recoded in Fortran.

A small subset of this was incorporated into the product in release 3 of VM/370, called discontiguous shared segments. However, it didn't include the memory mapped file system and/or the ability to memory map arbitrary CMS executables/modules (including the ability for CMS modules to be generated with portions of the memory-mapped region specified as shared memory ... and for the CMS kernel to automatically invoke the shared memory specification as part of invoking a CMS executable).

Totally unrelated: I recently ran into somebody who had obtained the rights to a descendant of the HONE performance modeling tool and had run an APL->C translator over it and was doing a lot of performance analysis work (for instance, it had some interesting things to say about the performance costs of turning on parallel sysplex).

random refs:
https://www.garlic.com/~lynn/95.html#3
https://www.garlic.com/~lynn/97.html#4
https://www.garlic.com/~lynn/98.html#23
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/99.html#150
https://www.garlic.com/~lynn/2000e.html#0
https://www.garlic.com/~lynn/2000e.html#22
https://www.garlic.com/~lynn/2000f.html#30
https://www.garlic.com/~lynn/2000f.html#62
https://www.garlic.com/~lynn/2000g.html#27
https://www.garlic.com/~lynn/2001.html#0
https://www.garlic.com/~lynn/2001.html#26

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Z/90, S/390, 370/ESA (slightly off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Z/90, S/390, 370/ESA (slightly off topic)
Newsgroups: comp.lang.asm370
Date: Wed, 21 Feb 2001 14:52:36 GMT
Randy Hudson writes:
In Tanenbaum's _Structured Computer Organization_, I believe he cites a 30-1 speedup for microcoded FORTRAN programs, over simply compiling them, on a 370/158. But that probably refers to writing the application program directly in microcode, rather than just microcode-assisted FORTRAN.

random other 158 trivia .... the 158 engine was time-shared between the 370 function and the channel function ... going to the 3031 (where there were two 158 engines, one with the 370 function and no channel function, and a channel director 158 engine with the channel function and no 370 support) also showed improved 370 instruction thruput:

refs:
https://www.garlic.com/~lynn/2000d.html#0

a directly microcoded function on the 158 would have some more latitude to play games vis-a-vis cycle stealing of the channel function.

from much earlier in this same thread:
https://www.garlic.com/~lynn/2001b.html#69

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what makes a cpu fast

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what makes a cpu fast
Newsgroups: comp.arch
Date: Wed, 21 Feb 2001 15:19:20 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
And since each plant has a local copy of the mask, before the wafers are exposed the operator has to check if it is the current copy or if there is an updated copy available. This is done at IBM by an internal network called VNET.

and to be totally off topic ... VNET effectively included a gateway function as part of each node, essentially from the beginning ... which contributed to the internal network being larger than the whole arpa/internet from just about the beginning until sometime around 1985 (the change in part because of the proliferation of individual workstations & PCs as internet network nodes).

the internet didn't really get gateway function until the great cut-over on 1/1/83 from NCP/IMPs to IP (at which time arpa/internet was approx. 250 nodes and the internal network was nearly 1000 nodes).

random refs:
https://www.garlic.com/~lynn/2001b.html#81
https://www.garlic.com/~lynn/2000e.html#18
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

what makes a cpu fast

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: what makes a cpu fast
Newsgroups: comp.arch
Date: Wed, 21 Feb 2001 16:02:54 GMT
cecchi@signa.rchland.ibm.com (Del Cecchi) writes:
And since each plant has a local copy of the mask, before the wafers are exposed the operator has to check if it is the current copy or if there is an updated copy available. This is done at IBM by an internal network called VNET.

& i guess to totally round it off ... there was an internal application called TOOLSRUN (was also the precursor of listserv) ... but as deployed it could be configured with flavors of mailing list exploder (akin to current listserv/majordomo with all the maint. & administrative function/feature), one-way broadcast (akin to existing usenet), and master/slave distributed database (master & slaves interact directly with push, pull & consistency protocols, updates were done by the master and distributed out to all the slaves).

a local user could interact in such a way that a consistency check was made with regard to a locally shadowed copy.

TOOLSRUN ran on top of VNET ... similar to the way that much of usenet is currently configured running on top of tcp/ip (as opposed to the uucp usenet flavors).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS/360 (was LINUS for S/390)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 (was LINUS for S/390)
Newsgroups: bit.listserv.ibm-main
Date: Wed, 21 Feb 2001 16:18:26 GMT
listmanager@SNIPIX.FREESERVE.CO.UK (Roger Bowler) writes:
IMO this is not correct. S/360 did not have EC mode, this was introduced in S/370. OS/360 runs in BC mode only, even on a S/370. Later releases of OS/360 make limited use of control registers (for example CR2 is needed to support more than 6 channels), but it never runs in EC mode.

slight caveat

360/67 had EC mode, control registers, virtual memory, a channel director supporting more than 7 channels (i.e. in the 360/67 duplex, the channel director provided the capability for all processors to access all channels), and 24&32bit addressing. In fact, the channel director capability disappeared in 370 and didn't really show up again until 308x.

370 prior to the virtual memory & ec-mode announce had a couple bits defined in cr0 not associated with ec-mode, multiprocessing, or virtual memory. I don't remember how many of the other non-ec &/or possibly later announced CR functions were available at initial 370 first customer ship.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

LINUS for S/390

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LINUS for S/390
Newsgroups: bit.listserv.ibm-main
Date: Wed, 21 Feb 2001 20:34:34 GMT
astaller@TOTALSYSTEM.COM (Al Staller) writes:
Are you sure about the "that the original S/370 announcement didn't include DAT, " ?

I was in a presentation for the 370 (155?) and I am almost positive they talked about the DAT box. But you may be correct as when I walked out of there, I distinctly remember thinking to myself its going to run like a dog as the power to do address translations will take away from the processor. We are talking at least 30-32 years ago and my memory is going the way of non-ECC .

Ed


very definitely the original 370 announcement didn't include DAT. There was in fact, some big pentagon-paper-like flap with some DAT information leaking to the press ... which (in part) resulted in internal copying machines being retrofitted with serial numbers on the glass ... so that all copies made with that machine would include the copy machine identification. Before the DAT announcement, there were also customers asking what was the meaning of the "XLAT" light that was on the front panel of 145s.

also, there was a problem retrofitting the full DAT function to the 165 (as defined in the base 370 document for all machines). 165 engineers claimed that the engineering to retrofit some of the DAT features to the 165 would delay the availability by six months and significantly increase the complexity of the hardware field upgrade ... as a result they got dropped from the 370 announcement for all models (and in the case of the 145, which had already done the full architecture implementation, the eventual implementation shipped to customers had to be stripped back to only those things that the 165 would support).

Among the features that got dropped were all the selective invalidate instructions, IPTE, ISTE, and ISTO .... leaving just PTLB.

random refs:
https://www.garlic.com/~lynn/2000e.html#15
https://www.garlic.com/~lynn/2000f.html#55

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Server authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Server authentication
Newsgroups: comp.security.misc
Date: Wed, 21 Feb 2001 20:18:14 GMT
Patrick.Knight@currentechnologies.com (Patrick Knight) writes:
That's what I thought. How can authentication be done without a third party (i.e. certificates, CA)? In other words, how can a client know a server is a server and not someone spoofing the network? Or for that matter, without certificates how can a client authenticate itself to a server without certificates?

actually, certificates are a second order binding between a server and a client. certificates typically provide a binding between a public key and some external identification for use in offline environments.

two parties with no prior relationship may frequently not have a context for the external identification and so the certificate means nothing.

in general, until i set-up an account with some random ISP for my name .... the ISP is unlikely to let me connect and provide me with service just because I have sent them a certificate. they will typically want some context where the account incurs charges and a process for them to be paid for the incurred charges.

looking at the internet, possibly 99.9999999999% of the instances of client/server authentication events that go on in the internet world occur today using some sort of radius; i.e. a consumer calls their ISP (server), connects, performs some hand-shaking authentication procedure and is granted access to the internet.

This occurs tens of millions and possibly hundreds of millions of times each day.

This process involving 99.999999999% of the existing internet client/server authentication events typically involves some form of userid/account and password.

The radius infrastructure involving 99.99999999% of the existing internet client/server authentication events could relatively trivially support public/private key and still not require a certificate.

A typical certificate-based authentication process involves some sort of secure message, which has a digital signature appended, and then a certificate appended to the combined secure message and digital signature. The recipient then uses the public key bound in the certificate to verify the digital signature. This implies that the message was presumably transmitted by the owner of the certificate and that the message hasn't been modified in transit. The recipient then typically uses a combination of the contents of the secure message, the certificate and local administrative context to perform some action in conjunction with the authentication (typically authentication isn't being performed in the total absence of any reason .... i.e. the typical business process scenario is using authentication in conjunction with some sort of authorization, typically requiring a business context).

Now, a trivial way of adding an upgrade option for 99.99999999% of the existing internet client/server authentication events is to record the client's public key in place of a password or other authentication mechanism.

In this scenario, the client forms a secure message, digitally signs it, and transmits the secure message plus digital signature (w/o needing an associated certificate; note this can represent a real savings on traffic bloat where the size of a digital certificate is an order of magnitude bigger or more than the combined secure message plus digital signature). The server recipient obtains the transaction context pointer from the secure message, retrieves the context which also contains the public key, uses the public key to verify the digital signature ... and if it verifies, proceeds to perform any associated service/authorization function.
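
as a concrete illustration of that server-side check (purely my own sketch, not radius or any particular product; it assumes OpenSSL's EVP interface, a hypothetical account-lookup routine, and uses SHA-256 just as a modern concrete choice):

#include <stddef.h>
#include <openssl/evp.h>

/* hypothetical lookup: return the public key registered for the account
   (recorded in place of a password), or NULL if there is no such account */
EVP_PKEY *lookup_registered_public_key(const char *account_id);

/* verify a message plus detached digital signature using only the key
   registered in the account record; returns 1 if it verifies, 0 otherwise.
   no certificate is involved anywhere in the path. */
int verify_with_registered_key(const char *account_id,
                               const unsigned char *msg, size_t msg_len,
                               const unsigned char *sig, size_t sig_len)
{
    EVP_PKEY *pkey = lookup_registered_public_key(account_id);
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = 0;

    if (pkey != NULL && ctx != NULL &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1 &&
        EVP_DigestVerifyUpdate(ctx, msg, msg_len) == 1 &&
        EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1)
        ok = 1;

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok;
}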

No certificate is needed. Certificates are really targeted at processes which can be carried out w/o regard to any additional context and/or needing a prior business relationship (i.e. say a server that exists solely for the purpose of performing authentication functions ... but otherwise does absolutely nothing).

A physical world analogy to the offline use of certificates is the old business paper check that had signing limits (i.e. in theory the value bound into the paper check/certificate had some meaning when the check was being used at some offline site). As the world simultaneously moved to both electronic as well as online, there was never any real business requirement for an offline, electronic check (i.e. another word for certificate) ... but went to things like online, electronic, business purchase cards that not only supported business rules regarding individual purchases, but also things like aggregate purchase rules (no longer found the scam where somebody signed 200 $5,000 checks in order to make a $1million purchase).

The issue with regard to being online ... is that if the current business context has to be accessed (aka the account record), it is more efficient to use an authentication process with the public key & other timely information bound in the account record than to rely on stale data possibly bound in a redundant and superfluous certificate.

some of the attached also discusses the scenario of a "secure" DNS providing a public key for server authentication; vis-a-vis relying on a server certificate. In part, most SSL server certificates supposedly fulfill a role of adding security because of possible security problems with the domain name infrastructure. However, a CA, issuing an SSL domain name certificate, has to rely on the authoritative agency for domain names to validate the information that will be bound into a certificate (i.e. the domain name infrastructure, which supposedly has the security problems justifying the certificate in the first place). A security improvement for the domain name system involves registering a public key at the same time the domain name is registered. However, 1) improving the domain name system so that the CA can better trust it as the authoritative agency ... also eliminates one of the main reasons for needing a server SSL certificate, and 2) if the security improvement with registered public keys means the domain name system can be better trusted to provide timely information, then it can also be trusted to provide the registered public key (making a server SSL certificate, with possibly stale information from the past, redundant and superfluous).

random refs:
https://www.garlic.com/~lynn/aadsm2.htm#inetpki
https://www.garlic.com/~lynn/aadsm2.htm#account
https://www.garlic.com/~lynn/aepay4.htm#comcert6
https://www.garlic.com/~lynn/aepay4.htm#comcert7
https://www.garlic.com/~lynn/aepay4.htm#comcert8
https://www.garlic.com/~lynn/aepay4.htm#rfc2807b
https://www.garlic.com/~lynn/aepay4.htm#rfc2807c

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Server authentication

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Server authentication
Newsgroups: comp.security.misc
Date: Wed, 21 Feb 2001 23:58:01 GMT
aka

... for the 99.9999999% of the current "client" authentication events that go on today in the internet, it is a relatively trivial upgrade to support public key authentication w/o requiring certificates (i.e. add digital signature authentication to radius and a public key registration option in lieu of passwords).

... for the 99.9999999% of the current "server" authentication events (i.e. SSL domain name certificates) that go on today in the internet ... a justification has been that they are needed because of domain name system integrity problems; however, since the domain name infrastructure is the authoritative agency that a CA has to check with in order to validate the domain name for issuing a certificate .... it is in the interest of CAs to improve the integrity of the domain name infrastructure (so CAs can trust the domain name information that CAs need to validate when issuing a certificate). The current RFCs for improving the domain name infrastructure include registering a public key with the registration of the domain name. However, fixing the integrity of the domain name system eliminates the justification for the existing SSL domain name certificates ... as well as providing a trusted method for the domain name infrastructure to distribute trusted public keys w/o requiring certificates (not only are the SSL domain name certificates no longer justified, but they also become redundant and superfluous).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Memory management - Page replacement

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory management - Page replacement
Newsgroups: comp.os.misc,alt.folklore.computers
Date: 22 Feb 2001 11:24:34 -0700
Brian Inglis writes:
On Wed, 21 Feb 2001 09:53:36 -0500, "John D." wrote:
>What is the advantage if using two-handed clock replacement policy
>than one-handed? What is the effect if second hand is closer first
>hand and in what situations operation system can do this?

Ask the guy who invented it! [xposted to a.f.c]

Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
--
Brian_Inglis@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
use address above to reply


with one reference bit and a one-handed clock .... the amount of "history" information represents the time that it takes for the "hand" to make a complete cycle of all pages; i.e. the one-handed clock checks to see if the reference bit is off: if it is off, it selects the page; if it is on, it just turns the bit off. the bit then has the time it takes the hand to completely cycle thru all pages for it to be turned back on and preserve the page in memory.

The whole purpose of the algorithm is hopefully to be able to discriminate between pages that have been recently referenced and those that haven't been recently referenced (the set of all pages with the reference bit off and the set of all pages with the reference bit on). The algorithm has some interesting dynamic adaptive characteristics because as the demand for pages goes up, the interval that pages can survive in memory decreases.

However, for various configurations ... the period of time it takes the hand to completely cycle through all pages may be excessively large ... and the algorithm degenerates to the point where either all pages have the bit on or all pages have the bit off; i.e. the idea of the algorithm is to be able to discriminate between pages that have been used more recently versus pages that have been used less recently. If the time for the one hand to cycle thru all pages is long enuf, there may not be any discrimination capability (all pages look the same).

The two-handed clock allows more latitude in the period between the time a reference bit is turned off and the time it is examined ... while still preserving the dynamic adaptive characteristic of the single-handed clock ... rather than the distance between resetting a bit and examining it being exactly the total number of available pages .... it can be some fraction of the total number of available pages. This gives some degree more control over the interval between the time the reference bit is turned off and the time it is examined ..... which in theory allows better discrimination in being able to "rank" pages that have been more recently referenced from those that have been less recently referenced (i.e. for some environments, the single-handed clock interval, with full rotation of all pages, is too long an interval to provide any meaningful discrimination/categorization of pages into the two groups ... they all appear to be in one group or the other).

However, both one-handed & two-handed clock will degenerate to FIFO under various circumstances. There is a variation that I also did on two-handed clock that has the interesting property of degenerating to random ... rather than FIFO; i.e. many of the conditions where the algorithm degenerates to FIFO are where pages are being used in a cyclic manner and you therefore guarantee that every page will be faulted on every cycle. Random works much better in this scenario, since it tends to reduce to only a random subset of pages being faulted on every cycle.
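
a rough sketch in C of the two-handed clock idea described above (my own illustration only, not the actual CP/67 or VM/370 code, and ignoring locked/changed pages): the clear hand turns reference bits off and the examine hand, a settable fraction of the ring behind it, selects the first page whose bit is still off; setting that fraction to the whole ring collapses it back to the one-handed clock.

#include <stddef.h>

struct frame {
    int referenced;            /* hardware sets this when the page is touched */
};

struct clock2 {
    struct frame *frames;      /* all pageable frames, treated as a ring */
    size_t n;                  /* number of frames */
    size_t clear_hand;         /* turns reference bits off */
    size_t examine_hand;       /* tests bits and picks the victim */
};

/* the clear hand leads the examine hand by n/divisor frames (divisor >= 1);
   divisor == 1 makes the hands coincide, i.e. the one-handed clock */
void clock2_init(struct clock2 *c, struct frame *frames, size_t n, size_t divisor)
{
    c->frames = frames;
    c->n = n;
    c->examine_hand = 0;
    c->clear_hand = (n / divisor) % n;
}

/* select a replacement victim: the examine hand takes the first frame whose
   reference bit is still off, i.e. a frame not re-referenced in the interval
   since the clear hand turned its bit off */
size_t clock2_select_victim(struct clock2 *c)
{
    for (;;) {
        size_t candidate = c->examine_hand;
        int was_referenced = c->frames[candidate].referenced;

        c->frames[c->clear_hand].referenced = 0;   /* reset bit ahead of examine */

        c->examine_hand = (c->examine_hand + 1) % c->n;
        c->clear_hand = (c->clear_hand + 1) % c->n;

        if (!was_referenced)
            return candidate;    /* survives only if re-referenced since reset */
    }
}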

this was all 30+ years ago.

there was stanford PhD by carr on clock in the early 80s.

random refs:
https://www.garlic.com/~lynn/93.html#0
https://www.garlic.com/~lynn/93.html#4
https://www.garlic.com/~lynn/94.html#01
https://www.garlic.com/~lynn/94.html#1
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/94.html#4
https://www.garlic.com/~lynn/94.html#6
https://www.garlic.com/~lynn/94.html#10
https://www.garlic.com/~lynn/94.html#14
https://www.garlic.com/~lynn/94.html#49
https://www.garlic.com/~lynn/94.html#54
https://www.garlic.com/~lynn/95.html#12
https://www.garlic.com/~lynn/96.html#0a
https://www.garlic.com/~lynn/96.html#10
https://www.garlic.com/~lynn/97.html#0
https://www.garlic.com/~lynn/97.html#28
https://www.garlic.com/~lynn/98.html#2
https://www.garlic.com/~lynn/98.html#17
https://www.garlic.com/~lynn/99.html#18
https://www.garlic.com/~lynn/2000.html#86
https://www.garlic.com/~lynn/2000f.html#9
https://www.garlic.com/~lynn/2000f.html#10
https://www.garlic.com/~lynn/2000f.html#32
https://www.garlic.com/~lynn/2000f.html#33
https://www.garlic.com/~lynn/2000f.html#34
https://www.garlic.com/~lynn/2000f.html#36

note ... if this was comp.arch ... the question would be assumed to be a homework assignment .... and answering homework assignments for other people is strictly frowned upon.

L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966

L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v25n5

R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981

P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80

J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Memory management - Page replacement

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory management - Page replacement
Newsgroups: comp.os.misc,alt.folklore.computers
Date: 22 Feb 2001 14:01:40 -0700
Anne & Lynn Wheeler writes:
note ... if this was comp.arch ... the question would be assumed to be a homework assignment .... and answering homework assignments for other people is strictly frowned upon.

& while i was an undergraduate when i did most of the work ... i never got any homework credit for it.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

What does tempest stand for.

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What does tempest stand for.
Newsgroups: sci.crypt
Date: Thu, 22 Feb 2001 22:19:21 GMT
admin@hotmail.com (Mark Healey) writes:
I know that "tempest" is an acronym (really T.E.M.P.E.S.T.) but I forgot what it stands for. Surprisingly this isn't in any of the online sources I could find.

Could someone please tell me.

Mark Healey marknews(the 'at' thing)healeyonline.com


available in a number of online sources, including rfc2828

also see security glossary at
https://www.garlic.com/~lynn/secure.htm

TEMPEST
(O) A nickname for specifications and standards for limiting the strength of electromagnetic emanations from electrical and electronic equipment and thus reducing vulnerability to eavesdropping. This term originated in the U.S. Department of Defense. [Army, Kuhn, Russ] (D) ISDs SHOULD NOT use this term as a synonym for 'electromagnetic emanations security'. [RFC2828] The study and control of spurious electronic signals emitted by electrical equipment, such as computer equipment. [AJP][NCSC/TG004][TCSEC] (see also approval/accreditation, preferred products list) (includes compromising emanations, emanations, emanations security, emission security)


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

LINUS for S/390

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LINUS for S/390
Newsgroups: bit.listserv.ibm-main
Date: Fri, 23 Feb 2001 16:02:48 GMT
Ted.Spendlove@THIOKOL.COM (Ted Spendlove) writes:
Bob, You definition of "Worked fine" is a bit different than mine. The "hot rod" 155 that you converted at the CSC center where I worked had a mean-time-to-fail of 15 MINUTES for the first 5 days. The average MTTF for the first 18 months was only 3 days. That early solid state memory was really awfull!!

Ted Spendlove ex-CSC


as an aside, the front panel of the 155 was like a door with a latch ... you could unlatch & swing it open. One of the switches behind the front panel disabled the cache on the 155. Benchmarking the 155 with the cache disabled showed about the same thruput as a 145.

as part of calibrating the resource manager (especially its dynamic adaptive characteristics), I built an automated benchmark procedure & did a test suite of 2000 benchmarks that took about 3 months elapsed time to run. Part of the test suite was varying available memory, processor speed, i/o capacity, and workload. Part of this was some of the genesis for the automated operator; some of the tests involved rebuilding the kernel and then restarting the system and starting the next workload.

Some of the workload was nominally expected ... but there were also stress-test workloads and extreme outliers ... like a paging intensive workload; the most extreme version had a mean service time of 1 second for a page fault (saturate the page i/o subsystem at around 300-350 page i/os per sec. ... and build up a queue of 300 page requests ... new page requests would get put in the queue of 300+ and take a second to service).

the benchmarks were also used to cross-calibrate the predictor ... the APL analytical performance model available to the field on HONE (given a customer workload characterization, the predictor would say stuff about what effects a faster cpu, more memory, or more disks would have).

in any case, being able to run the cambridge 155 at both 155 speeds and 145 speeds aided in the calibration process.

as a measure of the ability to dynamically adapt, a kernel implementing an early version (along with a bunch of other changes) had been made available to a special large customer for the 145. about 10 years later the customer had a wide deployment on 3083s(?) and called to ask about SMP support (they were still running the same kernel/code, and for ten years it had been doing its job of dynamically adapting to a wide range of processor speeds, available memory, i/o configurations and workload).

random refs:
https://www.garlic.com/~lynn/94.html#2
https://www.garlic.com/~lynn/95.html#0
https://www.garlic.com/~lynn/95.html#1
https://www.garlic.com/~lynn/99.html#38
https://www.garlic.com/~lynn/99.html#149
https://www.garlic.com/~lynn/99.html#150

note the following from
https://www.garlic.com/~lynn/2001c.html#2

A small subset of this was incorporated into the product in release 3 of VM/370, called discontiguous shared segments. However, it didn't include the memory mapped file system and/or the ability to memory map arbitrary CMS executables/modules (including the ability for CMS modules to be generated with portions of the memory-mapped region specified as shared memory ... and for the CMS kernel to automatically invoke the shared memory specification as part of invoking a CMS executable).

also picked up at the same time was the automated operator stuff (developed for doing automated benchmarking) at the same time for inclusion into release 3.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Something wrong with "re-inventing the wheel".?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Something wrong with "re-inventing the wheel".?
Newsgroups: alt.folklore.computers,comp.os.cpm
Date: Sat, 24 Feb 2001 18:14:16 GMT
CBFalconer writes:
I considered the original PC keyboards misdesigned, because you couldn't conveniently put a line on the CRT bottom defining their use that matched the layout. Now such Fn key usage help is possible but rarely implemented.

And I have since become convinced that touch key usage is more important, but the keys have moved!


there was a lot of work done on hands (not) leaving the keyboard. in the '70s there was work on hand-sized egg/ball shaped chord-keyboards ... you could have one for each hand. fingers fit into depressions with multi-position keys at the finger tips and you could have a mouse-like tracking ball underneath. with practice, it was relatively easy to achieve 80 words/minute (simpler than the current standard qwerty key layout) ... and the hands never had to leave the "keyboard" to do positioning.

problem, of course was that the qwerty stuff was/is so prevalent.

one of the early standard keyboard pointers was two sliders at the space bar for the thumbs ... one controlled the X coordinate on the screen and the other the Y coordinate ... the problem was that it required coordinating both thumbs to position the pointer.

advances in technology allowed single small control at the junction of the G/H/B keys (found on some laptops).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS/360 (was LINUS for S/390)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS/360 (was LINUS for S/390)
Newsgroups: bit.listserv.ibm-main
Date: Sun, 25 Feb 2001 16:04:30 GMT
listmanager@SNIPIX.FREESERVE.CO.UK (Roger Bowler) writes:
Actually, OS/360 did have code to deal with control registers. Keep in mind that the original S/370 announcement didn't include DAT, and that two out of the three machines in the announcement didn't even have the hardware for it. S/360 supported EC mode, but didn't exploit it nearly as much as the AOS releases did.

as an aside, from the front panel of the system/360, model 67 reference data "blue card" (229-3174-0)

IBM System/360 Model 67 Reference Data

Floating-point Double Precision Instructions

Load Mixed                      LX      RX      R1,D2(X2,B2)    74
Add Mixed Normalized            AX      RX      R1,D2(X2,B2)    76
Add Double Normalized           ADDR    RR      R1,R2           26
Add Double Normalized           ADD     RX      R1,D2(X2,B2)    66
Subtract Mixed Normalized       SX      RX      R1,D2(X2,B2)    77
Subtract Double Normalized      SDDR    RR      R1,R2           27
Subtract Double Normalized      SDD     RX      R1,D2(X2,B2)    67
Multiply Double Normalized      MDDR    RR      R1,R2           25
Multiply Double Normalized      MDD     RX      R1,D2(X2,B2)    65
Store Rounded (Short)           STRE    RX      R1,D2(X2,B2)    71
Store Rounded (Long)            STRU    RX      R1,D2(X2,B2)    61

Model 67 Instruction Codes

Instruction                     Mnem    Type    Exception       Code

Load Multiple Control           LMC     RS      M, A, S, D P    B8
Store Multiple Control          STMC    RS      M, P, A, S      B0
Load Real Address               LRA     RX      M, A, S         B1
Branch and Store                BASR    RR                      0D
Branch and Store                BAS     RX                      4D
Search List (RPQ)               SLT     RS      P, A, S, Relo   A2

Notes:
A       Addressing Exception
D       Data Exception
M       Privileged Operation Exception
P       Protection Exception
S       Specification Exception.

.........................

also

control registers:


CR0     segment table register (for dynamic address translation)
CR1     unassigned
CR2     translate exception address register
CR3     unassigned
CR4     Extended Mask Registers for I/O Control Masks
CR5     unassigned
CR6     bits 0,1        Machine Check Mask Extensions for channel controller
        bits 2,3        reserved
        bits 4-7        unassigned
        bit 8           extended control mode
        bit 9           configuration control bit, defines when partitioning can take place
        bits 10-23      unassigned
        bits 24-31      external interruption masking
CR7     unassigned
CR8-14  partitioning sensing registers, see 2167 description for register layout
CR15    unassigned

.......................................................

The card also contained the sense byte definitions for

1052/2150, 2540/2821, 1403/1443, 1442/2501/2520, 2671/2822, 2400, 2311/2841, 2301/2820, 2250, 2260, 2280, and 2282

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 26 Feb 2001 16:31:48 GMT
"Bill Todd" writes:
And in that distant past that conclusion was likely correct. Disk transfer rates were measured in KB/sec: a few hundred KB/sec wasn't bad 20 years ago, which meant that 512 bytes took around 10 ms. to transfer, and anything much larger began to affect I/Os/sec noticeably (since disk bandwidths have improved much faster than seek times and rotational speed). Internal bus bandwidths also were far smaller than today's, so larger pages tended to saturate them. And virtual memory was less common, making large pages harder to cache without cache fragmentation (not to mention that the memory just to hold them was expensive as well, and at least on 16-bit machines a precious resource due to addressing limitations).

a large percentage of the big production commercial database operations had virtual memory 25 years ago and mbyte/sec transfer was common ... a lot of that was upgraded to 3mbyte/sec 20 years ago. the upgrade to 3mbyte/sec transfer started a lot of work on larger multi-block transfers ... i.e. the disks upgraded from 1mbyte/sec to 3mbyte/sec had significantly more capacity under an arm with little or no improvement in avg. arm access (i.e. it was possible to transfer 3 times as much data per avg. arm access).

long ago and far away comparison of about 35 years ago to 20 years ago:

https://www.garlic.com/~lynn/94.html#43

the above possibly represented 90+% of the business critical database processing during the period.

a virtual memory system from 20 years ago could have several thousand 4k pages and an application could have hundreds of 4k pages. paging such an application a single 4k page at a time started to make less and less sense. any benefit from single-page transfers for such an application was more than offset by the arm contention issues.

The above claimed that basically from the mid-60s to 1980, real storage and cpu increased by a factor of 50, disk transfer increased by a factor of 10 and avg arm access increased by a factor of 4. In effect, by 1980, the relative system thruput of arm performance had decreased by a factor of 10. With such a significant increase in memory and processing speed ... and some increase in disk transfer speed; it was possible to "waste" (trade-off) significant amounts of cpu and real storage ... and modest amounts of disk transfer attempting to maximize the effectiveness of each arm access.
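
a back-of-the-envelope check of that factor-of-10 claim, using just the ratios quoted above (my arithmetic only, written out as a trivial C sketch):

#include <stdio.h>

int main(void)
{
    /* growth factors quoted above for the mid-60s to 1980 period */
    double cpu_and_memory = 50.0;   /* real storage and cpu */
    double arm_access     = 4.0;    /* avg. arm access */

    /* disk arm thruput relative to the rest of the system */
    printf("relative arm thruput: %.2f of what it was (~1/12, call it a 10x decline)\n",
           arm_access / cpu_and_memory);
    return 0;
}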

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

database (or b-tree) page sizes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 26 Feb 2001 20:11:43 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
All this is for moving head disks. The first thing I remember after I first heard of virtual storage was the 2305 fixed-head disk. Not so big, but it had the 3MB/s transfer rate way before 1980, and no seek time. There is still latency, but rotational position sensing (RPS) was supposed to help with that. (I remember RPS from the 3330, I don't know if the 2305 had it.)

-- glen


in the previous reference "drums" were 2301 and then 2305 fixed head devices (one head per track and no moving arm).

2301 was effectively a 2303 fixed head device that read/wrote 4 tracks in parallel for approx. 1.5mbyte/sec transfer (mid 60s). for paging, a pair of tracks was formatted with 9 4k pages.

2305 was the follow-on. both 3330 and 2305 had rotational position sensing. This wasn't so much for drive thruput ... but for multiple-device thruput sharing the same i/o channel. prior to 3330/2305, the sequence was arm positioning and then select record. The arm positioning operation could be done without making the i/o channel busy (aka multiple operations could proceed concurrently); but the select record operation would "busy" the channel (i.e. from the time the arm was in position until the correct record rotated under the head). The RPS feature allowed the device to release the channel for other i/o operations until the correct record rotated under the head.

what the 2305 had in addition was "multiple exposures" ... basically instead of a single I/O request interface, the 2305 had eight logical i/o request interfaces ... with the 2305 controller servicing the requests in optimal hardware order (i.e. effectively approximating out-of-order execution).

There was also a "fast" 2305 with heads 180 degree offset cutting the avg. rotational delay in half (i.e. first head that the record got to would read it).

There was also a really special 2305 that may have only been available on special contract from intel ... it was all electronic simulating a 2305 with no arm access and no rotational delay. it had a special model number ... i think 1655(? or some such).

random refs:
https://www.garlic.com/~lynn/2001b.html#69
https://www.garlic.com/~lynn/2000c.html#34
https://www.garlic.com/~lynn/2000d.html#7

note RPS only helped with channel busy when selecting a specific record. lots of that generation of disks used the CKD feature to search for the correct record based on a pattern match (as opposed to selecting a specific record). when doing a multi-track search, the channel could be continuously busy for as many revolutions as there were tracks in a cylinder (or arm position). an example is described in the following:

https://www.garlic.com/~lynn/94.html#35
https://www.garlic.com/~lynn/93.html#29
https://www.garlic.com/~lynn/97.html#16
https://www.garlic.com/~lynn/99.html#75

some configuration comparisons

https://www.garlic.com/~lynn/95.html#8

note in the previous reference

https://www.garlic.com/~lynn/95.html#10

where i made the claim that the relative system performance of disk had declined by at least a factor of five over a period of 12-15 years (at the time i wrote the original), a lot of people in the disk division got upset and they assigned a performance group to investigate. After some study, they came to the conclusion that the actual decline in performance was worse when the effects of RPS-miss were taken into consideration in a large busy environment.

RPS allowed the device to give up use of the channel while the disk was rotating, but it expected to re-acquire the channel when the head was in position. However, if the channel was busy with some other operation, there was an RPS-miss and the disk had to perform a full revolution and try again. This was before full-track buffers, which didn't show up until the 3880-13 disk controller about 1983 or 1984 (on the road at the intel developer's conference and don't have access to some data ... if anybody else is here, i'm on a panel at 2pm tomorrow).
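
to give a feel for why RPS-miss hurt in a busy configuration, here's a toy calculation (entirely my own model, not from the disk division study): assume each reconnect attempt independently finds the channel busy with probability p, and every miss costs one full revolution before the next attempt (a 3600rpm 3330-class drive takes roughly 16.7ms per revolution).

#include <stdio.h>

/* expected extra rotational delay per i/o from RPS misses: a geometric
   number of misses, p/(1-p) on average, each costing one revolution */
static double expected_rps_miss_ms(double p_channel_busy, double rev_ms)
{
    return (p_channel_busy / (1.0 - p_channel_busy)) * rev_ms;
}

int main(void)
{
    double rev_ms = 60.0 * 1000.0 / 3600.0;   /* 3600 rpm: ~16.7 ms/revolution */

    for (int pct = 10; pct <= 50; pct += 10)
        printf("channel busy %2d%% -> ~%.1f ms extra per i/o\n",
               pct, expected_rps_miss_ms(pct / 100.0, rev_ms));
    return 0;
}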

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

On RC4 in C

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: On RC4 in C
Newsgroups: comp.lang.ada,sci.crypt,talk.politics.crypto
Date: Wed, 28 Feb 2001 05:07:57 GMT
William Hugh Murray writes:
(Notice that many already complain about the dominance and business practices of both of these companies. We seem to have a penchant for attributing to predatory practices the results of wise, not to say courageous, investment and novel business models.)

Having been through this once, I have more faith in the markets and competition than in litigation and regulation. Time will tell. In any case, this too is a discussion for another day and another forum. Perhaps over a glass of bubbly some time.


there was some case made regarding some execs that might have been accused of taking a one-time shot to quarterly earnings (and their bonuses) by converting a significant number of leased machines to purchased in the 70s.

there was also june 23rd, 1969 with the start of separate charging for people services (aka unbundling), which had previously been packaged for "free" as part of customer support.

following has comments supposedly attributed to testimony in the trial
https://www.garlic.com/~lynn/94.html#44

--
Anne & Lynn Wheeler | lynn@garlic.com, finger for pgp key
https://www.garlic.com/~lynn/

What is "IBM-MAIN"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is "IBM-MAIN"
Newsgroups: comp.lang.asm370
Date: Fri, 02 Mar 2001 16:42:10 GMT
Tom Anderson writes:
I have been away from this environment for a while and while reading some posts I see a reference to something called "IBM-MAIN". Is this a newsgroup, a mailing list, or something you subscribe to through IBM?

a bitnet listserv mailing list is also gatewayed to usenet. the mailing list is "ibm-main" ... the usenet gatewayed group is

bit.listserv.ibm-main

--
Anne & Lynn Wheeler | lynn@garlic.com, https://www.garlic.com/~lynn/

Something wrong with "re-inventing the wheel".?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Something wrong with "re-inventing the wheel".?
Newsgroups: alt.folklore.computers,comp.os.cpm
Date: Sat, 03 Mar 2001 00:11:27 GMT
some comments about re-inventing the wheel ... also implies re-inventing the same bugs:

https://www.garlic.com/~lynn/aadsm5.htm#asrn4

on a panel discussion at:
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

What is "IBM-MAIN"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is "IBM-MAIN"
Newsgroups: comp.lang.asm370
Date: Sat, 03 Mar 2001 16:08:10 GMT
Bill Becker writes:
I remember posting to the usenet version and getting the automated nasty-grams telling me "You are not authorized to post to ... " Finally, Darren emailed me personaly. Now all I receive is the monthly subscription probe.

i would get the "not authorized" message ... but they stopped ... but i also notice that a lot of non-usenet readers don't appear to see anything i post. it was possibly 18?? months ago when i stopped getting the "not authorized" messages.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS for IBM 370

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS for IBM 370
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 01:59:08 GMT
lwinson@bbs.cpcn.com (lwin) writes:
IIRC, it was 29 meg per disk pack, and five disks per 2314, so you should be able to fit it all on one unit. Of course, you won't have space for anything else to run the company unless you have multiple disk systems.

It just amazes how today we think 117 Meg is "nothing". In the days that OS/360 was written, it was quite a lot.

Our S/360-40 DOS operating system required all of 16K core memory.

I'm not sure what the low end S/360 minimum memory offering was--16K, or even 8K. Those numbers sound absurbly low today, but in those days the 1401, the most common machine, maxed out at 16K, and many users had smaller machines.


five spindles for a "short" 2314. A regular 2314 had 9 spindles ... only eight were addressable at any one time, with the ninth a "spare". The 9-spindle 2314s had 8 addressing plugs; it was possible to mount a new pack on the 9th drawer, bring it up to speed, and pop out the addressing plug from one of the other drives and pop it into the drive with the recently mounted pack.

random refs (including URL for picture of 9-drive 2314 string)
https://www.garlic.com/~lynn/2001b.html#7

random other refs:
https://www.garlic.com/~lynn/95.html#8
https://www.garlic.com/~lynn/95.html#10

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Use of ICM

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 05:24:01 GMT
"Robert L. Cochran Jr." writes:

Consider this code

L    8,ADCON1
LH   9,LENGTH1
L    2,ADCON2
LR   3,9
ICM  2,X'40'
MVCL 2,8

What is the ICM doing and why would an applications programmer want to
use it?

MVCL will use the high byte of 2 as the pad character if the destination length is longer than the source length.

x'40' is an ebcdic blank

however,

a) the icm is coded wrong

b) the mvcl has same length for source & destination (aka the lr 3,9).

icm should be something like

icm 2,b'1000',=x'40'

i.e. take the data at storage location =x'40' and load it into high byte of register 2.

i would hope that the assembler would generate an error for the above icm since there is no 3rd argument ... & with x'40' as the mask (2nd argument), it would likely be taken to mean the 2nd byte of register 2, not the first byte.

icm also sets the condition code based on whether the loaded value(s) were zero or non-zero i.e. can be used in place of load followed by ltr if just interested in zero/non-zero c.c.

icm 8,b'1111',adcon1
bz error
icm 2,b'1111',adcon2
bz error

would test to see if adcon1 & adcon2 are zero

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Use of ICM

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 05:56:49 GMT
"Robert L. Cochran Jr." writes:

Consider this code

L    8,ADCON1
LH   9,LENGTH1
L    2,ADCON2
LR   3,9
ICM  2,X'40'
MVCL 2,8

and besides, it is the wrong reg ... the pad byte goes in the high byte of the source length register (aka register 9) ... only 24 bits are used in the source and destination length registers.

one might also do an SR 9,9 and an ICM 9,b'0011',length1 with a branch.

so the sequence might be something like the following (if there were any chance of the source length being less than the destination length):

icm   8,b'1111',adcon1      load adcon1; cc set from inserted bytes
bz    error                 zero adcon is an error
icm   2,b'1111',adcon2      load adcon2; cc set from inserted bytes
bz    error
sr    9,9                   clear source length register
icm   9,b'0011',length1     halfword length into low-order two bytes; cc set
bz    error                 zero length is an error
lr    3,9                   destination length = source length
icm   9,b'1000',=c' '       blank pad byte into high byte of source length reg
mvcl  2,8                   move; pads with blank if dest longer than source

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Use of ICM

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Use of ICM
Newsgroups: comp.lang.asm370
Date: Sun, 04 Mar 2001 06:04:13 GMT
Anne & Lynn Wheeler writes:
icm 8,b'1111',adcon1
bz error
icm 2,b'1111',adcon2
bz error
sr 9,9
icm 9,b'0011',length1
bz error
lr 3,9
icm 9,b'1000',=c' '
mvcl 2,8


as an aside, questions like this posted to comp.arch are treated as if somebody is trying to get their homework done for them ...

random recent refs:
https://www.garlic.com/~lynn/2001.html#70
https://www.garlic.com/~lynn/2001b.html#38
https://www.garlic.com/~lynn/2001c.html#11
https://www.garlic.com/~lynn/2001c.html#10

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Foolish Dozen or so in This News Group

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Foolish Dozen or so in This News Group
Newsgroups: alt.hacker,sci.crypt
Date: Sun, 04 Mar 2001 07:06:25 GMT
"Scott Fluhrer" writes:
And, of course, you have the problem that Gwyn (?) cited that disk drives often write bits to parts of the drive without you asking it to. Being careful with your fopen() and your fclose() functions won't help you there either.

it may be possible to use some form of async i/o with raw read/write; note however that the disk electronics criteria for a write tend to be more demanding than for a read ... a write failure and system (or even controller) re-allocation of a block to a new physical disk sector doesn't necessarily mean that the original physical disk record is unreadable. disk sparing around a bad spot can take several seconds as the drive tries to find good spare spots.

a couple of URLs alta-vista found on (sector & record) sparing:
http://til.info.apple.com/techinfo.nsf/artnum/n24530
https://web.archive.org/web/20001121152400/http://til.info.apple.com/techinfo.nsf/artnum/n24530
http://www.eros-os.org/design-notes/DiskFormatting.html
http://docs.rinet.ru:8080/UNT4/ch28/ch28.htm
http://mlarchive.ima.com/winnt/1998/Nov/2142.html
https://web.archive.org/web/20010222201650/http://mlarchive.ima.com/winnt/1998/Nov/2142.html

misc. words from scsi standard
Any medium has the potential for defects which can cause user data to be lost. Therefore, each logical block may contain information which allows the detection of changes to the user data caused by defects in the medium or other phenomena, and may also allow the data to be reconstructed following the detection of such a change. On some devices, the initiator has some control through use of the mode parameters. Some devices may allow the initiator to examine and modify the additional information by using the READ LONG and WRITE LONG commands. Some media having a very low probability of defects may not require these structures.

Defects may also be detected and managed during execution of the FORMAT UNIT command. The FORMAT UNIT command defines four sources of defect information. These defects may be reassigned or avoided during the initialization process so that they do not appear in a logical block.

Defects may also be avoided after initialization. The initiator issues a REASSIGN BLOCKS command to request that the specified logical block address be reassigned to a different part of the medium. This operation can be repeated if a new defect appears at a later time. The total number of defects that may be handled in this manner can be specified in the mode parameters.

Defect management on direct-access devices is usually vendor specific. Devices not using removable medium typically optimize the defect management for capacity or performance or both. Devices that use removable medium typically do not support defect management (e.g., some floppy disk drives) or use defect management that is based on the ability to interchange the medium.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Massive windows waisting time (was Re: StarOffice for free)

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Massive windows waisting time (was Re: StarOffice for free)
Newsgroups: alt.folklore.computers
Date: Sun, 04 Mar 2001 17:34:53 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Thank you for letting me get that off my chest. Now, I'd like to see someone bring in a console with all the blinkenlichts from an IBM 360 model 40 through 85. (Never saw a higher model number.) Now THAT I'd bid on.

there was the 195 ... tales that if "lamp test" was performed, a fuse could blow ... i believe there has been a thread that dummy ones were used in movies (no computer, just a front panel hooked up to something that would blink the lights). san jose research had one.

feb 30th?

random refs on 360/195
https://people.computing.clemson.edu/~mark/acs_technical.html
http://csep1.phy.ornl.gov/ov/node12.html
https://web.archive.org/web/20020208203909/http://csep1.phy.ornl.gov/ov/node12.html
http://www.cs.ucl.ac.uk/research/darpa/internet-history.html
https://web.archive.org/web/20020227225902/http://www.cs.ucl.ac.uk/research/darpa/internet-history.html
http://www.crowl.org/Lawrence/history/computer_list
http://www.cmc.com/lars/engineer/comphist/model360.htm
https://web.archive.org/web/20010610042541/http://www.cmc.com/lars/engineer/comphist/model360.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Foolish Dozen or so in This News Group

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Foolish Dozen or so in This News Group
Newsgroups: alt.hacker,sci.crypt
Date: Sun, 04 Mar 2001 23:22:02 GMT
"Douglas A. Gwyn" writes:
Wrong. It is possible to ensure that the same disk blocks will be overwritten (unless a new bad block gets added to the remapping table during the process), but you have to open the file in a particular mode (r+w in stdio terminology); if the file is opened for writing in the default mode, it gets truncated to 0 length and all its previous data blocks are returned to the block-buffer pool. It is quite common for different disk blocks to get assigned as the new file grows.

however, overwriting 27 times is a little harder since a straightforward overwrite is likely to just be updating buffered records. frequently, multiple overwriting passes consist of different combinations of ones & zeros with the intent of exercising the magnetic flux in different ways on the disk surface.
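as a rough sketch of what that looks like in practice (mine, in C; it assumes POSIX fsync/fileno and is not from any of the systems discussed here), an in-place multi-pass overwrite that at least avoids the truncation problem and gets past the stdio and kernel buffering:

#include <stdio.h>
#include <string.h>
#include <unistd.h>    /* fsync() -- POSIX assumption */

/* minimal sketch: overwrite an existing file in place with several
   bit patterns.  "r+b" keeps the existing data blocks (no truncation),
   so -- barring bad-block remapping or a log-structured filesystem --
   the same disk blocks should be rewritten each pass. */
static int overwrite_passes(const char *path, long length)
{
    static const unsigned char patterns[] = { 0x00, 0xFF, 0xAA, 0x55 };
    unsigned char buf[4096];
    FILE *f = fopen(path, "r+b");        /* update mode, no truncate */
    if (f == NULL)
        return -1;

    for (size_t p = 0; p < sizeof patterns; p++) {
        memset(buf, patterns[p], sizeof buf);
        fseek(f, 0L, SEEK_SET);
        for (long done = 0; done < length; done += (long)sizeof buf) {
            long n = length - done;
            if (n > (long)sizeof buf)
                n = (long)sizeof buf;
            fwrite(buf, 1, (size_t)n, f);
        }
        fflush(f);                       /* push stdio buffers to the kernel */
        fsync(fileno(f));                /* ask the kernel to push to the drive */
    }
    fclose(f);
    return 0;
}

even with the fsync, the drive electronics and any sparing/remapping still get the last word on where the bits actually land, which is the point of the surrounding discussion.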

there is some chance (in unix) of issuing sync a couple of times and waiting a minute or so between passes.

of course even this changes if you are dealing with log-structured filesystem ... which attempts to always write to a new location.

random ref:
https://www.garlic.com/~lynn/2000.html#93

for many of these scenarios it just about boils down to some filesystem enhancement that meets some zeroization standard when blocks are released (and uses sequences of patterns that satisfy some criteria based on knowledge of the magnetic properties of the disk surface).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

mini-DTR tapes? (was Re: The longest thread ever ...)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: mini-DTR tapes?  (was Re: The longest thread ever ...)
Newsgroups: alt.folklore.computers
Date: Mon, 05 Mar 2001 02:07:17 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
What's a "mini-DTR" tape?

small 9track tape reel holding <<2400' of tape. all the ones i saw were cheap gray plastic ... 100' (to maybe 200' max) of tape. reel diameter was maybe 1" larger than the hole in the middle for the hub (i.e. say maybe 1/2" of gray plastic).

the next size up held maybe up to 600' of tape and typically actually came in a case (the gray plastic reels didn't have a case and the tape was kept from spilling off the reel with a rubber band or a very smooth strip of plastic). the 600' reels may have been 1/2 the diameter of a standard 2400' tape reel.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Mon, 05 Mar 2001 21:44:00 GMT
"Lyalc" writes:
No one buys security. They do pay for things that improve their business, short and long term. PKI merely replicates password based processing. Remember, the private key is controlled by a password. So a digital signature is merely an indication that someone once knew the password associated with a certificate/private key. And that the password was verified on a remote machine with no specific indication of that machine's trustworthiness. Nothing more.

Many billions of dollars in transactions are authenticated today by passwords (e.g. ATMs with PINs) with very low technology based risk exposure.

What else is required depends upon your goals. Secure password capture, secure private key storage, secure processing, high bandwidth and large storage are some criteria that spring to mind for day to day electronic signatures.

Lyal


i would have said that public/private key authentication does a somewhat better job than secret-key/pin authentication ... since it eliminates some of the issues with sharing a secret-key.

hardware tokens can be accessed with pin/secret-key and then do public/private key digital signatures for authentication ... but there is no "sharing" of the pin/secret-key.

taken to the extreme, biometrics is a form of secret key .... and while compromise of pin/secret-key in a shared-secret infrastructure involves issuing a new pin/secret-key ... current technology isn't quite up to issuing new fingers if a biometric shared-secret were compromised (aka there are operational differences between shared-secret paradigms and non-shared-secret paradigms ... even if both have similar pin/secret-key/biometrics mechanisms).

in an account-based infrastructure that already uses some form of authentication (pins, passwords, mothers-maiden-name, #SSN, etc), it is a relatively straight-forward technology upgrade to public/private key authentication ... w/o requiring the business process dislocation that many PKIs represent.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Mon, 05 Mar 2001 21:59:22 GMT
shocking@houston.rr.com (Stephen Hocking) writes:
There was a study done and written up in the relevant IEEE journal by a Jerry Breecher(?) who took an office app running on the Data General machines of that time and profiled it. He then tried various ways of repacking the shared libraries and measured the paging activity caused by a standard run. There were some rather fancy algorithms tried but it turned out that simply laying out the functions in order of CPU usage (as measured by the flat profile of gprof) provided most of the benefits (reduced paging activity).

cambridge did something similar in the early '70s ... among other things it was used to help develop a new garbage collector for APL in a virtual memory environment, and it was also used by some of the other software subsystem products (e.g. for optimizing IMS structure).

D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

and in the mid-70s a product called VS/Repack, based on that work, was released (given a large multi-csect/module program it would make a pass at automatic program restructuring).
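a toy illustration (mine, in C; nothing to do with the actual VS/Repack implementation) of the simplest layout variant mentioned in the quote above -- just ordering routines by their flat-profile CPU consumption before packing them into pages:

#include <stdio.h>
#include <stdlib.h>

/* stand-in flat-profile record: routine name plus measured cpu usage */
struct routine {
    char   name[32];
    double cpu;
};

/* sort descending by cpu so the hottest routines cluster together */
static int by_cpu_desc(const void *a, const void *b)
{
    const struct routine *ra = a, *rb = b;
    if (ra->cpu < rb->cpu) return  1;
    if (ra->cpu > rb->cpu) return -1;
    return 0;
}

int main(void)
{
    struct routine prof[] = {          /* made-up profile data */
        { "parse",  1.2 },
        { "format", 0.3 },
        { "lookup", 4.7 },
        { "update", 2.9 },
    };
    size_t n = sizeof prof / sizeof prof[0];

    qsort(prof, n, sizeof prof[0], by_cpu_desc);

    /* this ordering would drive how the routines are laid out in storage */
    for (size_t i = 0; i < n; i++)
        printf("%s\n", prof[i].name);
    return 0;
}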

recent ref to some paging stuff in a.f.c
https://www.garlic.com/~lynn/2001c.html#10

and ref from comp.arch:
https://www.garlic.com/~lynn/93.html#4

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How Commercial-Off-The-Shelf Systems make society vulnerable

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.moderated,sci.military.naval
To: sci-military-moderated@moderators.isc.org
Date: Mon, 05 Mar 2001 16:02:55 GMT
John Doe writes:
The unreflected use of Commercial-Off-The-Shelf Systems in essential infrastructures creates unecessary risks for the modern society. Instead of spending to defend against "rogue nations who want to commit suicide by lobbing missiles", we should better procure systems provably survivable against an information attack. I have written a document on that, which you can find at http://geocities.com/fgerlach.geo/COTS.html.

note that some of this isn't necessarily specific to all COTS ... in the case of buffer exploits, we've claimed that the problem has been the common buffer & string semantics in C programs ... i.e. vulnerability analysis in the late '80s specifically identified C buffer & string semantics as likely to increase buffer related problems by possibly two orders of magnitude (aka non-C languages & infrastructures with explicit length paradigms tend to have significantly lower buffer related problems).
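purely as an illustration of what "implicit vs explicit length paradigm" means here (my example, not from the vulnerability analysis):

#include <stdio.h>
#include <string.h>

/* implicit-length style: dst's size is nowhere in the interface;
   strcpy trusts the NUL terminator and will run past a short buffer. */
void implicit_copy(char *dst, const char *src)
{
    strcpy(dst, src);
}

/* explicit-length style: the buffer length travels with the buffer,
   so the copy can be bounded and the result is always NUL terminated. */
void explicit_copy(char *dst, size_t dstlen, const char *src)
{
    size_t n;
    if (dstlen == 0)
        return;
    n = strlen(src);
    if (n >= dstlen)
        n = dstlen - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';
}

int main(void)
{
    char buf[8];
    explicit_copy(buf, sizeof buf, "a string longer than eight bytes");
    printf("%s\n", buf);          /* prints the safely truncated copy */
    return 0;
}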

random refs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Tue, 06 Mar 2001 15:06:42 GMT
Anne & Lynn Wheeler writes:
and in the mid-70s a product called VS/Repack based on the work was released (given large multiple csect/module program it would make a pass at automatic program restructuring).

there were two methods of capturing CPU & storage refs for input ... one was full instruction simulation, which caught all instruction and data references. This was useful in identifying things like the (virtual memory) problem with APL garbage collection ...

random refs:
https://www.garlic.com/~lynn/93.html#5
https://www.garlic.com/~lynn/94.html#7
https://www.garlic.com/~lynn/99.html#20
https://www.garlic.com/~lynn/2000g.html#30

the other method was a modification of the virtual memory system for data collection. a max. limit was specified for a process (say 10 pages); the program was run until it had a page fault that would require more than the limit, all the valid virtual page numbers were then output/recorded along with the accumulated cpu consumption, all the pages were invalidated (but not necessarily removed from memory), and the application was restarted (since the pages remained in memory, although marked invalid, the major overhead was just the additional page faults). This also had the advantage of obtaining both instruction and data refs. Various calibrations and tests (especially for large programs) showed that it was a relatively close approximation to the full instruction-simulation method of storage ref. capture.
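a toy sketch of that data-collection idea (my reconstruction in C, not the actual CP/67-VM/370 modification): cap the number of "valid" pages, and whenever a fault would exceed the cap, dump the current valid set as one trace record and invalidate everything:

#include <stdio.h>
#include <string.h>

#define NPAGES 4096           /* size of the toy virtual address space */
#define LIMIT  10             /* page-frame quota per trace interval   */

static unsigned char valid[NPAGES];
static unsigned nvalid;

static void touch(unsigned page, unsigned long cpu)
{
    if (valid[page])
        return;                       /* no fault, nothing to record  */
    if (nvalid == LIMIT) {            /* fault would exceed the quota */
        printf("cpu=%lu pages:", cpu);
        for (unsigned p = 0; p < NPAGES; p++)
            if (valid[p])
                printf(" %u", p);
        printf("\n");
        memset(valid, 0, sizeof valid);   /* invalidate everything */
        nvalid = 0;
    }
    valid[page] = 1;                  /* fault in the referenced page */
    nvalid++;
}

int main(void)
{
    /* stand-in for an instruction/data reference stream */
    unsigned refs[] = { 1, 2, 3, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 3, 1 };
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        touch(refs[i], i);
    return 0;
}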

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 15:30:19 GMT
"Lyalc" writes:
Not sure I, or the GAO agrees with that in all cases. The recent GAO report (Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology, GAO-01-277) outlines some enormous cost and complexity challenges. $170m merely to make a few legacy applications operate with PKI seems a lot, when the PKI itself still needs to be built.

I would expect many are legacy systems that lack authentication ... the claim was for legacy applications that currently have some form of authentication process. The estimate is that the cost of replacing current shared-secret authentication is well under 5% of the cost of the legacy system, i.e. a $500m legacy system would possibly cost $25m to modify for public key authentication ... the majority of that tends to be because of various Q&A and integration issues; if the modification can be merged into some ongoing modification cycle, that could be further reduced.

the issue tends to be that a front-end PKI for pilots and tests ... involving a small number of accounts ... can be shown to be less costly than modifying the production system and business processes ... especially if there is risk acceptance for having the PKI information out of synch with the production account information.

There tends to be a trade-off attempting to scale to full production, where a duplicate PKI account-based infrastructure has to be evolved to the same level as the production account-based infrastructure ... along with the additional business processes for maintaining consistency between the PKI accounts and the production accounts. Full scale-up might represent three times the cost of the base legacy infrastructure, rather than <5% of the legacy infrastructure.

That is independent of the costs of a hardware token ... which could be the same in either an account-based deployment or a PKI-based deployment (and i'm working hard at making the cost of producing such a token as low as possible while keeping the highest possible assurance standard). There is a pending $20b upgrade issue for the existing ATM infrastructure that is going to be spent anyway (because of the DES issue). When that is spent, the difference between whether or not public-key support is also included in the new swap-in is lost in the noise of the overall cost of the swap. The existing ATM problem is further compounded by master DES keys that exist in addition to the individual DES-key infrastructure (representing significant systemic risks, similar to CA root keys) ... the back-end cost savings from eliminating systemic risk and shared-secrets more than offset the incremental front-end costs of adding public key technology in a swap-out that is going to have to occur anyway.

A trivial example would be RADIUS ... which possibly represents 99.999999% of existing client authentication events around the world on the internet. A public-key upgrade to RADIUS is well under a couple hundred thousand dollars ... resulting in a RADIUS that supports multiple concurrent methods of authentication, with the connecting authority able to specify the authentication protocol on an account-by-account basis.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How Commercial-Off-The-Shelf Systems make society vulnerable

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 15:40:02 GMT
Frank Gerlach writes:
You are right that OS390 is mostly written in assembler and HP's MPE in Pascal, but they aren't exactly the sexy systems the DoD people want to use....

eliminate the length semantic problems in strings and I/O operations, and the incidence of buffer problems would be much lower than the incidence of dangling pointers and storage cancers.

the use of either explicit lengths and/or instantiated lengths (under the covers) in string libraries and I/O operations could go a long way to achieving that goal.

in the case of os390 ... it isn't a case of assembler ... or pls or pli, etc ... it is the convention of explicit length semantics in the system. the i/o services from the highest level down to the hardware implementation use explicit lengths or instantiated lengths (under the covers) ... so that many types of buffer problems either don't occur or are prevented at runtime (one way or another the i/o routines typically know the max. length of the buffer, either explicitly in the i/o call or by an instantiated length built into the buffer).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How Commercial-Off-The-Shelf Systems make society vulnerable

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 16:04:26 GMT
Anne & Lynn Wheeler writes:
in the case of os390 ... it isn't a case of assembler ... or pls or pli, etc ... it is the convention of explicit length semantics in the system. the i/o services from the highest level down to the hardware implementation use explicit lengths or instantiated lengths (under the covers) ... so that many types of buffer problems either don't occur or are prevented at runtime (one way or another the i/o routines typically know the max. length of the buffer, either explicitly in the i/o call or by an instantiated length built into the buffer).

random ref. from long ago & far away ... when i was an undergraduate I had done the implementation for tty devices and taken a short-cut using some one byte operations because the max. hardware transfer for tty devices was 72 characters. this was picked up and incorporated by the vendor in the standard product.

subsequently one customer acquired an ascii device (I think a plotter that simulated a tty interface) that had transfers much longer than 72 characters. They patched the system to increase the maximum transfer limit to 1000(?) bytes ... but didn't pick up on some of the operations involving one-byte numbers (aka 0-255). They recorded a couple dozen system failures in a single day because of the resulting buffer problem.
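a toy illustration of that failure mode (mine, in C; not the original code): a transfer length kept in a one-byte field is fine up to 255, but once the maximum is patched past that, the stored length silently wraps while the real transfer stays long:

#include <stdio.h>

int main(void)
{
    unsigned int  requested = 1000;                       /* patched maximum     */
    unsigned char stored    = (unsigned char)requested;   /* old one-byte field  */

    printf("requested %u bytes, one-byte field holds %u\n",
           requested, stored);                            /* 1000 vs 232         */

    /* any buffer sized or indexed from the one-byte value is now too small
       for the data the device actually transfers -- the buffer problem     */
    return 0;
}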

random ref:
https://www.multicians.org/thvv/360-67.html
https://www.garlic.com/~lynn/2000.html#30

it is still possible to have basic programming errors that give rise to buffer problems ... but nothing like the numbers that are currently experienced

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Tue, 06 Mar 2001 17:12:26 GMT
Bernd Paysan writes:
But in the case of a credit card business, I suppose banks know pretty well how many credit card customers they have now, and how the market is evolving within the next 5 years.

total aside ... much of the credit card business now is mergers and/or out-sourcing, and there tends to be a large conversion process. also, while the basic account part is somewhat regular, keeping the actual transactions indexed is interesting, with a very large number of daily inserts ... and a large variance in the number of transactions per account.

airlines are even more interesting ... since there aren't any ongoing relationships ... just large numbers of new PNRs (passenger name records) every day. Even if you only keep six months online ... there are all the new inserts every day ... plus approx. the same number of deletes for cleaning out aged records. A typical system might have hundreds of thousands of PNR inserts and a similar number of deletes every day ... in addition to all the nominal query/update activity.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How Commercial-Off-The-Shelf Systems make society vulnerable

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Commercial-Off-The-Shelf Systems make society vulnerable
Newsgroups: comp.security.misc,sci.military.naval
Date: Tue, 06 Mar 2001 18:11:07 GMT
Anne & Lynn Wheeler writes:
note that some of this isn't necessarily specific to all COTS ... in the case of buffer exploits, we've claimed that the problem has been common buffer & string semantics in C programs ... i.e. vulnerability analysis in the late '80s specifically identified C buffer & string semantics as likely to increase buffer related problems by possibly two orders of magnitude (aka non-C language & infrastructures with explicit length paradigms tend to have significantly lower buffer related problems).

note this wasn't so much specific to C ... one of the other random efforts involved having produced a widely-used problem determination tool in the early '80s and spending a lot of time characterising/profiling problems.

a very fundamental issue turned out to be semantics and paradigms that relied on implicit conventions. The prediction with regard to buffers & strings in C was based on vulnerability analysis identifying that the common buffer/string handling relied on implicit paradigms/semantics.

explicit &/or instantiated lengths tend to create a higher level of awareness of the issues among programmers ... so the problems just don't show up, and/or the runtime processes take the explicit information into account and handle the situation.

i.e. it wasn't hindsight ... it was an issue of recognizing that a major contributor to flaws is the implicit (vis-a-vis the explicit).

random other refs:
https://www.garlic.com/~lynn/94.html#11

there is sometimes a trade-off between complexity and the implicit. While complexity is also a major contributor to flaws ... if the explicit has to be sacrificed in favor of simplicity ... then something needs to be explicitly instantiated underneath the covers.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 20:57:28 GMT
"Lyalc" writes:
I've always taken the view that non-repudiation, at a commercial level, is implementation dependent, regardless of the underlying technology. Certificates/PKI use a shared 'secret' such as a password or biometric. The 'weakest link' rule means that shared-secrets/passwords are the thing to focus on for really getting PKI secure, even after the expertise devoted to PKI comes up with the answer to the PKI part of the puzzle.

pins/passwords/biometrics for activating a hardware token is significantly different from pins/passwords/biometrics used as shared-secrets.

it is possible to do 3-factor authentication with no shared-secret

1) something you have
2) something you know
3) something you are

using a hardware token that requires both a pin & a biometric .... where the business process of a pin-activated card is significantly different from the business process of a shared-secret PIN (even if they are both PINs).

a hardware token with a pin & biometric requirement can be used to meet 3-factor authentication ... and a shared-secret doesn't exist.

and doing it w/o impacting the business process infrastructure .... a separate issue from impacting the technology implementation; pilots tend to be technology issues ... real deployments frequently are business process issues.

similar discussion in sci.crypt from last oct:
https://www.garlic.com/~lynn/2000e.html#40
https://www.garlic.com/~lynn/2000e.html#41
https://www.garlic.com/~lynn/2000e.html#43
https://www.garlic.com/~lynn/2000e.html#44
https://www.garlic.com/~lynn/2000e.html#47
https://www.garlic.com/~lynn/2000e.html#50
https://www.garlic.com/~lynn/2000e.html#51
https://www.garlic.com/~lynn/2000f.html#0
https://www.garlic.com/~lynn/2000f.html#1
https://www.garlic.com/~lynn/2000f.html#14
https://www.garlic.com/~lynn/2000f.html#15
https://www.garlic.com/~lynn/2000f.html#2
https://www.garlic.com/~lynn/2000f.html#22
https://www.garlic.com/~lynn/2000f.html#23
https://www.garlic.com/~lynn/2000f.html#24
https://www.garlic.com/~lynn/2000f.html#25
https://www.garlic.com/~lynn/2000f.html#3
https://www.garlic.com/~lynn/2000f.html#4
https://www.garlic.com/~lynn/2000f.html#7
https://www.garlic.com/~lynn/2000f.html#8

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 21:00:50 GMT
"Lyalc" writes:
Yep, there is a massive phase out challenge for the migration away from single DES to 3DES or AES.

or public key ... the cut-over costs are effectively the same for a single-solution or multi-solution cut-over deployment. a hardware token card with digital signatures can actually reduce various infrastructure costs associated with shared-secret activities (again, a hardware token requiring a PIN is done w/o the PIN being a shared-secret).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Tue, 06 Mar 2001 21:32:21 GMT
Anne & Lynn Wheeler writes:
or public key ... the cut-over costs are effectively the same for a single-solution or multi-solution cut-over deployment. a hardware token card with digital signatures can actually reduce various infrastructure costs associated with shared-secret activities (again, a hardware token requiring a PIN is done w/o the PIN being a shared-secret).

from various AADS & X9.59 writings at
https://www.garlic.com/~lynn/

take the fully loaded existing cost of sending out a new magstripe card. the incremental cost of injecting a chip into that card is somewhere between 10% and 50% of the fully loaded cost of turning out a new card.

given the chip has the characteristics of a hardware token and doesn't implement shared-secrets ... then the financial requirement for unique magstripes with unique shared-secrets is alleviated (i.e. shared-secrets are not being divulged to different financial and commercial organizations).

with x9.59 for all electronic retail transactions ... then theoretically, a single hardware token performing digitally signed, x9.59 authenticated transactions ... would not have a financial security issue if the token were used for multiple different accounts, potentially leading to a reduction in the number of cards shipped (because the infrastructure has switched away from shared-secret ... even if the token is PIN activated).

furthermore, in the current credit card world, "expiration date" is used as sort of an authentication code (i.e. you can randomly guess a credit card number with some realistic chance of it being valid .... but guessing the corresponding expiration date as well has a much lower probability). a hardware token supporting authenticated (x9.59) transactions minimizes the need to carry an expiration date on the card as a kind of authentication code.

Given that the incremental cost of injecting a chip into an existing card delivery business process is less than 100% of the current fully loaded costs, then if the chip can be used to eliminate at least one other card delivery ... there is a net savings to the infrastructure.

If the

1) ATM swap-out costs show a negligible difference based on the kind of technology and/or number of technologies supported

2) and if the backend costs can be significantly reduced by moving to a non-shared-secret infrastructure

3) and if the elimination of shared-secrets and/or other chip related characteristics reduces the number of cards that have to go out

then there is a net reduction in business costs .... independent of issues related to fraud reduction with the migration to better authenticated transactions ... and independent of any non-repudiation cost reduction issues.

i.e., by appropriately applying technology to basic business processes there are opportunities for cost reduction.

This approach is completely different from one that might look at the total costs of deploying hardware tokens and authentication technology as part of a brand new infrastructure, totally unrelated to any existing business purpose (not how much more it costs, but how much less it costs).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 15:04:20 GMT
Benjamin Goldberg writes:
Even if your authentication is via a token, which in turn is activated via biometrics, there still is a secret. It's the data in the token! If all tokens were identical (allowing for them needing different biometrics to activate), they would be useless. The token needs to contain something to uniquely identify it electronically, and authenticate that identity. Identification is simple; give each token a unique id. Authentication, however, requires some sort of secret -- either a private key, or a shared-secret.

The weakest link is still access to the secret. An attacker merely needs to get access to a token and open it up, and avoid the tamper resistance, and he has the secret. This is conceptually no different from beating a password out of the user with a rubber hose.


the difference is whether the secret is known by only one person or a lot of people (i.e. which also represents the semantic difference between "secret" and shared-secret).

in the shared-secret scenarios ... the shared-secrets are registered someplace and are subject to harvesting ... aka effectively credit card numbers are treated as shared-secrets (witness all the stuff written about protecting master credit-card databases at merchant servers). Harvesting master database files of shared-secrets is significantly simpler than defeating tamper-evident hardware and/or beating somebody with a rubber hose.

eliminating shared-secrets was the point of the discussion ... and distinguishing shared-secret infrastructures vis-a-vis secret infrastructures, along with the difference in fraud ROI; aka a scenario where somebody can electronically steal 100,000 shared-secrets in a couple of minutes ... vis-a-vis taking hrs to steal one secret ... significantly changes the risk, exploit, and fraud characteristics of an infrastructure. If it is possible to deploy a "secret" infrastructure for approximately the same cost as a shared-secret infrastructure, and the "secret" infrastructure reduces the fraud ROI by five to ten orders of magnitude (i.e. it takes a thousand times as much effort to obtain 1/100,000th the usable fraudulent material), then the "secret" infrastructure is the obvious choice.

random ref
https://www.garlic.com/~lynn/2000b.html#22

the other part of the scenario ... is that financial and various other commercial infrastructures strongly push that, in shared-secret scenarios, the same shared-secret can't be shared across multiple different organizations with multiple different objectives; i.e. an employer typically has strong regulations against "sharing" a corporate access password shared-secret with other organizations (i.e. using the same shared-secret password to access the corporate intranet as is used to access a personal ISP account and misc. random webservers around the world).

I would guess that a gov. agency would not be too pleased if a gov. agency employee specified their employee access (shared-secret) password ... as an access (shared-secret) password for some webserver registration site in some other country.

However, it is possible to register a public key in multiple places, and employees of one organization couldn't use public keys harvested at that organization for penetration of other organizations.

Furthermore, the rubber-hose approach is going to take quite a bit longer to obtain a million secrets and hardware tokens that correspond to the registered public keys (as compared to some of the shared-secret harvesting techniques). Let's say that the rubber-hose approach takes something like two days per secret ... planning, setup, capture, executing, etc and involves a minimum of two people. That is four person days per secret. For a million secrets using the rubber hose method, it then takes four million person days, or 10,959 person days. By comparison some of the shared-secret harvesting techniques can be done in a couple person weeks for a million shared-secrets.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 15:07:58 GMT
"Lyalc" writes:
True, although operating support costs may theoretically be double or more, since the legacy method can't be switched off anytime soon, at least until the whole world comes into a steady, single state condition. Especially for payment cards.

the card/crypto part of the operating costs is a rather trivial part of the overall operating & total business process costs. the biggest issue is deploying the technology in such a way that old & new technologies can concurrently co-exist during the transition phase, while all technologies continue to utilize the same operating & business process infrastructures.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 15:19:17 GMT
Anne & Lynn Wheeler writes:
four person days per secret. For a million secrets using the rubber hose method, it then takes four million person days, or 10,959 person days. By comparison some of the shared-secret harvesting techniques

oops, that should be 10,959 person years with the rubber hose method compared to possibly a couple person weeks to come up with approx. the same potential fraud "yield".

sort of reminds me of the inverse of the story about why telephones and automobiles were never going to take off in the market ... both required the services of manual operators. the solution in both cases was to make each individual person their own operator (i.e. rather than having to hire telephone operators and automobile drivers, each person was responsible for their own). The person-hour projections for the number of telephone operators and automobile drivers were at least similar in scale to the rubber-hose "solution" to the fraud yield/ROI problem arising from a transition from a shared-secret infrastructure to a secret infrastructure (with tokens and public keys).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 16:58:23 GMT
junk@nowhere.com (Mark Currie) writes:
The attack that you make on shared-key systems is not entirely fair though. Although it may be possible to crack the central repository of shared-secrets/credit card numbers, PKI has a similar problem in that if you compromise a CA, or worse, a root CA, you can create millions of new certificates using existing identities that you can now masquerade. The way PKI solves this is to suggest that you place your root CA in a bunker (possibly under a mountain!) and in fact have multiple instances scattered around the world. This increases the cost of PKI. In an earlier thread you mentioned the possible savings to be gained by having chip cards (shared across institutions). This may outweigh the associated infrastructure costs but I don't think that PKI infrastructure costs are insignificant. Even if you just focus on the CA's, hierarchical PKI's tend to create a central trust point (root CA) that millions of certs rely on. Typically a lot more users rely on the central point than what you would find in shared-secret systems. This puts enormous pressure on the security of this entity. If the root CA (plus copies) are attacked by an organised para-military group, the whole trust chain collapses because you can't be sure that the private key wasn't compromised in the process. Preventing these types of attack is not cheap.

Mark


I'm not talking about PKI, CA's or the certification authority digital signature model (CADS model) ... i'm talking about the AADS (account authority digital signature) model. It eliminates the systemic risks inherent in the CADS model.

random refs to the AADS model can be found at:
https://www.garlic.com/~lynn/

I take an infrastructure that currently registers shared-secrets and instead register public keys. No business process costs ... just some technology costs.
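a minimal sketch of that idea (mine, in C; the names and the placeholder verify routine are illustrative, not any actual AADS implementation): the account record carries a registered public key instead of a shared-secret, and authorization verifies a digital signature against that key rather than comparing a secret.

#include <stdio.h>
#include <string.h>

struct account {
    char          id[16];
    unsigned char public_key[64];    /* registered in place of a shared-secret */
};

/* placeholder: in a real system this would be an RSA/DSA/ECDSA verification
   using an actual crypto library, not a stub                               */
static int verify_signature(const unsigned char *public_key,
                            const unsigned char *msg, size_t msglen,
                            const unsigned char *sig, size_t siglen)
{
    (void)public_key; (void)msg; (void)msglen; (void)sig; (void)siglen;
    return 1;
}

static int authorize(const struct account *acct,
                     const unsigned char *txn, size_t txnlen,
                     const unsigned char *sig, size_t siglen)
{
    /* no shared secret is compared anywhere; harvesting the account file
       yields only public keys, which aren't usable to originate fraud    */
    return verify_signature(acct->public_key, txn, txnlen, sig, siglen);
}

int main(void)
{
    struct account acct = { "acct-001", { 0 } };
    unsigned char  txn[] = "pay 10.00 to merchant X";
    unsigned char  sig[64] = { 0 };      /* would come from the consumer's token */

    if (authorize(&acct, txn, sizeof txn, sig, sizeof sig))
        printf("transaction authenticated against registered public key\n");
    return 0;
}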

Given that many back-end systems have some pretty stringent security and audit requirements specifically targeted at preventing things like insiders harvesting shared-secrets .... some of those procedures and associated costs can be alleviated.

Also, in the ISP world ... a significant cost is the service calls associated with handling a password compromise. This is further aggravated by human factors issues with people having to track & remember a large number of different shared-secrets ... because of the guidelines about not using the same shared-secrets in multiple different domains.

i.e. the start of my comments on this thread was purely about the transition of existing business processes (no new, changed &/or different business processes; no reliance on 3rd parties and all the associated new issues with regard to liability, vulnerabilities, systemic risk, etc) from a shared-secret paradigm to a public key/secret/token paradigm ... and some deployment approaches that would result in lower costs than the current shared-secret paradigm (for instance, adding a chip to an existing card being distributed might save having to distribute one or more subsequent cards ... resulting in distributing hardware tokens actually costing the overall infrastructure less than distributing magstripe cards).

random systemic risk refs from thread in sci.crypt in fall of 99
https://www.garlic.com/~lynn/99.html#156
https://www.garlic.com/~lynn/99.html#236
https://www.garlic.com/~lynn/99.html#240

random other system risk refs:
https://www.garlic.com/~lynn/98.html#41
https://www.garlic.com/~lynn/2000.html#36
https://www.garlic.com/~lynn/2001c.html#34

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 17:05:51 GMT
Anne & Lynn Wheeler writes:
oops, that should be 10,959 person years with the rubber hose method compared to possibly a couple person weeks to come up with approx. the same potential fraud "yield".

the other analysis is that a skilled rubber-hose person might be expecting a minimum of $1000/day. At four person days per secret, that comes out to $4k in salary ... plus maybe another $1k or so in expenses; or on the order of $5k cost per secret/token.

Let's say the person has a $5k credit limit on the account associated with the use of the token ... that means that the criminals can go out and (at best) make $5k worth of fraudulent purchases. Say brand-new stuff that they then have to fence at 10 cents on the dollar ... yielding $500 on an outlay of $5k.

The downside is that the person may be able to report the token lost/stolen/compromised prior to the criminals being able to fully take advantage of it. There is also the possibility that the person already has some charges outstanding, so the available credit is less than the credit limit.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Wed, 07 Mar 2001 17:37:51 GMT
Anne & Lynn Wheeler writes:
oops, that should be 10,959 person years with the rubber hose method compared to possibly a couple person weeks to come up with approx. the same potential fraud "yield".

also, a possibly unnecessary observation is that paper signatures are also subject to rubber-hose attacks. various legislation that i'm aware of attempts to put digital signatures on somewhat equal footing with paper signatures ... not a stronger footing; and procedures for handling rubber-hose extraction of signatures have been around for possibly hundreds of years (? depends on how far back you want to take things defined as signatures).

there are possible trade-offs; paper signatures are probably a little easier to counterfeit than hardware token digital signatures. the rubber hose bit is probably about the same. in any kind of token/public key registration environment ... it should be straight-forward to report lost/stolen/compromised (in the AADS model it should be the same as existing card lost/stolen).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: ** -, ** -, ** -, ** -, ** -, ** -, ** -, ** -, **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Wed, 07 Mar 2001 21:43:49 GMT
Jan Vorbrueggen writes:
How about per-page setup costs?

370 supported both 2k & 4k pages ... and the lower-end 370 operating systems (dos/vs & vs1) were implemented using 2k pages.

there was some amount of work in the mid-70s that showed, for configurations starting at least with 1mbyte of real memory (and possibly smaller), that VS1 running in "handshaking" mode under VM/370 ... ran faster (& sometimes significantly so) than VS1 running "natively" on the real machine.

VM/370 used 4k paging ... and when VS1 ran under VM/370 a large virtual machine was defined for VS1 where VS1 had its total virtual memory mapped one-for-one to VM/370 virtual machine virtual memory (i.e. VS1 never had a normal page fault because all of its virtual pages were defined as resident ... at least in the storage of the virtual machine). As a result, no VS1 2k page faults occurred and VS1 performed no 2k page fetch/write I/O operations when running in hand-shaking mode under VM/370. Any page faults and paging that did take place were VM/370 4k page faults and 4k fetch/write I/O operations associated with the virtual storage of the VS1 virtual machine.

The trade-offs were VS1 running with 2k pages, 2k page faults, and 2k page I/O, natively on the real hardware vis-a-vis VS1 running in a virtual machine (with the various associated virtual machine simulation overheads) with VM/370 supporting 4k pages, 4k page faults, and 4k page I/O (plus the reduction in available real storage for VS1 operation represented by the VM/370 kernel and misc. other fixed storage requirements).

So, let's say for a 370/148 with 1mbyte of real storage:

vs1 native on the real machine with 512 2k pages

vs1 in a vm/370 virtual machine, with vm/370 running on the real machine and requiring approx. 200k bytes of fixed storage ... reducing the storage remaining to VS1 to about 800kbytes ... or approximately 200 4k pages.

Even with the associated virtual machine simulation overhead and reduction in real storage, the benefits of having VM/370 manage the infrastructure using 4k pages outweighed running VS1 on the native hardware managing the infrastructure using 2k pages.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch,comp.arch.storage
Date: Wed, 07 Mar 2001 22:00:27 GMT
Anne & Lynn Wheeler writes:
Even with the associated virtual machine simulation overhead and reduction in real storage, the benefits of having VM/370 manage the infrastructure using 4k pages outweighed running VS1 on the native hardware managing the infrastructure using 2k pages.

on the other hand, the comparison isn't quite fair. I had gotten the total CP/67 pathlength to take a page fault, run the replacement algorithm, a pro-rated portion of the pathlength to perform a page write i/o operation, schedule the page read i/o, select another task, task-switch to that process, take the page read i/o interrupt, validate the page, and task-switch back to the original process ... down to somewhere on the order of 150-200 instructions.

VM/370 initially increased that by a factor of 3-4 ... but I was able to get it back down to <400.

The equivalent pathlength in VS/1 was closer to 15x the cp/67 number.

4k paging helped ... but having approx. 1/10th the pathlength per (page fault) event possibly helped more than having approx. 1/2 the number of (page fault) events (using 4k instead of 2k pages) ... overall reducing the virtual memory associated pathlength by a factor of 20.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Thu, 08 Mar 2001 14:59:11 GMT
"Lyalc" writes:
eliminating shared-secrets was the point of the discussion ... and

The challenge PKI faces is that it translates the shared-secret risk (which is one-to-one) into a shared trust (many-to-many) situation. Trust in the CA, and trust that you have the real CA certificate in particular, are the low-hanging fruit on the PKI trust tree; there are others in the infrastructure part of PKI. And all of this is a bigger, more expensive challenge than initially thought, by many orders of magnitude.


yep, business processes are almost always significantly more expensive than technology. in many cases, business processes (and various associated trusts) have grown up via trial and error over many (sometimes hundreds of) years. injecting totally unrelated new parties into such processes (especially in critical paths of trust) is an enormous undertaking ... in terms of value ... possibly duplicating the transactional value of the existing processes.

that is one of the reasons after working on:

https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

that I decided to start looking at a PKI (public key infrastructure, AADS model) that used the technology but preserved the existing trust & business process models.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Thu, 08 Mar 2001 15:17:07 GMT
junk@nowhere.com (Mark Currie) writes:
Hi,

Ok, I see now where you changed to discussing this model (AADS). I am not familiar with this model, I guess that I will have to visit your site. It sounds interesting.

Mark


as per prior note ... after working on

https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

it seemed to be worthwhile to look at a public key infrastructure model that preserved the existing trust and business process models.

random refs:
https://www.garlic.com/~lynn/aadsover.htm
https://www.garlic.com/~lynn/draft-wheeler-ipki-aads-01.txt
https://www.garlic.com/~lynn/aadswp.htm

CAs and certificates were a design point to address trust in offline email where little or no trust processes existed ... which might show a net benefit. However, introducing a new trust participant into existing value transactions that already involve a significant number of stakeholders and trust relationships is a very, very difficult business process. If the argument is purely based on better authentication technology ... then do an implementation that is a technology-only public key infrastructure w/o having to introduce new trust partners into every transaction.

For a technology demonstration, a CA-base with certificates is a possibly less expensive demo scenario, i.e. a couple thousand certificates and some boundary demonstration of signed transactions which are then converted into standard business form and processed in the normal way. Somebody takes a risk acceptance on the fact that the limited demo isn't integrated into the standard trust and business processes. The trade-off is the integration of the public key infrastructure into the existing business & trust processes (1-5% of the existing data processing infrastructure costs) versus scaling a CA-base with certificates into a duplicate of the existing business & trust processes (100% of the existing data processing infrastructure costs) plus synchronizing the CA-base and the existing base for things like referential integrity (possibly another 100% of the base).

The cut-over from the CADS-model for demo purposes to an integrated AADS-model would be at 1-5% of the account base plus any risk exposure because of the non-integrated operation.

Part of some of the AADS-model scenarios has been trying to demonstrate overall cost savings converting to public key infrastructure authentication from existing business authentication methods (i.e. various benefits are larger than the cut-over costs). Part of that is looking for existing transition activities and attempting to merge an AADS-model transition into some transition that is already going to be done and has been cost justified for other reasons.
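
a rough back-of-envelope of the trade-off figures above (purely illustrative python; the percentages are the ones quoted above, not measurements):

existing_dataproc_cost = 100.0        # existing data processing infrastructure = 100%

# integrated AADS-model: public keys registered in the existing account records
aads_low = 0.01 * existing_dataproc_cost       # 1% of existing costs
aads_high = 0.05 * existing_dataproc_cost      # 5% of existing costs

# scaled-up CA/certificate (CADS) model: duplicate of the existing business &
# trust processes plus synchronization (referential integrity) with the old base
cads_scaleup = (1.00 + 1.00) * existing_dataproc_cost

print("AADS integration: %.0f%%-%.0f%% of existing costs" % (aads_low, aads_high))
print("CADS scale-up:    ~%.0f%% of existing costs" % cads_scaleup)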

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Thu, 08 Mar 2001 16:37:36 GMT
Anne & Lynn Wheeler writes:

www.garlic.com/~lynn/aadsm5.htm#2
www.garlic.com/~lynn/aadsm5.htm#3
www.garlic.com/~lynn/aadsm5.htm#4
www.garlic.com/~lynn/aadsm5.htm#1


oops, finger slip

https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Varian (was Re: UNIVAC - Help ??)

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Varian (was Re: UNIVAC  - Help ??)
Newsgroups: alt.folklore.computers
Date: Thu, 08 Mar 2001 23:53:01 GMT
Eric Chomko writes:
Sorry, no help there. But what about Univac acquiring the Varian line of minicomputers? Not sure if you have that or not.

random ot, at least some varian orgs used CP/67 as platform for some of the engineering tools (late '60s). some of the people then went on to form LSI Logic where they continued to use VM/370.

Later, a friend of mine there (LSI) ported the bell "370" C compiler to CMS in the early '80s (as well as fixing/rewriting significant pieces) so he could then port various c-based chip tools (many out of berkeley) to CMS.

In the mid 80s ... they were using SGIs as graphics front-end with VM/370 on 3081 as the tools backend.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 09 Mar 2001 00:23:32 GMT
Anne & Lynn Wheeler writes:
in the shared-secret scenarios ... the shared-secrets are registered someplace and are subject to harvesting ... aka effectively credit card numbers are treated as shared-secrets (witness all the stuff written about protecting master credit-card databases at merchant servers). Harvesting of master database files of shared-secrets is significantly simpler than defeating tamper-evident hardware and/or beating somebody with a rubber hose.

note that CC# harvesting can also be an insider activity
Large Criminal Hacker Attack on Windows NT E-Banking and E-Commerce Sites

3:00 PM EST, Thursday, March 8, 2001

In the largest criminal Internet attack to date, a group of Eastern European hackers has spent a year systematically exploiting known Windows NT vulnerabilities to steal customer data. More than a million credit cards have been taken and more than 40 sites have been victimized.

The FBI and Secret Service are taking the unprecedented step of releasing detailed forensic information from ongoing investigations because of the importance of the attacks.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How Many Mainframes Are Out There

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How Many Mainframes Are Out There
Newsgroups: bit.listserv.ibm-main
Date: Fri, 09 Mar 2001 14:40:03 GMT
Howard Brazee writes:
I suppose supercomputers can be mainframes as well. The line between mini-computers and mainframes is very blurred these days. But I am curious - in the traditional mainframe market, who are the current players and how much market share do they have? Preferably, I would like to know Burroughs and Univac separated from each other as they were very different machines.

posting of some super-mini numbers from '88 (about 850) ... comparable to most ibm mainframes in processing power (at least at the time).

https://www.garlic.com/~lynn/2001b.html#56

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 09 Mar 2001 14:35:20 GMT
not-a-real-address@usa.net (those who know me have no need of my name) writes:
more efficient for whom? if i have 8 institutions, say 2 banks and 6

financial and business institutions are claiming that they have to go to relying-party-only certificates because of privacy and liability reasons.

a generalized identity certificate represents a severe privacy issue. the solution is a domain/business specific certificate carrying something like just an account number ... so as not to unnecessarily divulge privacy information (even name). EU is saying that electronic payments at point-of-sale need to be as anonymous as cash. By implication that means that payment cards need to remove even the name from the card. A certificate infrastructure works somewhat with an online environment and just an account number.

a generalized 3rd party certificate represents a severe liability issue. solution is a domain/business specific relying-party-only certificate.

combine an "account" certificate and a relying-party-only certificate you have a CADS model that is the AADS model with redundant and superfluous certificates appended to every transactions, i.e. it is trivial to show that when a public key is registered with an institution and a copy of the relying-party-only certificate is returned (for privacy and liability reaons) where the original is kept by the registering institution; then returning the copy of the relying-party-only certificate to the institution appended to every transaction is redundant and superfluous because the institution already has the original of the relying-party-only certificate.

it is redundant and superfluous to send a copy of the relying-party-only certificate appended to every transaction to an institution that has the original of the relying-party-only certificate (i.e. the institution that was both the RA and the CA for that specific relying-party-only certificate).
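
a rough sketch of the redundancy argument (illustrative python only; account numbers, keys and the "verify" stand-in are made up, not any real X9.59 implementation):

# the relying party registered the public key (and kept the original
# relying-party-only certificate) at account setup; verification needs only
# the account record, so any appended certificate copy goes unused.

accounts = {
    # account number -> public key registered at account setup (RA = CA = relying party)
    "4111000011110000": "public-key-registered-at-signup",
}

def verify(public_key, message, signature):
    # placeholder for whatever digital signature verification is actually used
    return signature == ("signed:" + message + ":" + public_key)

def process_transaction(account_no, message, signature, appended_cert=None):
    # appended_cert is ignored: the institution already has the original
    public_key = accounts[account_no]
    return verify(public_key, message, signature)

txn = "pay $10 to merchant X"
sig = "signed:" + txn + ":" + accounts["4111000011110000"]
print(process_transaction("4111000011110000", txn, sig))             # True
print(process_transaction("4111000011110000", txn, sig, "cert..."))  # same result, cert unused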

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 09 Mar 2001 14:59:10 GMT
not-a-real-address@usa.net (those who know me have no need of my name) writes:
more efficient for whom? if i have 8 institutions, say 2 banks and 6

the other scenario from

https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2

is that for the CADS business solution to privacy and liability, the certificates are actually there .... but using some knowledge about the actual business flow of the transactions it is possible to compress the certificate size by eliminating fields that are already in the possession of the relying party ... and specifically to show that all fields in the certificate are already present at the relying party and therefore it is possible to deploy a very efficient CADS infrastructure using zero byte certificates.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 09 Mar 2001 17:46:59 GMT
junk@nowhere.com (Mark Currie) writes:
OK, read your aadsover.htm and aadswp.htm. Sounds pretty good to me.

Questions:

1. Would an account holder have to have a separate PK set for each institution ?

2. Does the model allow the account holder to generate the PK set ?

One (possibly minor) problem that I can see is if your private key is compromised/lost, in AADS you have to contact all institutions yourself, in CADS you only have to contact the CA.

Mark


most of these are business &/or personal preference issues.

Currently if a person has 15 pieces of plastic in their wallet, and their wallet is lost or stolen, they have 15 places to notify. However, there are several 1-800 offerings that can be used to notify all 15.

If a person/business chose to have one or two hardware tokens each registered in multiple places, or 15 hardware tokens each registered in a single place ... there are still 15 places to notify.

The decision of what granularity of 1-to-1 or 1-to-many is a business/personal decision (i.e. businesses may or may not allow various options, individuals may or may not choose various options).

There is some serious issue whether the CADS model scales up to large numbers of end-points ... with either CRLs or OCSP.

The other issue is that significant numbers of business, commercial and financial entities are having difficulty with the standard 3rd party identity certificate model because of privacy and liability reasons. For instance, the claim that the EU is dictating that electronic transactions at retail point-of-sale be as anonymous as cash; aka current payment cards have to remove name & other identification from plastic and magstripe ... as well as eliminate the signature requirement.

The solution has been relying-party-only certificates. The relying-party-only aspects eliminates serious liability issues. A relying-party-only certificate carrying just an account number eliminates the serious privacy issues.

However, walking thru the process flows for relying-party-only certificate it is possible to show that appending a relying-party-only certificate (in the CADS-model) is redundant and superfluous.

The other approach: it is possible to show that a relying-party-only certificate appended to every transaction can be compressed to zero bytes (i.e. every transaction has a CADS zero-byte certificate appended to it).

misc. refs discussing redundant and superfluous appending of relying-party-only certificates and/or how relying-party-only certificates can be trivially compressed to zero bytes.

https://www.garlic.com/~lynn/99.html#238
https://www.garlic.com/~lynn/99.html#240
https://www.garlic.com/~lynn/2000.html#36
https://www.garlic.com/~lynn/2000b.html#53
https://www.garlic.com/~lynn/2000b.html#92
https://www.garlic.com/~lynn/2000e.html#40
https://www.garlic.com/~lynn/2000e.html#47
https://www.garlic.com/~lynn/2000f.html#15
https://www.garlic.com/~lynn/2000f.html#24
https://www.garlic.com/~lynn/2001.html#67
https://www.garlic.com/~lynn/2001c.html#56
https://www.garlic.com/~lynn/2001c.html#8
https://www.garlic.com/~lynn/2001c.html#9
https://www.garlic.com/~lynn/aadsm2.htm#account
https://www.garlic.com/~lynn/aadsm2.htm#inetpki
https://www.garlic.com/~lynn/aadsm2.htm#integrity
https://www.garlic.com/~lynn/aadsm2.htm#pkikrb
https://www.garlic.com/~lynn/aadsm2.htm#scale
https://www.garlic.com/~lynn/aadsm2.htm#stall
https://www.garlic.com/~lynn/aadsm3.htm#cstech6
https://www.garlic.com/~lynn/aadsm3.htm#kiss1
https://www.garlic.com/~lynn/aadsm3.htm#kiss2
https://www.garlic.com/~lynn/aadsm3.htm#kiss4
https://www.garlic.com/~lynn/aadsm3.htm#kiss5
https://www.garlic.com/~lynn/aadsm3.htm#kiss6
https://www.garlic.com/~lynn/aadsmail.htm#variations
https://www.garlic.com/~lynn/aepay3.htm#aadsrel1
https://www.garlic.com/~lynn/aepay4.htm#comcert10
https://www.garlic.com/~lynn/aepay4.htm#comcert11
https://www.garlic.com/~lynn/aepay4.htm#comcert12
https://www.garlic.com/~lynn/aepay4.htm#comcert15
https://www.garlic.com/~lynn/aepay4.htm#comcert2
https://www.garlic.com/~lynn/aepay4.htm#comcert3
https://www.garlic.com/~lynn/aepay4.htm#comcert9
https://www.garlic.com/~lynn/aepay4.htm#dnsinteg2
https://www.garlic.com/~lynn/aepay4.htm#x9flb12
https://www.garlic.com/~lynn/ansiepay.htm#simple

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Sat, 10 Mar 2001 00:07:36 GMT
Daniel James writes:
There's also an interesting question of how you prove that you really are the owner/authorized user of the allegedly lost/stolen token when reporting its loss/theft ... and of how the service operating the 1-800 number proves to your financial institutions that you really did report the cards missing.

CRLs should always be signed, to frustrate denial of service attacks.

Cheers, Daniel.


yep, if you look at the weakest link in the chain ... with respect to applying digital signatures & X9.59 to payment cards ... the weakest link has been the ability to harvest credit card numbers and use them in fraudulent unauthenticated transactions.

fixing fraudulent unauthenticated transactions with X9.59 ... then possibly the next weakest link in the current infrastructure is the lost/stolen business process. This weakness also applies to calling up your local friendly CA and convincing them that your hardware token containing your private key has been lost/stolen.

this doesn't detract from x9.59 being able to eliminate the harvesting credit card exploit associated with using them for executing fraudulent unauthenticated transaction ... it just says that fraud will have to move someplace else.

at least the straightforward weakness in lost/stolen process is denial of service ... as opposed to fraudulently obtaining value thru fraudulent unauthenticated transactions. projected fraud costs from issues associated with lost/stolen report issues are significantly less than the current fraud costs associated with fraudulent unauthenticated transactions.

note the CRL issue and whether or not they are signed ... is independent of how you can convince your local friendly CA that your hardware token has been lost/stolen.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Sun, 11 Mar 2001 21:50:49 GMT
"Lyalc" writes:
Please do. The challenge is not only the technology, but what the technology is used for, and its specific needs

as an aside, possibly to see the contrast between a shared-secret infrastructure and a secret infrastructure, consider the case of biometrics. biometrics can be used in either a shared-secret scenario (the biometric metric is transmitted along with the transaction to the relying party) or in a secret scenario (the biometric metric is used to activate something like a hardware token, but is not part of the transaction, nor known by the infrastructure &/or relying party).

there is a bunch of stuff in x9.84 (Biometric Information Management and Security) having to do with protecting a biometric shared-secret infrastructure. In a PIN-based shared-secret scenario, when the PIN value is compromised, invalidate the old PIN and issue a new one. The problem in a biometric-based shared-secret scenario is that when a biometric value is compromised, it is difficult to issue new fingers, eyeballs, DNA, etc. at the current technology state-of-the-art.
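
a rough sketch of the two biometric flows (illustrative python only; the matching, token and message formats are made up):

def shared_secret_flow(biometric_sample, transaction):
    # the biometric value travels with the transaction to the relying party,
    # so it ends up registered/stored someplace and is subject to harvesting
    return {"transaction": transaction, "biometric": biometric_sample}

class HardwareToken:
    def __init__(self, enrolled_template, private_key):
        self._template = enrolled_template      # never leaves the token
        self._private_key = private_key         # never leaves the token

    def sign_if_match(self, biometric_sample, transaction):
        # the biometric only activates the token; the relying party sees a
        # signature, never the biometric value itself
        if biometric_sample == self._template:
            return "signed(" + transaction + ")"
        return None

token = HardwareToken("thumbprint-template", "private-key")
print(shared_secret_flow("thumbprint-template", "pay $10"))   # biometric exposed
print(token.sign_if_match("thumbprint-template", "pay $10"))  # biometric stays local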

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Risk management vs security policy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Risk management vs security policy
Newsgroups: comp.security.misc
Date: Sun, 11 Mar 2001 21:54:59 GMT
Jon Haugsand writes:
Within the (computer) security community there are two approaches to the handling of risks: Using risk management or by defining a security policy and implementing it.

an old discussion considering the merger of risk management and information security:
http://lists.commerce.net/archives/ansi-epay/199901/msg00014.html
https://web.archive.org/web/20020402060455/http://lists.commerce.net/archives/ansi-epay/199901/msg00014.htm

references are to asset liability management.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SSL weaknesses

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL weaknesses
Newsgroups: comp.security.misc
Date: Mon, 12 Mar 2001 15:30:49 GMT
Amy writes:
What are the potential and actual weaknesses of SSL??

note: not so much a protocol issue ... but business process issues with regard to the certificates & current browsers implementing SSL

random refs:
https://www.garlic.com/~lynn/aepay4.htm ... whole merchant comfort certificate thread
https://www.garlic.com/~lynn/aepay4.htm#comcert14 ... a root cert list
https://www.garlic.com/~lynn/2001c.html#8
https://www.garlic.com/~lynn/2001c.html#9

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SSL weaknesses

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL weaknesses
Newsgroups: comp.security.misc
Date: Mon, 12 Mar 2001 20:13:48 GMT
p27604@email.mot.com (Doug Stell) writes:
Since SSL Version 2 was mentioned in a reply to your other post, it would be appropriate to say that SSL Version 2 had a very significant weakness that was addressed in Version 3. Version 2 was subject to a man-in-the-middle attack. Version 3 fixed that by verifying the integrity of the early messages after the secure channel is established.

one of the man-in-the-middle attacks on V2 was that it was possible during protocol negotiation to force a downgrade from 128-bit to 40-bit.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

confused about active scripting and security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: confused about active scripting and security
Newsgroups: alt.comp.virus,alt.security,comp.security.misc
Date: Tue, 13 Mar 2001 00:41:44 GMT
Gary Flynn writes:
I browse with scripts, ActiveX, and java disabled. I can't remember the last time not having ActiveX or Java enabled caused me problems except for the Windows Update site. Many more sites use scripting which I manually enable as needed when I need to use their functionality. Frequently used sites that require scripts and whom I trust I place in IE's trusted sites list.

You took a major step in the right direction by visiting the Windows Update Site. I'm recommending a monthly visit here.

The other major thing I'd recommend is disabling scripts in email and news messages.


there was an xmas greeting email scripting trojan horse in '74 or '75 (forget which now, been a couple years). resolving that pretty much left everybody involved with a strong inclination not to allow automatic scripting in relationship to any network activity.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Key Recovery System/Product

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Key Recovery System/Product
Newsgroups: sci.crypt
Date: Wed, 14 Mar 2001 15:06:13 GMT
"Arnold Shore" writes:
Not science, but I'll appreciate being pointed towards any available product suitable for use in a light/medium-duty environment.

The problem I'm trying to solve is with an application that publishes user-specific information online, encrypted by that user's public key -- the latter computed from the user's userID/Password hash as his private key. This has worked out well - a trusted batch application encrypts using a protected repository-stored public key.

Now I need to accommodate a requirement that a second party - suitably authorized - needs occasional access.

Is there a feasible approach other than key recovery? Thanks, all.


at the basis, it is an authorization/authentication problem using various binding trade-offs. the public key encryption (assuming strong private key protection) is a form of early binding of authorization/authentication (the authorization/authentication of who has access to the data is bound early by encrypting it with the individual's public key). No real audit trail is necessary, since there is high confidence that only the "owner" of the data (the user themself) has access to their own private data (thru the use of their strongly protected private key).

Generalizing that to multiple access gets into either compromising the user's private key ... and/or having a duplicate repository. The duplicate repository either uses early binding with the additional party's public key or uses more traditional late binding authorization/authentication (still could be done with public/private key authentication ... but goes thru some form of ACL along with an audit trail).
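
a rough sketch of the two bindings (illustrative python only; "pk_encrypt"/"pk_decrypt" are stand-ins for real public key operations, and all keys/names are made up):

def pk_encrypt(public_key, data):   return ("enc", public_key, data)   # placeholder
def pk_decrypt(private_key, blob):  return blob[2] if blob[1] == "pub-" + private_key else None

# early binding: the authorization decision is made at encryption time by
# wrapping a copy of the content for each party allowed to read it
def publish_early_bound(content, authorized_public_keys):
    return [pk_encrypt(pk, content) for pk in authorized_public_keys]

# late binding: content is held by a trusted repository which checks an ACL
# (and writes an audit record) at access time
acl = {"report-42": {"alice", "auditor"}}
audit_log = []

def repository_read(doc_id, requester):
    audit_log.append((doc_id, requester))
    if requester in acl.get(doc_id, set()):
        return "contents of " + doc_id
    return None

blobs = publish_early_bound("contents of report-42", ["pub-alice", "pub-auditor"])
print(pk_decrypt("alice", blobs[0]))
print(repository_read("report-42", "auditor"), audit_log)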

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

KI-10 vs. IBM at Rutgers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: KI-10 vs. IBM at Rutgers
Newsgroups: alt.folklore.computers
Date: Wed, 14 Mar 2001 18:15:53 GMT
Lars Poulsen writes:
From this, Cutler went to head up the development of VMS. The goal of VMS was to build a robust business data processing system for the new VAX product line. VAX/VMS was a great system with hardware and software architectures that were built together. It ran slower than the PDP-10, but it was a lot more robust.

it is always useful having some experience and/or example to work from.

when my wife and I did the HA/CMP product ... we had our previous experience from having done mainframe clusters in the '70s. Also, when I was doing the initial distributed lock manager for HA/CMP ... a couple of the top RDBMS vendors provided us with the top ten things done wrong by the VMS distributed lock manager and suggested that we needed to find a better way of doing it ... it is always easier to start from scratch building on previous experience.

random refs:
https://www.garlic.com/~lynn/94.html#15 cp disk story
https://www.garlic.com/~lynn/94.html#19 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#31 High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/98.html#58 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#182 Clustering systems
https://www.garlic.com/~lynn/99.html#183 Clustering systems
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000b.html#45 OSA-Express Gigabit Ethernet card planning
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#31 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

What ever happened to WAIS?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to WAIS?
Newsgroups: alt.folklore.computers
Date: Thu, 15 Mar 2001 15:46:54 GMT
Charles Eicher writes:
Look, I understand what replaced WAIS, and why it never became popular with the masses, that is not in question. I want to know what HAPPENED to the WAIS system. Did people decide to decommission all the perfectly good, functional WAIS servers in one fell swoop? What happened to the main CM-1 WAIS system at Thinking Machines? Did that one go and then all the others followed? Surely someone knows something about the situation. It should be folklore by now.

they moved to menlo park ... old 3-story house (looked like it was built in at least the early '50s, maybe earlier) ... and got sold. i went to some meetings there ... i somewhat remember some sort of deal with einet, a spinoff of the MCC consortium in austin (whatever happened to einet?). keeping the index up-to-date was/is a real chore.

national library of medicine & library of congress had a number of sigwais/z39.50/sigir meetings (nlm has full CD of just the words/terms they use for indexing).

wais inc, was looking for revenue streams ... and the search engine guys offering "free" indexing of the web probably closed some of those avenues.

following is from internet monthly report for march 1997 of activity at internic registration services

Gopher  connections: 18222    retrievals: 20580
WAIS    connections: 36532    retrievals: 15926
FTP     connections: 90868    retrievals: 178451
Telnet  67372
Http    12855124

totally random ref:
https://www.garlic.com/~lynn/2000d.html#64
https://www.garlic.com/~lynn/94.html#26

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM Glossary

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Glossary
Newsgroups: bit.listserv.ibm-main
Date: Fri, 16 Mar 2001 16:05:11 GMT
sknutson@LANDMARK.COM (Sam Knutson) writes:
Check out

IBM Definitions
http://www.ibm.com/ibm/terminology

Free on-line dictionary of computing Awesome!
http://www.foldoc.org

Handy and easy to remember
http://www.dictionary.com

Thanks, Sam Knutson


I even have some http hits originating from some embedded url in some ibm redbook.

there are payment, security, and financial glossaries; as well as index of ietf standards

https://www.garlic.com/~lynn/payment.htm
https://www.garlic.com/~lynn/secure.htm
https://www.garlic.com/~lynn/x9f.htm
https://www.garlic.com/~lynn/financial.htm

https://www.garlic.com/~lynn/rfcietff.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wheeler and Wheeler

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wheeler and Wheeler
Newsgroups: alt.folklore.computers
Date: Fri, 16 Mar 2001 16:10:25 GMT
Paul Repacholi writes:
Anne & Lynn Wheeler writes: ^^^^ ^^^^ >when my wife and I did HA/CMP product

OK, after 12 years or so of wondering, I'm finally going to ask;

Which is which?


anne is mrs, lynn is mr.

anne was in gburg, was the catcher in jes group for asp turning into jes3

for awhile anne was in pok, responsible for loosely-coupled (aka cluster) architecture ... wrote Peer-Coupled Shared Data which became basis for IMS hot-standby and then parallel sysplex. also for awhile she had position as manager of 6000 engineering architecture. She also has a patent or two on token LANs.

ha/cmp is High Availability Cluster Multi-Processing ... a product that we started in the late '80s when we were also working on fiber-channel cluster scale-up (before a lot of cluster work was really popular on "open" platforms).

we also ran a high-speed backbone (among other things, ran large rios chip design files between austin and the first large-scale hardware chip logic simulator on the west coast ... it was about 50,000 times faster than software logic simulators of the time).

random refs at
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wheeler and Wheeler

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wheeler and Wheeler
Newsgroups: alt.folklore.computers
Date: Fri, 16 Mar 2001 16:35:06 GMT
... and for the past several years we've done work on various kinds of systems, including financial systems ... random refs:

https://www.garlic.com/~lynn/aadsm5.htm#2
https://www.garlic.com/~lynn/aadsm5.htm#3
https://www.garlic.com/~lynn/aadsm5.htm#1
https://www.garlic.com/~lynn/aadsm5.htm#4

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wheeler and Wheeler

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wheeler and Wheeler
Newsgroups: alt.folklore.computers
Date: Fri, 16 Mar 2001 16:50:49 GMT
Anne & Lynn Wheeler writes:

www.garlic.com/~lynn/aadsm5.htm#2
www.garlic.com/~lynn/aadsm5.htm#3
www.garlic.com/~lynn/aadsm5.htm#1
www.garlic.com/~lynn/aadsm5.htm#4


oops, finger slip (brain check?)
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn1
https://www.garlic.com/~lynn/aadsm5.htm#asrn4

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 16 Mar 2001 22:21:43 GMT
not-a-real-address@usa.net (those who know me have no need of my name) writes:
can allow, but do you really expect the banking industry (to name but one i think likely to object) to accept it?

X9 is the ANSI/US financial standards organization (financial institutions)

x9 web site is at
http://www.x9.org

TC68 is the iso/international financial standards organization (financial institutions). The US is the chair of TC68.

TC68 web site is at
http://www.tc68.org

there is also a tc68 page at the ISO home
http://www.iso.ch/meme/TC68.html

X9.59 recently passed (electronic standard for all account-based retail payments):
https://www.garlic.com/~lynn/aepay6.htm#x959dstu

and is now on the ANSI electronic store web site:
https://web.archive.org/web/20020214081019/http://webstore.ansi.org/ansidocstore/product.asp?sku=DSTU+X9.59-2000

the additional addenda field to carry the additional X9.59 data is already going forward as part of the standard five year review for ISO8583. mapping of x9.59 to iso8583 (i.e. all payment cards) ref:
https://www.garlic.com/~lynn/8583flow.htm

X9 also passed an AADS NWI (i.e. new work item, the process by which X9 financial standards work is done, for instance X9.59 was originally a NWI).
https://www.garlic.com/~lynn/aepay3.htm#aadsnwi

The draft document for the AADS NWI was the AADS document in IETF draft format (with some wrappers) ... but will be significantly redone since the document formats for IETF and X9 are significantly different
https://www.garlic.com/~lynn/draft-wheeler-ipki-aads-01.txt

many financial institutions have already gone to relying-party-only certificates because of the significant privacy & liability issues with traditional TTP/3rd party identity certificates. It is relatively trivial to show that for relying-party-only certificates appended to account-based financial transactions, the certificates can be compressed to zero bytes.
https://www.garlic.com/~lynn/ansiepay.htm#aadsnwi2

archives of the X9A10 working group mailing lists (the X9A10 retail payment standards working group that produced X9.59)
http://lists.commerce.net/archives/ansi-epay/
https://web.archive.org/web/20020604031659/http://lists.commerce.net/archives/ansi-epay/

reference to NACHA accepting a similar AADS specification for ACH
https://www.garlic.com/~lynn/99.html#224

misc. other x9.59 & aads references
https://www.garlic.com/~lynn/

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PKI and Non-repudiation practicalities

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI and Non-repudiation practicalities
Newsgroups: sci.crypt
Date: Fri, 16 Mar 2001 22:30:18 GMT
not-a-real-address@usa.net (those who know me have no need of my name) writes:
further, while your description is slightly different than mine, you still describe exactly what i did, to wit, you are without access to the materials or services controlled by the compromised secret until the replacement is generated and communicated to all linked institutions. granted each material or service becomes available as the associated institution is notified, so that you can sequence according to need vs convienience. but that still leaves quite a lot of work to do to get it all done. (i don't like most of those credit card "registries" as it is, and it looks like aads would make them almost a necessity.)

looking at some pieces

does the bank trust the card & does the bank trust who you are.

current system has the bank mailing you the card and you get to call up and answer one or more questions (sometimes just touch-tone & AR unit and sometimes it kicks out to real person).

Given the AADS chip strawman, where a bank can trust the chip w/o having to mail you something ... then it comes down to whether you are really you. Simplest is a website analogy to the 1-800 activation: you show you have the private key by signing something that includes the public key, the web site asks you one or more questions (like off recent statements, from credit history, etc), and the more questions that are answered the higher the confidence (credit limit); confidence increases over time with use and/or somebody walking into their local branch.

In this scenario, it switches from a bank choice about how many cards to an individual choice about how many cards ... the person can choose to use as many or as few different cards for their relationships ... however the lost/stolen pattern tends to be wallet/purse associated ... not single cards.

So the issue is whether the person has the same number of cards they have today, only one card ... or some number in between.

The risk/vulnerability analysis says that stealing cards is a much less lucrative business since the cards don't work w/o the PIN/secret key. Current credit cards can be used at point-of-sale w/o additional information, or the account number can be used (w/o the card) in MOTO/internet (aka mail order/telephone order) transactions (i.e. X9.59 introduces authenticated transactions which eliminates most of the current account number harvesting vulnerabilities).

Counterfeiting is a lot harder also ... i.e. one of the new major credit card frauds is a waiter with a PDA & magstripe reader inside their jacket; as they do their normal business, they are also harvesting the magstripe information with the PDA. Six hours later, there are counterfeit cards in use someplace else in the world (i.e. the actual card doesn't have to be lost/stolen, just the information from the magstripe). The chip, private key, & secret key are significantly more difficult to counterfeit.

So now the issue is what, if anything, new has to be done for lost/stolen reports. Just stealing the card doesn't do much good w/o the PIN. For most practical fraud purposes, extracting the private key from the chip costs more than any likely benefit. Most of the existing attacks go away since the costs rise significantly compared to any likely gain. With all the major existing easy fraud issues addressed, the weakest link is possibly the lost/stolen infrastructure ... either as a denial-of-service attack (forcing the infrastructure to deal with the person in a less secure way) and/or as part of getting a fraudulent replacement authorized. At worst, it is the same as it is today. An improvement is using some of the web server techniques associated with variable confidence based on the number of different random questions that can be answered, aka rather than an absolute lost/stolen report ... have a variable confidence lost/stolen report.
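
a rough sketch of what a variable confidence score might look like (illustrative python only; questions, thresholds and actions are invented, not from any deployed system):

def confidence_score(answers, expected):
    # the more independent questions (recent statement items, credit history,
    # etc) answered correctly, the higher the confidence in the report
    correct = sum(1 for q, a in answers.items() if expected.get(q) == a)
    return correct / float(len(expected)) if expected else 0.0

expected = {
    "last statement balance": "123.45",
    "merchant on march 3":    "grocery store",
    "year account opened":    "1998",
}
reported = {"last statement balance": "123.45", "merchant on march 3": "grocery store"}

score = confidence_score(reported, expected)
if score >= 0.9:
    action = "full replacement / full credit limit"
elif score >= 0.5:
    action = "provisional block + reduced limit pending branch visit"
else:
    action = "low-confidence report, require in-person verification"
print(score, action)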

The other issue in lost/stolen reports ... is losing your wallet/purse and your one (or more) authentication devices ... which may be collectively registered with 20 different relying parties ... can the notification be simplified? The current 1-800 global notification is a cost; somebody has to be prepared to pay for it. The success of any 1-800 service is going to depend on individuals being willing to fund it (directly or indirectly) or preferring to perform the function themselves (again an individual choice). The individual could automate the process personally with a PDA device (i.e. the centralized 1-800 is somewhat a left-over from the days when only corporate infrastructures had data processing capability). The downside is that the PDA may be in the same wallet/purse package as the card(s) ... and also lost/stolen.

A 1-800 service and/or automated PDA is possibly a vulnerability point since the information there could be used to compromise the infrastructure also (also, a bank confirming with variable, random questions from recent statements/transactions isn't likely to get that information from a 1-800 service since that requires some other type of trust relationship).

There are a number of ways of enhancing the lost/stolen reports (and replacements), both from the aspect of ease-of-use as well as vulnerabilities. The fixes also introduce costs and/or different kinds of vulnerabilities. I would like to see a dozen or more different approaches with well researched cost/vulnerability trade-offs and allow the market to decide how many of them are acceptable (similar to allowing a person to decide how many different authentication hardware tokens they wish to possess and how they distribute their trust relationships among the hardware tokens).

However, if the frequency of (at least the stolen part of) lost/stolen drops, the aggregate costs for the lost/stolen infrastructure decline, even with a more sophisticated infrastructure introducing various forms of variable confidence techniques.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Sat, 17 Mar 2001 13:57:26 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
Can you name such an example? I can't think of a large source of files that are smaller than the 4K blocksize. Anything larger and this perceived benefit no longer exists, because if they are "not individually accessed frequently enough to be cached" the first block(s) will require a disk access anyway. Even with an application that used several thousand such <4K files you wouldn't measure a performance benefit unless you were running on the ragged edge of available memory for buffer cache.

usenet download ... a couple hundred mbytes per day if you are getting a full feed ... although very bi-modal ... the majority of the bandwidth is alt.binary of various sorts, but there are easily hundreds of thousands of <=512-byte articles. When I had a full feed there was a big difference driving the feed into an AIX 4k-byte page/block filesystem vis-a-vis a Waffle DOS system.

random refs:
https://www.garlic.com/~lynn/2000e.html#39
https://www.garlic.com/~lynn/2000.html#38

some number of ISP NNTP servers seem to run on the ragged edge (although I don't know whether it is disk, cpu, or memory) ... I notice significant slow-down at some times of the day (initial connection, even w/article handshaking).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CNN reports...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CNN reports...
Newsgroups: us.military.army,alt.folklore.military,sci.military.naval,
uk.people.ex-forces,soc.culture.kuwait
Date: Sat, 17 Mar 2001 14:30:48 GMT
John Lansford writes:
This accident simply points out how dangerous the military really is, whether there is a war or not.

wasn't there some statistic that the annual highway accident mortality rate used to be greater than all of vietnam ... although i believe the annual rate dropped somewhat with lowered speed limits and all the seatbelt and airbag stuff. I think there is also some statistic that bathtubs are more dangerous than automobiles. The military has got to be way down on the list in terms of the number of fatal accidents.

An interesting exercise might be to order a list of items as to their perceived lethality (is that a word?) against a ranking of causes of accidental death. Smoking seems to be way up there (although i don't know if death by smoking would be considered an accident or not). More people have likely died this year from smoking than in all military related accidents.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Sun, 18 Mar 2001 14:57:05 GMT
"Maxim S. Shatskih" writes:
NTFS (the extent-based FS) keeps tiny files inside the MFT record ("inode") without allocating any extents to it.

i did something akin to that for the original CMS CDF file system in the mid-70s as part of putting in paged mapped file access (prior to that the "CDF" file system had been pretty much unchanged since '67 or so). The difference was that the CDF FST (file status table, "inode") was pared down to about 40 bytes, allowing 100 per 4k file directory hyperblock. The normal FST pointed to file index hyperblocks which then pointed to file data blocks. For small files, I changed the FST to point directly to the (only) data block (eliminating one level of hyperblocks).

Later a similar trick was done for the CMS EDF (aka extended) file system. Another trick was adding a flag indicating sorted FSTs in the file directory hyperblocks ... which significantly cut down file search overhead for a very large number of FSTs.

One of the big differences between the CMS CDF & CMS EDF file systems was that the CDF file system always wrote the first record of the MFD at record 4. The CMS EDF file system changed that to alternate writing the first record of the MFD between record 4 and record 5 (with a version number). This covered the case of a transient write error (potentially corrupting the record on disk) at the same time as a system &/or power failure. At restart, both records 4 & 5 were read and the valid record with the latest version number was used.
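
a rough sketch of the alternating-record trick (illustrative python only; the record numbers 4 & 5 are from the description above, everything else is made up):

# the first MFD record is written alternately to two fixed slots with a version
# number, so a write torn by a power failure never clobbers the only good copy

disk = {4: {"version": 0, "data": None}, 5: {"version": 0, "data": None}}

def write_mfd(data):
    # pick the slot holding the *older* version and overwrite it
    target = 4 if disk[4]["version"] <= disk[5]["version"] else 5
    new_version = max(disk[4]["version"], disk[5]["version"]) + 1
    disk[target] = {"version": new_version, "data": data}   # single-record write

def read_mfd():
    # at restart, read both slots and take the valid record with the latest
    # version number (a torn write leaves the other slot intact)
    candidates = [r for r in (disk[4], disk[5]) if r["data"] is not None]
    return max(candidates, key=lambda r: r["version"])["data"] if candidates else None

write_mfd("mfd v1")
write_mfd("mfd v2")       # goes to the other slot; "mfd v1" survives if this write tears
print(read_mfd())          # -> mfd v2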

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CNN reports...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CNN reports...
Newsgroups: alt.folklore.military
Date: Sun, 18 Mar 2001 15:20:02 GMT
BobMac writes:
More people smoke than serve in the military. If you're going to compare rates of fatality (or colourblindness, or back pain, or tooth decay, or whatever) you need to define your terms very carefully.

BobMac


sorry, that was part of the point ... news stories play with statistics all the time: events per year, events per million people per year, events per million miles traveled, etc. i've seen homicide rates for cities given both ways depending on the spin. sometimes they don't even bother to give the statistics. Airplane/car comparisons like to trade off miles flown vis-a-vis hours driving.

w/o knowing the rates/# of people, there is some story that traffic accidents involving drivers using cellphones are rapidly closing on traffic accidents involving alcohol (i.e. are cellphones killing more people than guns?).

Even with military service tending to involve more dangerous activities ... i haven't seen any statistics that indicate that there are more fatal accidents per ??? than say DWI or DWcellphone (i.e. automobiles seem to be very dangerous things).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 19 Mar 2001 01:22:21 GMT
"Bill Todd" writes:
And in the situation under discussion (yes, context is significant: I'd never assert that storage-level NVRAM isn't desirable in environments with frequent persistence synchronization points), the file system never decides to 'harden' things in anything but an asynchronous (lazy) manner, so the speed of the resulting I/O is of negligible importance: no visible latency is associated with it, and there's no obvious overall throughput advantage either.

back when I did audits of raid 5 hardware implementations, frequently one of the weak points was non-redundant NVRAM for single record writes where it wasn't possible to guarantee simultaneous parity record write and data record write ... i.e. the parity record & the record to be replaced were read, the original record was backed out of the parity record and the new record incorporated into the parity record, and then both the new record and the parity record were scheduled for write. If the writes weren't simultaneous and power dropped between the writes ... then the controller had to complete the writes when the configuration regained power ... or the data could be inconsistent and result in later corruption. redundant NVRAM covered the failure mode where one of the NVRAMs didn't come back up correctly when power was restored.

a file system could potentially eliminate the requirement for replicated NVRAM ... by having a list of uncommitted records and, on power restoration, knowing enuf to read the individual record blocks in a RAID-5 set (w/o parity and hope for the best), replace just the specific record and rewrite the whole RAID-5 set with new parity.
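
a rough sketch of the raid-5 small-write sequence being described (illustrative python only; device handling, NVRAM and scheduling are all elided):

# read old data + old parity, back the old data out of the parity, fold the
# new data in, then write both the data block and the parity block

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(stripe, data_index, parity_index, new_block):
    old_block = stripe[data_index]
    old_parity = stripe[parity_index]
    new_parity = xor(xor(old_parity, old_block), new_block)
    # if power drops between these two writes (and there is no redundant NVRAM
    # to replay them), the stripe's data and parity can be left inconsistent
    stripe[data_index] = new_block
    stripe[parity_index] = new_parity

stripe = [bytes([1, 2]), bytes([3, 4]), xor(bytes([1, 2]), bytes([3, 4]))]  # 2 data + parity
raid5_small_write(stripe, 0, 2, bytes([9, 9]))
print(stripe[2] == xor(stripe[0], stripe[1]))   # parity consistent again -> True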

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Q: ANSI X9.68 certificate format standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Q: ANSI X9.68 certificate format standard
Newsgroups: sci.crypt
Date: Mon, 19 Mar 2001 15:54:22 GMT
Tomás Perlines Hormann writes:
I was wondering whether someone of you have a clue where to get the draft (or already standard) from ANSI X9.68 certificate format as I have been searching through ANSI webpages and some other sites and just found a draft dated from march 1st, 1999.

I guess there must be a recently updated draft version or even the full standard.

Does anybody know anything about the current state of this certificate format for mobile applications?

I would be very satisfied if you could please help me out regarding this issue.


x9.68 work on compressed/compact certificates for account-based financial transactions addresses a situation that would frequently be associated with relying-party-only certificates. The X9.59 work demonstrates that X9.68 techniques for relying-party-only certificates can compress all redundant fields located in the certificate (and at the relying-party) to zero resulting in a X9.68 certificate of zero bytes ...

... or, since the relying party built and kept the original certificate at publickey registration time and transmitted a copy of the certificate to the public key owner, to then have the publickey owner return a copy of the original certificate appended to every transaction sent to the relying-party ... when the relying party has the original certificate ... is redundant and superfluous.

the nominal objective of the x9.68 compact/compressed certificate was to operate in a highly optimized account-based financial transaction environment that typically might involve existing transaction sizes of 80 bytes or less. The addition to such transactions of both a digital signature and a 4k-12k byte publickey certificate would represent significant bloat in the size of the financial transactions.
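
back-of-envelope on the bloat, using the sizes quoted above (illustrative python only):

transaction_size = 80                       # typical existing account-based payment message
for cert_size in (4 * 1024, 12 * 1024):     # 4k-12k byte certificate
    print("appending a %5d-byte certificate: ~%3dx bloat" %
          (cert_size, (transaction_size + cert_size) // transaction_size))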

random refs:
https://www.garlic.com/~lynn/aadsm5.htm#x959
https://www.garlic.com/~lynn/2001c.html#72
http://www.x9.org/

from old x9.68 draft introduction (ISO 15782-1 is the work of ISO TC68/SC2 international financial standards body):
This standard defines syntax for a more compact certificate than that defined in ISO 15782-1 and X.509. This syntax is appropriate for use in environments with constraints imposed by mobility and/or limited bandwidth (e.g., wireless communications with personal digital assistants), high volumes of transactions (e.g., Internet commerce), or limited storage capacity (e.g., smart cards). This syntax is also geared towards use in account-based systems.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 19 Mar 2001 16:01:30 GMT
"Stephen Fuld" writes:
Gee, where were you guys when we (my former employer) tried to sell a RAID 5 system with mirrored non-volatile caches to IBM? They pooh poohed the idea of a power failure where the NVS couldn't come up. IBM bought the system, (It was part of a package with another system that they really wanted), but sold few of them, frequently complaining of the price for the redundant hardware. :-(.

i don't know ... was it one of the companies that i reviewed RAID-5 implementations?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Mon, 19 Mar 2001 16:35:52 GMT
"Bill Todd" writes:
I'm not sure how that suggestion (which I'm not sure I understand the details of) might interact with my observation above: if you're using the NVRAM to speed up writes by acknowledging them as complete before they hit the platters, then that's likely because the file system is performing synchronous writes (otherwise, it would defer them itself and not much care how long they took to complete once submitted to the array), in which case the overhead of performing some other persistent operation to track the outstanding writes would defeat the purpose of the NVRAM.

the case was where NVRAM allows for the parity write and the individual record write not being synchronous, and there could be any event occurring between the two that prevents one or the other from happening (like a power failure). Recovery is then dependent on the NVRAM (and the question of redundant or non-redundant NVRAM becomes important in failure analysis scenarios). The problem is analogous to a single disk write being interrupted, partially complete, with no subsequent indication (like a CRC error) that it hadn't completed (a subject we had previously on this list in some detail).

At that point the stripe and the parity record may be inconsistent. Normal filesystem commit is only with respect to the record being written and not with respect to the other records in a raid-5 stripe and/or the parity record.

A "commit" filesystem supporting roll-foward with RAID-5 sensitivity might be able to eliminate the need for the NVRAM for covering the case of parity & stripe inconsistency, the filesystem log of all pending writes could be used to imply all possibly parity records that might possibly inconsistent with their associated stripes ... and have the filesystem only rebuild the parity record for those specific stripes (rather than all parity records).

On recovery, the filesystem could do a full stripe read for each pending write that it has in its log, update just the specific record and rewrite the full stripe ... forcing parity record consistency.

then there is the problem that any power failure may have resulted in a bad/aborted i/o write ... and then there is the case where the two writes just happened to occur simultaneously and the power failure resulted in both writes (parity and record) being bad (i.e. an aborted write leaves the records being written with an i/o error indication). A raid-5 sensitive filesystem could still read the remaining records in the stripe (w/o parity), update the pending record write from the log and rewrite the full stripe (with new parity).
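
a rough sketch of that log-driven stripe rebuild (illustrative python only; real block/device handling is elided):

# for each write still pending in the filesystem log, read the surviving data
# blocks of that stripe (ignoring the possibly-inconsistent parity), apply the
# logged record, and rewrite the full stripe with freshly computed parity

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def recover_stripe(data_blocks, pending_index, logged_block):
    data_blocks[pending_index] = logged_block
    return data_blocks, xor_blocks(data_blocks)   # full-stripe write forces consistency

data = [bytes([1, 1]), bytes([2, 2]), bytes([3, 3])]
print(recover_stripe(data, 1, bytes([7, 7])))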

Of course, similar strategy could be implemented in a controller with NVRAM (keeping track of which operations are in progress and attempting to mask situations where partially complete operations could have occurred on disk and there may be no on-disk error indication of the incompleteness/inconsistency).

random refs
https://www.garlic.com/~lynn/2001b.html#3

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

ARP timeout?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ARP timeout?
Newsgroups: comp.security.firewalls
Date: Mon, 19 Mar 2001 18:22:22 GMT
"Anon" writes:
Hello, What exactly are ARP timeouts and do they compromise my firewall protection? If so, what is causing them and how can I correct it?

The ARP timeout entries started appearing in my Sonicwall SoHo2 firewall logs approximately a month ago. I am Earthlink/Mindspring DSL customer.

Anon


ARP (cache) timeout is bootp/dhcp option 35 ... see rfc1700 ... assigned numbers.

normal internet naming nominally has at least two levels of network indirection.

host name ... stuff of the form www.abc.com

which the domain name infrastructure will take and resolve to an internet address ... of the form 1.1.1.1;

the internet address normally still needs to be resolved into a real "network" address ... like the address of an ethernet card.

ARP, address resolution protocol handles the resolution of internet address to network address. most implementations have local ARP caches that are kept at each machine ... giving the mapping between an internet address and a network address. This cache has time-out values for the entries in the cache ... after which the ARP protocol has to go out and resolve the address again.

In addition, a DHCP/BOOTP service can implement a reverse ARP ... or mapping between a network address and an internet address. Some firewalls have a DHCP/BOOTP implementation ... where machines on the local LAN receive their IP address dynamically from the firewall.

PPP dialup and some types of DSL service also use DHCP to assign the internet address ... i.e. when the initial connection is made, an IP address is dynamically assigned to the connecting machine by the ISP using DHCP. For DSL service doing dynamic internet address assignment, it is possible that it is configured so that the address assignment periodically times out and then an internet address needs to be re-acquired.
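
a toy illustration of an ARP cache with entry timeouts (illustrative python only; the timeout value and addresses are made up, real stacks differ):

import time

ARP_TIMEOUT = 20 * 60          # illustrative; real implementations vary

arp_cache = {}                 # internet address -> (network/mac address, time entry was added)

def lookup(ip, resolve):
    entry = arp_cache.get(ip)
    if entry and time.time() - entry[1] < ARP_TIMEOUT:
        return entry[0]                          # entry still fresh
    mac = resolve(ip)                            # timed out (or absent): ARP again
    arp_cache[ip] = (mac, time.time())
    return mac

print(lookup("1.1.1.1", lambda ip: "00:11:22:33:44:55"))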

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CNN reports...

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CNN reports...
Newsgroups: alt.folklore.military
Date: Mon, 19 Mar 2001 21:02:21 GMT
BobMac writes:
TM, in a discussion a couple of years back, somebody pointed out that when there's a screwup in the military, the guy that screwed up will often take the consequences for his decision. (the opposing example from civilian life was the Challenger launch decision, but don't get me started.)

some years ago i ran across something of a gulliver's travels allegory ... set in columbus's time where somebody in the queen's court decided that the three ships should be built up in the mountains where the trees grew. for transportation to the port, after building was complete, each of the three ships would then be sawed into three pieces and moved from the mountains to the port. At the port, the pieces would be put into the water and the pieces glued back together, reconstructing the original ships. in went into some detail about the arguments contrasting ship building in a port shipyard versus shipbuilding in the mountains where the lumber originated ... and random other descriptions of the process.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Tue, 20 Mar 2001 16:41:17 GMT
pg+nm@sabi.Clara.co.UK (Piercarlo Grandi) writes:
anton> The original Power architecture had 16 (and consequential restrictions on mmap),

These were also probably partly a consequence of the inverted page table...


the original ROMP had 16 segment registers. The design point was for a closed operating system that did early binding, with security checking performed at compile and load/bind time. At runtime, inline code was allowed to change segment register values as simply as normal code could change base/address registers.

the inverted tables allowed a segment value to be an "id" rather than a pointer to a page table. However, the mechanics of the segment value were orthogonal to whether or not inverted tables were used.

ROMP supported a 12bit segment id value. The top four bits of the 32-bit address were used to index a segment register. The TLB was indexed using the 12bit segment id from the segment register plus the 16bit virtual page number (the 28bit segment offset less the 12bit page offset, i.e. 28-12=16 bits).

In a non-inverted architecture ... a corresponding solution is that the segment id is an address pointing to a page table. Rather than directly indexing the TLB with the concatenation of the page table address plus the virtual address, there can be some sort of page table address associative table ... where the TLB is indexed by a concatenation of the page table address entry index and the virtual address. Say there is a 16-entry address table ... the segment is looked up in the address table and its 4bit index is then concatenated to the virtual address for the TLB lookup. If the segment is not currently in the associative array, an entry is selected for invalidation & replacement and all the TLB entries with that index are flushed.

Another solution is a two-level table: a segment table of page table addresses (i.e. segment ids) and then the page tables. Rather than having the virtual addresses in the TLB associated with a specific segment, they are associated with a specific address space (and the TLB entries are tagged with the address space they belong to). In this situation, "shared segments" may have duplicate entries in the TLB ... i.e. the same virtual address in the same shared segment but in different address spaces can have a unique entry per address space.

Of course the simplest solution is not to tag the TLB entries at all, just every time there is an address space switch, flush all the TLB entries.

Anyway, back to ROMP/801. The TLB entries effectively have a 16bit virtual page number and are tagged with the 12bit segment id. Changing values in a segment register had no (direct) effect on the TLB, since the TLB segment tag field is as large as the maximum possible segment id value. The original ROMP/801 design compensated for the small number of simultaneous segments in a virtual address space with the assumption that inline code could change the segment id values in the segment registers as easily as changing address values in address registers (i.e. ROMP/801 was supposedly viewed as having a 40bit virtual address space based on the implied ability of inline code to change segment id values as easily as base registers ... i.e. 12bit segment id + 28bit address ... gives 40bits of addressing).
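
a rough sketch of that lookup (field widths per the above; purely illustrative, not a hardware model):

PAGE_SHIFT = 12                              # 4KB pages

segment_regs = [0] * 16                      # each holds a 12bit segment id
tlb = {}                                     # (segment id, virtual page no.) -> real page frame

def translate(vaddr32):
    seg_index = (vaddr32 >> 28) & 0xF        # top 4 bits pick one of 16 segment registers
    seg_id = segment_regs[seg_index] & 0xFFF # 12bit segment id from that register
    offset28 = vaddr32 & 0x0FFFFFFF          # 28bit offset within the segment
    vpn = offset28 >> PAGE_SHIFT             # 16bit virtual page number
    frame = tlb.get((seg_id, vpn))           # TLB tagged by segment id, not by register number
    if frame is None:
        raise LookupError("TLB miss: search the (inverted) page table")
    return (frame << PAGE_SHIFT) | (vaddr32 & 0xFFF)

# because the TLB is tagged with the segment id itself, reloading a segment
# register (as inline code was expected to do) needs no TLB flush
segment_regs[1] = 0x0AB
tlb[(0x0AB, 0x0003)] = 0x1234
print(hex(translate(0x10003456)))            # -> 0x1234456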

For RIOS/801 (i.e. power), the segment id value was doubled from 12 bits to 24 bits. Now each TLB entry had a 24bit segment id tag plus a 16bit virtual page number. RIOS/801 theoretically had a 52bit virtual address space (24bits segment id plus 28bit virtual address = 52bits) under the implied assumption that inline code could change segment ids in segment registers as easily as addresses could be changed in address registers.

However, mapping ROMP/RIOS/801 to an open system like unix, which did late permission/security binding, basically reduced the infrastructure back to a standard 32bit virtual address space (4bit segment index plus 28bit virtual address), since the operating system permission paradigm was completely different from the original ROMP/RIOS/801 design point. Rather than doing early permission/security validation and allowing inline code access to the complete infrastructure, permission and security checking was done with system calls at runtime.

The RIOS/power infrastructure still used a 24bit segment id, and the TLB still supported tagging an entry with a 24bit segment id plus a 16bit virtual page number ... but the unix "virtual address space" programming paradigm assumed relatively long-term binding of segments to addresses, with system calls and typically some amount of planning required to change the segments in an address space (different from the programming, permission, and security paradigm that the hardware was designed for).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Tue, 20 Mar 2001 17:14:15 GMT
Jan Vorbrueggen writes:
Well, there's TCP state. There is no NFS connection state.

Obviously. I said as much.


while there is no NFS connection state ... it isn't true that it is completely stateless ... there is the local client cache ... which has interfered with various kinds of client-side database implementations (the protocol effectively chooses to ignore that state).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A future supercomputer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A future supercomputer
Newsgroups: sci.crypt
Date: Tue, 20 Mar 2001 23:47:07 GMT
Quisquater writes:
In this thread nobody was able to give any url: hot air? Please if you have the news you've the link: give it.


http://www.research.ibm.com/bluegene

random other URLs/refs:
https://www.garlic.com/~lynn/2000d.html#2
https://www.garlic.com/~lynn/2000d.html#3
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"Bootstrap"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Bootstrap"
Newsgroups: alt.folklore.computers
Date: Wed, 21 Mar 2001 15:19:38 GMT
jcmorris@jmorris-pc.MITRE.ORG (Joe Morris) writes:
"John Galt" writes:

>The notion of lifting oneself by the bootstraps makes no sense to me and never did. Until I see someone do it I will assume it is impossible. I think it is entirely possible that you are not missing anything. The damn phrase never did make sense to me.

Of course "lifting oneself by the bootstraps" is an absurd concept, but it's long been in the language as a fancy substitute for "self-starter".

I have no idea how far back the term was conscripted by the computer industry. I do recall in the early 1960s a 1401 system I used had a "bootstrap" deck; when we added a 1311 disk drive to the machine the 1401 experts had what amounted to a contest to see how small they could make that deck. It finally became a single card, which loaded a short program from the hard disk; this program loaded yet another startup program, which finally loaded the (locally-written) batch monitor. The single card -- which usually had a picture of a boot drawn on it -- was called the "BOOT^3" (boot-cubed) card.

The IBMese term "IPL" (Initial Program Load) came into use, I think, with the System/360. There may have been prior uses of the phrase, but all the IBM systems I recall before the S/360 had a button marked "LOAD" to perform cold startup.

At the opposite end of the spectrum, machines like the PDP-11 had no cold-start capability. If it wasn't already in memory the user had to manually key in a short (20-30 word?) program using the console bit switches, then force the program counter to the start of the program. This design always irritated me, especially since the first DEC computer (the PDP-1) had hardware bootstrap built into the machine.

Joe Morris


there was the loader card deck (about 80-100 cards) ... which would load other card programs, resolving RLDs, etc.

it was possible to generate a program in self-loading format ... i.e. so that it didn't need the loader in front of it.

and of course there was the 360 3card loader.

the 360 microcode "IPL" (initial program load) hardware sequence would load/read 24 bytes starting at location zero. The first eight bytes were assumed to be a PSW (program status word) and the 2nd/3rd doublewords were assumed to be CCWs (I/O program instructions). After the 24 bytes were read into location zero, the hardware would continue the I/O program at location 8 (i.e. the 2nd 8 bytes, presumably more I/O program instructions). After the I/O program terminated, the hardware would load the PSW from location zero. The IPL sequence read the 24 bytes with SILI (suppress incorrect length indication), so regardless of the actual record length, only the first 24 bytes were kept and the rest of any data in the record was ignored (i.e. it wasn't possible to do a one-card loader on the 360, since the remaining bytes in the card were ignored).

The 360 PSW contained a bunch of status bits and the instruction counter ... i.e. where to start executing. The assumption was that the 2nd/3rd CCWs of the I/O program would read instructions into some storage location and then the PSW would cause a branch to that location. There were also more complex loading sequences ... where the 2nd CCW read additional CCWs and the 3rd CCW branched/TIC'ed to the additional I/O program CCWs. The additional I/O program CCWs would then presumably read instructions ... and eventually, at some point, the I/O program would finish and the LOAD hardware would pick up the PSW at location zero.

The 3-card loader had 24 bytes of binary/hex in the first card ... for the initial hardware load sequence ... and then the next two cards contained 160 bytes of instructions ... which would read additional cards from the device and do whatever was necessary.
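
a rough sketch of the sequence (much simplified and purely illustrative ... each of the two CCWs here is just assumed to be a simple read, with no command chaining or TIC handling):

# simplified illustration of the IPL sequence described above: 24 bytes are
# read into location zero; bytes 0-7 are taken as the PSW, bytes 8-23 as two
# CCWs; the I/O program continues with those CCWs (here each is assumed to be
# a simple read: storage address in bytes 1-3, byte count in bytes 6-7); when
# the I/O program finishes, execution begins at the PSW's instruction address.

storage = bytearray(64 * 1024)

def ipl(records):
    cards = iter(records)
    first24 = next(cards)[:24]
    storage[0:len(first24)] = first24            # SILI: only 24 bytes are kept
    for ccw_loc in (8, 16):                      # the 2nd/3rd doublewords
        ccw = storage[ccw_loc:ccw_loc + 8]
        addr = int.from_bytes(ccw[1:4], 'big')   # data address
        count = int.from_bytes(ccw[6:8], 'big')  # byte count
        data = next(cards)[:count]
        storage[addr:addr + len(data)] = data    # read the next card into storage
    return int.from_bytes(storage[5:8], 'big')   # instruction address from the PSW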

random refs:
https://www.garlic.com/~lynn/94.html#11
https://www.garlic.com/~lynn/98.html#9
https://www.garlic.com/~lynn/99.html#135
https://www.garlic.com/~lynn/2001b.html#23
https://www.garlic.com/~lynn/2001b.html#26
https://www.garlic.com/~lynn/2001b.html#27

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unix hard links

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unix hard links
Newsgroups: comp.arch,comp.arch.storage
Date: Wed, 21 Mar 2001 15:46:57 GMT
Paul Repacholi writes:
So what is this sufix? Unix has an unstructured string as the name. one string. Where is the sufix, the name, and any possible prefix?

What is the sufix of 'a.a.a.a'? or of 'a'? Is '.a' a sufix?


In CP/67 (&VM) CMS ... both the filename and filetype were (up to) 8 bytes. A command (and for that matter standard kernel calls) defaulted to a filename, and the kernel would then do a prescribed sequence of filetype searches in the specified paths using the filename. If nothing turned up, it would look for the name in the internal kernel routine name table.

It was possible to change the flavor of almost anything by creating an executable with the same name as a kernel call or by creating a script with the same name as an executable.

The original CMS text formatter was called SCRIPT (i.e. a precursor to GML, SGML, HTML, etc). It was possible to have both a private copy of the binary executable with the filename SCRIPT as well as a script file with the filename SCRIPT. Typing the command SCRIPT would pick up the script file ... and the script file could force invoking the executable SCRIPT file. Other tricks could be played; with a small hack, it was also possible to call kernel routines expecting binary arguments from a script file. Basically kernel calls, executables, script files, etc., were presented as a uniform, consistent interface regardless of how they were invoked.

For VM/370, CMS was enhanced to support a kernel call mode that used effectively an index into the kernel name table (rather than an 8 character name).

However, the prior reference about the enhancement to sort the file status table ... was in part motivated by the extensive use of command lookup that occurred in CMS. With a sorted file status table ... a binary/radix search could be performed rather than a linear search. Since the command search sequence could occur not only with manually entered commands ... but even with kernel calls ... it was a very frequently executed function.
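
a rough sketch of that resolution order (the filetype names, sample tables, and search order here are illustrative only, not the actual CMS internals):

from bisect import bisect_left

# per-disk file status tables, each sorted by (filename, filetype) so a
# binary search can replace the linear scan mentioned above
search_order = [
    sorted([("SCRIPT", "EXEC"), ("PROFILE", "EXEC")]),
    sorted([("SCRIPT", "MODULE"), ("EDIT", "MODULE")]),
]
kernel_names = {"RDBUF", "WRBUF", "FSOPEN"}          # placeholder kernel routine name table

def fst_lookup(table, fname, ftype):
    i = bisect_left(table, (fname, ftype))           # binary search of the sorted table
    return i < len(table) and table[i] == (fname, ftype)

def resolve_command(name):
    name = name.upper()
    for ftype in ("EXEC", "MODULE"):                 # prescribed filetype sequence
        for table in search_order:                   # prescribed search path
            if fst_lookup(table, name, ftype):
                return (name, ftype)
    if name in kernel_names:                         # fall back to kernel routine table
        return (name, "NUCLEUS")
    return None

print(resolve_command("script"))                     # -> ('SCRIPT', 'EXEC')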

random ref:
https://www.garlic.com/~lynn/2001c.html#76

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

database (or b-tree) page sizes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: database (or b-tree) page sizes
Newsgroups: comp.arch
Date: Wed, 21 Mar 2001 16:06:50 GMT
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
The latency until the first instruction executed should be the same for statically and dynamically linked executables, because the same thing happens in both cases: the binary is mmaped, and the page containing the first instruction is demand-paged in.

Seems a rather minor point to me compared to having a more compact image, and even the latter is not a big deal: disk transfer rates are pretty amazing.

Startup times for demand-paged executables seem to be dominated by seek time, not transfer time. I have measured (a few years ago, probably on Linux-Alpha)

emacs -f save-buffers-kill-emacs

and

cat /usr/local/bin/emacs >/dev/null; emacs -f save-buffers-kill-emacs

where emacs had not been started before. The latter was faster.


I did a variation on this in the 70s with the paged-mapped filesystem that I did for CMS. It is analogous to some of the current cache line fetches, where the actual address causing the miss is fetched ahead of/asynchronously from the rest of the data in the cache line (shortening the cache-miss latency).

Basically the executable was page mapped ... and quite a bit of it was prefetched ... starting with the initial executable address. Depending on configuration and contention for real storage ... all or some of the executable would be prefetched in a single I/O operation, all of it could be prefetched in chunks with multiple I/O operations, some of it could be prefetched and other parts demand paged, etc.

This had the advantage of shortening the initial program start latency by getting the initial part of the executable into memory and executing first ... while not necessarily having to wait until the whole executable was in storage (and at the same time not having to incur the latency of demand paging individual pages).
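
a rough sketch of the idea (much simplified ... real paging I/O is asynchronous and fault-driven; the chunk size and policy here are illustrative):

PAGE = 4096
CHUNK = 16                      # pages per prefetch I/O; tunable by load/contention

def start_mapped_program(pages_on_disk, entry_address):
    resident = {}
    entry_page = entry_address // PAGE
    # fetch the page containing the entry point first so execution can begin
    resident[entry_page] = read_pages(pages_on_disk, entry_page, 1)[0]
    # prefetch the remaining pages in larger chunks rather than waiting for
    # individual demand-page faults; under heavy contention these could be
    # left to ordinary demand paging instead
    for base in range(0, len(pages_on_disk), CHUNK):
        for i, data in enumerate(read_pages(pages_on_disk, base, CHUNK)):
            resident.setdefault(base + i, data)
    return resident

def read_pages(pages_on_disk, first, count):
    return pages_on_disk[first:first + count]     # stand-in for one paging I/O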

On large multi-user systems ... the improvement was significant ... but the difference was on the order of the difference between using the system on a lightly loaded system versus using the system when under heavy load (significant program start latency because of contention).

However, it also made a big difference in mapping VM/370 to the XT/370. For the XT/370 there was a 370 co-processor card in a PC, and VM/370 was modified to do I/O via an inter-processor call to DOS running on the 8088. For disk I/O, DOS would emulate the VM/370 disk I/O with files on the XT's (100ms/access) hard disk. With the standard executable loading process, migration from a standard 370 with standard mainframe hard disks to the XT/370 (and its 100ms/access hard disk) represented a significant increase in program start-up time. The hybrid stuff with the paged-mapped filesystem helped mask the program startup latency (in much the same way the technique is used to mask some cache-miss latency).

random refs:
https://www.garlic.com/~lynn/96.html#19
https://www.garlic.com/~lynn/96.html#23
https://www.garlic.com/~lynn/2000.html#5
https://www.garlic.com/~lynn/2000.html#29
https://www.garlic.com/~lynn/2000.html#75
https://www.garlic.com/~lynn/2001c.html#76

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

