List of Archived Posts

2002 Newsgroup Postings (10/12 - 11/09)

additional pictures of the 6180
Tweaking old computers?
SRP authentication for web app
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Asynch I/O
Coherent TLBs
Wanted: the SOUNDS of classic computing
Tweaking old computers?
Help! Good protocol for national ID card?
So how does it work... (public/private key)
Tweaking old computers?
Help! Good protocol for national ID card?
updated security glossary & taxonomy
Help! Good protocol for national ID card?
Help! Good protocol for national ID card?
Help! Good protocol for national ID card?
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Sandia, Cray and AMD
Help! Good protocol for national ID card?
Help! Good protocol for national ID card?
why does wait state exist?
why does wait state exist?
why does wait state exist?
Help! Good protocol for national ID card?
why does wait state exist?
why does wait state exist?
why does wait state exist?
Opera 6.05 resources problem?
VR vs. Portable Computing
VR vs. Portable Computing
VR vs. Portable Computing
VR vs. Portable Computing
CMS update
Help! Good protocol for national ID card?
Home mainframes
Help! Good protocol for national ID card?
VR vs. Portable Computing
public-key cryptography impossible?
RFC 2647 terms added to merged security glossary
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
Tweaking old computers?
EXCP
History of HEX and ASCII
Computing on Demand ... was cpu metering
SHARE MVT Project anniversary
SHARE MVT Project anniversary
ibm time machine in new york times?
REVIEW: "Internet Security Dictionary", Vir V. Phoha
SHARE MVT Project anniversary
IBM S/370-168, 195, and 3033
IBM S/370-168, 195, and 3033
Folklore
Who wrote the obituary for John Cocke?
PLX
Help me find pics of a UNIVAC please
PLX
Folklore
Mainframe Spreadsheets - 1980's History
Mainframe Spreadsheets - 1980's History
merged security glossary updated with glossary from CIAO
merged security glossary updated with glossary from CIAO
The Forrest Curve (annual posting)
bps loader, was PLX
bps loader, was PLX
Home mainframes
Everything you wanted to know about z900 from IBM

additional pictures of the 6180

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: additional pictures of the 6180
Newsgroups: alt.os.multics
Date: Sun, 13 Oct 2002 02:02:00 GMT
"Hugo Drax" writes:
That's sad; you would think people would have taken pictures of these machines for documentary purposes. It's almost like a whole generation of computing history will disappear without any visual documentation for younger generations. I had never seen a 6180, and when I saw that pic on the multicians.org site my jaw dropped; I thought, what a cool looking system. Now that's a real computer :) I can see how people in the '60s and '70s were intimidated by computers, thinking they were going to take over the world and the human race. Seeing something like that would definitely leave an impression on a layperson. Vs. today's minuscule mainframes with their 1 power switch and 1 power light.

i remember taking a bunch of slides of the machine room on the 2nd floor in the early '70s (of course it wasn't GE645 ... that was on the upper floor) ... but can't seem to find them now ... i may have sent some of them off to melinda for some reason or another
http://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Sun, 13 Oct 2002 15:03:43 GMT
Charles Richmond writes:
What all these speed-up and volume-increasing stories show is that the companies liked getting something for nothing. They are playing with the idea of "perceived value", and trying to support the spectrum of their computer offerings without actually creating a spectrum of computer models.

ISTM that this is a hideous way to do business, and certainly screws the customer backward!!! How can you build up trust with a customer when you treat them like this??? As I said before, this shows bad business practice and poor professional ethics on the part of the companies that engaged in this.


another facet is that way back when a lot of the data processing equipment was leased ... not sold; the customer was paying for the degree/amount of service ... they didn't own the equipment.

as somewhat implied in other posts ... many of these operations had huge up-front costs, while manufacturing/delivery costs were a lower percentage (there tended to be significantly lower volumes than some of today's PC volumes). I believe some aspect of that has been in the news related to high costs of drugs ... a significant percentage is the up-front costs.

a frequent scenario was that the device was designed and priced based on full capacity and the projected volumes for that design point. Then you get a bunch of customers saying that they would buy it if it was only cheaper/slower (they didn't need all that capacity anyway). This original design point may represent 80 percent of the market size.

There may be an emerging/entry market that wants half the capacity at half the price but the size of this market is only 1/5th the original target market. Cutting the price in half for everybody in order to pick up 20 percent more sales could fail to recover the up front costs (and in some cases might violate some gov. decree that products not be priced at less than costs).
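
a toy calculation of the above, with all numbers invented for illustration (hypothetical up-front cost, unit cost, price, and volumes) -- cutting the price in half for everybody loses money, while tiered pricing of the identical product does not:

# toy model (invented numbers): up-front cost recovery under three schemes
UPFRONT = 100_000_000        # fixed design/manufacturing-setup cost
UNIT_COST = 20_000           # marginal cost per unit delivered
FULL_PRICE = 130_000         # original full-capacity design-point price
BASE_VOLUME = 1_000          # projected volume at the full price

# scheme A: single full price for the original target market
profit_a = BASE_VOLUME * (FULL_PRICE - UNIT_COST) - UPFRONT

# scheme B: cut the price in half for everybody, picking up 20% more sales
profit_b = int(BASE_VOLUME * 1.2) * (FULL_PRICE / 2 - UNIT_COST) - UPFRONT

# scheme C: tiered pricing -- identical product, entry version at half
# price, sold only into the (1/5th-sized) entry market
profit_c = profit_a + (BASE_VOLUME // 5) * (FULL_PRICE / 2 - UNIT_COST)

print(f"full price only:    {profit_a:>12,.0f}")   #   10,000,000
print(f"half price for all: {profit_b:>12,.0f}")   #  -46,000,000
print(f"tiered pricing:     {profit_c:>12,.0f}")   #   19,000,000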

The size of the emerging, entry level market may not be sufficient to justify designing a totally different product, because in order to recover independent up-front costs the product might have to be priced at four times that of the standard product. Sometimes the problem is a misimpression that because something is 1/2 of something else it costs 1/2; and/or that the entry level market is significantly larger than the mainstream market.

So in a product market that is extremely price sensitive to up-front costs (design, manufacturing setup, etc represents significant large percentage of the price), there may be a tendency to try and amortize those costs over a larger market segment and that may require (or the gov. effectively demand) tiered pricing of effectively the identical product for different parts of the market (or not serve that market at all).

A more recent example might be the 486DX/486SX .... the 486SX was effectively the same chip at a lower price with floating point (permanently) disabled. The cost of taking basically a 486DX and disabling floating point is likely to have been significantly less than designing a 486SX chip from scratch.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SRP authentication for web app

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SRP authentication for web app
Newsgroups: sci.crypt
Date: Sun, 13 Oct 2002 15:27:08 GMT
Paul Rubin <phr-n2002b@NOSPAMnightsong.com> writes:
It doesn't seem that important. Just use an SSL connection and do password authentication over it. Or are you afraid of somebody using a forged server certificate?

doesn't have to be a forged server certificate ... it can just be a valid server certificate in the case of domain name take-over.

one of the major reasons for SSL domain name server certificates is trust issues regarding the domain name infrastructure ... can you trust the domain name infrastructure to correctly point you at the server you want to be pointed at.

however, what happens when a trusted third party certification authority gets a request for a server domain name certificate .... it has to go verify that the requester is valid for that domain name ... in order to validate information that is "bound" into a certificate it is to issue, it must check with the authoritative agency for the information it is certifying. For domain names, the authoritative agency is the domain name infrastructure. This creates sort of a catch-22 ... the same agency that everybody is worried about trust issues with ... which generates the requirement for SSL domain name certificates ... is also the same agency that the CAs rely on for effectively the same information.

So it is possible to attack the domain name infrastructure and have individuals get bad information and be pointed at the wrong server. It is also possible to attack the domain name infrastructure, apply for a valid certificate, get the certificate, and have individuals get bad information and be pointed at the wrong server. All of this is frequently obscured by discussions regarding the integrity of the mathematical process that protects the information in a certificate. In some cases the obfuscation can be a distraction from the fact that the trust/quality of the information directly from the domain name infrastructure and the trust/quality of the information in a certificate are nearly the same (so what that it is extremely difficult to attack the integrity of a certificate once it has been created ... if it is much simpler to attack the integrity of the source of the information that goes into a certificate).

So the CA businesses have a requirement to improve the integrity of the domain name infrastructure .... so that not only can the integrity of certificates be trusted ... but also the integrity of the information in a certificate can be trusted. The catch-22 here is that improving the integrity of the domain name infrastructure so that information from the domain name infrastructure can be trusted (by CAs) ... also significantly reduces the requirement for SSL domain name certificates (since others will also better trust the information from the domain name infrastructure).

So the question isn't just about being afraid of a forged server certificate (aka the integrity of the certificate itself) but also about things like a spoofed domain name (the integrity of the information in the certificate: valid certificate, bad information).
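
a toy sketch (names and structure entirely hypothetical) of the catch-22 being described: the CA's "authoritative" domain-ownership check bottoms out in the same DNS data an attacker may already control, so a domain take-over can yield a perfectly valid certificate:

# toy sketch of the catch-22: the CA validates a certificate request
# against the same (attackable) DNS data everybody is worried about
dns = {"example.com": {"owner": "Alice", "addr": "10.0.0.1"}}

def ca_issue_cert(requester, domain):
    # the CA's check against the "authoritative agency" is a DNS lookup
    record = dns.get(domain)
    if record and record["owner"] == requester:
        return {"domain": domain, "issued_to": requester}  # valid cert
    return None

# attacker takes over the DNS entry first ...
dns["example.com"] = {"owner": "Mallory", "addr": "10.6.6.6"}

# ... then obtains a *valid* certificate; the certificate's
# cryptographic integrity was never attacked at all
print(ca_issue_cert("Mallory", "example.com"))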

misc. refs to various domain name exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Sun, 13 Oct 2002 16:12:12 GMT
also a somewhat related scenario (that i'm more familiar with) is the transition period when software started being charged for.

at a hundred-thousand-foot level, one of the processes was to select high, medium, and low prices and then do volume forecasts (market size) at those prices. one check (gov. requirement?) was that forecast volume times price had to be greater than costs. Higher prices tended to mean lower volumes, lower prices higher volumes (of course there is the vodka maker tale about a 30 percent price increase doubling the volume).
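
a minimal sketch (invented numbers) of the price-point selection with the volume-times-price-greater-than-costs check; here, as sometimes happened, no price point clears the bar:

# toy forecast check (invented numbers): pick among high/medium/low
# price points, keeping only those where volume x price exceeds costs
DEV_COST = 5_000_000     # up-front development, setup, field training, ...

forecasts = {            # price -> forecast volume (lower price, higher volume)
    50_000: 60,
    20_000: 200,
    5_000: 700,
}

viable = {p: v for p, v in forecasts.items() if p * v > DEV_COST}
if viable:
    best = max(viable, key=lambda p: p * viable[p])
    print(f"go to market at {best:,}: revenue {best * viable[best]:,}")
else:
    print("no price point recovers development costs -- can't go to market")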

For the most part (at this point in time), software manufacturing and distribution costs were pretty volume insensitive ... the vast majority of the costs are up-front: development, organizational setup, etc (fixed up-front training costs for field support people might be as much as development). Anyway, in this transition period ... some software projects found that there was no forecast price point where development costs could be recovered ... and they couldn't go to market.

also, analogous to the hardware scenario, an equivalent entry-level example these days (with software) is demo/freeware, where you have the full product but it is crippled (or not full function) until paying (additional) money and getting an unlocking key.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Sun, 13 Oct 2002 18:59:47 GMT
Steve O'Hara-Smith writes:
Wouldn't some of these be rejects with failures in the FP circuitry thus increasing the effective yield of the line ?

i also heard that the intel electronic solid-state disk was totally populated with memory chips that had failed the standard acceptance tests. Many of these failed chips could be compensated for with circuitry that assumed higher latency and large block transfers (at least compared to random access memory operational characteristics).

this assumes that there are some yield issues to begin with .... if there happens to be nearly 100 percent yield ... using the product to implement other products with different operational characteristics wouldn't help (assuming the alternative products are lower cost). Another kind of yield is sorting for max. operational frequency where the chips show a significant variance.
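
a toy sketch (frequencies and tiers invented) of that second kind of yield -- sorting parts into product tiers by maximum operating frequency:

# toy speed-binning sketch: sort tested parts into product tiers by the
# maximum frequency each one achieved on the tester
import random

BINS = [(100, "premium part"), (80, "mid-range part"), (60, "budget part")]

def bin_part(max_mhz):
    for floor, label in BINS:
        if max_mhz >= floor:
            return label
    return "reject (or repurpose, solid-state-disk style)"

random.seed(1)
for _ in range(5):
    mhz = random.gauss(85, 15)       # measured max frequency varies by part
    print(f"{mhz:6.1f} MHz -> {bin_part(mhz)}")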

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Mon, 14 Oct 2002 17:33:44 GMT
note many of these issues come up related to trying to make the transition from an early agrarian/gathering society to an industrial society. the economic model in the agrarian/gathering society is frequently that there is an almost linear, simplistic relationship between the delivered product and the work effort/value (and tends to contribute to the simplistic economic view held by members of such societies).

in the transition to an industrial (and even information) society there are frequently significant up front, fixed costs that are relatively independent of the actual item delivered. As a result it becomes a lot more complicated to demonstrate a linear economic relationship with one specific item in isolation from the overall infrastructure.

The fixed, up front infrastructure costs contribute to significantly increased efficiencies compared to the linear economic relationships found in the early agrarian/gathering infrastructures ... assuming some specific product delivery volumes. However, if such huge up front infrastructures were developed and delivered only one item ... it is pretty obvious that it wouldn't be economically viable compared to an earlier agrarian/gathering infrastructure (with a strictly linear relationship). It is only by being able to amortize such up-front infrastructures & costs across a large volume that the economic benefit accrues to the participants of such infrastructures. A more simplistic explanation is that in such environments, the cost of producing five times as many items is typically a lot less than a factor of five (which would be the case in the earlier agrarian/gathering societies). As a result there is much more attention given to a pricing paradigm that recovers the cost of the up-front infrastructures (which is frequently more complex than in the more simplistic agrarian/gathering societies that are just looking at economic recovery of the linear costs associated with per item production).

I remember in the early '80s looking at devices produced strictly for the computer industry with a price per unit in the $6k range. Similar items with similar capability (actually more advanced) that had been produced for the consumer electronics business were in the $300-$600 range (between a 10:1 and 20:1 price reduction). The direct linear work effort that went into production of the different items was nearly the same.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Wed, 16 Oct 2002 18:55:20 GMT
Steve O'Hara-Smith writes:
Which is in itself something of a pain if you have a big fat box that you would like to burden additionally with a little light use of some tool. You find yourself choosing between a dedicated single CPU box (perhaps 5% loaded), a 20 CPU license or (my favorite at this point) some other tool. But yes that is indeed a probable gotcha.

mainframes have even gotten more interesting ... as more and more virtual machine assists were dropped into the hardware/m'code ... it became possible to do a virtual machine offering subset as a direct hardware offering ... aka LPARs (Logical PARtitions). So you can "buy" a box with certain hardware enabled and effectively spares warehoused right on site (it used to be that customers paid extra to have spares and/or upgrade hardware warehoused in near proximity ... or in some cases provided rooms right off the main machine room ... now technology is such that additional hardware can be packaged right inside each box).

So you can have a physical machine with N processors enabled, running LPARs, where each LPAR can have some number of logical processors and it is possible to specify the CPU utilization target for that LPAR (finer granularity than whole processors), and within an LPAR you can also have a virtual machine operating system ... that can provide even finer granularity.

I think that was the 40,000 copies of linux from two years ago, running in a modest sized LPAR under vm (aka VM was providing 40,000 virtual machines for 40,000 different copies of linux, and VM was running in an LPAR that was less than the whole machine).
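
a toy model (all numbers hypothetical) of the nested granularity: processors enabled on the box, CPU-share targets per LPAR, and a VM layer inside one LPAR slicing finer still:

# toy model of nested capacity granularity (numbers hypothetical):
# box -> enabled CPUs -> LPAR share targets -> guests inside one LPAR
ENABLED_CPUS = 12        # of, say, 16 physically packaged in the box

lpars = {"PROD": 0.60, "TEST": 0.15, "VM": 0.25}   # CPU-share targets

vm_guests = 40_000       # e.g. 40,000 linux virtual machines under VM

vm_capacity = ENABLED_CPUS * lpars["VM"]
print(f"VM LPAR capacity: {vm_capacity:.1f} CPUs")
print(f"per-guest share if all {vm_guests:,} guests ran flat out: "
      f"{vm_capacity / vm_guests:.6f} CPUs")
# in practice nearly all guests are idle at any instant, which is what
# makes such guest counts feasible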

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Wed, 16 Oct 2002 19:56:15 GMT
note as manual service costs increased, first there was a migration to FRUs and then actual packaging for spares, or sparing. for lots of the sparing ... the customer would pay more for the availability. This is obviously seen in the HA configurations ... my wife and I did ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

various ha/cmp configurations would be simple 1+1 fail-over where a spare idle machine was just sitting there waiting to take over. The customer was typically paying more than two times a simple non-ha configuration (at least for the hardware .... but possibly got by with just a single-copy application software license).

I believe one of the other factors was that lots of gov. contracts started specifying field upgradable hardware (gov. regs made it significantly easier to get new hardware as upgrades than as replacements).

So ... tied into industrial non-linear production ... a combination of work already going on in sparing ... and at least the gov. market segment being a big driver in field upgrading ... a natural evolution would be field upgradability built in at time of original manufacture (compared to the cost of having a physical person appear).

This industrial-age paradigm is somewhat out of synch with the linear process found in the early agrarian/gathering cultures (the book flatlanders also comes to mind as a possible analogy).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Thu, 17 Oct 2002 16:00:21 GMT
jmfbahciv writes:
I don't know if each board had a serial. Gawd...we had enough problems just trying to keep track of software edit levels. I have no idea how the hardware half kept track of their parts. Boards might be possible but not components on boards.

one of the things that a lot of startups (involving larger hardware components) learned as they evolved from technology to service ... is that they needed to know the EC-level of the components ... even consumer electronics have serial numbers for warranty purposes ... but also for EC-level/manufacturing date stuff. there is manufacturing quality control stuff related to all pieces in the same lot/batch ... but there are also design/implementation bugs which get changed/upgraded over time.

I have heard people talk about nightmare situations after they got the first 100 (or 1000) units to customers and a proper tracking system hadn't been set up beforehand. Then along comes field service, and it becomes really confusing what level the components are at in any specific customer location.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Asynch I/O

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Asynch I/O
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 17 Oct 2002 16:44:25 GMT
"John S. Dyson" writes:
It would be interesting to see the report, because the idea that Berkeley wasn't so good at disk I/O might or might not be true, depending upon which version of AT&T it was being compared with. The 'standard' SVR3 AT&T had horrid disk block fragmentation in the standard filesystem, but SVR4 (and later versions of SVR3) used the Berkeley FFS scheme.

i have a copy of margo's papers from a tr-ftp directory long ago and far away:

 18844 Jun  1  1993 jobs.slides.ps.gz
 91160 Jun  1  1993 usenix.1.93.ps.gz
 84246 Jun  1  1993 txnsim.tar.gz
338106 Jun  1  1993 thesis.ps.gz
102210 Jun  1  1993 andrew.tar.gz

she did a lot of work on FFS, log-structured ... and if i remember correctly there were comparisons between FFS, log-structured, Sprite and some others (she also consulted on some ha/cmp issues after she graduated).

I also archived many of the Raid papers from the same time.

somewhat as total aside
http://hyperion.cs.berkeley.edu/

has announcement of the RAID Project 10-year reunion (for members of the raid project only)

old raid stuff from their site


 56607 Mar  2  1996 raid5stripe.ps.gz
 27235 Mar  2  1996 nossdav93.ps.gz
 23779 Mar  2  1996 mss93rama.ps.gz
185174 Mar  2  1996 ieeetocs93.ps.gz
 91530 Mar  2  1996 algorithmica.ps.gz
 90941 Mar  2  1996 tech93_778.ps.gz
456694 Mar  2  1996 tech93_770.ps.gz
 82033 Mar 29  1993 tech91_616.ps.gz
141166 Mar 29  1993 winter93usenix.ps.gz
 89675 Mar 29  1993 sigmetrics93.ps.gz
 44589 Mar 29  1993 vlsisys93.ps.gz
   763 Mar 29  1993 journal.bib.gz
  6047 Mar 29  1993 raid.bib.gz
 40624 Mar 29  1993 ipps93.ps.gz
  2029 Mar 29  1993 README.gz
119689 Jul 25  1992 measureSOSP91.ps.gz
 41298 Jul 25  1992 benchUsenix90.ps.gz
 22541 Jul 25  1992 zebra.ps.gz
141279 Jun  6  1992 asplos91.ps.gz
172963 Jun  6  1992 tech90_573.ps.gz
174414 Jun  6  1992 tech91_660.ps.gz
 62023 Jun  6  1992 tech92_672.ps.gz
 81019 Jun  6  1992 sigmetrics91.ps.gz
 45582 Jun  6  1992 sigarch90.ps.gz
 69140 Jun  6  1992 sigmetrics90.ps.gz
 33854 Jun  6  1992 usenix90.ps.gz
 76536 Jun  6  1992 superComputing91.ps.gz
 76194 Jun  6  1992 tech91_638.ps.gz

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Coherent TLBs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Coherent TLBs
Newsgroups: comp.arch
Date: Fri, 18 Oct 2002 08:21:40 -0600
iain-3@truecircuits.com (Iain McClatchie) writes:
One possible scheme to improve SMP TLB flushes is to architect a global TLB flush operation. The CPU informs all other CPUs to flush TLB entries corresponding to a given virtual address. The difficulty here is that other CPUs may have allocated the address space a different ASID, so that the flush operation either operates across all processes (generating multiple hits in the TLB and requiring hardware to deal with that). For large SMPs, this scheme requires coherency traffic scaling as the square of the number of CPUs, which is bad.

original 370 architecture had global PTLB, ISTO, ISTE, & IPTE machine instructions that would invalidate entries across all TLBs in the complex:

PTLB .... purge all TLB entries in all TLBs
ISTO .... purge all TLB entries for a STO (segment table origin, aka address space) in all TLBs
ISTE .... purge all TLB entries for a STE (segment table entry) in all TLBs, in addition turn on the invalid bit in the STE
IPTE .... purge the TLB entry for a PTE (page table entry) in all TLBs, in addition turn on the invalid bit in the PTE

because the selective invalidates would have resulted in delaying virtual memory hardware for the 370/165 by six months (and delayed 370 virtual memory for the whole product line), initial 370 only announced and shipped PTLB (even tho some of the other 370 machine models had already implemented all four).

The 370 TLBs (those that supported multiple concurrent address spaces) were STO-associative; the STO is the (consistent) real address of the segment table origin, the same across all processors.

The IPTE selective invalidate finally appeared with the 3033 model in the late '70s.

with or w/o selective invalidate ... the sequence still required a CPU signal broadcast; typical scenario was turn on the invalid bit in the PTE (either with IPTE or an OI followed by PTLB) and then broadcast because there was kernel code (running in parallel) that might be operating on the virtual memory page using its real address. Some of the implementations tended to try and batch up a whole slew of page invalidates at a single time ... amortizing the broadcast that "drained" any kernel operations in progress that were using real address. There was some trade-off regarding relatively short-lived kernel operations getting locks on the address space as a means of serializing any page invalidates against that address space.

There was also a lot of discussion in the 1970 time-frame about the advantages of STE-associative TLBs (rather than STO-associative) to improve invalidates in the case of segment sharing. An IPTE on a PTE in a shared segment ... might possibly involve multiple different TLB entries in a STO-associative (aka address space associative) TLB. For a STO-associative TLB, the choices were having logic at TLB entry load time to not allow multiple TLB entries for the same PTE (aka real address) ... or software-cycled invalidates for all possible STOs (aka address spaces) ... or the software punts and just does a PTLB (whenever it dealt with a page that might be in multiple different address spaces).
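
a toy simulation (illustrative names only) of the STO-associative issue: the same shared-segment PTE can be cached under several STOs, so a selective invalidate must sweep every entry mapping that real page ... or the software punts with PTLB:

# toy STO-associative TLB: entries keyed by (STO, virtual page), so a
# PTE in a shared segment may be cached under several address spaces
class Tlb:
    def __init__(self):
        self.entries = {}                  # (sto, vpage) -> real address

    def load(self, sto, vpage, real):
        self.entries[(sto, vpage)] = real

    def ptlb(self):                        # purge everything
        self.entries.clear()

    def isto(self, sto):                   # purge one address space
        self.entries = {k: v for k, v in self.entries.items() if k[0] != sto}

    def ipte_shared(self, real):           # invalidate a shared PTE:
        # must sweep *all* entries, since any STO may map this real page
        self.entries = {k: v for k, v in self.entries.items() if v != real}

tlb = Tlb()
tlb.load(sto=0x1000, vpage=5, real=0x80)   # same shared page cached ...
tlb.load(sto=0x2000, vpage=5, real=0x80)   # ... under two address spaces
tlb.ipte_shared(0x80)
assert not tlb.entries                     # both copies had to go

in a multiprocessor, each CPU's TLB needs the same treatment, hence the signal broadcast discussed above.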

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wanted: the SOUNDS of classic computing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wanted: the SOUNDS of classic computing
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Fri, 18 Oct 2002 21:52:09 GMT
Charles Richmond writes:
Re-reading what Brian Inglis wrote, I can see how you got the idea that he meant removing all "can't happen" checks. My reading understood that he meant cleaning out "dead code"... that was what I was meaning with my reply. Surely, checks for "can't happen" often need to be left in.

In the past, I've made various assertions ... that taking an application and turning it into a service ... can require 4-10 times as much additional programming as the original application ... lots of it checking for can't-happen scenarios. Sometimes it is only 3 times as much code ... but ten times as hard ... because it is trying to predict all the impossible conditions and handle them before they happen.
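
a minimal before/after sketch (illustrative code only) of where the extra code goes -- the service version spends most of its lines handling conditions that "can't happen":

# illustrative only: the "application" version vs the "service" version
def app_lookup(table, key):
    return table[key]                  # works fine ... until it doesn't

def service_lookup(table, key, log):
    # the service version mostly handles the impossible conditions,
    # before they take the whole service down
    if table is None:
        log("config error: table missing"); return None
    if key is None:
        log("caller bug: null key"); return None
    if key not in table:
        log(f"data inconsistency: {key!r} absent"); return None
    value = table[key]
    if value is None:
        log(f"corrupt entry for {key!r}"); return None
    return value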

part of this i gave at a keynote for the nasa high assurance conference last year ... pointer at:
https://www.garlic.com/~lynn/index.html

something similar was done in support of the original stuff for what is frequently now called e-commerce.
https://www.garlic.com/~lynn/aadsm5.htm#asrn1 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn4 assurance, X9.59, etc

misc past postings on industrial/commercial strength computing:
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/98.html#4 VSE or MVS
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001h.html#1 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001l.html#4 mainframe question
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Fri, 18 Oct 2002 21:59:25 GMT
jcmorris@mitre.org (Joe Morris) writes:
Just ask any IBMer who was with the company during the antitrust litigation, when Edelstein ordered that EVERYTHING be preserved.

there was a joke in POK about the 705/706 building: when everything else was full ... they started vacating a row of offices and filling them (with documents) at the rate of one or two per day ... at least until the floor loading rating became a serious issue. I remember walking down an aisle of such offices.

It must have made an impression on me ... because i also started backing things up ... frequently in triplicate (although I had situations where all three copies got scratched because of operator error). some of my (and others') fanaticism for backing everything up ... leaked into things like email products (which may have contributed to an issue at the white house in the early '80s).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Fri, 18 Oct 2002 21:33:20 GMT
Jay Miller <jnmiller@@cryptofreak.org> writes:
A (non-homework!) problem: suppose you were designing an ID card. You want it to be useful in readers all over the world, but you do not want to grant holders the right to modify or create their own cards even if they are given the physical pieces necessary to do so. (ie. on-card data must be encrypted.)

Is there any solution that doesn't require every reader in the world be either 'special' in the sense that it physically holds the key or networked such that it can download the key on demand?

If not, can the key be split somehow to minimize the destructiveness of a single reader being reverse-engineered (a la. CSS)?

Or if so, can the algorithm be made public?


look at the AADS chip strawman
https://www.garlic.com/~lynn/x959.html#aads

it isn't an identification chip ... it is an authentication chip (and, yes, there can be a significant difference).

in conjunction with x9.59 and aads
https://www.garlic.com/~lynn/x959.html#x959

the objective was purely to provide strong authentication in an otherwise untrusted environment.

the chip can be 7816 contact, 14443 contactless, usb, a 2-way combo (7816+usb, 7816+14443, 14443+usb) or a 3-way combo.

no keys are required in the reader for the card to perform the authentication operation. basically the card is at a known integrity level and a relying party can choose to trust something at that integrity level.

the reader is an integrity issue however ... not so much for correct chip operation ... but for correct business process operation; some of that shows up in the EU finread stuff. the issue is not whether the AADS chip provides correct authentication in straight authentication business processes .... but there are business processes that have both authentication & approval facets; aka a chip is used to demonstrate both authentication and approval; like a financial transaction, where the person is both authenticating themselves and agreeing to pay a merchant some amount of money. While an untrusted reader can't spoof the authentication ... an untrusted reader may transmit a transaction to the card for $5000 while only displaying $50 (the person thinks they are authenticating and approving a $50 transaction, not a $5000 transaction).

one approach is to potentially have the reader also sign any transaction, the relying party can then evaluate the integrity of the authentication chip, and also evaluate the integrity of any reader that may have also signed the transaction ... with respect to performing any operation.
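
a toy sketch of that approach (not a real protocol -- HMAC keys stand in for the chips' private keys): the card signs what it was sent, the certified reader signs what it displayed, and the relying party honors the amount only if the two agree:

# toy sketch: relying party evaluates both the card's signature and the
# reader's signature over the same transaction before honoring it
import hmac, hashlib

CARD_KEY = b"card-secret"      # stand-ins for the chips' private keys
READER_KEY = b"reader-secret"  # (a real design would use key pairs)

def sign(key, msg):
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

txn = b"pay merchant 5000"             # what the reader sent the card
displayed = b"pay merchant 50"         # what the reader actually displayed

card_sig = sign(CARD_KEY, txn)             # card signs what it received
reader_sig = sign(READER_KEY, displayed)   # reader signs what it showed

# relying party: the authentication itself is fine, but the two signed
# messages disagree, exposing the untrusted-display attack
ok = hmac.compare_digest(sign(CARD_KEY, txn), card_sig) and \
     hmac.compare_digest(sign(READER_KEY, txn), reader_sig)
print("honor transaction" if ok else "reject: display/transaction mismatch")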

misc finread &/or intention related stuff:
https://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#9 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002h.html#13 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002i.html#77 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002j.html#29 mailing list history from vmshare
https://www.garlic.com/~lynn/2002l.html#24 Two questions on HMACs and hashing
https://www.garlic.com/~lynn/2002l.html#28 Two questions on HMACs and hashing
https://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

So how does it work... (public/private key)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: So how does it work...  (public/private key)
Newsgroups: sci.crypt
Date: Sun, 20 Oct 2002 03:53:26 GMT
Carlos Moreno writes:
Unless... Wait... I guess encrypting per se with the private key makes no sense (if you can use the public key to decrypt it, then who are we hiding the information from?)... So, if it's just a matter of guaranteeing that some information comes from me, then I guess I could take some data, make it visible, and then decrypt that data with my private key? (then, others can use my public key to encrypt what I decrypted and see if it gives the same data I made visible?). Would that be fool proof? Can't data be falsified with such approach?

digital signatures (taking the hash of the contents and then encrypting the hash) provides "integrity" and "authentication" ... while not (necessarily) providing confidentiality (i.e. the actual encryption of the data itself).

FIPS186-2 is one such digital signature algorithm that uses FIPS180, SHA-1 (and now SHA-2).
http://csrc.nist.gov/cryptval/dss.htm
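
a minimal hash-then-sign sketch (using the pyca/cryptography package; ECDSA with SHA-256 is just a convenient modern stand-in for the FIPS186-2/SHA-1 combination mentioned above):

# minimal hash-and-sign sketch: integrity + authentication for the
# message, with no confidentiality (the contents travel in the clear)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"the actual contents are not encrypted"

# sign: hash the contents, then sign (encrypt) the hash with the private key
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# verify: anyone holding the public key can check integrity and origin
try:
    private_key.public_key().verify(signature, message,
                                    ec.ECDSA(hashes.SHA256()))
    print("signature good: contents intact, signer authenticated")
except InvalidSignature:
    print("contents or signature were tampered with")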

in any case issues are (at least):

integrity
authentication
confidentiality

some cases integrity and authentication are sufficient w/o actually requiring confidentiality.

one of the most common scenarios on the internet is electronic commerce in conjunction with SSL. A major function of SSL is to encrypt the credit card number and keep it confidential. Note however, that the PAN (aka primary account number, aka credit card number) is needed in a large number of business processes ... and therefore while SSL provides confidentiality for the number while in transit/flight ... it doesn't do anything for the number while at rest. most of the credit card exploits have involved some part or another of the business process where the number is in the clear. misc. fraud/exploit references (including some card related stuff):
https://www.garlic.com/~lynn/subintegrity.html#fraud

the x9a10 financial standards working group was to devise a standard for all electronic retail payments (credit, debit, stored-value, etc) that preserved the integrity of the financial infrastructure ... regardless of the environment (pos, internet, etc). The result was x9.59
https://www.garlic.com/~lynn/x959.html#x959

in this scenario ... the analysis was that the fundamental problem was that the credit card number had to be both a shared-secret (needing confidentiality) as well as open and pretty freely available because of the various business processes. The x9.59 solution wasn't to try and add more levels of confidentiality (and there never would be enuf) but instead to change things so the credit card number was no longer a shared-secret and therefore didn't require confidentiality (or encryption). Basically x9.59 defines transactions that are always digitally signed (providing both integrity and authentication), and the PAN used in an x9.59 transaction can never be used in a non-x9.59 (non-authenticated) transaction. That business rule ... then removes the (x9.59) PAN from the category of shared-secret (since knowing the PAN is not sufficient to perform a fraudulent transaction). Since the PAN is no longer a shared-secret ... it no longer requires confidentiality (encryption) to protect it. Integrity and authentication (i.e. digital signature) are sufficient. Furthermore, since the PAN is no longer a shared-secret .... its exposure in a multitude of other business processes is also no longer a risk.
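
a toy rendering of that business rule (data model entirely hypothetical): a PAN flagged as x9.59-only is honored solely in verified, digitally signed transactions, so knowing the number alone buys nothing:

# toy sketch of the x9.59 business rule: PANs flagged x9.59-only are
# honored solely in digitally signed, verified transactions
accounts = {"4000111122223333": {"x959_only": True, "pubkey": "..."}}

def authorize(pan, signed, signature_verifies):
    acct = accounts.get(pan)
    if acct is None:
        return "decline: unknown account"
    if acct["x959_only"] and not (signed and signature_verifies):
        return "decline: PAN restricted to authenticated transactions"
    return "continue authorization"

# a thief who merely knows the number gets nowhere ...
print(authorize("4000111122223333", signed=False, signature_verifies=False))
# ... while the account holder's signed transaction proceeds
print(authorize("4000111122223333", signed=True, signature_verifies=True))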

A slightly related posting regarding PAN as a shared-secret ... and the issue of the necessary level of security (and confidentiality) that would be proportional to the fraud risk:
https://www.garlic.com/~lynn/2001h.html#61 Net banking, is it safe????

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Sun, 20 Oct 2002 21:06:46 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
Might have been true for some later models, but the 155 and 165, which were introduced before the 370 Principles of Operation defined address translation, needed a major retrofit to add that feature. Almost but not quite a "forklift upgrade".

while it was already in the 135 & 145 and was just a m'code change at announcement ... there is the story about customers at SHARE asking what the "XLT" label was on the roller lights on the front panel (aka translate).

there was also the (pentagon-papers-like) scenario involving the leakage of a virtual memory document to somebody in the press some months before announcement .... a big investigation, and as a result all company copying machines were retrofitted with a serial number on the glass that printed thru on all copies.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Sun, 20 Oct 2002 21:35:37 GMT
there is the alternative explanation.

you craft a public key and take it to these mystical organizations called certification authorities. They laboriously create an object of great power called a certificate and grant it great magical powers. The certificate is used to create a digital signature and it only performs this duty when you have thoroughly understood and agreed with the meaning contained in the computer binary bits that are being digitally signed. Such digital signatures now carry the attribute of non-repudiation ... that it is impossible for you to later claim that you don't fully agree with the terms and conditions expressed in any computer binary bits that carry your digital signature.

some past discussions on the subject of ssl domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

some recent refs to non-repudiation and such stuff
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#5 NEWS: 3D-Secure and Passport
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

updated security glossary & taxonomy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: updated security glossary & taxonomy
Newsgroups: comp.security.misc
Date: Mon, 21 Oct 2002 17:04:52 GMT
i recently updated merged security glossary at
https://www.garlic.com/~lynn/index.html#glossary

with nstissc glossary:
https://web.archive.org/web/*/http://www.nstissc.gov/Assets/pdf/4009.pdf
notes on other sources:
https://www.garlic.com/~lynn/index.html#glosnote

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Mon, 21 Oct 2002 20:18:35 GMT
Jay Miller <jnmiller@@cryptofreak.org> writes:
A chip-card of this sort would probably solve this problem, but I'm afraid I'm limited to hardware independent solutions. Also, the data must be assumed both readable and writable publicly (e.g. assume it's a floppy disk).

Note AADS is a general framework that can use any media ... 5-6 years ago when I started on it ... the standard was a private key in an encrypted file (requiring a password/pin to use). The file could be on floppy, hard disk, cdrom, etc.

That wasn't sufficient integrity for many purposes ... so I joked that I wanted to take a $500 mil-spec part, cost reduce it by more than two orders of magnitude, and at the same time increase the integrity/security ... that basically is the aads chip strawman.

aads and the aads chip strawman aren't synonymous ... but it looked like trying to put together a high integrity chip would be an interesting exercise.

I gave a talk about the effort in the TCPA track on assurance at the intel developer's conference two years ago ... slides at
https://www.garlic.com/~lynn/x959.html#aads

a little further down the page.

I somewhat joked that the TPM specification at that time was such that the aads chip strawman could meet all of the TPM requirements; the other part of the joke (from somebody in the audience) was that I came to the design almost three years earlier than TCPA because i didn't have 200 people in committees helping me with the design.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Mon, 21 Oct 2002 21:46:36 GMT
Christopher Browne writes:
People that haven't thought things through imagine that maybe having an "Alien Visitors Card" would prevent terrorists from entering the US. But they fail to grasp that it only prevents this if there is a /perfect/ screening process that gives cards to "safe" people and denies access to "terrorists."

the supposed magical properties of id cards are also frequently attributed to (id/x.509) certificates as well ... re previous posting in this thread ... not only id'ing ... but empowered with other mystical properties like non-repudiation.
https://www.garlic.com/~lynn/2002n.html#16

misc privacy/identification/biometrics and authentication vis-a-vis identification
https://www.garlic.com/~lynn/subpubkey.html#privacy

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Mon, 21 Oct 2002 23:24:06 GMT
Jay Miller <jnmiller@@cryptofreak.org> writes:
From what I've seen, this system seems very solid on practical grounds. From a theoretical point of view, however, it seems that if the algorithm becomes well known the system may have a weakness. That is, that anyone might create her own card with whatever information she likes including a private key of her own choosing (encrypted with a password/pin of her choosing). It would therefore be vulnerable to forgery. I suspect this might be the reason for the military grade hardware? Or am I way off base?

so how strong do you want the integrity to be?

1) anybody generates their own public/private key pair ... and registers the public key with relying parties ... the relying parties use some process to make sure that the person presenting the public key ... actually can produce corresponding digital signatures (let's use ec/dsa, fips186-2, as an example). that has one level of integrity. the institution is responsible for making sure that the person presenting the public key for registration corresponds to whatever they are registering. if it is purely opening a bank account ... then it can be analogous to tearing a dollar bill in half and giving one half to the bank ... and telling them not to honor any requests unless the matching half can be presented.

2) institutions get chips/cards from the foundry ... the chips do on-chip key gen ... the private key never leaves the chip, the public key is exported. any digital signature algorithm will do from a framework standpoint, but for some mundane purposes can again select ec/dsa, fips186-2. the institution has done some FIPS/EAL certification on the chip ... and so trusts it to whatever level it is certified. these chips are given to their end users. institutions only trust & register public keys from chips they get directly from the foundry. lots of corporate employee stuff has various kinds of hardware tokens (door badge system, login system, etc) ... it doesn't have to be just military. also there are all sorts of chip cards (especially in europe) for financial transactions. for the financial case they would possibly like the highest possible integrity at the lowest possible cost. again it doesn't have to be military ... just anything of value.
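
a minimal sketch of case 1 above (pyca/cryptography; ECDSA as an arbitrary stand-in for ec/dsa, fips186-2): a self-generated public key is registered with the relying party, and the presenter later proves possession of the private key by signing a fresh challenge:

# minimal sketch of case 1: self-generated key pair registered with a
# relying party, later proven via a signed random challenge
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# user side: generate a key pair, hand the public key to the relying party
user_key = ec.generate_private_key(ec.SECP256R1())
registered_pubkey = user_key.public_key()    # stored with the account

# later: relying party issues a fresh challenge ...
challenge = os.urandom(32)
# ... and the user proves possession of the private key by signing it
response = user_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    registered_pubkey.verify(response, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated: presenter controls the registered private key")
except InvalidSignature:
    print("rejected")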

OK, so in the case of institutional delivered tokens ... they have a high level of confidence in the integrity of the delivered/registered tokens. By contrast, it can be relatively difficult for an institution to trust a random consumer-presented token. As you have pointed out many infrastructures are subject to counterfeit/mimic chips that can be programmed to talk like, smell like, look like and be accepted as valid chips.

So an interesting opportunity is how trust can be created for a token that is presented (whether it is card format, or dongle/key-fob format, or whatever). There are a couple of steps here that are somewhat orthogonal. If a random token is presented ... on what basis does an institution trust the token to be a "valid" token (for some degree of valid)?

Once they get past whether they can trust the token ... then they have other business processes that they go thru that establish some relationship between that token and other attributes ... so that whenever the token is presented in the future ... the token represents the equivalent of all the business processes that previously equated the token to some set of attributes.

The attributes could be identity ... something like whoever uses this card probably has some specific fingerprint and/or DNA. The attributes might not be identity ... the attributes might just be that the person is allowed to make financial transactions against a specific bank account (and totally divorced from whether or not the financial institution has a separate process relating the account to some identity ... like SSN). The attribute might be that this is a valid employee (w/o actually having to indicate which employee) and the front door should open.

The higher the risk ... the larger the amount that the bad guys will be willing to spend on exploits of the infrastructure (counterfeit cards for instance). This goes somewhat to past statements about the amount of security being proportional to risk (actually this frequently degenerates to the cost of security being proportional to risk ... there isn't necessarily a strict linear relationship between security cost and security strength).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Tue, 22 Oct 2002 16:06:04 GMT
jdallen2000@yahoo.com (James Dow Allen) writes:
National Semi built a series of 158 lookalikes which could supposedly be upgraded to the more expensive model by adding a jumper. I was told this by a NatSemi FE. I don't know if he ever followed through on his plan to sell the jumper-upgrades personally.

you could unlatch the front panel on the 155 and swing it out. on the back was a switch that could disable/enable the cache. If you disabled the cache ... the 155 possibly ran slower than a 145. the 155/165 had main storage that was significantly slower (2mic) than the 145's (aka the cache was supposed to compensate for the slower memory). it wasn't until the 158/168 that the higher end models got memory comparable in speed to the 145's.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Tue, 22 Oct 2002 16:07:48 GMT
jdallen2000@yahoo.com (James Dow Allen) writes:
National Semi built a series of 158 lookalikes which could supposedly be upgraded to the more expensive model by adding a jumper. I was told this by a NatSemi FE. I don't know if he ever followed through on his plan to sell the jumper-upgrades personally.

weren't they actually hitachi ... or was that only after becoming NAS?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Tue, 22 Oct 2002 16:13:14 GMT
jdallen2000@yahoo.com (James Dow Allen) writes:
No. A 370-nn5 could be upgraded to look like a nn8 (after which it was called a "165 Model II" or "nn5-3") but in most cases the change was massive, with a large percentage of the circuit boards replaced.

the 165 mod II was an upgrade that added virtual memory. the 165 still had the slower memory; it wasn't until the 168 (& 158) that it got the faster memory. the virtual memory upgrade was a significant hit to the 165. also, as per other postings .... the claim was that implementing the selective invalidates would have added another six months to getting out the virtual memory support (and a six month delay in announcing virtual memory for 370). the decision was to drop the selective invalidates and not incur the six month delay.

the other change going from 165 to 168 was that the m'code was reworked (and some hardware added), reducing the avg 370 instruction from 2.1 machine cycles (on the 165) to 1.6 machine cycles (on the 168).

some selective invalidate posts:
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Sandia, Cray and AMD

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sandia, Cray and AMD
Newsgroups: comp.arch
Date: Tue, 22 Oct 2002 16:41:49 GMT
Robert Myers writes:
U.S. Taxpayer questions:

1. Why aren't they doing this with TCP/IP over ethernet? 8^}.

2. Are you imagining that AMD will lose any of its proprietary rights by having Uncle Sam pay the bill?

Who knows what machinations may be behind this one. The US DoD is even less comfortable with single source situations than IBM was.


we could even revive the question of why the gov paid for tcp/ip and the internet in the first place, and why the us gov. is allowing so many people around the world to use it (and some still even make money off it)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Tue, 22 Oct 2002 16:30:48 GMT
Jay Miller <jnmiller@@cryptofreak.org> writes:
I'm surprised that Chaum's pseudonym system hasn't been mentioned more in this context - I'd never even heard of it. It would solve several (not all) of the problems talked about in BS's bit and the earlier post by Mr. Browne. (Mr. Browne actually noted as much.)

note that the aads chip strawman (with biometrics and match-on-card) accomplishes effectively something similar ... but from a different approach ... a judicious application of authentication ... rather than confusing identification with authentication. in that respect it is identity agnostic ... aka any identity is dependent on other business processes that might (or might not) relate authentication to identification.

The chip can establish (authenticate) whether or not the owner has rights to perform certain operations ... like withdrawing money from a bank account. no identification is involved and the chip is identity agnostic. any identity would require the business to establish a mapping between the entity that has rights to withdraw from an account and some identity (but that is totally outside the scope of the chip).

many of the biometrics systems flow the information up to a central repository where the match is done. in that sense these systems not only involve identity but turn the biometric value into a shared-secret (similar to previous postings about the cc account number as a shared-secret). match-on-card eliminates biometrics as a shared-secret. the problem with many of the current generation of biometric chips with match-on-card ... is that they've been designed for an offline environment. biometrics tend to be very fuzzy, with some acceptable scoring threshold set (i.e. percent match) for whether or not the card works (also leading to the whole notion of false positives and false negatives). the issue is that, in an area somewhat related to security proportional to risk ... the threshold values are somewhat tuned to the value of the operation. For an environment that migrated to chip-based biometrics across a broad range of environments with a broad range of values and risks ... that could lead to a very fat wallet filled with different cards.
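
a toy match-on-card sketch (scores and thresholds invented) of why per-risk threshold tuning could mean a separate card per threshold:

# toy match-on-card sketch: the fuzzy biometric score is compared
# on-card against a threshold tuned to the value at risk
THRESHOLDS = {"door badge": 0.70, "small payment": 0.85, "wire transfer": 0.95}

def card_decision(match_score, purpose):
    # higher-value operations demand a closer match (fewer false accepts,
    # at the price of more false rejects)
    return match_score >= THRESHOLDS[purpose]

score = 0.88        # today's fingerprint read vs the on-card template
for purpose, threshold in THRESHOLDS.items():
    verdict = "works" if card_decision(score, purpose) else "doesn't work"
    print(f"{purpose:14s} (threshold {threshold:.2f}): card {verdict}")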

random biometrics:
https://www.garlic.com/~lynn/aadsm2.htm#privacy Identification and Privacy are not Antinomies
https://www.garlic.com/~lynn/aadsm3.htm#cstech2 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech4 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm6.htm#terror12 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm7.htm#rhose9 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#carnivore2 Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#tamper Limitations of limitations on RE/tampering (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#biometrics biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio1 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio2 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio4 Fingerprints (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#bio5 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio8 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aepay3.htm#passwords Passwords don't work
https://www.garlic.com/~lynn/aepay4.htm#comcert Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay6.htm#cacr7 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay7.htm#3dsecure2 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay10.htm#5 I-P: WHY I LOVE BIOMETRICS BY DOROTHY E. DENNING
https://www.garlic.com/~lynn/aepay10.htm#8 FSTC to Validate WAP 1.2.1 Specification for Mobile Commerce
https://www.garlic.com/~lynn/aepay10.htm#15 META Report: Smart Moves With Smart Cards
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#166 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#172 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#235 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2000.html#60 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001h.html#53 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#16 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#6 Is VeriSign lying???
https://www.garlic.com/~lynn/2001k.html#61 I-net banking security
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#38 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002f.html#22 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#45 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002g.html#56 Siemens ID Device SDK (fingerprint biometrics) ???
https://www.garlic.com/~lynn/2002g.html#65 Real man-in-the-middle attacks?
https://www.garlic.com/~lynn/2002g.html#72 Biometrics not yet good enough?
https://www.garlic.com/~lynn/2002h.html#6 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#8 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#9 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#13 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#41 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002i.html#61 BIOMETRICS
https://www.garlic.com/~lynn/2002i.html#65 privileged IDs and non-privileged IDs
https://www.garlic.com/~lynn/2002j.html#40 Beginner question on Security
https://www.garlic.com/~lynn/2002l.html#38 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#14 fingerprint authentication
https://www.garlic.com/~lynn/2002n.html#19 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002n.html#20 Help! Good protocol for national ID card?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Tue, 22 Oct 2002 17:00:14 GMT
Jay Miller <jnmiller@@cryptofreak.org> writes:
You're right that cost versus security is hardly linear. Schneier's cement-encased computer is a good example, I think.

Consider a high resource attack. Normally one might make a passport factory or a chip factory meant to duplicate to a high degree the ID token. The cool thing about Chaum's protocol is that the object of attack shifts. Instead of being the weakest link, the token itself is now the strongest piece of the system and elements that are already viable targets now (databases, humans, etc.) become the only objects of attack.


also see security proportional to risk and the credit card databases
https://www.garlic.com/~lynn/2001h.html#61

it requires both an infrastructure model and the corresponding standards work. the x9.59 protocol removes the credit card number as the point of attack (and the large multitude of databases that contain it in the clear) and effectively moves the attack to the end-points ... the signing environment and the authentication environment.
https://www.garlic.com/~lynn/x959.html#x959

the aads chip strawman proposes the best token technology in existence today, at optimized cost-reduced delivery ... for protection of the private key and the signing operations
https://www.garlic.com/~lynn/x959.html#aads

that moves the attacks & exploits on the signing end point to different areas ... some addressed by the eu finread stuff (are you really signing what you think you are signing):
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
https://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Oct 2002 20:24:11 GMT
gah@UGCS.CALTECH.EDU (glen herrmannsfeldt) writes:
There is a discussion on the Hercules list about some features of S/3x0 architectures, including wait state.

The wait state is an unusual feature in computer architectures. Most just loop when there isn't anything else to do. Multitasking OS have a "null" task that gets all the time when there is nothing else to be done.


easy ... machines were leased and charged for based on the meter running. the meter ran whenever the cpu was executing or channels were active. pure side note: the meter actually coasted (at least on the 370); if the cpu/channel was active at any time within a 400ms window, the meter ran for the full 400ms (or take the view that the meter tic resolution was 400ms). I'm sure totally unrelated to all this was that the MVS SRM had a 400ms wake up interval.
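
a small illustration (python, my own toy code; not actual meter hardware logic) of that coasting behavior:

    TIC = 400   # ms ... meter tic resolution

    def metered_ms(activity_ms):
        # activity_ms: millisecond instants with any cpu/channel activity;
        # any activity within a 400ms window charges the whole window
        windows = {t // TIC for t in activity_ms}
        return len(windows) * TIC

    # sporadic terminal activity over a 10 second span:
    print(metered_ms([0, 50, 2500, 9990]))    # 1200 ms charged of 10000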

one of the big things that CP/67 did in the late '60s was the switch to the "PREPARE" sequence on terminal lines.

CP/67 was the precursor to VM/370 (which survives today as both LPAR support and zVM ... my guess is that the LOCs in the LPAR microcode are comparable to the LOCs in the original CP/67 kernel). CP/67 and CMS were pretty much an evolution of the CTSS time-sharing system ... done by some of the same people that worked on CTSS ... and done in parallel and in the same building as other people (that had also worked on CTSS) doing Multics.

In any case, CP/67 was doing all this super-optimized time-sharing, time-slicing, dynamic adaptive workload management, fastpath kernel optimization, near optimal page replacement algorithms, lots of the precursor stuff to what became capacity planning, etc, etc.

However, one of the major things that allowed CP/67 to transition into the time-sharing service bureau business was the change to use PREPARE in the terminal CCW sequence. CP/67 was already going into wait state when there wasn't anything to do ... and not waking up gratuitously ... but the terminal I/O sequence still kept the channel active and ran the meter.

one of the requirements for offering cp/67 service bureau service ... was being able to provide 24x7 service ... and to be able to recover the costs of the operation. Going into wait state helped with stopping the meter under off-shift, low-usage scenarios. But it wasn't until the PREPARE CCW sequence change was made that the meter actually totally stopped. At that point, just leaving the system up and running continuously became much more cost effective.

another issue (at least during the start-up phases) of the time-sharing service bureau stuff was various automated operator functions and automated recovery & reboot in case of failures.

In any case, somewhat after CP/67 was announced at the spring '68 SHARE meeting in houston (coming up on 35 years) ... two CP/67 time-sharing service offerings spun off.

misc. other pieces of ctss, timesharing, cp/67, vm/370, and virtual machine lore at:
http://www.leeandmelindavarian.com/Melinda#VMHist

these days ... with time-sharing by both the virtual machine kernel and the microcode ... the issue isn't the (leasing) meter running ... but not unnecessarily using processor cycles that could be put to better use by some other component.

random other posts related to the subject
https://www.garlic.com/~lynn/99.html#179 S/360 history
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000b.html#44 20th March 2000
https://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2000e.html#9 Checkpointing (was spice on clusters)
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001g.html#30 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001h.html#14 Installing Fortran
https://www.garlic.com/~lynn/2001h.html#35 D
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001k.html#38 3270 protocol
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#44 cp/67 (coss-post warning)
https://www.garlic.com/~lynn/2002d.html#48 Speaking of Gerstner years
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#21 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#62 subjective Q. - what's the most secure OS?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002k.html#64 History of AOL
https://www.garlic.com/~lynn/2002l.html#66 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002m.html#61 The next big things that weren't

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Oct 2002 20:33:32 GMT
Anne & Lynn Wheeler writes:
CP/67 was the precursor to VM/370 (which survives today as both LPAR support and zVM ... my guess is that the LOCs in the LPAR microcode are comparable to the LOCs in the original CP/67 kernel). CP/67 and CMS were pretty much an evolution of the CTSS time-sharing system ... done by some of the same people that worked on CTSS ... and done in parallel and in the same building as other people (that had also worked on CTSS) doing Multics.

actually LPARs might be slightly more like CP/40. Prior to availability of the 360/67 (a 360/65 with virtual memory support), the group modified a 360/40 with virtual memory support and built CP/40 to run on it. The virtual memory support had a TLB entry for each of the 64 4k pages in the machine (i.e. a 256kbyte machine) and a 4bit process-id that it did an associative lookup on (i.e. a maximum of 15 processes supported by cp/40). The limitations of CP/40 are possibly more analogous to the LPAR limitations. In any case, when the 360/67 became available ... CP/40 was ported and became cp/67 (and when 370s became available, it was ported and became vm/370).
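
a toy model of the associative lookup (python; my own reconstruction from the description above, not actual CP/40 hardware):

    # one TLB entry per 4k real page frame (64 frames = 256kbytes), each
    # tagged with a 4-bit process id and a virtual page number.

    FRAMES = 64
    tlb = [None] * FRAMES        # tlb[frame] = (process_id, virtual_page)

    def translate(pid, vpage):
        # associative search: compare (pid, vpage) against all 64 tags
        for frame, tag in enumerate(tlb):
            if tag == (pid, vpage):
                return frame * 4096      # real address of that frame
        return None                      # miss ... fault the page in

    tlb[3] = (5, 0x12)
    print(hex(translate(5, 0x12)))       # 0x3000
    print(translate(5, 0x13))            # None (miss)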

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 23 Oct 2002 15:19:23 GMT
pa3efu@YAHOO.COM (Jan Jaeger) writes:
One other thing that comes to mind is that a cpu in a wait state would not require any bandwith to memory, and as such channel access to memory might be better if a cpu is in a wait state. oltp systems such as tpf do not use the enabled wait, they loop, I think because going into and coming out of the wait state is rather expensive. This to improve response times, wheras mvs is (or at least was) batch oriented, and thougput is more importand then response.

i believe the overhead for the SIO instruction was much worse than the interrupt (one of the reasons for the introduction of SIOF). also much of the hardware interrupt overhead was achieving a consistent state of the machine (aka happening at an instruction boundary) ... which should imply that it would be slightly/somewhat more expensive if an instruction was executing rather than in wait state (aka no imprecise interrupts).

much of the interrupt overhead ... wasn't in the hardware ... it was how the operating systems implemented the first level interrupt handler (FLIH). I have claimed that (as an undergraduate) i had optimized the CP/67 FLIHs such that they were possibly ten times faster than mvt/mft (even tho I had done lots of MFT/MVT optimization work also). minor refs to the presentation at fall '68 SHARE in Atlantic City (lots of work modifying MFT14 for standalone production work, lots of work modifying CP/67, and lots of work modifying MFT14 for running under cp/67):
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

lots of the FLIH and i/o initiation work tended to deal with non-standard, anomalous fault handling ... but it was still possible to build a bullet-proof infrastructure that was still very optimized (work done for the disk engineering lab):
https://www.garlic.com/~lynn/subtopic.html#disk

as mentioned in another posting to this thread ... various other machine architectures provided for vectored interrupts ... which would shave a couple instructions off the FLIH. The big savings in some real-time architectures was that they had rings & vectored interrupts ... an interrupt into a "better" ring suspended execution of the "poorer" ring (each ring had its own regs, etc ... so the FLIH didn't have to save & restore). This tended to be a special case for a small number of things.

On cache machines, asynchronous interrupts can imply task switching and cache thrashing. One of the little special twists that I did for VM/370 was some dynamic adaptive code that, under heavy interrupt load, would run user processes disabled for I/O interrupts ... but with a managed timer interrupt. I/O interrupts tended to be slightly delayed ... and then batch drained within an interrupt window in the kernel. Properly tuned (on a heavily loaded 370/168) it actually improved both interrupt processing (the kernel interrupt code tended to get very good cache hits since it was being repeatedly executed) and application execution (since there weren't a lot of asynchronous interrupts thrashing the cache).
https://www.garlic.com/~lynn/2002l.html#25 Do any architectures use instruction count instead of timer
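
a rough sketch of that batching idea (python as pseudocode; hypothetical control flow, not the actual vm/370 code):

    from collections import deque

    pending_io = deque()    # completions queued while user runs disabled

    def io_completion(device):
        # under heavy load, don't take the interrupt now ... just queue it
        pending_io.append(device)

    def timer_interrupt():
        # the managed timer tick enters the kernel once and drains the
        # whole batch, so the kernel interrupt code stays hot in the cache
        while pending_io:
            device = pending_io.popleft()
            # process the completion; a real kernel would also redrive the
            # device here with its next queued request
            print("handled i/o completion for", device)

    io_completion("disk1"); io_completion("tape2")
    timer_interrupt()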

A large part of the I/O features for XA ... was to offload a lot more of the kernel I/O processing into dedicated asynchronous processors (over and above what was already provided by the channel architecture). Part of this was justified by the significantly painful, long MVS pathlengths (and in some sense ... it was easier to rewrite from scratch in a new microprocessor than to try and clean up existing spaghetti code, although I had demonstrated it was possible with the work supporting the disk engineering lab). One of the ancillary benefits of outboarding more of the I/O function was allowing asynchronous queuing of new requests and dequeuing of completed requests ... with the dedicated processors being able to handle things like device redrive ... processing the completion of the current request and immediately redriving the device with the next queued request ... w/o impacting the cache locality of the main processor.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Wed, 23 Oct 2002 15:52:28 GMT
"Tony T. Warnock" writes:
Of course, Police Undercover Agents, would have fake ID's for getting their criminal jobs and true ID's for their real jobs.

the cards wouldn't be fake ... the cards might refer to identification that was somewhat manufactured. identity theft has to do with getting valid ID "cards" ... for the wrong person. There are a number of different kinds of vulnerabilities and exploits ... at least:

counterfeit/invalid cards (either valid or fictitious persona)
valid cards for somebody else's (valid) persona (identity theft)
valid cards for fictitious persona

also ... who is the authority that decides what is a valid persona and what is a fictitious persona?

lots of privacy stuff going on (like GLB) ... big issues are institutional "mis-use" of privacy information ... as well as criminal "mis-use" of privacy information (identity theft).

one of the (effective) claims regarding x.509 "identity" certificates is that they can represent major privacy violation issues ... and therefore some past transitions to relying-party-only certificates (aka effectively authentication-only certificates). Note however, that traditional certificates are like letters of credit from one institution to another institution. In general, writing a letter of credit for somebody to yourself can frequently be shown to be redundant and superfluous (aka: dear me, please accept my assurance that the holder of this document is good for $10,000; signed, me).

that then strays into the semantics of identification and authentication. rather than looking at cards as identification ... embodying a persona ... they are part of some authentication schema ... aka 3-factor authentication:

something you have (aka hardware token or card)
something you know (password or PIN)
something you are (biometrics)

now within the structure of 3-factor authentication semantics ... in conjunction with cards ... something you know and something you are can represent either "secrets" or shared-secrets. a shared-secret means the information is registered someplace else (like mother's maiden name) and somebody is responsible for presenting it. A "secret" means the information is registered in the token and correct presentation affects the operation of the token. With a properly designed authentication token, a non-shared-secret paradigm tends to be at least privacy agnostic.
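
a minimal sketch of the contrast (python; the account number and PIN values are invented for illustration):

    # shared-secret: the verifier holds a copy of the value, so every
    # place it is registered is a place it can leak from.
    registered_pins = {"acct1234": "8712"}        # central copy

    def central_check(account, pin):
        return registered_pins.get(account) == pin

    # "secret": the value is registered only in the token; a correct
    # presentation merely affects the operation of the token.
    class Token:
        def __init__(self, pin):
            self._pin = pin                       # never leaves the token
            self.unlocked = False
        def present(self, pin):
            self.unlocked = (pin == self._pin)

    t = Token("8712")
    t.present("8712")
    print(t.unlocked)      # True ... and no central registry saw the PIN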

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 23 Oct 2002 18:34:57 GMT
Anne & Lynn Wheeler writes:
lots of the FLIH and i/o initiation work tended to deal with non-standard, anomalous fault handling ... but it was still possible to build a bullet-proof infrastructure that was still very optimized (work done for the disk engineering lab):
https://www.garlic.com/~lynn/subtopic.html#disk


at the time that i started the bullet-proof rewrite in the above refs, the MTBF (system crash) for MVS running a single testcell in the engineering lab was on the order of 15 minutes. eventually things got to the point where they could simultaneously operate 6-12 testcells on the same machine with no failures (a significant improvement in productivity since they had previously had to do everything stand-alone with dedicated time per testcell).

anyway ... i did a (corporate classified) paper on what was needed and the changes & restructuring. then there was a letter from somebody in POK RAS management ... which effectively wanted to fire me; not for fixing everything ... but because the document could be interpreted (if you so desired) as a list of things that had needed fixing (which then could be construed as reflecting on the RAS group, especially if you were totally focused on image building).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Wed, 23 Oct 2002 23:19:15 GMT
Jeff Raben writes:
I thought that the early 2701 with its limited number of lines was the only machine without the 'prepare' command.

The 2703 and the its little buddy 2702 both had these 'wake up on data' (and lose 'em) commands (also good on the old contention-type communication).

The 2701 manuals reference the /30 thru the /75 (including the 44 and 67). The 'major updated' 2703 manuals predate TSS release by about a year.


I didn't say that the machines didn't have them. the machines had them ... it's just that the software didn't use them originally. the software was then changed to use prepare (just because the hardware had it and/or the hardware availability predated the software availability, didn't mean that the original designers thought to use the feature). one of the major reasons justifying changing (the software) to use the prepare command was to stop the meter tic'ing.

the 2702 had other problems ... which resulted in a project that i worked on as an undergraduate, in which we reverse engineered the channel interface and built our own controller ... using an interdata/3 as the base microprocessor. supposedly this originated the pcm controller business (something that CPD wasn't too happy with me for).
https://www.garlic.com/~lynn/submain.html#360pcm

as for the extraneous reference to TSS ... some other extraneous references: the discussion was specifically about the change to cp/67 that resulted in the meter not tic'ing ... especially off shift with possible low activity ... enhancing the ability for some service bureaus to offer cost effective cp/67 time sharing service.

at approximately the time the prepare command change was done in cp/67 ... i believe the cp/67 & cms ibm group was somewhere around 12 people. I was told that at about the same time the tss/360 ibm group numbered around 1200 people (two orders of magnitude more). All sorts of discussions could be had about whether it was better to have had just 12 people or 1200 people. there are also discussions about whether the subsequent tss/370 (on 370s with virtual memory, rather than the 360/67) activities may possibly have done better with only a 20 person group (rather than the original 1200).

While there were a number of commercial cp/67 (and later vm/370) time sharing service bureaus ... i'm not aware of there having been any commercial tss/360 time sharing service bureaus (as well as a significantly larger number of 360/67s running cp/67 than tss/360).

as an aside ... almost 20 years later ... i tried to do another CPD controller replacement/clone using a series/1 (peachtree) migrating to RIOS ... but was somewhat less successful than the original effort (that is orthogonal to an earlier attempt to have the original 3705 be based on the peachtree (s/1) rather than the uc.5 microprocessor).
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#155 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#165 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#239 IBM UC info

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

why does wait state exist?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: why does wait state exist?
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Thu, 24 Oct 2002 00:09:59 GMT

https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?

another somewhat extraneous note ... interdata was bought up by perkin/elmer, which continued to offer the product. 5-6 years ago (nearly 30 years after we built the original) i was in a large oltp data center and there was a p/e box handling a significant amount of the load.

i didn't get to examine that box in detail ... but i've talked to some people that were selling p/e boxes into the federal gov. in the early '80s and they said that the channel interface was still a wire-wrap board ... possibly unchanged since we built the original board.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opera 6.05 resources problem?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opera 6.05 resources problem?
Newsgroups: opera.general
Date: Wed, 23 Oct 2002 23:34:17 GMT
possibly totally unrelated to any other problem ... i normally run with javascript disabled ... unless i absolutely have to turn it on for some website.

i found with the linux firewall ... displaying transmission activity ... that after turning on javascript in conjunction with visiting a website (there doesn't seem to be any correlation with any specific site) ... a constant, relatively low-level arriving packet rate starts, even when absolutely nothing (that i know of) is going on. turning javascript back off has no effect. killing/dropping the link and then restarting the link does successfully interrupt it tho.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VR vs. Portable Computing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VR vs. Portable Computing
Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science
Date: Thu, 24 Oct 2002 02:30:22 GMT
Keith R. Williams writes:
Not so tiny. IBM has a class-A domain (indeed I have two 9-dot fixed addresses in my office).

Not so bizarre either. It's an indication of where things are (Intra/Inter).

Not so important. Who the hell cares what people call 'www'? ...and why is it still there?


i remember at the time that ibm got the class-a domain (somebody i had worked with applied and got it), i was somewhat surprised that one was available. note that this was still not that far removed from when the internal network was still larger than the (whole) arpanet/internet. random ref:
https://www.garlic.com/~lynn/internet.htm

also note that GML was done at the science center ... which begot SGML and then HTML ... possibly in large part because CERN was a vm/cms installation ... and had been running it since the (infamous) cern mvs/vm bake-off. i believe its sister location, slac (also a large vm/cms installation), claims to have the web site that has been around the longest (i don't know if they are claiming the original web site, but i believe they are at least claiming the earliest one still around).

during much of the 70s & 80s slac hosted the bay area vm user group (baybunch) meetings ... some of them were at the 30th anniv. party for vm/370 at share 99 in san fran (this past aug).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VR vs. Portable Computing

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VR vs. Portable Computing
Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science
Date: Thu, 24 Oct 2002 02:56:34 GMT
Anne & Lynn Wheeler writes:
i remember at the time that ibm got the class-a domain (somebody i had worked with applied and got it), i was somewhat surprised that one was available. note that this was still not that far removed from when the

totally unrelated ... but at another time & place this person had been the "catcher" in endicott for system/r (original rdbms) ... which then became sql/ds. and then to stray even further afield, one of the people at the following meeting had been the endicott->stl sql/ds catcher for what became db2.
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VR vs. Portable Computing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VR vs. Portable Computing
Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science
Date: Thu, 24 Oct 2002 06:53:42 GMT
Anne & Lynn Wheeler writes:
also note that GML was done at the science center ... which begot SGML and then HTML ... possibly in large part because CERN was a vm/cms installation ... and had been running it since the (infamous) cern mvs/vm bake-off. i believe its sister location, slac (also a large vm/cms installation), claims to have the web site that has been around the longest (i don't know if they are claiming the original web site, but i believe they are at least claiming the earliest one still around).

actually i believe the report was a tso/cms comparison (i.e. interactive computing). it was somewhat infamous in that the (public) share report was internally classified corporate confidential - restricted (aka available on a need-to-know basis only). apparently it wasn't possible to restrict customers from reading how bad tso was ... but at least it was possible to try and keep employees from finding out.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VR vs. Portable Computing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VR vs. Portable Computing
Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science
Date: Thu, 24 Oct 2002 07:12:15 GMT
Keith R. Williams writes:
Not so tiny. IBM has a class-A domain (indeed I have two 9-dot fixed addresses in my office).

another total aside ... about the time one location was getting the class-a ... several other locations were getting one or more class-Bs each. this was all before the 10-net rfc (and the request to return unused nets). from:
https://www.garlic.com/~lynn/rfcietff.htm

1597 -
Address Allocation for Private Internets, DeGroot G., Karrenberg D., Moskowitz R., Rekhter Y., 1994/03/17 (8pp) (.txt=17430) (Obsoleted by 1918)

1627 -
Network 10 Considered Harmful (Some Practices Shouldn't be Codified), Crocker D., Fair E., Kessler T., Lear E., 1994/07/01 (8pp) (.txt=18823) (Obsoleted by 1918)

1917 -
An Appeal to the Internet Community to Return Unused IP Networks (Prefixes) to the IANA, Nesser P., 1996/02/29 (10pp) (.txt=23623) (BCP-4)

1918 -
Address Allocation for Private Internets, DeGroot G., Karrenberg D., Lear E., Moskowitz R., Rekhter Y., 1996/02/29 (9pp) (.txt=22271) (BCP-5) (Obsoletes 1597, 1627)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CMS update

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Thu, 24 Oct 2002 02:08:27 -0600
Newsgroups: bit.listserv.vmesa-l
Subject: CMS update
long, long ago ... when some guys came out to install CP/67/CMS at the university ... the method was basically

update fn assemble a fn update a

where the fn update file had ./ i number, ./ r number <number>, ./ d number <number>

where the numbers were the sequence numbers in cols 73-80 of the assemble file.

you then could assemble the resulting temporary file from update (actually update could be used against any kind of file as long as it had sequence numbers in 73-80)

periodically the temporary file would be taken and used to replace the permanent assemble file (normally resequencing the assemble file when that was done ... but not always). The convention was that you also needed to manually type the sequence numbers into the update file in cols 73-80 ... appropriately choosing the numbers you typed. with all the updates I was doing ... it really got to be a pain to constantly type in those numbers. So i wrote a little preprocessor routine ... it would read the update file and look for a dollar sign on the ./ control cards. If it found one ... it would take that as an indication to automatically generate the sequence numbers in the cards it output. "$" could have nothing following it ... in which case it did the default ... or it could have an optional starting number and an optional increment following the dollar sign. This was all still one-level update.
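
for illustration, a miniature version of applying the ./ r, ./ i, and ./ d cards against a sequenced file (python, my own toy code; the card text and sequence numbers are invented, and the real UPDATE command worked on cols 73-80 of actual card images):

    # toy model: a file is a list of (sequence number, card text) pairs.

    def apply_card(lines, op, lo, hi, cards):
        if op in ("./ r", "./ d"):
            # replace/delete drop the cards numbered lo thru hi
            lines = [(s, t) for s, t in lines if not (lo <= s <= hi)]
        if op == "./ r":
            lines += [(lo + i, c) for i, c in enumerate(cards)]
        if op == "./ i":
            # real UPDATE numbered inserted cards between lo and the next
            # existing number; fractional numbers keep this sketch simple
            lines += [(lo + (i + 1) / 10, c) for i, c in enumerate(cards)]
        return sorted(lines)

    source = [(10000, "card A"), (20000, "card B"), (30000, "card C")]
    deck = [
        ("./ r", 20000, 20000, ["card B prime"]),   # replace card 20000
        ("./ i", 20000, 20000, ["card B.1"]),       # insert after 20000
        ("./ d", 30000, 30000, []),                 # delete card 30000
    ]

    for card in deck:
        source = apply_card(source, *card)
    print([t for _, t in source])
    # ['card A', 'card B prime', 'card B.1']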

Later, in the "L", "H", and "I" time-frame (a distributed development project implementing virtual 370 support in cp/67 running on a real 360/67) ... the work was done at cambridge for multi-level update. As mentioned in one of melinda's notes ... I was able to resurrect this original infrastructure and send her a copy.

basically it was all still the plain update command, but driven by an exec that iterated once for every update specified in the control file. this multi-level update exec started out looking for files of the form UPDGxxxx, where xxxx could be specified in the CNTRL file. For every UPDGxxxx it found, it would run it thru the dollar preprocessor and generate a UPDTxxxx (temporary) file ... which was then applied to the assemble file, resulting in a temporary assemble file. Any subsequent UPDG files it found in the specified search order would be run thru the "$" process, generating the UPDT file, which was then applied (iteratively) to the resulting assemble file. Finally, when it had exhausted all the UPDG files, it would assemble the resulting assemble file.
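
a sketch of that iteration (python; the file names, CNTRL search order, and helpers here are stand-ins for the real exec, not actual code):

    # stand-in helpers: the real "$" preprocessor generated sequence
    # numbers; the real apply step was the single-level UPDATE command.

    def dollar_preprocess(updg_cards):
        return updg_cards                 # UPDGxxxx -> UPDTxxxx (stand-in)

    def apply_level(assemble, updt_cards):
        return assemble + ["* updated by %d cards" % len(updt_cards)]

    def multi_level_update(assemble, cntrl_order, files):
        for xxxx in cntrl_order:          # search order from the CNTRL file
            updg = files.get("UPDG" + xxxx)
            if updg is not None:
                updt = dollar_preprocess(updg)
                assemble = apply_level(assemble, updt)
        return assemble                   # this result then gets assembled

    print(multi_level_update(["base source"], ["L", "H", "I"],
                             {"UPDGL": ["./ i 100 $"], "UPDGI": ["./ r 200"]}))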

Then there was some really fancy stuff done by an MIT co-op that attempted to merge multiple parallel update threads and resolve conflicts between the parallel development threads. That fairly sophisticated work eventually fell by the wayside. In the mean time, the development group (which had split off from the science center by this time) had a need for PTF/APAR files. They took the CNTRL/UPDG structure developed by the science center and added "aux" file support to the CNTRL file ... i.e. the update exec, instead of looking for an update file of the form UPDTxxxx ... would look for an "aux" file that contained a list of update files ... giving the full filetype name of each file to be applied.

Eventually, the exec code supporting the control file loop and the "$" sequence number processing was incorporated into the standard update routine, aka update would read the assemble file into memory and iteratively execute the control file loop, applying all the update files it found ... before writing out the resulting updated assemble file. Even later, support was extended into the editor ... it would 1) do the iterative CNTRL file update operation prior to the editing session ... and 2) on file ... instead of writing out the complete file ... generate the appropriate update file reflecting all the edit changes (prior to that, the update file had to be explicitly edited ... including all the ./ control commands ... instead of having the editor automagically generate them for you).

The other part was after the assemble process ... the resulting TEXT/binary file was appropriately renamed to reflect the highest level update that had been applied, and "comment" cards were added to the front of the TEXT file ... one comment line for each file involved in the process ... with full name, date, time, etc ... the original assemble file, all the update files applied, and all the maclib files involved in the assembly. And then there was the VMFLOAD process, which took the CNTRL file and looked for TEXT files in the appropriate search order for inclusion in the runtime image. And of course when the loader read the runtime image and generated the load map ... it output, as part of the loadmap process, each one of the comment cards that it ran across. You could somewhat reconstruct what pieces were part of a CP kernel routine from all the comment cards in the load map.

So i was in madrid sometime in the mid-80s. This was to visit the madrid science center ... they had a project that was imaging all sorts of old records ... preparing stuff for a comprehensive cdrom getting ready for the 500th annv of 1492. While i was there, I visited a local theater. They had this somewhat avant-garde short done at the university that ran about 20 minutes. A big part of the short was apparently a hotel room or apartment that had a wall of possibly two dozen TVs ... they all appeared to be scrolling some text at 1200 baud ... the same text on all TVs (it looked like all the TVs were slaved to the same computer output). The darndest thing was that I recognized it as a CP kernel load map being scrolled ... and what is even worse, I recognized the release and PLC level from the APAR/PTF comments.

In any case, it is nice to have all the individual updates around for some kinds of audit processes ... compared to effectively the "downdates" of RCS & CVS ... although the rest of CVS support is a lot more comprehensive.

--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Thu, 24 Oct 2002 17:07:02 GMT
Christopher Browne writes:
I haven't seen the Chaum proposal; I gather it involves having digital signatures on a whole bunch of assertions so that you'd have a digital signature on things like: - Age > 18 - Driver's licence = Class "G" - Driving Requirements = "With Corrective Lenses" ???

careful that you don't confuse two different digital signatures. this is effectively institutionalized in the x.509 identity certificate paradigm. all of this information about you resides in a certificate that is digitally signed by a trusted agency ... not by the person themselves (aka people may have reasons for not being totally truthful regarding details about themselves).

the certificate then contains something that can be used to validate the entity that the information is about.

everybody carries with them the public key of the trusted agency ... so the validity of the certificate (and its assertions) can be validated.

in the traditional x.509 identity digital certificate ... the entity validation information is a public key ... the entity is asked to digitally sign some arbitrary information (aka like a challenge/response) and then the public key in the certificate is used to check the response. Assuming that the trusted agency's public key validates the certificate, and that the public key in the certificate validates the challenge/response ... then it is assumed that the attributes in the certificate correspond to the entity signing the challenge/response.

in variations on this ... rather than having the entity's public key "bound" in the certificate ... there is biometric information or a digitized picture of the person ... or some other way of validating that the entity and the certificate are bound together.

The driver's license analogy was frequently used for justifying huge x.509 identity digital certificate business cases.

Note that the digital signature on the certificate/credential is that of the authoritative agency that is trusted for the information of interest. The public key of the trusted/authoritative agency is then used to validate that digital signature. Any public key in the certificate/credential is then used to validate some digital signature generated by the entity of interest. This is somewhat the hierarchical trust model of PKI ... you have to first validate the correctness of the credential/certificate and then validate the binding to the entity that the credential/certificate information is about.
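
a schematic of the two-step hierarchy (python as pseudocode; verify() and the field names here are invented stand-ins, not a real PKI library):

    def verify(public_key, data, signature):
        ...  # stand-in for a real digital signature verification

    def validate(agency_public_key, cert, challenge, response_sig):
        # step 1: is the certificate (and its assertions) really from the
        # trusted agency? check the agency's signature over the cert body
        if not verify(agency_public_key, cert["body"], cert["sig"]):
            return False
        # step 2: does the entity answering the challenge hold the private
        # key matching the public key bound inside the certificate?
        return verify(cert["body"]["public_key"], challenge, response_sig)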

Note that this is all a paradigm developed for the offline world, before police had radios, portable computers, and real-time database checks. Effectively the suggested solution tried to make up for the deficiency of the offline world by creating read-only, stale copies of the real-time authoritative information. This was the offline, hardcopy model translated to the offline, electronic world.

However, to some extent the world has moved on ... typically the online connectivity is such that for anything of real importance or value ... if it is electronic ... it is possible to directly query the authoritative agency for the real-time information ... instead of relying on stale, static copies of the information. The driver's license information (and almost all other information) works as offline, stale, static hardcopy ... when there isn't access to electronic and online. However, it is becoming such that if there is a reason for electronic (rather than hardcopy) ... and the issue involves anything of importance or value ... then it is electronic&online ... and not electronic&offline. In the driver's license case ... except for cursory checks ... the officer checks the picture and the number ... and then calls in the number for a real-time, online transaction.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Home mainframes

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Home mainframes
Newsgroups: alt.folklore.computers
Date: Thu, 24 Oct 2002 20:14:16 GMT
jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
VM and MVS are both good OSes, but for different things. VM is better for interactive computing. MVS is better for day-in, day-out workhorse DP where the same tasks need to be done over and over - a perfect description of batch processing. VM sucks at batch, as does Unix. The facilities MVS provides are much more manageable, and much more robust. The tradeoff is that it's less friendly to interactive use, and harder to develop for.

not just batch; MVS is a reasonable platform for almost any kind of service offering. It provides a lot of robust infrastructure functions to automate almost anything that might need to be done in a data processing system. Many of these automated functions are hidden behind arcane JCL ... making it a horrible delivery vehicle for personal computing.

However, if you have a requirement for nearly any sort of automated delivery service that needs to run repeatedly day-in, day-out ... with little or no hands-on required ... things like payroll, check clearing, financial transactions, etc., it is a very dependable work horse. In that sense it is more like one of the big 18 wheelers on the highway ... people looking for something simple like a small two-seater sports car are going to find a big 18 wheeler with a couple trailers somewhat unsuited.

One of the intersection points in the current environment ... is that a large number of web services have requirements for 7x24, reliable, totally automated operation (even dark room). Lots of users around the world don't care why either the ATM machine is down or their favorite web server is down ... they just want it up and running all the time.

a couple years ago ... one of the large financial services claimed that a major reason for 100 percent uptime over the previous six years was automated operations in MVS ... aka people effectively were almost never allowed to touch the machine ... because people make mistakes.

Hardware had gotten super reliable ... software was getting super reliable ... but people weren't getting a whole lot better.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help! Good protocol for national ID card?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help! Good protocol for national ID card?
Newsgroups: sci.crypt
Date: Thu, 24 Oct 2002 21:27:07 GMT
Christopher Browne writes:
I haven't seen the Chaum proposal; I gather it involves having digital signatures on a whole bunch of assertions so that you'd have a digital signature on things like:
- Age > 18
- Driver's licence = Class "G"
- Driving Requirements = "With Corrective Lenses"


basically this type of information is designed to provide some sort of trusted information between two parties that otherwise have no knowledge of each other. in order to support such an infrastructure there is a need for trusted third-party institutions that are trusted by the majority of the target population that these kinds of certified information are aimed at.

then, from another facet, it is possible to divide the business and institutional solution space into four quadrants, offline/online and electronic/hardcopy (or electronic/non-electronic):


      offline&              online&
      hardcopy              hardcopy

      offline&              online&
      electronic            electronic

the world prior to the 70s was significantly in the upper left quadrant, offline&hardcopy. During the '70s there started to be a transition to online (at least in the US), either directly to online&electronic (for money/value type transactions with POS terminals) or with a stop-over in online&hardcopy before proceeding to online&electronic (police car 2-way radios, personal 2-way radios, laptops with online connectivity, etc).

basically the business and institutional infrastructures were moving from the offline&hardcopy quadrant to the "online" column ... either directly to the online&electronic quadrant or possibly passing thru the online&hardcopy quadrant.

In the '80s there started to appear in the literature descriptions of solutions that fit in the offline&electronic quadrant ... it wasn't a domain space that any of the business & institutional infrastructures were migrating to, but it had potential market niches.

Possibly some market niches driving the literature for solutions in the offline&electronic quadrant were 1) (no-value) offline email (a business process that didn't justify the expense of constant online connectivity; dial-up, efficiently exchange email, and hang up ... and do the actual processing offline) and 2) potentially various places around the world that had poor, very expensive, or little or no online connectivity.

Many of these solutions somewhat circled around the digitally signed credential ... aka information from some trusted database copied into a read-only, distributable copy. This somewhat culminated in the x.509 identity digital certificate standard.

By the early '90s, when the x.509 standard was starting to settle out, the potential market niches in the offline&electronic quadrant were starting to shrink rapidly. Internet ISPs were starting to bring the possibility of nearly online, all-the-time connectivity. In many parts of the rest of the world, a combination of deregulation of PTTs and the internet was also starting to bring the possibility of nearly online, all-the-time connectivity. Finally, in parts of the world where the last-mile physical infrastructure had not happened, the whole wireless revolution was turning into a wild-fire. Some places that had not heavily invested in legacy last-mile physical infrastructure had a high uptake of wireless solutions.

It was almost as if, by the time digitally signed credentials (copies of information from trusted databases by trusted agencies) started to come into their own for the offline&electronic solution quadrant ... the market was evaporating, because the possibility of nearly online, all-the-time connectivity was spreading like wildfire.

In some sense that puts x.509 digitally signed identity certificates in somewhat the same category as OSI&GOSIP. There were huge amounts of stuff written about it, it is still taught in various academic circles, but it is unrelated to real-life business and institutional requirements, directions, and real-life deployments.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VR vs. Portable Computing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VR vs. Portable Computing
Newsgroups: alt.folklore.computers
Date: Fri, 25 Oct 2002 20:29:46 GMT
David Powell writes:
Today, we have a large hydraulic accumulator on the electricity grid, just pump water uphill to a reservoir in Wales during the night, and generate during peak times. It's just a few seconds to change from pumping to generating, and saves the need for a few hundred MW of spinning reserve on the grid.

grand coulee dam on the columbia is like that ... it is the largest(?) hydroelectric plant in the US ... but its original purpose was flood control and irrigation. Besides the downstream water flow for hydroelectric power ... it also pumps up into the coulee (forming a man-made, 30 mile long lake). From the coulee, water normally flows thru irrigation canals ... for something over a million acres. However, the pumps are reversible, and they can "over-pump" during off-peak hours and then reverse the pumps for generation during peak demand periods (selling excess power to states like california ... at least when the river is flowing strong).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

public-key cryptography impossible?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: public-key cryptography impossible?
Newsgroups: sci.crypt
Date: Fri, 25 Oct 2002 20:19:40 GMT
"Ben Mord" writes:
I suppose the field of security is not the field of eliminating risk, as many would presume. It is instead the art of achieving and assessing new compromises between risk and functionality.

i think the field of risk management is about providing security that is proportional to risk (not eliminating risk). risk management may also employ other tools, like insurance, for dealing with risk.

there is always the joke about making extremely secure systems by eliminating all connections and not letting anybody touch and/or use the systems ... the system would be perfectly fine if there weren't any users.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

RFC 2647 terms added to merged security glossary

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: RFC 2647 terms added to merged security glossary
Newsgroups: comp.security.firewalls,comp.security.misc
Date: Sun, 27 Oct 2002 23:17:35 GMT
I've updated my merged security glossary & taxonomy with 26 terms from RFC 2647 (Benchmarking Terminology for Firewall Performance):
https://www.garlic.com/~lynn/index.html#glossary

the terms: allowed traffic, application proxy, authentication, bit forwarding rate, circuit proxy, concurrent connections, connection, connection establishment, connection establishment time, connection maintenance, connection overhead, connection teardown, connection teardown time, data source, demilitarized zone, firewall, goodput, homed, illegal traffic, logging, network address translation, packet filtering, policy, protected network, proxy, rejected traffic

notes:
Terms merged from: AFSEC, AJP, CC1, CC2, FCv1, FIPS140, IATF V3, IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, TCSEC, TDI, TNI, and misc.

Updated 20020928 with more ISO SC27 definitions. Updated 20020929 with glossary for online security study (www.srvbooks.com). Updated 20021020 with glossary from NSTISSC. Updated 20021027 with RFC2647.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Mon, 28 Oct 2002 14:50:33 GMT
jmfbahciv writes:
Sigh! Think about programmers who have hard-coded in the disk capacity. Doing a complete forced replacement, with no planning? I think I'd probably do the same thing and then "convert" to the larger disk capacity under a _controlled_ test.

the ancient copy-protect mechanisms (on hard disks) used the disk sector locations of files ... if you backed up and did a normal restore, the disk sectors changed (you needed to do an image copy to preserve things).

personal firewalls may do something similar ... i get sporadic requests to re-validate permissions for a program after running disk optimization (and then have duplicate entries for programs in the permissions display).

then there is old int13 bios/software with the 1024-cylinder problem

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Mon, 28 Oct 2002 22:42:32 GMT
Pete Fenelon writes:
It is. Particularly with the preemptible kernel patches and the O(1) scheduler in.... you could almost mistake it for a good OS! :)

... circa 1968.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Tue, 29 Oct 2002 14:26:48 GMT
Pete Fenelon writes:
Oh I didn't say "modern" ;) After all, it's lugging all that Unix baggage around with it.

:-) as an undergraduate in late '68, i had done the first rewrite of the scheduler (along with page replacement algorithm, kernel fastpath, and some other stuff) for cp/67 ... putting in a first pass at (later they said inventing) fairshare and dynamic adaptive scheduling. At that time I got rid of the once/second daemon which looked like it came from CTSS ... and then many years later i ran into almost the same code in unix (and I had thought I had stamped it all out).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Tweaking old computers?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tweaking old computers?
Newsgroups: alt.folklore.computers
Date: Wed, 30 Oct 2002 18:28:54 GMT
jmfbahciv writes:
It was a daemon ;-). Those things are supposed to run around unseen and unheard until you least need them. When we considered naming anything a daemon, we thought long and hard about any possibility it might have to be removed. If we didn't ever want it to go away, it got called a daemon.

with respect to the ibm mainframe "meter" thread ... and leased machines billed based on the meter running ... one second wakeups would have kept the meter running almost half the time.

the meter ran whenever the cpu executed instructions and/or there was "active" I/O ... and the meter "tic" resolution was 400 milliseconds (aka any activity, no matter how small, caused at least a 400ms tic).
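a rough back-of-the-envelope model of that claim (a python sketch only; it assumes each wakeup lands in an otherwise idle interval and charges exactly one tic):

# hypothetical model of the 400ms meter "tic" accounting
TIC_MS = 400              # meter resolution
WAKEUP_MS = 1000          # once-per-second daemon wakeup

duty_cycle = TIC_MS / WAKEUP_MS
print("meter duty cycle from wakeups alone: %d%%" % (duty_cycle * 100))
# -> 40% ... i.e. "almost half the time" even on an otherwise idle system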

ref:
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#31 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#33 why does wait state exist?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

EXCP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EXCP
Newsgroups: bit.listserv.ibm-main
Date: Wed, 30 Oct 2002 20:15:16 GMT
bblack@FDRINNOVATION.COM (Bruce Black) writes:
Even with FICON, I agree that connect time is a good measure of the impact of a job on the I/O subsystem. Connect time has been reported for many years (certainly since ESA, maybe since XA).

For the uninformed, a channel program can be connected or disconnected. It is disconnected by the control unit when the CU has to wait for some event. Pre-cache, this included waiting for the heads to seek or the disk to rotate to the right position; with cache it is often time waiting to fetch data into cache. I/O to other devices can execute on the channel while this I/O is disconnected. It is connected again when the data it needs is available, so connect time equates roughly to data transfer time. A chain that transfers a lot of data (e.g., an FDR or DSS read of a full cylinder of data) may have a long connect time, but is only one EXCP.


it used to be that PDS directory multi-track searches tended to dwarf everything else (monopolizing channel, cu, string, & device) ... especially undesirable in loosely-coupled complexes. I got brought into a major customer (5-6 loosely coupled machines) data center having a horrendously bad performance problem ... and it turned out to be a drive peaking under heavy load at 5-6 EXCPs/second (but three of those EXCPs accounted for nearly one hundred percent channel, controller, string, and drive busy) ...
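for a rough feel of how three EXCPs can eat nearly a whole second (a python sketch; the 3330-class geometry is an assumption for illustration, not from the customer data):

# assumed 3330-class geometry (illustrative only)
RPM = 3600
MS_PER_REV = 60000.0 / RPM          # ~16.7 ms per revolution
TRACKS_PER_CYL = 19

# a full-cylinder multi-track search holds channel/cu/string/device
# for roughly one revolution per track searched
search_ms = MS_PER_REV * TRACKS_PER_CYL     # ~317 ms per search
busy = 3 * search_ms / 1000.0               # three such EXCPs per second
print("one search ~%.0f ms; three/sec -> ~%.0f%% busy" % (search_ms, busy * 100))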

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

History of HEX and ASCII

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of HEX and ASCII
Newsgroups: comp.lang.asm370
Date: Thu, 31 Oct 2002 05:12:41 GMT
"glen herrmannsfeldt" writes:
Well, IBM has supported Hex for a long time, but, at least on the machines that this newsgroup is for, only recently.

Linux/390 uses ASCII as its native character set, while most OS for the 360/370/390/Z machines use EBCDIC.


360 had an ascii mode bit defined in the PSW ... supposedly the machine would interpret bits as ascii instead of ebcdic.

hex is somewhat orthogonal to ascii/ebcdic. ebcdic is an 8bit extension of 6bit bcd. the 6bit bcd machines had things like six six-bit characters in a 36bit word. ebcdic machines had four eight-bit characters in a 32bit word.

ascii came from things like TTY/ascii terminals.

as an undergraduate when i worked on the original PCM controller ... initially for the TTY ascii environment ... one of the early things that I found out was that ibm terminal controllers had a convention of placing the leading bit in the low-order bit position of a byte ... and ibm terminals worked correspondingly. TTY/ASCII terminals used the convention of the leading bit in the high bit position of the byte (not the low bit position).

so one of the peculiarities of ascii terminal support in an ebcdic mainframe wasn't just that the bit pattern definitions for characters were different ... but when dealing with ascii/tty terminals ... the bits arrived in the storage of a mainframe bit-reversed. The terminal translate tables for tty/ascii terminals were actually bit-reversed ascii; incoming bits were bit-reversed in a byte ... and so the ascii->ebcdic translate tables (like btam's) were appropriately confused. Outgoing ebcdic->ascii translated to bit-reversed ascii ... relying on the IBM controller to turn the bits around before transmitting on the line to the terminal.

this starts to get more confused later on with ibm/pc (ascii machines) directly attached to mainframe and not going thru traditional terminal controller. now you have two different sets of translate tables ... one for bit-reversed ascii (going thru ibm terminal controller) and one for direct connected transmission.

all of this is independent of the issue that there are some characters in ascii that have no corresponding character defined in the ebcdic character set ... so a particular (bit-reversed) ascii character gets mapped to some arbitrary ebcdic pattern. similarly there are some ebcdic characters that don't exist in ascii.
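a small sketch of the bit-reversal involved (python; the single-character mapping is illustrative ... real translate tables covered all 256 byte values, including the undefined-character cases):

def bit_reverse(b):
    # reverse the bit order within a single 8-bit byte
    return int(format(b, "08b")[::-1], 2)

# ascii 'A' is 0x41; bit-reversed it lands in storage as 0x82, so the
# incoming translate table maps index 0x82 -> ebcdic 'A' (0xC1)
incoming = {bit_reverse(0x41): 0xC1}
assert incoming[0x82] == 0xC1

# outgoing: translate ebcdic 'A' to bit-reversed ascii and rely on the
# ibm controller to turn the bits around on the way to the terminal
outgoing = {0xC1: bit_reverse(0x41)}
assert bit_reverse(outgoing[0xC1]) == 0x41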

misc. pcm controller refs:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Computing on Demand ... was cpu metering

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Computing on Demand ... was cpu metering
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Thu, 31 Oct 2002 20:56:30 GMT
cpu metering refs:
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2000d.html#42 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#31 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#33 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#49 Tweaking old computers?

IBM's Plan: Computing On Demand

Washington Post Staff Writer Thursday, October 31, 2002; Page E01

International Business Machines Corp. chief executive Samuel J. Palmisano said yesterday that his company is investing $10 billion in a business strategy aimed at getting corporate customers to pay for their computing power in much the way they now buy power from utilities: as they use it.

Palmisano described his vision of "on-demand" computing in a speech to customers and analysts in New York. It was his first address since the company announced that he would gain the title of chairman Jan. 1.

IBM, he said, hoped to fashion a computing grid that would allow services to be shifted from company to company as they are needed. For instance, a car company might need the computing power of a supercomputer for a short period as it designs a new model but then have little need for that added horsepower once production begins. Other services could be delivered in much the same way, assuming IBM can pull together the networks, computers and software needed to manage and automate the chore. Palmisano said the industry would first need to embrace greater standardization.

Palmisano said the company is pursuing its $10 billion strategy through acquisitions, marketing and research, much of which has taken place in the past year.

"No doubt about it, it is a bold bet. Is it a risky bet? I don't think so," he said.

Analysts regarded the speech as Palmisano's road map for IBM's future. "We view this as Palmisano's coming-out party," said Thomas Bittman, an analyst at Gartner Research. "The industry will be measuring IBM against this as a benchmark for years."

The concepts of grid computing are not entirely new or unique to IBM. Hewlett-Packard Co. is pursuing similar ideas, for example.

"A lot of the threads we've heard before," said David Schatsky, research director at Jupiter Research. "But it does represent a new coalescence of their vision."

Palmisano is to succeed Chairman Louis V. Gerstner Jr., and analysts are already picking up on differences between the men.

"Gerstner always talked to the CEOs," said Bittman. "Today, Palmisano was focusing on the [chief information officers] as the executives to drive change. He's able to do that because he's more of a techie."

Palmisano joined the company in 1973 as a sales representative in Baltimore and has been the driving force behind many of IBM's announcements and decisions in recent years, such as the company's move to adopt and promote the open-source operating system Linux.

Since becoming chief executive in March, Palmisano helped oversee the acquisition of PricewaterhouseCoopers Consulting -- a purchase that has shored up IBM's dominance in computer consulting services. Palmisano also arranged the pending sale of IBM's hard disk drive business and outsourced its desktop PC manufacturing business.

During his address, Palmisano said he saw signs that the global economy may have hit bottom and is flattening out. But he also said the tech sector would be slow to rebound because of the enormous growth and overinvestment of the late 1990s.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SHARE MVT Project anniversary

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHARE MVT Project anniversary
Newsgroups: alt.folklore.computers
Date: Fri, 01 Nov 2002 16:36:39 GMT
jmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
SHARE will be celebrating the 30th anniversary of the MVT Project at their upcoming conference in Dallas next February. I'm putting together a turnkey MVT system to be distributed at that event. The idea is to put together a full-featured MVT system, with a typical set of mods and enhancements, in a form that can be easily run with Hercules.

note also the 35th anniversary of the announcement of cp/67 (precursor to vm/370, et al); it happened at the spring '68 share meeting in houston. I gave talks on mft, hasp & cp/67. sorry, no code from the era.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SHARE MVT Project anniversary

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHARE MVT Project anniversary
Newsgroups: alt.folklore.computers
Date: Mon, 04 Nov 2002 21:31:50 GMT
"David Wade" writes:
Its been a while since I looked seriously at this but I think basically the answer was that in general you did not use TSO unless you had to. So in places where they needed to run a large number of interactive users they ran something else, VM, or MTS or another third party tool (ROSCOE?). For example I did my Maths degree at Newcastle Upon Tyne Polytechnic, who used the 360/67 and later the 370 at Newcastle University. The 360/67 had 1024K of main store and a drum with 4Meg for fast paging store. It usually ran MTS and you could get a small number of terminals active. I guess there were about 20 in all. I also remember having my job terminated for using too much Virtual Memory, so I used to go in Saturday mornings when the load was lighter. (I was using APL to solve Critical Path problems) but I am sorry I can't remember how much memory there was. They sometimes IPL'd MFT or MVT (I think about twice a week) as some programs could not be converted to MVS....

And when I worked with IBM mainframes in UK academia the only place that I can recall using TSO was Daresbury Labs, most of the others had VM, but there was also UTS, ROSCOE and MTS.


and of course there is the cern tso/cms bake-off report given at share ... which, while a public share document, was classified internally in the company as confidential, restricted (aka available on a need-to-know basis only).

MTS was on 360/67 ... but UTS (originally code named gold for A.U.) was much later. there was also stuff like wilbur, music, cps, apl\360, etc .. all interactive services platformed on os/360.

at the university, on a MVT18 base ... I had removed the 2780 device support code from HASP and replaced it with a kind of CRJE supporting 2741 and tty terminals. I copied most of the syntax from CMS edit (circa '69).

as an aside ... the lore is MTS was originally built on LLMPS. LL also had 360/67 and ran cp/67. Also it was some of the LL people that left and formed one of the early cp/67 time-share service bureaus.

Late in the cp/67 cycle ... before converting over to vm/370 ... cambridge was able to run 70-80 "mixed-mode" users (interactive, program development, compile, test, execute, apl, various kinds of apl analysis packages, batch, etc) on a 768kbyte 360/67 (with three 4mbyte 2301 paging "drums" and 45 2314 disk drives) with subsecond response for interactive work and nearly 100 percent cpu utilization during prime shift.

misc ...
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

ibm time machine in new york times?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: ibm time machine in new york times?
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Mon, 04 Nov 2002 22:06:55 GMT
haven't seen much comment on the ibm time machine advertisement in the new york times

one other reference to the use of a time machine ... I once even suggested that some policy or another could only make sense if they had a time machine in the basement of bldg. 12 on the main san jose plant site.

[Hume89] A.G. Hume, "The Use of a Time Machine to Control Software," in Proceedings of the USENIX Software Management Workshop, Usenix, New Orleans, Louisiana, April 1989, pp. 119-124.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

REVIEW: "Internet Security Dictionary", Vir V. Phoha

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REVIEW: "Internet Security Dictionary", Vir V. Phoha
Newsgroups: misc.books.technical,comp.security.misc,alt.computer.security
Date: Tue, 05 Nov 2002 17:27:57 GMT
as an aside I've got nearly 2600 terms with nearly 4000 definitions in the merged security taxonomy & glossary at
https://www.garlic.com/~lynn/index.html#glossary

i don't know about printing ... although somebody from SC27 sent me a PDF file converted from my HTML (and a PDF reader does a pretty good job of following the converted HREF links).
Security
Terms merged from: AFSEC, AJP, CC1, CC2, FCv1, FIPS140, IATF V3, IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, TCSEC, TDI, TNI, and misc. Updated 20020928 with more ISO SC27 definitions. Updated 20020929 with glossary for online security study (www.srvbooks.com). Updated 20021020 with glossary from NSTISSC. Updated 20021027 with RFC2647.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

SHARE MVT Project anniversary

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHARE MVT Project anniversary
Newsgroups: alt.folklore.computers
Date: Wed, 06 Nov 2002 18:16:36 GMT
"David Wade" writes:
Thanks for that. The machine did originally have 512k and was upgraded to 1024k as the text on the site says. Perhaps my memory is playing tricks and we did only have 4 memory units, it was a long time ago.

you could get 2megs by going to a duplex ... 1meg on each processor. at one point, tss/360 made a big point that they got something like 3.5 times the thruput on a duplex system as they got on a simplex system with a specific benchmark ... there was almost a magical undertone that tss/360 was able to multiply resources (getting more than twice as much from twice as much hardware).

of course neither their uniprocessor nor dual-processor thruput was as good as cp/67 single processor thruput (unless you are talking about a carefully tuned, processor intensive benchmark comparing tss/360 on a dual processor machine against cp/67 on a single processor machine).

the actual situation was that the tss/360 kernel was fairly bloated and quite memory constrained on a 1mbyte single processor system. Going to a 2mbyte dual processor system ... with a single copy of the kernel ... increased available memory for applications on the order of four times. This was the primary reason for the 3.5 times increase in the benchmark thruput ... but that was never mentioned.
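rough arithmetic behind that (python sketch; the kernel footprint is an assumed number for illustration ... the benchmark reports didn't give one):

KERNEL_KB = 600                  # assumed resident tss/360 kernel size
simplex = 1024 - KERNEL_KB       # 1mbyte uniprocessor: ~424 KB left for apps
duplex = 2048 - KERNEL_KB        # 2mbyte duplex, one kernel copy: ~1448 KB
print("application memory ratio: %.1fx" % (duplex / float(simplex)))   # ~3.4x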

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM S/370-168, 195, and 3033

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM S/370-168, 195, and 3033
Newsgroups: alt.folklore.computers
Date: Thu, 07 Nov 2002 05:24:53 GMT
lwinson@bbs.cpcn.com (lwin) writes:
The IBM history website had a piece on their 3033. It said it was the successor to the S/370-168-3 in high horsepower machines and had comparisons of cost vs. power.

I was wondering where the model -195 (both 360 and 370 variants) fell into this mix. I guess they left it out since it was essentially only a specialty machine as only a few were built. Did the 3033 out perform the 195? If not, what IBM mainframe finally did?

I wish IBM kept a more rational model number series. Today it is completely undecipherable, I don't even know the model of the box we have, and there are so many models and sub-variants out there now.

When they came out with the 3033, things got confusing. Was it part of S/370? Was there supposed to be a S/380? When did S/390 come out? And then there was the 4300 series, which despite the higher number, were the low end machines.


360/91, 360/95, & 360/195. There was an initial redo of 360/195 as 370/195 but it never got virtual memory.

The 370/168-3 was a 2.5 to 3 mip machine (depending on cache hit rate, workload, etc).

The 370/158 was a processor engine that was "time-shared" between performing the I/O channel functions and 370 instruction set.

For the 303x, there was a channel director for I/O ... which was effectively the 370/158 processor engine with just the channel I/O microcode (and no 370 microcode).

A 3031 was a 370/158 processor engine running just the 370 instruction set microcode and no channel i/o microcode ... coupled to a channel director (which was a 370/158 processor engine running just the channel i/o microcode and no 370 instruction set microcode).

A 3032 was a 370/168-3 redone to use channel director ... instead of the 168 outboard channels.

A 3033 started out being a 370/168-3 remapped to newer technology. The 168 used 4-circuit/chip logic ... and the 3033 had chips with about ten times the circuit density ... and the chips ran about 20 percent faster. The 3033 started out just being a straight wiring remap to the new technology ... which would have given a straight 20 percent boost ... from 3mips to about 3.6mips. Late in the development cycle, there was some rework of critical logic to take advantage of the higher circuit density ... eventually yielding a 50 percent improvement ... aka 3033s were about 4.5mip machines. For operating system and regular data processing type codes ... the 3033 was almost as fast as the 370/195 (however, highly optimized codes utilizing the 370/195 pipeline would run twice as fast on the 195).

Following the 3033 was the 3081 ... the initial 3081D was a pair of five mip processors. The later 3081K was a pair of seven mip processors. There was a 3084 which was a pair of 3081s in a 4-way configuration. 3081 & XA architecture were code-named 811.

370/135 turned into 370/138 and then 4331

370/145 turned into 370/148 and then 4341, and then 4381.

3081s had a UC.5 microprocessor for the service processor.

After the 3081 was the 3090 ... which had a pair of 4331s running a highly modified version of VM/370 release 6 for the service processor function.

When I was doing the RFC1044 support for mainframe tcp/ip ... the standard code could just about saturate a 3090 engine while getting 44kbytes/sec thruput using the standard adapter (8232). Tuning the RFC1044 support at cray research ... between a 4341-clone and a cray ... would hit nearly 1mbyte/sec using about 20 percent of the (4341) processor.
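taken at face value, those two data points imply something like a hundred-fold difference in bytes moved per unit of engine (python arithmetic; it ignores that a 3090 engine was itself several times faster than a 4341, which only makes the gap bigger):

base_kb, base_cpu = 44.0, 1.0      # standard code: 44 KB/sec at ~100% of an engine
rfc_kb, rfc_cpu = 1000.0, 0.20     # RFC1044 path: ~1 MB/sec at ~20% of a 4341
print("ratio: ~%.0fx" % ((rfc_kb / rfc_cpu) / (base_kb / base_cpu)))   # ~114x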

from
http://ap01.physik.uni-greifswald.de/~ftp/bench/linpack.html


IBM 370/195               2.5
IBM 3081 K (1 proc.)      2.1
IBM 3033                  1.7
IBM 3081 D                1.7
IBM 4381-23               1.3
IBM ES/9000 Model 120     1.2
IBM 370/168 Fast Mult     1.2
IBM 4381 90E              1.2
IBM 4381-13               1.2
IBM 4381-22                .97
IBM 4381 MG2               .96
IBM 4381-12                .95
IBM-486 33MHz              .94
IBM 9370-90                .78
IBM 370/165 Fast Mult      .77
IBM 9377-80                .58
IBM 4381-21                .47
IBM 4381 MG1               .46
IBM 9370-60                .40
IBM 4381-11                .39
IBM 9373-30                .36
IBM 4361 MG5               .30
IBM 370/158                .23
IBM 4341 MG10              .19
IBM 9370-40                .18
IBM PS/2-70 (20 MHz)       .15
IBM 9370-20                .14
IBM PS/2-70 (16 MHz)       .12
IBM 4331 MG2               .038

misc dates from some old list

CDC6600          63-08 64-09     LARGE SCIENTIFIC PROCESSOR
IBMS/360-67      65-08 66-06 10  MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBMPL/I.LANG.    66-?? 6????     MAJOR NEW LANGUAGE (IBM)
IBMS/360-91      66-01 67-11 22  VERY LARGE CPU; PIPELINED
IBMPRICE         67-?? 67???     PRICE INCREASE???
IBMOS/360        67-?? 67-12     MVT - ADVANCED MULTIPROGRAMMED OS
IBMTSS           67??? ??-??     32-BIT VS SCP-MOD 67; COMMERCIAL FAILURE
1Kbit/chip.RAM   68              First commercial semicon memory chip
IBMCP/67         68+?? 68+??     MULTIPLE VIRTUAL MACHINES SCP-MOD 67
IBMSW.UNBUNDLE   69-06 70-01 07  IBM SOFTWARE, SE SERVICES SEP. PRICED
IBMS/360-195     69-08 71-03 20  VERY LARGE CPU; FEW SOLD; SCIENTIFIC
IBMS/370ARCH.    70-06 71-02 08  EXTENDED (REL. MINOR) VERSION OF S/360
IBM3330-1        70-06 71-08 14  DISK: 200MB/BOX, $392/MB
IBMS/370-155     70-06 71-01 08  LARGE S/370
IBMS/370-165     70-06 71-04 10  VERY LARGE S/370
IBMS/370-145     70-09 71-08 11  MEDIUM S/370 - BIPOLAR MEMORY - VS READY
AMHAMDAHL        70-10           AMDAHL CORP. STARTS BUSINESS
Intel,Hoff       71              Invention of microprocessor
IBMS/370-135     71-03 72-05 14  INTERMED. S/370 CPU
IBMS/360-22      71-04 71-07 03  SMALL S/360 CPU
IBMLEASE         71-05 71-06 01  FixTERM PLAN;AVE. -16% FOR 1,2 YR LEASE
IBMPRICE         71-07 71+?? +8% ON SOME CPUS;1.5% WTD AVE. ALL CPU
IBMS/370-195     71-07 73-05 22  V. LARGE S/370 VERS. OF 360-195, FEW SOLD
IBMVM.ASSIST     72+?? 7?-??     MICROCODE ASSIST FOR VM/370
IBMMVS-JES3      72+?? 75-10     LOOSE-COUPLED MP (ASP-LIKE)
IBMMVS-JES2      72-?? 72-08     JOB-ENTRY SUBSYSTEM 2 (HASP-LIKE)
IBMVSAM          72+?? 7?-??     NEW RANDOM ACCESS METHOD
IBM3705          72-03 72-06     COMMS CNTLR: 352 LINES; 56KB/SEC
IBMS/370.VS      72-08 73-08 12  VIRTUAL STORAGE ARCHITECTURE FOR S/370
IBM135-3         72-08 73-08 12  INTERMED. S/370 CPU
IBM145-3         72-08 73-08 12  INTERMED. S/370 CPU
IBM158           72-08 73-04 08  LARGE S/370, VIRTUAL MEMORY
IBM168           72-08 73-08 12  VERY LARGE S/370 CPU, VIRTUAL MEMORY
IBMOS/VS1        72-08 73-??     VIRTUAL STORAGE VERSION OF OS/MFT
IBMOS/VS2(SVS)   72-08 72+??     VIRTUAL STORAGE VERSION OF OS/MVT
IBMOS/VS2(MVS)   72-08 74-08     MULTIPLE VIRTUAL ADDRESS SPACES
IBMVM/370        72-08 72+??     MULTIPLE VIRTUAL MACHINES (LIKE CP/67)
IBM125           72-10 73-04 06  SMALL S/370 CPU
AMHV/6           75-04?75-06 02  FIRST AMDAHL MACHINE, FIRST  PCM CPU
AMHV6-2          76-10 77-09 11  (1.05-1.15)V6 WITH 32K BUFFER
AMHV7            77-03 78-09 18  AMDAHL RESP. TO 3033 (1.5-1.7) V6
IBM3033          77-03 78-03 12  VERY LARGE S/370+EF INSTRUCTIONS
IBM3031          77-10 78-03 05  LARGE S/370+EF INSTRUCTIONS
IBM3032          77-10 78-03 05  LARGE S/370+EF INSTRUCTIONS
IBM3033MP        78-03 79-09 18  MULTIPROCESSOR OF 3033
AMHPLANT         78-05           AMDAHL OPENS DUBLIN, IRELAND PLANT
AMHV8            78-10 79-09 11  (1.80-2.00)V6, FLD UPGR. FROM V7
IBM3033AP        79-01 80-02 13  ATTACHED PROCESSOR OF 3033 (3042)
IBM3033          79-11 79-11 00  -15% PURCHASE PRICE CUT
IBM3033N         79-11 80-01 04  DEGRADED 3033, 3.9MIPS
IBM3033AP        80-06 80-08 02  3033 ATTACHED PROCESSOR
IBM3033          80-06 81-10 16  Ext. Addr.=32MB REAL ADDR.;MP ONLY
IBMD.Addr.Sp.    80-06 81-06 12  Dual Address Space for 3033
IBM3033XF        80-06 81-06 12  OPTIONAL HW/FW PERF. ENHANCE FOR MVS/SP
AMHUTS           80-09 81-05     UTS=Amdahl Unix Op. System (under VM)
IBM3033.24MB     80-11 81-11 12  24MB REAL MEM. FOR 3033UP, AP
IBM3081D         80-11 81-4Q 12  FIRST H MODEL, 10MIPS IN DP, WATER COOLED
AMH580/5860      80-11 82-09 22  (2V8, 12+ MIPS) UP, NEW,AIR COOLED TECH.
AMH580/5880      80-11 85-05 54  MP OF 5860 AT 21+ MIPS
IBM3033S         80-11 81-01 02  2.2MIPS, DEGRADED 3033 (ENTRY 3033 MODEL)
IBM3033N.UPGR.   80-11 80-11 00  9%-14% PERF. IMPROVE, NO CHARGE
IBM3081K         81-10 82-2Q 08  NEW DP FUM: 1.35 x 3081D, 64K BUFFER/OVLAP
IBM370-XA        81-10 83-03 17  NEW ARCH 3081: 31 BIT REAL/VIRT, NEW I/O
IBM3033.PRICE    81-10           10% IN US, 12-20% EUROPE PURCH. ONLY
IBM3033S.PERF.   81-10 82-06 08  NO-CHARGE PERF. BOOST BY 8%-10%
IBM3033          82-03           16% PUR.PRICE CUT, -14%Mem.Price($31K/MB)
IBM3033          82-03           3033 Placed on LIMITED-NEW PRODUCTION
IBM3084          82-09 83-4Q 15  1.9 x 3081K Perf., 4 way MP, 3081K upgrade

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM S/370-168, 195, and 3033

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM S/370-168, 195, and 3033
Newsgroups: alt.folklore.computers
Date: Thu, 07 Nov 2002 05:38:51 GMT
Anne & Lynn Wheeler writes:
370/135 turned into 370/138 and then 4331

370/145 turned into 370/148 and then 4341, and then 4381.

3081s had a UC.5 microprocessor for the service processor.

After the 3081 was the 3090 ... which had a pair of 4331s running a highly modified version VM/370 release 6 for the service processor function.


oops, that is 370/135 turned into 370/138 and then 4331, and then 4361

the service processor for the 3090 started out to be 4331, but by the time it shipped ... it had turned into a pair of 4361s (not 4331s).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Follklore

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Follklore
Newsgroups: alt.folklore.computers
Date: Thu, 07 Nov 2002 16:52:12 GMT
"Russ Holsclaw" writes:
And, of course, almost all card reading and punching was done via disk spooling, which pretty much rules out this kind of coordination, anyway. I knew of some cases where people did it, but it was pretty flaky, and recovery from reader- and punch-check errors was pretty tricky for the operator and the software. Obviously it was done by using the reader and punch directly, and not via spooling.

i did a 2540 punch/reader app at the university ... this was in the days of class registration being done with cards. all the class registration info was processed from the reader side ... feeding cards into the middle bin. if a problem was detected ... a (blank) card would be punched behind it from the punch side. the class scheduling cards were solid manila colored. the punch was loaded with cards that had a colored stripe (typically red or yellow) across the top. when all the cards went back into trays ... the punched colored stripes would be easily visible ... indicating class scheduling cards with problems.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Who wrote the obituary for John Cocke?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Who wrote the obituary for John Cocke?
Newsgroups: comp.arch
Date: Fri, 08 Nov 2002 00:32:33 GMT
David Brower writes:
At the time RISCs started rolling out, (Pyramid, '83-84) more money was going into VMS than UNIX. I don't think you can say that UNIX was a market success until after then; it was a niche player at that point. You can correlate the rise of RISC and the dominance of UNIX pretty closely. (And, the demise of UNIX with architectural diversity, we wrote from his SPARC).

my view was that 801 was a significant reaction to the debacle(?) that was FS (future systems) with its really complex CISC architecture and SMP. It wasn't just getting a single chip processor ... there were also issues about hardware/software trade-offs (instead of dropping everything into the hardware) and also shared-memory multiprocessing (a no-no).

With regard to Unix & RISC ... you could view it from the opposite facet. In the late '70s and early '80s it was possible for lots of start-ups to relatively easily produce inexpensive hardware systems. The earlier genre of everybody having to produce expensive proprietary operating systems to go with their hardware would have made the whole undertaking impossible. Unix represented a "portable" operating system that just about anybody could adapt to their hardware platform offering (regardless of the type). The market was the explosion(?) in lots of different hardware system platforms (not just risc) that needed a relatively easily adaptable operating system (because the undertakings couldn't afford to invent and develop their own from scratch).

In the early '80s, 801 was targeted at "closed" environments ... things like Fort Knox, which was going to adopt 801 as the universal microprocessor engine across the corporation (the low & mid-range 370 processors were microcode running on some microprocessor, and then there was a broad range of "controllers" using one kind or another of microprocessor). The specific 801 project with the ROMP microprocessor that resulted in the PC/RT (and aix) was originally targeted as a (office product division's) displaywriter replacement. When that project got killed ... effectively the group looked around and asked what else the hardware could be adapted for. The reverse side of the "portable" Unix market was that it was supposedly relatively hardware platform independent ... aka not only could hardware vendors deliver their product w/o the expense of developing a proprietary operating system from scratch ... but the customer market place was getting used to the idea that they could get their unix w/o a lot of concern regarding the specifics of the processor architecture. In theory, PC/RT started out as just a matter of hiring the same group that did the PC/IX port for the ibm/pc to do a similar port to ROMP.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PLX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PLX
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 08 Nov 2002 02:33:43 GMT
tjpo@AIRBORNE.COM (Patrick O'Keefe) writes:
Yabut, yabut, ...

Of course a S/360 program would run on a virtual S/360. As long as CP/67 was doing a good job of emulating the hardware environment, any S/360 operating system would run there. In this case (and in every BPS case) the BPS program was the operating system.

I'm not belittling CP/67 - just saying it was doing its job.


there were two environments .... the bare iron virtual machine provided by CP/67 ... and CMS running on the bare iron. CMS had a 32kbyte os/360 ... aka in cms there was approx. 32kbytes of code that provided os/360 system services for running os/360 programs (there was some joke about what more did you get from 8mbyte MVS emulation of os/360 system services over and above the 32kbyte CMS emulation of os/360 system services).

CP/67 and then VM/370 continued using BPS ... or at least the BPS loader. Both the CP and CMS kernel builds, into at least the '80s, would combine the BPS loader followed by all the kernel modules; a (real or software-simulated) IPL loaded the BPS loader, which in turn loaded all the kernel modules into memory. The standard procedure was that when the BPS loader had finished loading everything, it would branch (by default) to the last entry point ... which typically would then write the memory image to disk. Standard operating procedure would then load/ipl the memory image from disk.

There was a temporary problem late in the CP/67 days with the BPS loader. The standard BPS loader supported 255 ESD entries ... when the CP kernel grew past 255 ESD entries there were games played attempting to manage external entries ... while the search for a copy of the BPS loader source went on. A copy was discovered in the attic of 545 tech sq (instead of basement store rooms ... the top floor of 545 tech sq was a store room). CSC had a number of card cabinets stored up there ... one of the card cabinets had a tray containing the assembler source for the BPS loader. This was initially modified to support up to 4095 ESD entries (from 255). A number of further modifications were made for VM/370 ... one being support for a control statement to round the next available address to a 4k page boundary. In any case, the CP and CMS kernels could be considered to be BPS programs ... that happened to have been checkpointed to disk.

A big CP/67 issue was that all problem-mode instructions would run "as is" ... but supervisor state instructions and interrupts went into the CP kernel and had to be simulated. The SIO instruction was a big simulation issue because the "virtual" CCWs had to be copied to scratch storage ... the virtual pages for the referenced locations had to be fixed/pinned in real storage, and the "shadow" CCWs rewritten to use real addresses instead of the original virtual addresses. The routine in CP/67 that did all this was called CCWTRANS (renamed in VM/370 to DMKCCW).
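a minimal sketch of the CCW translation idea (python; the data structure and helper names are invented for illustration ... the real CCWTRANS also handled command/data chaining, TICs, self-modifying channel programs, status reflection, etc.):

from dataclasses import dataclass, replace

@dataclass
class CCW:                # simplified stand-in, not the real CCW layout
    op: int
    data_addr: int        # guest-virtual address in the original program
    count: int

def translate_ccws(virtual_ccws, v2r, pin_page):
    """Build the 'shadow' channel program with real addresses."""
    shadow = []
    for ccw in virtual_ccws:
        vpage = ccw.data_addr & ~0xFFF
        rpage = v2r(vpage)        # translate guest-virtual page -> real page
        pin_page(rpage)           # page must stay fixed while the channel runs
        shadow.append(replace(ccw, data_addr=rpage | (ccw.data_addr & 0xFFF)))
    return shadow                 # SIO is then issued against this copy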

In jan. of 1968, three people came out to the university that I was at to install CP/67. This was the 3rd cp/67 location (after cambridge itself and lincoln labs). Then it was officially announced at the spring '68 share meeting in houston. At the time there was some reference to there being something like 1200 people working on tss/360 development (the official operating system for the 360/67) compared to possibly 12 people total on both CP and CMS at cambridge (one of the reasons that the OS/360 environment was only 32k bytes of instructions in CMS).

For the early development work on SVS (VS2 with single virtual storage) they started with a copy of MVT and a copy of CP/67's CCWTRANS cobbled into MVT to perform the virtual to real CCW translation and associated virtual->real page management.

A later effort in this area was in the late '70s for the GPD san jose disk engineering lab. The operating environment for hardware under development was "testcells" ... which were cabled into mainframes for development and test. The MTBF for MVS (system failure) with a single (just one) testcell operating was on the order of 15 minutes ... and so all development was going on "stand-alone" with a half dozen or more testcells competing for scheduled time connected to the CPU. An effort was launched to move testcell operation into an operating system environment so that a dozen testcells could be operating concurrently (improving engineering productivity and eliminating the stand-alone test time competition). This required a significant redesign and rewrite of the I/O subsystem to make it much more resilient to all sorts of faulty operational conditions. random refs:
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Help me find pics of a UNIVAC please

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help me find pics of a UNIVAC please...
Newsgroups: alt.folklore.computers
Date: Fri, 08 Nov 2002 16:54:49 GMT
Charles Richmond writes:
It was the summer of 1980, at the end of my college days, that the college's IBM 370-155 was upgraded to something over a megabyte of memory. Of course this was real memory...little ferrite cores. For the late 1970's, this machine had about 3/4 of a meg...for the whole university to use. If you had a program that required 256k, you became a B class job, and your program would only run overnight.

slightly earlier ... san jose research still had its 370/195. Somebody at Palo Alto Science Center had a job that they scheduled on the 195 and it had a three month turn-around (competing with lots of other compute intensive jobs). PASC had a 370/145 running VM ... and they found that if they ran the same program under CMS ... with things set up so that it ran in the background (getting most of its cpu usage at night and on weekends) ... it would finish in slightly less than three months (which was just a little faster than waiting for the 195).

another job that was competing for scheduling on the 370/195 was the GPD disk head air bearing simulation (for the 3380s). it turns out that the disk engineering (bldg 14) and product test labs (bldg 15) had lots of computing power but they were dedicated to testcell stand-alone testing. when we got the I/O subsystem redone so that multiple testcells could be tested in an operating system environment concurrently, lots of processing power became available.
https://www.garlic.com/~lynn/subtopic.html#disk

Shortly after all of this became part of the bldg 14/15 standard process, bldg. 15 got early models of both the 4341 and 3033 (bldg. 15 disk product test got a 4341 before the 4341 product test people got their machine, so we ran some stuff on the bldg 15 machine in san jose for the 4341 product test people in endicott). In any case, the GPD disk head air bearing simulation was able to get lots of time on the bldg 15 3033 that was otherwise at essentially zero percent cpu utilization (instead of long turn-around competing with all the other stuff being scheduled on the SJR 370/195 in bldg 28). recent 195 and/or disk posts:

https://www.garlic.com/~lynn/2002n.html#52 Computing on Demand ... was cpu metering
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#60 Follklore
https://www.garlic.com/~lynn/2002n.html#62 PLX

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

PLX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PLX
Newsgroups: bit.listserv.ibm-main
Date: Fri, 08 Nov 2002 17:11:41 GMT
Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
My turn to nit-pick; Not early S/390 but rather early S/360. IIRC there was a very rudimentary 'nucleus' that was used by a number of the early utilities, like DEBE (Does Everything But Eat) and CLIP (Change Label In Place).

another (primarily) DEBE type program ... was LLMPS ... lincoln labs multiprogramming supervisor ... a type III program from the share library; some quotes from the manual (which i still have):
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history ..

it also had the distinction of forming the basis for MTS (there was the official tss/360 operating system for 360/67, CP/67 done by a few people in cambridge, and MTS ... michigan terminal system).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Follklore

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Follklore
Newsgroups: alt.folklore.computers
Date: Fri, 08 Nov 2002 19:27:08 GMT
"Russ Holsclaw" writes:
And how did you deal with the "race" condition between the punch and the reader? Did you use a timed delay, or was the application slow enough that the problem didn't occur?

application had direct DDs for both punch and reader .... and the use of the punch was an "exception" condition ... relatively easy to pause when it was necessary to punch.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Mainframe Spreadsheets - 1980's History

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Spreadsheets - 1980's History
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 08 Nov 2002 22:16:14 GMT
smithwj@US.IBM.COM (William Smith) writes:
Does anyone remember ADRS and ADRS-BG from IBM? A Departmental Reporting System-Business Graphics (remember FDPs... Field Developed Programs?... those 5798-xxx products) entirely written in APL to run under TSO using the TSO APL Fullscreen Editor IUP from IBM Silicon Valley Laboratory. It was popular in the mid to late 1980's and also used GDDM to display business graphics on 3279 S3G displays. I used to support and maintain it as a sysprog at Syntex Pharmaceuticals in Palo Alto, CA. Our users were extremely fond of it and used it profusely along with SAS-CALC (remember that one?) on a 3081 K32 running MVS/XA.

What about OXYCALC? Wasn't that an IBM spreadsheet written in PL/I to run under TSO? I seem to remember that Howard Dean was using it at American President Lines before it moved from San Mateo, CA.

Visi-Calc.... and my TI-99.... memories of days gone by...

William J. Smith


nope (other than references that it had been originally done for apl\cms and available in the late '70s & early '80s)

... but cambridge originally took apl\360 ... stripped out the monitor stuff and then reworked the interpreter to run under cms ... as well as redoing pieces (like storage allocation) to perform better in a virtual memory environment. Cambridge also put APIs for system call support into the apl language, which offended all the old time APLers. this was released as cms\apl. palo alto science center then took cms\apl and redid some of the stuff ... including doing the apl microcode for the 370/145 and the support for it in apl. They also redid the system call API stuff as shared variables (which was a lot more palatable to the old time APLers). This was called apl\cms. The 145 apl microcode gave about a ten times performance boost for lots of typical apl stuff (for many things apl\cms on a 145 with microcode assist ran as fast as on a 168).

The APL product was then transferred from the palo alto science center to the STL (not SRL) lab. STL did the changes to allow APL to run both in CMS and MVS ... and it was released as APL\SV and then APL2. The STL lab was originally going to be called the coyote lab (based on the convention of naming after the closest post office) and was to be dedicated the same time as the smithsonian air & space museum. However, a week or two before the dedication, the "coyote union" demonstrated on the steps of the capitol in wash DC ... and the name of the lab was quickly changed to santa teresa (the closest cross street is bailey and santa teresa).

possibly the biggest apl user in the world was "HONE" ... which provided support worldwide for all the sales, marketing, and field people
https://www.garlic.com/~lynn/subtopic.html#hone

At some point the head of the APL2 group in STL transferred to PASC to head up a new group that was going to port BSD unix to 370. I had been talking to one of the VS/Pascal guys about doing a C front end for the pascal backend code generator. I went on a business trip; when i got back he had disappeared, going to work for metaware in santa cruz. In any case, doing some work with the unix 370 group ... I suggested that they might talk to metaware about getting a 370 C compiler ... as part of the BSD to 370 port. Somewhere along the way the PC/RT came on the scene with AIX (an at&t unix port similar to the pc/ix port) ... and the PASC group was redirected to do a BSD port to the PC/RT instead of the 370. They continued to use metaware for the c compiler ... even tho the target machine had changed from 370 to PC/RT. This was done very quickly by a small number of people and released as "AOS".

how 'bout tinycalc that came with borland's pascal.

misc 801, pc/rt refs:
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Mainframe Spreadsheets - 1980's History

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Spreadsheets - 1980's History
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 08 Nov 2002 22:29:31 GMT
couple other footnotes

1)

in the late '70s, multiple HONE locations were all consolidated in a single location in Palo Alto ... eventually operating the largest single system complex in the world (all VM/CMS based) ... subsets of this were also replicated around the world ... and then for disaster survivability the Palo Alto location was also replicated in Dallas and Boulder (with fall-over and load sharing between the three locations).

2)

late '70s ... some of the largest online services data centers in the world could be found within a couple miles of each other (at least hone, large vm/cms complex, sales, marketing and field support; tymshare .... vm/cms service bureau, and dialog ... world-wide online library catalog and abstracts).

3)

somewhat in parallel with the BSD effort for 370 which was retargeted to PC/RT ... PASC was also working with UCLA on locus ... having locus running on 68k and S/1 machines in the early '80s. Somewhat in the same time frame as the work PASC did for AOS ... they also did AIX/370 and AIX/PS2. The AIX/370/PS2 systems were Locus ports (not bsd or at&t) ... even tho they shared the name "AIX" with AIX on the pc/rt (and then rs/6000), which was an AT&T port.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

merged security glossary updated with glossary from CIAO

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: merged security glossary updated with glossary from CIAO
Newsgroups: comp.security.misc,alt.computer.security
Date: Sat, 09 Nov 2002 05:48:47 GMT
I've updated the merged security glossary with terms from CIAO, i.e.

https://web.archive.org/web/20030210115112/http://www.ciao.gov/CIAO_Document_Library/glossary/A.htm
Security
Terms merged from: AFSEC, AJP, CC1, CC2, CIAO, FCv1, FIPS140, IATF V3, IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, online security study, TCSEC, TDI, TNI, and misc. Updated 20021020 with glossary from NSTISSC. Updated 20021027 with RFC2647. Updated 20021108 with terms from CIAO.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

merged security glossary updated with glossary from CIAO

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: merged security glossary updated with glossary from CIAO
Newsgroups: comp.security.misc,alt.computer.security
Date: Sat, 09 Nov 2002 06:00:18 GMT
oops, and of course the url
https://www.garlic.com/~lynn/index.html#glossary

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The Forrest Curve (annual posting)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Forrest Curve (annual posting)
Newsgroups: comp.arch
Date: Sat, 09 Nov 2002 15:54:09 GMT
nospam@oddhack.engr.sgi.com (Jon Leech) writes:
The HEP folks I know at SLAC seem to spend most of their time at conferences arguing over which national lab gets increasingly scarce funding, and much of the rest filling out OSHA reports and satisfying other bureaucratic imperatives, rather than actually being in a position to do physics.

I don't get by slac as often as i did in the 70s & 80s ... but have been by a couple times in the past several months. I gave a talk there in august and got a tour of the old machine room ... which used to be all big ibm mainframe stuff ... and is now racks filled with linux processors (although there is quite a bit of empty space). I have a little cube (beamtree) on top of my screen from the slac/fermi booth at sc2001.
http://sc2001.slac.stanford.edu/
http://sc2001.slac.stanford.edu/beamtree/

the big(?) stuff is grid, babar, and the petabytes of data that will be flying around the world-wide grid. i interpreted what i heard as: there is so much data and so much computation required to process the data ... that it is being partitioned out to various organizations around the world (slightly analogous to some of the distributed efforts on the crypto challenges/contests ... except that hundreds of mbytes/sec & gbytes/sec will be flying around the world). The economic model seems to be that they can get enuf money for the storage and for some of the processing ... but world-wide gbyte/sec transfer cost is such that various computational facilities around the world can share in the data analysis (also there is quite a bit of database work supporting all of this stuff).

BaBar home page
http://www.slac.stanford.edu/BFROOT/

sc2002 is coming up in baltimore.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

bps loader, was PLX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: bps loader, was PLX
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 09 Nov 2002 20:09:57 GMT
Anne & Lynn Wheeler writes:
There was a temporary problem late in the CP/67 days with the BPS loader. The standard BPS loader supported 255 ESD entries ... when the CP kernel grew past 255 ESD entries there were games played attempting to manage external entries ... while the search for a copy of the BPS loader source went on. A copy was discovered in the attic of 545 tech sq (instead of basement store rooms ... the top floor of 545 tech sq was a store room). CSC had a number of card cabinets stored up there ... one of the card cabinets had a tray containing the assembler source for the BPS loader. This was initially modified to support up to 4095 ESD entries (from 255). A number of further modifications were made for VM/370 ... one being support for a control statement to round the next available address to a 4k page boundary. In any case, the CP and CMS kernels could be considered to be BPS programs ... that happened to have been checkpointed to disk.

I sort of precipitated the BPS loader problem when I was an undergraduate. Shortly after boeing formed BCS ... I was con'ed into skipping spring break and teaching a 40hr computing class to the BCS technical staff. Then that summer I got a summer job at BCS ... as part of the inducement they carried me on the books in a fulltime management position ... with an executive badge ... that among other things got me a parking space in the "close" parking lot.

Anyway ... that summer, besides teaching and installing and being responsible for CP/67, i made a number of enhancements:

1) balr linkage
2) pageable kernel
3) dump formatter

so how does this all relate to the BPS loader issue?

Ok, the first thing was that all CP internal kernel calls/linkages were via svc, where the svc flih would allocate/deallocate the savearea that went along with the call. This was for all internal calls. however, i noticed that there were a lot of kernel functions that were on the order of small tens (or fewer) of instructions, and svc linkage was a significant percentage of that. these kernel calls also tended to be closed subroutines that were always guaranteed of immediately returning ... or at most a 2-level deep call. So I carved out two save areas in page 0 that were dedicated to balr routines ... and changed the kernel call macro to use balr for a specific list of functions (instead of svc).

now, part two ... the cp kernel was fixed in memory and ran real. I noticed that it was starting to grow and consume more of real storage ... and that a lot of the kernel was very low usage ... like console functions (sort of the reverse of the high usage analysis). I started work on splitting some of the console functions and other low usage stuff into 4k chunks and then created a dummy address space table for the kernel. I moved all these "4k" chunks to the end of the kernel, above a "pageable" line. Then the linkage was modified to do a trans/lock in the calling sequence (i.e. the svc call routine was entered with a called-to address; it would check whether that address was above the pageable line ... and if so, do a trans/lock ... i.e. translate the address virtual->real using the kernel dummy address space table, lock/pin the real page ... and then call the translated address; on return from "above" the pageable line, it would decrement the lock/pin count on the real from-address).
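again a hedged sketch in C ... the table shape, names, and the pageable-line constant are all made up for illustration:

#include <stdint.h>
#include <stddef.h>

#define PAGESIZE      4096u
#define PAGEABLE_LINE 0x40000u          /* hypothetical split point  */

struct kpage {                  /* one entry of the kernel's dummy   */
    void *real;                 /* address space table: real frame   */
    int   pins;                 /* (NULL if paged out) + pin count   */
};

static struct kpage ktable[256];        /* indexed by virtual page   */

static void page_in(struct kpage *p)    /* stub standing in for the  */
{                                       /* real paging i/o           */
    static char frames[256][PAGESIZE];
    p->real = frames[p - ktable];
}

typedef void (*kfn)(void);

static void kcall(uintptr_t vaddr)
{
    if (vaddr < PAGEABLE_LINE) {        /* fixed kernel: call direct */
        ((kfn)vaddr)();
        return;
    }
    struct kpage *p = &ktable[(vaddr - PAGEABLE_LINE) / PAGESIZE];
    if (p->real == NULL)
        page_in(p);                     /* trans: fault it in        */
    p->pins++;                          /* lock: pin the real page   */
    ((kfn)((uintptr_t)p->real + vaddr % PAGESIZE))();   /* translated */
    p->pins--;                          /* unpin on return           */
}

the pin count is what keeps the replacement algorithm from stealing a pageable-kernel page while a call into it is still outstanding.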

this could be considered somewhat analogous to os/360's 2k-byte transient SVC routines. in any case, the fragmentation into 4k chunks is what initially pushed the number of loader table entries over the 255 limit. At this point, i started investigating the bps loader ... but wasn't able to get access to the source. Eventually I had to play games with keeping the number of ESD entries under the 255 limit.

now, part three ... as part of investigating the ESD table entry problem, I discovered that the BPS loader, at the completion of loading and when it transferred control to the loaded application, passed in registers a pointer to its internal loader table (aka all the ESD entries) and the count of entries. So, looking at the CP routine that got control from the loader ... and was responsible for doing the memory image checkpoint to disk ... I modified the code to sort & copy the BPS loader table to the end of the kernel (which was after all the pageable kernel routines). The save routine just took the end address of the kernel, rounded it up to the next 4k boundary ... and stored the count of entries there, followed by the sorted loader table.
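a sketch of that save step (invented names ... the real thing was 360 assembler):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct esd {                    /* 8-byte name, 1-byte type flag,    */
    char          name[8];      /* 3-byte address -- the entry       */
    unsigned char type;         /* layout given in the text below    */
    unsigned char addr[3];
};

static unsigned esd_addr(const struct esd *e)
{
    return ((unsigned)e->addr[0] << 16) | (e->addr[1] << 8) | e->addr[2];
}

static int by_addr(const void *a, const void *b)
{
    unsigned x = esd_addr(a), y = esd_addr(b);
    return (x > y) - (x < y);
}

/* the loader hands over table+count in registers; kernel_end is the
 * address just past the last pageable kernel routine */
static void save_esd_table(char *kernel_end, const struct esd *table,
                           uint32_t count)
{
    char *p = (char *)(((uintptr_t)kernel_end + 4095) & ~(uintptr_t)4095);
    memcpy(p, &count, sizeof count);            /* count of entries   */
    memcpy(p + sizeof count, table, count * sizeof *table);
    qsort(p + sizeof count, count, sizeof(struct esd), by_addr);
}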

now, part four ... i now had the full ESD loader table (8-byte character name, one-byte flag indicating ESD type, and three-byte address). Given that, I could enhance the dump print routine to do some formatting ... translating absolute addresses into an ESD entry name ... or module name (ESD 0) plus displacement. It could also play games with executable code above the fixed kernel line ... use the dummy address space table to figure out the "virtual kernel address" and then translate that address using the loader table.
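and a companion sketch of the formatter's lookup ... repeating the same invented esd layout so the fragment stands alone:

#include <stdio.h>

struct esd {
    char          name[8];
    unsigned char type;
    unsigned char addr[3];
};

static unsigned esd_addr(const struct esd *e)
{
    return ((unsigned)e->addr[0] << 16) | (e->addr[1] << 8) | e->addr[2];
}

/* map an absolute address to the nearest preceding entry in the
 * (sorted) saved table and print it as name+displacement */
static void print_symbolic(unsigned absaddr,
                           const struct esd *table, unsigned count)
{
    const struct esd *best = NULL;
    for (unsigned i = 0; i < count && esd_addr(&table[i]) <= absaddr; i++)
        best = &table[i];            /* table is sorted by address   */
    if (best)
        printf("%.8s+%X\n", best->name, absaddr - esd_addr(best));
    else
        printf("%06X\n", absaddr);   /* below first entry: raw hex   */
}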

I somewhat repeated the above almost 15 years later when i did a dump reader in REX (now REXX) ... sort of a demonstration exercise that REX could be used for serious programming rather than just simple shell programming.

slightly related
https://www.garlic.com/~lynn/submain.html#dumprx problem determination, zombies, dump readers

only slightly related ... when I did the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

one of the items was making a lot of virtual machine control tables pageable. for this, i created a dummy address space table for each virtual machine (analogous to the dummy address space table that I had created for pageable kernel) ... and would copy control blocks out of fixed storage into these dummy address spaces ... which then could get paged out.
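a minimal sketch of that copy-out, assuming entirely hypothetical names:

#include <string.h>

#define PAGESIZE 4096

struct vm {
    char *dummy_space;      /* backing "dummy address space" pages   */
    int   next_slot;        /* next free page slot in that space     */
};

/* returns the pageable copy; the fixed-storage original can then be
 * released, shrinking the kernel's fixed-memory footprint */
static void *make_pageable(struct vm *vm, void *ctlblk, unsigned len)
{
    char *slot = vm->dummy_space + (unsigned)vm->next_slot++ * PAGESIZE;
    memcpy(slot, ctlblk, len);
    return slot;
}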

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

bps loader, was PLX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: bps loader, was PLX
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 09 Nov 2002 21:24:50 GMT
Anne & Lynn Wheeler writes:
I sort of precipitated the BPS loader problem when I was an undergraduate. Shortly after boeing formed BCS ... I was con'ed into skipping spring break and teaching a 40hr computing class to the BCS technical staff. Then that summer I got a summer job at BCS ... as part of the inducement they carried me on the books as a fulltime management position ... with an executive badge ... that among other things got me a parking space in the "close" parking lot.

straying even further ot ... boeing/bcs was possibly the largest corporate consumer of 360 hardware (after ibm itself).

their story was that the day after the 360 announcement, they walked into the local salesman's office (somebody who relatively recently ran for president) and placed a large 360 order. supposedly the salesman at the time hardly knew what 360s were ... but that one order made him the highest paid person in the company that year. This event supposedly instigated the whole invention of sales quotas and the switch-over the next year from straight commission to quota-based compensation. The boeing story was that even on quota ... the 360 orders boeing placed the next year made the salesman the highest paid person in the company again (the company had a hard time increasing the salesman's quota faster than boeing's growing appetite for 360s).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Home mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Home mainframes
Newsgroups: alt.folklore.computers
Date: Sun, 10 Nov 2002 05:06:22 GMT
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
I'm confused. Isn't VM just a hypervisor? So shouldn't you be able to run whatever batch system you like on it, be it MVS or something else?

the group in cambridge did an operating system with a unique characteristic ... it had a strongly enforced API boundary between the kernel supervisor and the user interface. The kernel was called CP/67 and the user interface was called the cambridge monitor system (since renamed the conversational monitor system with the transition of cp/67 to vm/370).

The CP/67 api was the hardware machine interface as defined in the 360 principles of operation ... which not only allowed CMS to operate in a "virtual" machine ... but also let relatively standard operating systems ... like mvt, dos, cp itself, etc ... run in one as well.
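a schematic (not CP/67 code ... opcode names purely illustrative) of why the hardware manual could serve as the API ... the guest runs in problem state, any privileged instruction it issues traps, and CP simulates it against that virtual machine's state:

/* sketch only -- hypothetical names throughout */
struct vmstate { unsigned long psw; /* regs, storage keys, vdevs ... */ };

enum privop { OP_LPSW, OP_SIO, OP_SSK };

static void priv_op_trap(struct vmstate *vm, enum privop op)
{
    switch (op) {
    case OP_LPSW: /* load the guest's *virtual* PSW               */ break;
    case OP_SIO:  /* start I/O against the guest's virtual device */ break;
    case OP_SSK:  /* set storage key in the guest's shadow keys   */ break;
    }
}

anything that behaves per the principles of operation when simulated this way ... whether CMS, mvt, dos, or CP itself ... runs unchanged in the virtual machine.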

As an undergraduate, i put a lot of inventions into cp/67 that made the cp/cms combination significantly better for interactive services compared to operating systems of a more traditional bent (highly optimized kernel path lengths, fastpath, optimized page replacement algorithm, fair share scheduling, dynamic adaptive resource management, etc).

In the late cp/67 era, special APIs were developed especially for CMS that allowed it to take advantage of functions of the CP kernel when it was running in a virtual machine, although CMS still retained the capability to operate using the vanilla 360 "POP" interface (and therefore could run on "real" hardware w/o cp/67). In the transition of CP/67 to VM/370 and the cambridge monitor system to the conversational monitor system, the ability for CMS to operate w/o the custom APIs was removed, resulting in CMS no longer having the ability to run on a "real" machine.

VM/370 had the ability to operate as a straight hypervisor ... running traditional batch operating systems ... but the CP/CMS combination also provided significantly enhanced interactive services ... as the CERN TSO(&MVS)/CMS(&VM/370) share report indicated. It could also be seen from the fact that a number of commercial interactive time-sharing service bureaus were built using the CP/CMS platform.

related refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#545tech

probably the largest such cp/cms interactive time-sharing operation was the corporate internal HONE system which supported all the marketing, sales, and field people in the world.
https://www.garlic.com/~lynn/subtopic.html#hone

misc, somewhat related recent postings:
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#35 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#37 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002n.html#48 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#57 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Everything you wanted to know about z900 from IBM

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Everything you wanted to know about z900 from IBM
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 10 Nov 2002 05:43:34 GMT
"del cecchi" writes:
The part about z900 being a direct descendent of S/360 I thought was common knowledge. The technical aspects of each stage have probably been well covered in the IBM Journal of Research and Development and perhaps in book form. A considerable amount of documentation of the various steps along the way is still available on the web.

also, the appendixes of the principles of operation manuals have much of the change lore listed ... you need an older POP to get the list of changes between 370 & 360.

you can get hardcopy at:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US

the following are available for order:
S/360 PRINCIPLES OF OPERATION GA22-6821-08
S/370 PRINCIPLES OF OPERATION GA22-7000-10
370/XA PRINCIPLES OF OPERATION SA22-7085-01
ESA/370 PRINCIPLES OF OPERATION SA22-7200-00


the later POPs are fully online

from z/architecture
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR001/CCONTENTS?SHELF=DZ9ZBK01&DN=SA22-7832-01&DT=20020416112421


1.1           Highlights of z/Architecture
  1.1.1         General Instructions for 64-Bit Integers
  1.1.2         Other New General Instructions
  1.1.3         Floating-Point Instructions
  1.1.4         Control Instructions
  1.1.5         Trimodal Addressing
    1.1.5.1       Modal Instructions
    1.1.5.2       Effects on Bits 0-31 of a General Register
  1.1.6         Extended-Translation Facility 2
  1.1.7         Input/Output
1.2           The ESA/390 Base
  1.2.1         The ESA/370 and 370-XA Base
1.3           System Program
1.4           Compatibility
  1.4.1         Compatibility among z/Architecture Systems
  1.4.2         Compatibility between z/Architecture and ESA/390
    1.4.2.1       Control-Program Compatibility
    1.4.2.2       Problem-State Compatibility

from esa/390
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CONTENTS?SHELF=

1.1           Highlights of ESA/390
  1.1.1         The ESA/370 and 370-XA Base
1.2           System Program
1.3           Compatibility
  1.3.1         Compatibility among ESA/390 Systems
  1.3.2         Compatibility among ESA/390, ESA/370, 370-XA, and System/370
    1.3.2.1       Control-Program Compatibility
    1.3.2.2       Problem-State Compatibility

D.0           Appendix D.  Comparison between ESA/370 and ESA/390
D.1           New Facilities in ESA/390
  D.1.1         Access-List-Controlled Protection
  D.1.2         Branch and Set Authority
  D.1.3         Called-Space Identification
  D.1.4         Checksum
  D.1.5         Compare and Move Extended
  D.1.6         Concurrent Sense
  D.1.7         Immediate and Relative Instruction
  D.1.8         Move-Page Facility 2
  D.1.9         PER 2
  D.1.10        Perform Locked Operation
  D.1.11        Set Address Space Control Fast
  D.1.12        Square Root
  D.1.13        Storage-Protection Override
  D.1.14        String Instruction
  D.1.15        Subspace Group
  D.1.16        Suppression on Protection
D.2           Comparison of Facilities

E.0           Appendix E.  Comparison between 370-XA and ESA/370
E.1           New Facilities in ESA/370
  E.1.1         Access Registers
  E.1.2         Compare until Substring Equal
  E.1.3         Home Address Space
  E.1.4         Linkage Stack
  E.1.5         Load and Store Using Real Address
  E.1.6         Move Page Facility 1
  E.1.7         Move with Source or Destination Key
  E.1.8         Private Space
E.2           Comparison of Facilities
E.3           Summary of Changes
  E.3.1         New Instructions Provided
  E.3.2         Comparison of PSW Formats
  E.3.3         New Control-Register Assignments
  E.3.4         New Assigned Storage Locations
  E.3.5         New Exceptions
  E.3.6         Change to Secondary-Space Mode
  E.3.7         Changes to ASN-Second-Table Entry and ASN Translation
  E.3.8         Changes to Entry-Table Entry and PC-Number Translation
  E.3.9         Changes to PROGRAM CALL
  E.3.10        Changes to SET ADDRESS SPACE CONTROL
E.4           Effects in New Translation Modes
  E.4.1         Effects on Interlocks for Virtual-Storage References
  E.4.2         Effect on INSERT ADDRESS SPACE CONTROL
  E.4.3         Effect on LOAD REAL ADDRESS
  E.4.4         Effect on TEST PENDING INTERRUPTION
  E.4.5         Effect on TEST PROTECTION

F.0           Appendix F.  Comparison between System/370 and 370-XA
F.1           New Facilities in 370-XA
  F.1.1         Bimodal Addressing
  F.1.2         31-Bit Logical Addressing
  F.1.3         31-Bit Real and Absolute Addressing
  F.1.4         Page Protection
  F.1.5         Tracing
  F.1.6         Incorrect-Length-Indication Suppression
  F.1.7         Status Verification
F.2           Comparison of Facilities
F.3           Summary of Changes
  F.3.1         Changes in Instructions Provided
  F.3.2         Input/Output Comparison
  F.3.3         Comparison of PSW Formats
  F.3.4         Changes in Control-Register Assignments
  F.3.5         Changes in Assigned Storage Locations
  F.3.6         Changes to SIGNAL PROCESSOR
  F.3.7         Machine-Check Changes
  F.3.8         Changes to Addressing Wraparound
  F.3.9         Changes to LOAD REAL ADDRESS
  F.3.10        Changes to 31-Bit Real Operand Addresses

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/



