From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: additional pictures of the 6180 Newsgroups: alt.os.multics Date: Sun, 13 Oct 2002 02:02:00 GMT"Hugo Drax" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Sun, 13 Oct 2002 15:03:43 GMTCharles Richmond writes:
as somewhat implied in other posts ... was that many of these operations had huge up-front costs and manufacturing/delivery costs were a lower percentage (there tended to be significantly lower volumes than some of today's PC volumes). I believe some aspect of that has been in the news related to the high costs of drugs ... a significant percentage is the up-front costs.
a frequent scenario was that the device was designed and priced based on full capacity and the projected volumes for that design point. Then you get a bunch of customers saying that they would buy it if it was only cheaper/slower (they didn't need all that capacity anyway). This original design point may represent 80 percent of the market size.
There may be an emerging/entry market that wants half the capacity at half the price but the size of this market is only 1/5th the original target market. Cutting the price in half for everybody in order to pick up 20 percent more sales could fail to recover the up front costs (and in some cases might violate some gov. decree that products not be priced at less than costs).
The size of the emerging, entry level market may not be sufficient to justify designing a totally different product because in order to recover independent up-front costs the product might have to be priced at four times that of the standard product. Sometimes the problem is a misimpression that because something has half the capacity of something else it must cost half as much; and/or that the entry level market is significantly larger than the mainstream market.
So in a product market that is extremely price sensitive to up-front costs (design, manufacturing setup, etc. represent a significantly large percentage of the price), there may be a tendency to try and amortize those costs over a larger market segment and that may require (or the gov. effectively demand) tiered pricing of effectively the identical product for different parts of the market (or not serve that market at all).
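A minimal sketch, with entirely invented numbers (none of these figures come from the post), of the pricing arithmetic in the preceding paragraphs: a single across-the-board price cut aimed at the smaller entry market can fail to recover the up-front costs, while tiered pricing of effectively the same product can:

```python
# Toy illustration (made-up numbers) of the up-front-cost reasoning above.

UP_FRONT = 50_000_000          # design, tooling, field-support training, etc.
UNIT_COST = 20_000             # per-unit manufacturing/delivery cost
FULL_PRICE = 100_000           # original design-point price
MAIN_VOLUME = 1_000            # projected volume at the original price
ENTRY_VOLUME = 200             # extra sales if a half-price version exists

def margin(price, volume):
    """Revenue over per-unit costs, available to amortize up-front costs."""
    return (price - UNIT_COST) * volume

# Option 1: original single-price plan.
plan1 = margin(FULL_PRICE, MAIN_VOLUME)

# Option 2: cut the price in half for everybody to pick up the entry market.
plan2 = margin(FULL_PRICE / 2, MAIN_VOLUME + ENTRY_VOLUME)

# Option 3: tiered pricing -- same product, full price for the mainstream
# market, half price (e.g. a deactivated feature) for the entry market.
plan3 = margin(FULL_PRICE, MAIN_VOLUME) + margin(FULL_PRICE / 2, ENTRY_VOLUME)

for name, m in [("single full price", plan1),
                ("half price for all", plan2),
                ("tiered pricing", plan3)]:
    print(f"{name:20s} margin {m:>12,.0f}  recovers up-front: {m >= UP_FRONT}")
```

With these made-up numbers the across-the-board price cut leaves the up-front costs unrecovered, while either the original plan or tiered pricing covers them.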
A more recent example might be the 486DX/486SX .... the 486SX was effectively the same chip at a lower price with floating point (permanently) disabled. The cost of taking basically a 486DX and disabling floating point is likely to have been significantly less than designing a 486SX chip from scratch.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SRP authentication for web app Newsgroups: sci.crypt Date: Sun, 13 Oct 2002 15:27:08 GMTPaul Rubin <phr-n2002b@NOSPAMnightsong.com> writes:
one of the major reasons for SSL domain name server certificates is trust issues regarding the domain name infrastructure ... and can you trust the domain name infrastructure to correctly point you at the server you want to be pointed at.
however, what happens when a trusted third party certification authority gets a request for a server domain name certificate .... it has to go verify that the requester is valid for that domain name ... in order to validate information that is "bound" in a certificate it is to issue, it must check with the authoritative agency for the information it is certifying. For domain names, the authoritative agency is the domain name infrastructure. This creates sort of a catch-22 ... the same agency that everybody is worried about trust issues ... and generates the requirement for SSL domain name certificates ... is also the same agency that the CAs rely on for effectively the same information.
So it is possible to attack the domain name infrastructure and result in individuals getting bad information and being pointed at the wrong server. It is also possible to attack the domain name infrastructure, apply for a valid certificate, get the certificate, and again result in individuals getting bad information and being pointed at the wrong server. All of this is frequently obscured by discussions regarding the integrity of the mathematical process that protects the information in a certificate. In some cases the obfuscation distracts from the fact that the trust/quality of the information directly from the domain name infrastructure and the trust/quality of the information in a certificate are nearly the same (so what that it is extremely difficult to attack the integrity of a certificate once it has been created ... if it is much simpler to attack the integrity of the source of the information that goes into a certificate).
So the CA businesses have a requirement to improve the integrity of the domain name infrastructure .... so that not only can the integrity of certificates be trusted ... but also the integrity of the information in a certificate can be trusted. The catch-22 here is that improving the integrity of the domain name infrastructure so that information from the domain name infrastructure can be trusted (by CAs) ... also significantly reduces the requirement for needing SSL domain name certificates (since others will also better trust the information from the domain name infrastructure).
So the question isn't just about being afraid of a forged server certificate (aka the integrity of the certificate itself) but also things like spoofed domain name (the integrity of the information in the certificate, valid certificate, bad information).
misc. refs to various domain name exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Sun, 13 Oct 2002 16:12:12 GMTalso in a somewhat related scenario (that i'm more familiar with): the transition days of starting to charge for software.
at a hundred thousand foot level, one of the processes was to select high, medium, and low prices and then do volume forecasts (market size) at those prices. one check was that (gov. requirement?) forecast volume times price had to be greater than costs. Higher prices tended to mean lower volumes, lower prices tended to mean higher volumes (of course there is the vodka maker tale about a 30 percent price increase doubling the volume).
For the most part (at this point in time), software manufacturing and distribution costs were pretty volume insensitive ... vast majority of the costs are up-front with development, organizational setup, etc (fixed up front training costs of field support people might be as much as development). Anyway in this transition period ... some software projects found that there was no forecasted price point where development costs could be recovered ... and they couldn't go to market.
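A minimal sketch of that forecast check, with invented numbers (not from any actual product): at each candidate price point, forecast volume times price must exceed the mostly up-front costs, or the product can't go to market.

```python
# Hedged sketch (hypothetical numbers) of the forecast check described above.

DEVELOPMENT_COST = 8_000_000   # up-front development + field-support training

# (price, forecast volume) pairs -- higher price, lower forecast volume
forecasts = [(50_000, 100), (20_000, 300), (5_000, 900)]

viable = [(p, v) for p, v in forecasts if p * v > DEVELOPMENT_COST]
if not viable:
    print("no price point recovers development cost -- product can't go to market")
else:
    for price, volume in viable:
        print(f"price {price:,}: revenue {price * volume:,} exceeds cost")
```

With these numbers no price point works, which is exactly the "couldn't go to market" situation described above.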
also, analogous to the hardware scenario, an equivalent entry level approach these days (with software) is demo/freeware where you have the full product but it is crippled (or not full function) pending paying (additional) money and getting an unlocking key.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Sun, 13 Oct 2002 18:59:47 GMTSteve O'Hara-Smith writes:
this assumes that there is some yield issues to begin with .... if there happens to be nearly 100 percent yield ... using the product to implement other products with different operational characteristics wouldn't help (assuming the alternative products are lower cost). Another kind of yield is sorting for max. operational frequency where the chips show a significant variance.
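A tiny, hypothetical sketch of that second kind of yield, sorting (binning) otherwise-identical parts by the maximum frequency they test at (the bins and tester results are invented):

```python
# Hypothetical speed-binning: return the highest grade a part qualifies for.

def bin_by_frequency(measured_mhz, bins=(66, 50, 33)):
    """Bins listed fastest first; None means the part doesn't make any grade."""
    for grade in bins:
        if measured_mhz >= grade:
            return grade
    return None

parts = [71, 68, 54, 48, 35, 29]      # made-up tester results, in MHz
for mhz in parts:
    print(mhz, "->", bin_by_frequency(mhz))
```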
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Mon, 14 Oct 2002 17:33:44 GMTnote many of these issues come up related to trying to make the transition from an early agrarian/gathering society to an industrial society. the economic model in the agrarian/gathering society is frequently that there is almost a linear, simplistic relationship between the delivered product and the work effort/value (and tends to contribute to the simplistic economic view held by members of such societies).
in the transition to industrial (and even information) society there are frequently significant up front, fixed costs that are relatively independent of the actual item delivered. As a result it becomes a lot more complicated to demonstrate a linear economic relationship with one specific item in isolation from the overall infrastructure.
The fixed, up front infrastructure costs contribute to significantly increased efficiencies compared to the linear economic relationships found in the early agrarian/gathering infrastructures ... assuming some specific product delivery volumes. However, if such huge up front infrastructures were developed and delivered only one item ... it is pretty obvious that it wouldn't be economically viable compared to an earlier agrarian/gathering infrastructure (with a strictly linear relationship). It is only by being able to amortize such up-front infrastructures & costs across a large volume that the economic benefit accrues to the participants of such infrastructures. A more simplistic explanation is that in such environments, the cost of producing five times as many items is typically a lot less than a factor of five (which would be the case in the earlier agrarian/gathering societies). As a result there is much more attention given to a pricing paradigm that recovers the cost of the up-front infrastructures (which is frequently more complex than the more simplistic agrarian/gathering societies that are just looking at economic recovery of the linear costs associated with per item production).
I remember in the early '80s looking at devices produced strictly for the computer industry with a price per unit in the $6k range. Similar items with similar capability (actually more advanced) that had been produced for the consumer electronic business were in the $300-$600 range (between a 10:1 and 20:1 price reduction). The direct linear work effort that went into production of the different items was nearly the same.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Wed, 16 Oct 2002 18:55:20 GMTSteve O'Hara-Smith writes:
So you can have a physical machine with N processors enabled, running LPARs where each LPAR can have some number of logical processors and where it is possible to specify the CPU utilization target for that LPAR (finer granularity than whole processors), and within an LPAR you can also have a virtual machine operating system ... that can provide even finer granularity.
I think that was the 40,000 copies of linux from two years ago, running in a modest sized LPAR under vm (aka VM was providing 40,000 virtual machines for 40,000 different copies of linux and VM was running in an LPAR that was less than the whole machine).
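Purely illustrative arithmetic for how the nested granularity works out (the machine size, LPAR share, and weights are made up; only the 40,000-guest figure comes from the post):

```python
# Rough sketch: a Linux guest's effective share of the physical machine is
# roughly its VM share of the LPAR times the LPAR's share of the processors.

PHYSICAL_CPUS = 16
lpar_share = 0.25            # this LPAR's utilization target: 1/4 of the machine
vm_guest_share = 1 / 40_000  # VM dividing its LPAR among 40,000 Linux guests

effective_cpus = PHYSICAL_CPUS * lpar_share * vm_guest_share
print(f"each Linux guest averages about {effective_cpus:.6f} CPUs")
# i.e. roughly 1/10,000th of a processor -- fine for mostly idle images
```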
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Wed, 16 Oct 2002 19:56:15 GMTnote as manual service costs increased, first there was a migration to FRUs and then actual packaging for spares or sparing. lots of the sparing ... the customer would pay more for the availability. This is obviously seen in the HA configurations ... my wife and I did ha/cmp
various ha/cmp configurations would be simple 1+1 fall-over where a spare idle machine was just sitting there waiting to take over. Customer was typically paying more than two times a simple non-ha configuration (at least for the hardware .... but possibly got by with just a single-copy application software license).
I believe one of the other factors was lots of gov. contracts started specifying field upgradable hardware (gov. regs that made it significantly easier to get new hardware as upgrades than as replacement).
So ... tied into industrial non-linear production ... a combination of work already going on in sparing ... and at least the gov. market segment being a big driver in field upgrading ... a natural evolution would be field upgradability built in at time of original manufacturing (compared to the cost of having a physical person appear).
This industrial-age paradigm is somewhat out of synch with the linear process found in the early agrarian/gathering cultures (the book flatlanders also comes to mind as a possible analogy).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Thu, 17 Oct 2002 16:00:21 GMTjmfbahciv writes:
I have heard people talk about nightmare situations after they got the first 100 (or 1000) units to customers and a proper tracking system hadn't been set up beforehand. Then along comes field service and really confuses what level the components are at any specific customer location.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Asynch I/O Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 17 Oct 2002 16:44:25 GMT"John S. Dyson" writes:
18844 Jun 1 1993 jobs.slides.ps.gz
91160 Jun 1 1993 usenix.1.93.ps.gz
84246 Jun 1 1993 txnsim.tar.gz
338106 Jun 1 1993 thesis.ps.gz
102210 Jun 1 1993 andrew.tar.gz

she did a lot of work on FFS, log-structured ... and if i remember correctly there were comparisons between FFS, log-structured, Sprite and some others (she also consulted on some ha/cmp issues after she graduated).
I also archived many of the Raid papers from the same time.
somewhat as total aside
http://hyperion.cs.berkeley.edu/
has announcement of the RAID Project 10-year reunion (for members of the raid project only)
old raid stuff from their site
56607 Mar 2 1996 raid5stripe.ps.gz
27235 Mar 2 1996 nossdav93.ps.gz
23779 Mar 2 1996 mss93rama.ps.gz
185174 Mar 2 1996 ieeetocs93.ps.gz
91530 Mar 2 1996 algorithmica.ps.gz
90941 Mar 2 1996 tech93_778.ps.gz
456694 Mar 2 1996 tech93_770.ps.gz
82033 Mar 29 1993 tech91_616.ps.gz
141166 Mar 29 1993 winter93usenix.ps.gz
89675 Mar 29 1993 sigmetrics93.ps.gz
44589 Mar 29 1993 vlsisys93.ps.gz
763 Mar 29 1993 journal.bib.gz
6047 Mar 29 1993 raid.bib.gz
40624 Mar 29 1993 ipps93.ps.gz
2029 Mar 29 1993 README.gz
119689 Jul 25 1992 measureSOSP91.ps.gz
41298 Jul 25 1992 benchUsenix90.ps.gz
22541 Jul 25 1992 zebra.ps.gz
141279 Jun 6 1992 asplos91.ps.gz
172963 Jun 6 1992 tech90_573.ps.gz
174414 Jun 6 1992 tech91_660.ps.gz
62023 Jun 6 1992 tech92_672.ps.gz
81019 Jun 6 1992 sigmetrics91.ps.gz
45582 Jun 6 1992 sigarch90.ps.gz
69140 Jun 6 1992 sigmetrics90.ps.gz
33854 Jun 6 1992 usenix90.ps.gz
76536 Jun 6 1992 superComputing91.ps.gz
76194 Jun 6 1992 tech91_638.ps.gz
--
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Coherent TLBs Newsgroups: comp.arch Date: Fri, 18 Oct 2002 08:21:40 -0600iain-3@truecircuits.com (Iain McClatchie) writes:
The 370 TLBs (for the TLBs that supported multiple concurrent address spaces) were STO associative ... the STO being the (consistent) real address of the segment table origin, the same across all processors.
The IPTE selective invalidate finally appeared with the 3033 model in the late '70s.
with or w/o selective invalidate ... the sequence still required a CPU signal broadcast; the typical scenario was turn on the invalid bit in the PTE (either with IPTE or an OI followed by PTLB) and then broadcast because there was kernel code (running in parallel) that might be operating on the virtual memory page using its real address. Some of the implementations tended to try and batch up a whole slew of page invalidates at a single time ... amortizing the broadcast that "drained" any kernel operations in progress that were using real addresses. There was some trade-off regarding relatively short-lived kernel operations getting locks on the address space as a means of serializing any page invalidates against that address space.
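A hedged pseudocode sketch of that invalidate-then-broadcast sequence (this is illustrative Python, not 370 kernel code or microcode; all names and the PTE layout are invented):

```python
# Batched invalidate, then a single broadcast to "drain" other CPUs.

PTE_INVALID = 0x8000   # hypothetical invalid bit in a page table entry

class PTE:
    def __init__(self, real_addr):
        self.real_addr = real_addr
        self.flags = 0

def drain_other_cpus(cpus):
    """Stand-in for the CPU signal broadcast: every other processor purges
    its TLB and completes any in-flight kernel use of the real addresses."""
    for cpu in cpus:
        cpu["tlb"].clear()

def invalidate_pages(ptes, other_cpus):
    # 1. turn on the invalid bit in each PTE (the IPTE, or OI followed by
    #    PTLB, step) -- batched rather than one page at a time
    for pte in ptes:
        pte.flags |= PTE_INVALID
    # 2. a single broadcast then drains kernel code on the other CPUs that
    #    might still be using those pages by real address, amortizing the
    #    signalling cost over the whole batch
    drain_other_cpus(other_cpus)

cpus = [{"tlb": {0x1000: "entry"}}, {"tlb": {}}]
invalidate_pages([PTE(0x1000), PTE(0x2000)], cpus)
print([c["tlb"] for c in cpus])   # all TLBs purged after the drain
```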
There was also a lot of discussion in the 1970 time-frame about advantages of STE-associative TLBs (rather than STO-associative) to improve invalidates in the case of segment sharing. An IPTE on a PTE in a shared segment ... might possibly involve multiple different TLB entries in a STO-associative (aka address space associative) TLB. For a STO-associative TLB, the choices were having logic at TLB entry load time to not allow multiple TLB entries for the same PTE (aka real address) ... or software cycled invalidates for all possible STOs (aka address space) ... or the software punts and just does a PTLB (whenever it dealt with page that might be in multiple different address spaces).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Wanted: the SOUNDS of classic computing Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Fri, 18 Oct 2002 21:52:09 GMTCharles Richmond writes:
part of this i gave at keynote for nasa high assurance conference last
year ... pointer at:
https://www.garlic.com/~lynn/index.html
something similar was done in support of the original stuff for what
is frequently now called e-commerce.
https://www.garlic.com/~lynn/aadsm5.htm#asrn1 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn2 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm5.htm#asrn4 assurance, X9.59, etc
misc past postings on industrial/commercial strength computing:
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/98.html#4 VSE or MVS
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001h.html#1 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001l.html#4 mainframe question
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002f.html#23 Computers in Science Fiction
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Fri, 18 Oct 2002 21:59:25 GMTjcmorris@mitre.org (Joe Morris) writes:
It must have made an impression on me ... because i also started backing things up ... frequently in triplicate (although I had situations where all three copies got scratched because of operator error). some of my (and others') fanaticism for backing everything up ... leaked into things like email products (which may have contributed to an issue at the white house in the early '80s).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Fri, 18 Oct 2002 21:33:20 GMTJay Miller <jnmiller@@cryptofreak.org> writes:
it isn't an identification chip ... it is an authentication chip (and, yes, there can be a significant difference).
in conjunction with x9.59 and aads
https://www.garlic.com/~lynn/x959.html#x959
the objective was purely to provide strong authentication in an otherwise untrusted environment.
the chip can be 7816 contact, 14443 contactless, usb, 2-way combo (7816+usb, 7816+14443, 14443+usb) or 3-way combo.
no keys are required in the reader for the card to perform the authentication operation. basically the card is at a known integrity level and a relying party can choose to trust something at that integrity level.
the reader is an integrity issue however ... not so much for correct chip operation ... but for correct business process operation; some of that shows up in the EU finread stuff. the issue is not whether the AADS chip provides correct authentication ... in straight authentication business processes .... but there are business processes that have both authentication & approval facets; aka a chip is used to demonstrate both authentication and approval; like a financial transaction, the person is both authenticating themselves and agreeing to pay a merchant some amount of money. While an untrusted reader can't spoof the authentication ... an untrusted reader may transmit a transaction to the card for $5000 when only displaying $50 (the person thinks they are authenticating and approving a $50 transaction, not a $5000 transaction).
one approach is to potentially have the reader also sign any transaction, the relying party can then evaluate the integrity of the authentication chip, and also evaluate the integrity of any reader that may have also signed the transaction ... with respect to performing any operation.
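A rough sketch of that idea, assuming hypothetical keys and registrations; it uses the third-party Python "cryptography" package purely as a stand-in for whatever signing the real chip and reader would do (this is not the actual AADS or finread protocol):

```python
# The relying party verifies a signature from the authentication chip *and*
# one from the reader, and can weigh the certified integrity of each.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def new_key():
    return ec.generate_private_key(ec.SECP256R1())

card_key, reader_key = new_key(), new_key()        # keys never leave the devices
registered = {                                     # relying party's registrations
    "card":   (card_key.public_key(),   "known-integrity chip"),
    "reader": (reader_key.public_key(), "finread-class reader"),
}

txn = b"pay merchant 50.00 from account 1234"      # what the reader displayed
signatures = {
    "card":   card_key.sign(txn, ec.ECDSA(hashes.SHA256())),
    "reader": reader_key.sign(txn, ec.ECDSA(hashes.SHA256())),
}

for who, sig in signatures.items():
    pub, integrity = registered[who]
    try:
        pub.verify(sig, txn, ec.ECDSA(hashes.SHA256()))
        print(f"{who}: valid signature, integrity level: {integrity}")
    except InvalidSignature:
        print(f"{who}: signature does not match -- reject")
```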
misc finread &/or intention related stuff:
https://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#9 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002h.html#13 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002i.html#77 Does Diffie-Hellman schema belong to Public Key schema family?
https://www.garlic.com/~lynn/2002j.html#29 mailing list history from vmshare
https://www.garlic.com/~lynn/2002l.html#24 Two questions on HMACs and hashing
https://www.garlic.com/~lynn/2002l.html#28 Two questions on HMACs and hashing
https://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: So how does it work... (public/private key) Newsgroups: sci.crypt Date: Sun, 20 Oct 2002 03:53:26 GMTCarlos Moreno writes:
FIPS186-2 is one such digital signature algorithm that uses FIPS180,
SHA-1 (and now SHA-2).
http://csrc.nist.gov/cryptval/dss.htm
in any case issues are (at least):
integrity
authentication
confidentiality
some cases integrity and authentication are sufficient w/o actually
requiring confidentiality.
one of the most common scenarios on the internet is electronic
commerce in conjunction with SSL. A major function of SSL is to
encrypt the credit card number and keep it confidential. Note however,
that the PAN (aka primary account number, aka credit card number) is
needed in a large number of business processes ... and therefore while
SSL provides confidentiality for the number while in transit/flight ...
it doesn't do anything for the number while at rest. most of the
credit card exploits have been involved with some part or another
of the business process where the number is in the clear. misc.
fraud/exploit references (including some card related stuff):
https://www.garlic.com/~lynn/subintegrity.html#fraud
the x9a10 financial standards working group was to devise a standard
for all electronic retail payments (credit, debit, stored-value, etc)
that preserved the integrity of the financial infrastructure
... regardless of the environment (pos, internet, etc). The result
was x9.59
https://www.garlic.com/~lynn/x959.html#x959
in this scenario ... the analysis was that the fundamental problem was the credit card number had to be both a shared-secret (needing confidentiality) as well as open and pretty freely available because of the various business processes. The x9.59 solution wasn't to try and add more levels of confidentiality (and there never would be enough) and instead change things so the credit card number was no longer a shared-secret and therefore didn't require confidentiality (or encryption). Basically x9.59 defines transactions that are always digitally signed (providing both integrity and authentication) and the PAN used in a x9.59 transaction can never be used in a non-X9.59 (non-authenticated) transaction. That business rule ... then removes the (x9.59) PAN from the category of shared-secret (since knowing the PAN is not sufficient to perform a fraudulent transaction). Since the PAN is no longer a shared-secret ... it no longer requires confidentiality (encryption) to protect it. Integrity and authentication (i.e. digital signature) are sufficient. Furthermore since the PAN is no longer a shared-secret .... its exposure in a multitude of other business processes is also no longer a risk.
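A toy sketch of that business rule (the account numbers, flags, and the verification stub are all invented for illustration; this is not the x9.59 message format):

```python
# An account flagged as x9.59-only rejects any transaction that doesn't
# carry a verified digital signature, so merely knowing the PAN is no
# longer enough to commit fraud.

accounts = {
    "4000111122223333": {"x959_only": True,  "registered_key": "pubkey-A"},
    "4000999988887777": {"x959_only": False, "registered_key": None},
}

def signature_verifies(txn, key):
    """Stand-in for verifying the transaction's digital signature against
    the public key registered for the account."""
    return txn.get("signed_with") == key

def authorize(txn):
    acct = accounts[txn["pan"]]
    if acct["x959_only"]:
        # PAN alone is worthless: the transaction must be digitally signed
        # and the signature must verify against the registered public key.
        return signature_verifies(txn, acct["registered_key"])
    return True    # legacy account: PAN is still effectively a shared-secret

print(authorize({"pan": "4000111122223333", "amount": 50}))                              # False
print(authorize({"pan": "4000111122223333", "amount": 50, "signed_with": "pubkey-A"}))   # True
```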
A slightly related posting regarding PAN as a shared-secret ... and
the issue of the necessary level of security (and confidentiality)
that would be proportional to the fraud risk:
https://www.garlic.com/~lynn/2001h.html#61 Net banking, is it safe????
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Sun, 20 Oct 2002 21:06:46 GMTEric Smith <eric-no-spam-for-me@brouhaha.com> writes:
there was also the (pentagon paper-like) scenario involving the leakage of a virtual memory document to somebody in the press some months before announcement .... big investigation and the result was that all company copying machines were retrofitted with a serial number on the glass that printed thru on all copies.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Sun, 20 Oct 2002 21:35:37 GMTthere is the alternative explanation.
you craft a public key and take it to these mystical organizations called certification authorities. They laboriously create an object of great power called a certificate and grant it great magical powers. The certificate is used to create a digital signature and it only performs this duty when you have thoroughly understood and agreed with the meaning contained in the computer binary bits that are being digitally signed. Such digital signatures now carry the attribute of non-repudiation ... that it is impossible for you to later claim that you don't fully agree with the terms and conditions expressed in any computer binary bits that carry your digital signature.
some past discussions on the subject of ssl domain name certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
some recent refs to non-repudiation and such stuff
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#5 NEWS: 3D-Secure and Passport
https://www.garlic.com/~lynn/aadsm12.htm#13 anybody seen (EAL5) semi-formal specification for FIPS186-2/x9.62 ecdsa?
https://www.garlic.com/~lynn/aadsm12.htm#18 Overcoming the potential downside of TCPA
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: updated security glossary & taxonomy Newsgroups: comp.security.misc Date: Mon, 21 Oct 2002 17:04:52 GMTi recently updated merged security glossary at
with nstissc glossary:
https://web.archive.org/web/*/http://www.nstissc.gov/Assets/pdf/4009.pdf
notes on other sources:
https://www.garlic.com/~lynn/index.html#glosnote
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Mon, 21 Oct 2002 20:18:35 GMTJay Miller <jnmiller@@cryptofreak.org> writes:
I joked that wasn't sufficient integrity for many purposes ... so I said I wanted to take a $500 mil-spec part, cost reduce it by more than two orders of magnitude, and at the same time increase the integrity/security ... that basically is the aads chip strawman.
aads and the aads chip strawman aren't synonymous ... but it looked like trying to put together a high integrity chip would be an interesting exercise.
I gave a talk about the effort in the TCPA track on assurance at the
intel developer's conference two years ago ... slides at
https://www.garlic.com/~lynn/x959.html#aads
a little further down in the screen.
I somewhat joked that the TPM specification at that time was such that the aads chip strawman could meet all of the TPM requirements; The other part of the joke (from somebody in the audience) was that I came to the design almost three years earlier than TCPA because i didn't have 200 people in committees helping me with the design.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Mon, 21 Oct 2002 21:46:36 GMTChristopher Browne writes:
misc privacy/identification/biometrics and authentication vis-a-vis
identification
https://www.garlic.com/~lynn/subpubkey.html#privacy
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Mon, 21 Oct 2002 23:24:06 GMTJay Miller <jnmiller@@cryptofreak.org> writes:
1) anybody generates their own public/private key however ... and registers it with relying parties ... the relying parties use some process to make sure that the person presenting the public key ... actually can do corresponding digital signatures (lets use ec/dsa, fips186-2, as an example). that has one level of integrity. the institution is responsible for making sure that the person presenting the public key for registration corresponds to whatever they are registering. if it is purely opening a bank account ... then it can be analogous to tearing a dollar bill in half and giving one half to the bank ... and telling them not to honor any requests unless the matching half can be presented.
2) institutions get chips/cards from the foundry ... the chips do on-chip key gen ... the private key never leaves the chip, the public key is exported. any digital signature algorithm will do from a framework standard, but for some mundane purposes can again select ec/dsa, fips186-2. the institution has done some FIPS/EAL certification on the chip ... and so trusts it to whatever level it is certified to. these chips are given to their end users. institutions only trust & register public keys from chips they get directly from the foundry. lots of corporate employee stuff has various kinds of hardware tokens (door badge system, login system, etc) ... it doesn't have to be just military. also there are all sorts of chip cards (especially in europe) for financial transactions. for the financial they would possibly like the highest possible integrity at the lowest possible costs. again it doesn't have to be military ... just anything of value.
OK, so in the case of institutional delivered tokens ... they have a high level of confidence in the integrity of the delivered/registered tokens. By contrast, it can be relatively difficult for an institution to trust a random consumer-presented token. As you have pointed out many infrastructures are subject to counterfeit/mimic chips that can be programmed to talk like, smell like, look like and be accepted as valid chips.
So an interesting opportunity is how can trust be created for a token that is presented (whether it is card format, or dongle/key-fob format, or whatever). There are a couple steps here that are somewhat orthogonal. If a random token is presented ... on what basis does an institution trust the token to be a "valid" token (for some degree of valid)?
Once they get past whether they can trust the token ... then they have other business processes that they go thru that establish some relationship between that token and other attributes ... so that whenever the token is presented in the future ... the token represents the equivalent of all the business processes that previously equated the token to some set of attributes.
The attributes could be identity ... something like whoever uses this card probably has some specific fingerprint and/or DNA. The attributes might not be identity ... the attributes might just be that the person is allowed to make financial transactions against a specific bank account (and totally divorced from whether or not the financial institution has a separate process relating the account to some identity ... like SSN). The attribute might be that this is a valid employee (w/o actually having to indicate which employee) and the front door should open.
The higher the risk ... the larger the amount that the bad guys will be willing to spend on exploits of the infrastructure (counterfeit cards for instance). This goes somewhat to past statements about the amount of security proportional to risk (actually this frequently degenerates to the cost of security proportional to risk ... there isn't necessarily a strict linear relationship between security cost and security strength).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Tue, 22 Oct 2002 16:06:04 GMTjdallen2000@yahoo.com (James Dow Allen) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Tue, 22 Oct 2002 16:07:48 GMTjdallen2000@yahoo.com (James Dow Allen) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Tue, 22 Oct 2002 16:13:14 GMTjdallen2000@yahoo.com (James Dow Allen) writes:
the other change going from the 165 to the 168 was that the microcode was reworked (and some hardware added), which reduced the avg 370 instruction from 2.1 machine cycles (on the 165) to 1.6 machine cycles (on the 168).
some selective invalidate posts:
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Sandia, Cray and AMD Newsgroups: comp.arch Date: Tue, 22 Oct 2002 16:41:49 GMTRobert Myers writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Tue, 22 Oct 2002 16:30:48 GMTJay Miller <jnmiller@@cryptofreak.org> writes:
The chip can establish (authenticate) whether or not the owner had rights to perform certain operations ... like withdrawing money from a bank account. no identification is involved and the chip is identity agnostic. any identity would require the business to establish a mapping between the entity that had rights to withdraw from an account with some identity (but totally outside the scope of the chip).
many of the biometrics systems flow the information up to a central repository where the match is done. in that sense these systems not only involve identity but turn the biometric value into a shared-secret (similar to previous postings about the cc account number being a shared-secret). match on card eliminates biometrics as a shared-secret. the problem with many of the current generation of biometric chips with match on card ... is that they've been designed for an offline environment. biometrics tend to be very fuzzy with some acceptable scoring threshold set (i.e. percent match) for whether or not the card works or doesn't work (also leading to the whole notion of false positives and false negatives). the issue is that in an area somewhat related to security proportional to risk ... the threshold values are somewhat tuned to the value of the operation. For an environment that migrated to chip-based biometrics across a broad range of environments with a broad range of values and risks ... that could lead to a very fat wallet filled with different cards.
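A hedged sketch of the match-on-card idea with tunable thresholds (the scores and thresholds are invented numbers, not any real product's calibration):

```python
# Card-resident comparison: the template never leaves the card, so the
# biometric doesn't become a shared-secret held in a central repository.
# The threshold is tuned to the value/risk of the operation, which is why
# different risk levels could otherwise mean different cards.

def match_on_card(score, threshold):
    """Accept the fuzzy match score only if it meets the card's threshold."""
    return score >= threshold

LOW_VALUE_THRESHOLD = 0.80    # door access: tolerate more false accepts
HIGH_VALUE_THRESHOLD = 0.95   # large payment: tolerate more false rejects

reading = 0.90                # fuzzy match score from the sensor
print("open the door: ", match_on_card(reading, LOW_VALUE_THRESHOLD))    # True
print("approve payment:", match_on_card(reading, HIGH_VALUE_THRESHOLD))  # False
```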
random biometrics:
https://www.garlic.com/~lynn/aadsm2.htm#privacy Identification and Privacy are not Antinomies
https://www.garlic.com/~lynn/aadsm3.htm#cstech2 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech4 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm5.htm#shock2 revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm6.htm#terror12 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm7.htm#rhose9 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#carnivore2 Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#tamper Limitations of limitations on RE/tampering (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#biometrics biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio1 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio2 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#bio4 Fingerprints (was: Re: biometrics)
https://www.garlic.com/~lynn/aadsm10.htm#bio5 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio7 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#bio8 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aepay3.htm#passwords Passwords don't work
https://www.garlic.com/~lynn/aepay4.htm#comcert Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay6.htm#cacr7 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay7.htm#3dsecure2 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay10.htm#5 I-P: WHY I LOVE BIOMETRICS BY DOROTHY E. DENNING
https://www.garlic.com/~lynn/aepay10.htm#8 FSTC to Validate WAP 1.2.1 Specification for Mobile Commerce
https://www.garlic.com/~lynn/aepay10.htm#15 META Report: Smart Moves With Smart Cards
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#166 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#172 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#235 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2000.html#60 RealNames hacked. Firewall issues.
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001h.html#53 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#16 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#6 Is VeriSign lying???
https://www.garlic.com/~lynn/2001k.html#61 I-net banking security
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
https://www.garlic.com/~lynn/2002e.html#18 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002e.html#36 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002e.html#38 Crypting with Fingerprints ?
https://www.garlic.com/~lynn/2002f.html#22 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#45 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002g.html#56 Siemens ID Device SDK (fingerprint biometrics) ???
https://www.garlic.com/~lynn/2002g.html#65 Real man-in-the-middle attacks?
https://www.garlic.com/~lynn/2002g.html#72 Biometrics not yet good enough?
https://www.garlic.com/~lynn/2002h.html#6 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#8 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#9 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#13 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002h.html#41 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002i.html#61 BIOMETRICS
https://www.garlic.com/~lynn/2002i.html#65 privileged IDs and non-privileged IDs
https://www.garlic.com/~lynn/2002j.html#40 Beginner question on Security
https://www.garlic.com/~lynn/2002l.html#38 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#14 fingerprint authentication
https://www.garlic.com/~lynn/2002n.html#19 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002n.html#20 Help! Good protocol for national ID card?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Tue, 22 Oct 2002 17:00:14 GMTJay Miller <jnmiller@@cryptofreak.org> writes:
it requires both an infrastructure model and the corresponding
standards operation. the x9.59 protocol removes the credit card number
as the point of attack (and all the large multitude of databases that
contain it in the clear) and effectively moves the attack to the
end-points ... the signing environment and the authentication
environment.
https://www.garlic.com/~lynn/x959.html#x959
the aads chip strawman proposes the best token technology in existence
today at optimized cost-reduced delivery ... for protection of the
private key and the signing operations
https://www.garlic.com/~lynn/x959.html#aads
that moves the attacks & exploits on the signing end point to
different areas ... some addressed by the eu finread stuff (are you
really signing what you think you are signing):
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative to PKI?
https://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and their users [was Re: Cryptogram: Palladium Only for DRM]
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
https://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002f.html#55 Security Issues of using Internet Banking
https://www.garlic.com/~lynn/2002g.html#69 Digital signature
https://www.garlic.com/~lynn/2002m.html#38 Convenient and secure eCommerce using POWF
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 22 Oct 2002 20:24:11 GMTgah@UGCS.CALTECH.EDU (glen herrmannsfeldt) writes:
one of the big things that CP/67 did in the late '60s was switch to a "PREPARE" sequence on terminal lines.
CP/67 was the precursor to VM/370 (which survives today as both LPAR support and zVM ... my guess is that the LOCs in the LPAR microcode are comparable to the LOCs in the original CP/67 kernel). CP/67 and CMS were pretty much an evolution of the CTSS time-sharing system ... done by some of the same people that worked on CTSS ... and done in parallel and in the same building as other people (that had also worked on CTSS) doing Multics.
In any case, CP/67 was doing all this super-optimized time-sharing, time-slicing, dynamic adaptive workload management, fastpath kernel optimization, near optimal page replacement algorithms, lot of the precursor stuff to what became capacity planning, etc, etc.
However one of the major things that allowed CP/67 to transition into the time-sharing service bureau was the change to use PREPARE in the terminal CCW sequence. CP/67 was already going into wait state when there wasn't anything to do ... and not waking up gratuitously ... but the terminal I/O sequence still had the channel active and ran the meter.
one of the requirements for offering cp/67 service bureau ... was being able to provide 24x7 service ... and be able to recover costs of the operation. Going into wait state helped with stopping the meter under off-shift low usage scenarios. But it wasn't until the PREPARE CCW sequence change was made that the meter actually totally stopped. At that point, just leaving the system up and running continuously became much more cost effective.
another issue (at least during the start up phases) of the time-sharing service bureau offerings was various automated operator stuff and automated recovery & reboot in case of failures.
In any case, somewhat after CP/67 was announced at the spring '68 SHARE meeting in houston (coming up 35 years)... two CP/67 time-sharing service offerings spun off.
misc. other pieces of ctss, timesharing, cp/67, vm/370, and virtual
machine lore at:
https://www.leeandmelindavarian.com/Melinda#VMHist
these days ... with time-sharing by both virtual machine kernel and the microcode ... the issue isn't the (leasing) meter running .... but not unnecessarily using processor that could be put to better use by some other component.
random other posts related to the subject
https://www.garlic.com/~lynn/99.html#179 S/360 history
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000b.html#44 20th March 2000
https://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2000e.html#9 Checkpointing (was spice on clusters)
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001g.html#30 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001h.html#14 Installing Fortran
https://www.garlic.com/~lynn/2001h.html#35 D
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001k.html#38 3270 protocol
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#44 cp/67 (coss-post warning)
https://www.garlic.com/~lynn/2002d.html#48 Speaking of Gerstner years
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#21 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#62 subjective Q. - what's the most secure OS?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002k.html#64 History of AOL
https://www.garlic.com/~lynn/2002l.html#66 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002m.html#61 The next big things that weren't
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 22 Oct 2002 20:33:32 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 23 Oct 2002 15:19:23 GMTpa3efu@YAHOO.COM (Jan Jaeger) writes:
much of the interrupt overhead ... wasn't in the hardware ... it was
how the operating systems implemented the first level interrupt handler
(FLIH). I have claimed that (as an undergraduate) i had optimized the CP/67
FLIHs so that they were possibly ten times faster than mvt/mft (even tho
I had done lots of MFT/MVT optimization work also). minor refs
to presentation at fall '68 SHARE in Atlantic City (both lots of work modifying
MFT14 for standalone production work, lots of work modifying CP/67,
and lots of work modifying MFT14 for running under cp/67):
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
lots of FLIH and i/o initiation tended to be concerned with non-standard, anomalous,
fault handling ... but it was still possible to build a bullet-proof
infrastructure that was still very optimized (work done for the disk
engineering lab):
https://www.garlic.com/~lynn/subtopic.html#disk
as mentioned in another posting to this thread ... various other machine architectures provided for vectored interrupts ... which would shave a couple instructions off the FLIH. The big savings in some real-time architectures was that they had rings & vectored interrupts ... and an interrupt into a "better" ring suspended execution of the "poorer" ring (each ring had its own regs, etc ... so the FLIH didn't have to save & restore). This tended to be a special case for a small number of things.
On cache machines, asynchronous interrupts can imply task switching and
cache trashing. One of the little special twists that I did for VM/370
was some dynamic adaptive code that under heavy interrupt load would
run user processes disabled for I/O interrupts ... but with a managed
timer interrupt. I/O interrupts tended to be slightly delayed ... and
then batch drained with an interrupt window in the kernel. Properly
tuned (on heavily loaded 370/168) it actually improved interrupt
processing (since tended to have very good cache hits on the kernel
interrupt code since it was being repeatedly executed) and application
execution (since it didn't have a lot of asynchronous interrupts
trashing the cache).
https://www.garlic.com/~lynn/2002l.html#25 Do any architectures use instruction count instead of timer
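A much-simplified, hypothetical sketch of that dynamic adaptive scheme (ordinary Python standing in for kernel logic; the threshold and counts are invented):

```python
# Under heavy interrupt load, run the user process disabled for I/O
# interrupts and batch-drain the queued interrupts from a kernel window,
# so the handler code stays warm in the cache and the application isn't
# constantly interrupted.

import collections

pending_io = collections.deque()        # interrupts held while running disabled
INTERRUPT_RATE_THRESHOLD = 100          # per timer tick; invented number

def run_user_slice(heavy_load):
    # under heavy load, leave I/O interrupts masked for the whole slice
    return "ran disabled" if heavy_load else "ran enabled"

def timer_tick(recent_interrupt_rate):
    heavy = recent_interrupt_rate > INTERRUPT_RATE_THRESHOLD
    state = run_user_slice(heavy)
    # batch-drain everything that queued up during the slice: the interrupt
    # handling code executes repeatedly back-to-back, getting good cache hits
    drained = 0
    while pending_io:
        pending_io.popleft()
        drained += 1
    return state, drained

pending_io.extend(range(250))
print(timer_tick(recent_interrupt_rate=250))   # ('ran disabled', 250)
```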
A large part of the I/O features for XA ... was to offload a lot more of the kernel I/O processing into dedicated asynchronous processors (over and above what was already provided by the channel architecture). Part of this was justified by the significantly painful, long MVS pathlengths (and in some sense ... it was easier to rewrite from scratch in a new microprocessor than try and clean up existing spaghetti code, although I had demonstrated it was possible with the work supporting the disk engineering lab). One of the ancillary issues of outboarding more of the I/O function was allowing asynchronous queuing of new requests and dequeuing of completed requests ... with dedicated processors being able to handle things like device redrive ... processing the completion of the current requests and immediately redriving the device with the next queued request ... w/o impacting the cache locality of the main processor.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Wed, 23 Oct 2002 15:52:28 GMT"Tony T. Warnock" writes:
counterfeit/invalid cards (either valid or fictitious persona)
valid cards for somebody else's (valid) persona (identity theft)
valid cards for fictitious persona
also ... who is the authority that decides what are valid persona and what are fictitious persona?
lots of privacy stuff going on (like GLB) ... big issues are institutional "mis-use" of privacy information ... as well as criminal "mis-use" of privacy information (identity theft).
one of the (effective) claims regarding x.509 "identity" certificates is that they can represent major privacy violation issues ... and therefore there has been some past transition to relying-party-only certificates (aka effectively authentication-only certificates). Note however, that traditional certificates are like letters of credit from one institution to another institution. In general, writing a letter of credit for somebody to yourself can frequently be shown to be redundant and superfluous (aka dear me, please accept my assurance that the holder of this document is good for $10,000, signed me).
that then strays into the semantics of identification and authentication. rather than looking at cards as identification ... embodying a persona ... they are part of some authentication schema ... aka 3-factor authentication
something you have (aka hardware token or card)
something you know (password or PIN)
something you are (biometrics)
now within the structure of 3-factor authentication semantics ... in conjunction with cards ... something you know and something you are can represent either "secrets" or shared-secrets. A shared-secret means the information is registered someplace else (like mother's maiden name) and somebody is responsible for presenting it. A "secret" means the information is registered in the token and correct presentation affects the operation of the token. With a properly designed authentication token, a non-shared-secret paradigm tends to be at least privacy agnostic.
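a toy python sketch of the shared-secret vs. token-local "secret" distinction (hypothetical names; a symmetric hmac key stands in for the token's private key just to keep it short ... the point is only that in the token case the PIN never leaves the token and nothing personal is registered anywhere):

    import hmac, hashlib, os

    # shared-secret: the verifier has its own registered copy and compares
    registered = {"someuser": "mothers-maiden-name"}
    def shared_secret_check(user, presented):
        return registered.get(user) == presented

    # token-local secret: the correct PIN merely enables the token's operation;
    # the verifier only ever sees a challenge/response, never the PIN
    class Token:
        def __init__(self, pin):
            self._pin = pin
            self._key = os.urandom(32)          # never leaves the token
        def verification_key(self):
            return self._key                     # registered once with the verifier
        def respond(self, pin, challenge):
            if pin != self._pin:
                return None                      # wrong PIN ... token won't operate
            return hmac.new(self._key, challenge, hashlib.sha256).digest()

    tok = Token("1234")
    vkey = tok.verification_key()
    challenge = os.urandom(16)
    resp = tok.respond("1234", challenge)
    print("shared-secret check:", shared_secret_check("someuser", "mothers-maiden-name"))
    print("token check:", resp == hmac.new(vkey, challenge, hashlib.sha256).digest())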
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 23 Oct 2002 18:34:57 GMTAnne & Lynn Wheeler writes:
anyway ... i did a (corporate classified) paper on what was needed and the changes & restructuring. then there was a letter from somebody in POK RAS management ... who effectively wanted me fired; not for fixing everything ... but because the document could be interpreted (if you so desired) as a list of things that had needed fixing (which then could be construed as reflecting on the RAS group, especially if you were totally focused on image building).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Wed, 23 Oct 2002 23:19:15 GMTJeff Raben writes:
the 2702 had other problems ... which resulted in a project that i
worked on as an undergraduate that reverse engineered the channel
interface and built our own controller ... using an interdata/3 as
a base microprocessor. supposedly this originated the pcm controller
business (something that CPD wasn't too happy with me for).
https://www.garlic.com/~lynn/submain.html#360pcm
as for the extraneous reference to TSS ... some other extraneous references. the discussion was specifically about a change to cp/67 resulting in the meter not ticking ... especially off shift with possible low activity ... and enhancing the ability of some service bureaus to offer cost effective cp/67 time sharing service.
at approximately the time the prepare command change was done in cp/67 ... i believe the cp/67 & cms ibm group was somewhere around 12 people. I was told that at about the same time the tss/360 ibm group numbered around 1200 people (two orders of magnitude more). All sorts of discussions could be had about whether it was better to have had just 12 people or 1200 people. there are also discussions about whether the subsequent tss/370 (on 370s with virtual memory, rather than 360/67) activities may possibly have done better with only a 20 person group (rather than the original 1200).
While there were a number of commercial cp/67 (and later vm/370) time sharing service bureaus ... i'm not aware of there having been any commercial tss/360 time sharing service bureaus (as well as a significantly larger number of 360/67s running cp/67 than tss/360).
as an aside ... almost 20 years later ... i tried to do another CPD
controller replacement/clone using a series/1 (peachtree) migrating to
RIOS ... but was somewhat less successful than the original effort
(that is orthogonal to an earlier attempt to have the original 3705 be
based on peachtree (s/1) rather than the uc.5 microprocessor).
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#155 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#165 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#239 IBM UC info
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: why does wait state exist? Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Thu, 24 Oct 2002 00:09:59 GMT
another somewhat extraneous ... interdata was bought up by perkin/elmer which continued to offer the product. 5-6 years ago (nearly 30 years after we built the original) i was in a large oltp data center and there was a p/e box handling a significant amount of the load.
i didn't get to examine that box in detail ... but i've talked to some people that were selling p/e boxes into the federal gov. in the early '80s and they said that the channel interface was still a wire-wrap board ... possibly unchanged since we built the original board.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Opera 6.05 resources problem? Newsgroups: opera.general Date: Wed, 23 Oct 2002 23:34:17 GMTpossibly totally unrelated to any other problem ... i normally run with javascript disabled ... unless i absolutely have to turn it on for some website.
i found with the linux firewall ... displaying transmission activity ... that after having turned on javascript in conjunction with visiting a website (there doesn't seem to be any correlation with any specific site) ... a constant, relatively low-level arriving packet rate starts, even when absolutely nothing (that i know of) is going on. turning off javascript has no effect. killing/dropping the link and then restarting the link does successfully interrupt it tho.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VR vs. Portable Computing Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science Date: Thu, 24 Oct 2002 02:30:22 GMTKeith R. Williams writes:
also note that GML was done at the science center ... which begot SGML and then HTML ... possibly in large part because CERN was a vm/cms installation ... and had been running it since the (infamous) cern mvs/vm bake-off. i believe its sister location, slac (also a large vm/cms installation) claims to have the web site that has been around the longest (i don't know if they are claiming the original web site, but i believe they are at least claiming the earliest one still around).
during much of the 70s & 80s slac hosted the bay area vm user group (baybunch) meetings ... there were some at the 30th anniv. party for vm/370 at share 99 in san fran (this past aug).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VR vs. Portable Computing Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science Date: Thu, 24 Oct 2002 02:56:34 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VR vs. Portable Computing Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science Date: Thu, 24 Oct 2002 06:53:42 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VR vs. Portable Computing Newsgroups: alt.folklore.computers,comp.arch,comp.society.futures,rec.arts.sf.science,soc.history.science Date: Thu, 24 Oct 2002 07:12:15 GMTKeith R. Williams writes:
1597 -
Address Allocation for Private Internets, DeGroot G., Karrenberg D.,
Moskowitz R., Rekhter Y., 1994/03/17 (8pp) (.txt=17430) (Obsoleted by
1918)
1627 -
Network 10 Considered Harmful (Some Practices Shouldn't be Codified),
Crocker D., Fair E., Kessler T., Lear E., 1994/07/01 (8pp)
(.txt=18823) (Obsoleted by 1918)
1917
An Appeal to the Internet Community to Return Unused IP Networks
(Prefixes) to the IANA, Nesser P., 1996/02/29 (10pp) (.txt=23623)
(BCP-4)
1918
Address Allocation for Private Internets, DeGroot G., Karrenberg D.,
Lear E., Moskowitz R., Rekhter Y., 1996/02/29 (9pp) (.txt=22271)
(BCP-5) (Obsoletes 1597, 1627)
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Date: Thu, 24 Oct 2002 02:08:27 -0600 Newsgroups: bit.listserv.vmesa-l Subject: CMS updatelong, long ago ... when some guys came out to install CP/67/CMS at the university ... the method was basically
update fn assemble a fn update a
where the fn update file had ./ i number, ./ r number <number>, ./ d number <number>
where the numbers were the sequence numbers in cols 73-80 of the assemble file.
you then could assemble the resulting temporary file from update (actually update could be used against any kind of file as long as it had sequence numbers in 73-80)
periodically the temporary file would be taken and used to replace the permanent assemble file (normally resequencing the assemble file when that was done ... but not always). The convention was that you also needed to manually type the sequence numbers into the update file in cols 73-80 ... appropriately choosing the numbers you typed. with all the updates I was doing ... it really got to be a pain to constantly type in those numbers. So i wrote a little preprocessor routine ... it would read the update file and look for a dollar sign on the ./ control cards. If it found one ... it would take that as an indication to automatically generate the sequence numbers in the cards it output. "$" could have nothing following it ... in which case it did the default ... or it could have an optional starting number and an optional increment following the dollar sign. This was all still one-level update.
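a rough python sketch of what that "$" preprocessing amounted to (nothing like the original code; i don't recall what the real default starting number/increment were, 10/10 used here):

    def dollar_preprocess(lines, default_incr=10):
        out, seq, incr = [], None, default_incr
        for line in lines:
            if line.startswith('./'):
                toks = line.split()
                if '$' in toks:
                    i = toks.index('$')
                    seq  = int(toks[i+1]) if len(toks) > i+1 else 10          # optional start
                    incr = int(toks[i+2]) if len(toks) > i+2 else default_incr
                    line = ' '.join(toks[:i])       # strip the $ spec from the control card
                else:
                    seq = None                       # no $ ... leave following cards alone
                out.append(line)
            elif seq is not None:
                out.append('%-72.72s%8d' % (line, seq))   # stamp sequence number in cols 73-80
                seq += incr
            else:
                out.append(line)
        return out

    update = ['./ i 1200 $ 1210 10',
              '         L     R3,SAVEAREA',
              '         BR    R14']
    print('\n'.join(dollar_preprocess(update)))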
Later in the "L", "H", and "I" time-frame (the distributed development project implementing virtual 370 support in cp/67 running on a real 360/67) ... the multi-level update work was done at cambridge. As mentioned in one of melinda's notes ... I was able to resurrect this original infrastructure and send her a copy.
basically it was all still the plain update command but driven by an exec that iterated once for every update specified in the control file. this multi-level update exec started out looking for files of the form UPDGxxxx where xxxx could be specified in the CNTRL file. For every UPDGxxxx it found, it would run it thru the dollar preprocessor and generate a UPDTxxxx (temporary) file ... which was then applied to the assemble file resulting in a temporary assemble file. Any subsequent UPDG files it found in the specified search order would be run thru the "$" process, generating the UPDT file which was then applied (iteratively) to the resulting assemble file. Finally when it exhausted all UPDG files, it would assemble the resulting assemble file.
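a rough sketch of that iterative apply-one-level-at-a-time structure (python; nothing like the real exec or the update command, no error handling, and it assumes every card already carries its sequence number in cols 73-80):

    def seqno(card):
        return int(card[72:80])                      # sequence number from cols 73-80

    def apply_update(source, update):
        out, i, k = [], 0, 0
        while k < len(update):
            ctl = update[k]; k += 1
            if not ctl.startswith('./'):
                continue
            toks = ctl.split()
            op, first = toks[1].lower(), int(toks[2])
            last = int(toks[3]) if op in ('r', 'd') and len(toks) > 3 else first
            while i < len(source) and seqno(source[i]) < first:
                out.append(source[i]); i += 1        # copy cards ahead of the target
            if op == 'i':
                if i < len(source):
                    out.append(source[i]); i += 1    # keep the card being inserted after
            else:
                while i < len(source) and seqno(source[i]) <= last:
                    i += 1                           # drop replaced/deleted cards
            while k < len(update) and not update[k].startswith('./'):
                if op in ('i', 'r'):
                    out.append(update[k])            # new text cards
                k += 1
        out.extend(source[i:])
        return out

    def card(text, n):
        return '%-72.72s%8d' % (text, n)

    src  = [card('         L   R1,A', 10), card('         A   R1,B', 20), card('         ST  R1,C', 30)]
    upd1 = ['./ r 20', card('         AL  R1,B', 21)]
    upd2 = ['./ i 30', card('         BR  R14', 31)]
    result = src
    for upd in (upd1, upd2):                         # CNTRL-file order, lowest level first
        result = apply_update(result, upd)
    print('\n'.join(result))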
Then there was some really fancy stuff done by an MIT co-op that attempted to merge multiple parallel update threads and resolve conflicts between the parallel development threads. That fairly sophisticated work eventually fell by the wayside. In the meantime, the development group (which had split off from the scientific center by this time) had a need for PTF/APAR files. They took the CNTRL/UPDG structure developed by the science center and added "aux" file support to the CNTRL file ... i.e. the update exec, instead of looking for an update file of the form UPDTxxxx ... would look for an "aux" file that contained lists of update files ... giving the full filetype name of each file to be applied.
Eventually, the exec code for supporting the control file loop and the "$" sequence number processing was incorporated into the standard update routine, aka update would read the assemble file into memory and iteratively execute the control file loop applying all update files it found ... before writing out the resulting updated assemble file. Even later, support was extended in the editor ... so that it would 1) do the iterative CNTRL file update operation prior to the editing session ... and 2) on file ... instead of writing out the complete file ... generate the appropriate update file reflecting all edit changes (prior to that, the update file had to be explicitly edited ... including all the ./ control commands ... instead of having the editor automagically generate them for you).
The other part was after the assemble process .... the resulting TEXT/binary file was appropriately renamed to reflect the highest level update that had been applied and "comment" cards were added to the front of the TEXT file ... one comment line for each file involved in the process ... with full name, date, time, etc ... the original assemble file, all the update files applied and all the maclib files involved in the assembly. And then there was the VMFLOAD process which took the CNTRL file and looked for TEXT files in the appropriate search order for inclusion in the runtime image. And of course when the loader read the runtime image and generated the load map ... it output, as part of the loadmap process, each one of the comment cards that it ran across. You could somewhat reconstruct what pieces were part of a CP kernel routine by all the comment cards in the load map.
So i was in madrid sometime in the mid-80s. This was to visit the madrid science center ... they had a project that was imaging all sorts of old records ... preparing stuff that would be a comprehensive cdrom getting ready for the 500th anniversary of 1492. So while i'm there, I visit a local theater. They have this somewhat avant garde short done at the university that runs about 20 minutes. A big part of the short was apparently a hotel room or apartment that had a wall of possibly two dozen TVs ... they all appeared to be scrolling some text at 1200 baud ... the same text on all TVs (it looked like all the TVs were slaved to the same computer output). The darndest thing was that I recognized it as a CP kernel load map that was being scrolled ... and what is even worse, I recognized the release and PLC level from the APAR/PTF comments.
In any case, it is nice to have all the individual updates around for some kinds of audit processes ... compared to effectively the "downdates" of RCS & CVS ... although the rest of CVS support is a lot more comprehensive.
--
Anne & Lynn Wheeler lynn@garlic.com, https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Thu, 24 Oct 2002 17:07:02 GMTChristopher Browne writes:
the certificate then contains something that can be used to validate the entity that the information is about.
everybody carries with them the public key of the trusted agency ... so the validity of the certificate (and its assertions) can be validated.
in the traditional x.509 identity digital certificate ... the entity validation information is a public key ... the entity is asked to digitally sign some arbitrary information (aka like a challenge/response) and then the public key in the certificate is used to check the response. Assuming that the trusted agency's public key validates the certificate and that the public key in the certificate validates the challenge/response ... then it is assumed that the attributes in the certificate correspond to the entity signing the challenge/response.
in variations on this ... rather than having the entity's public key "bound" in the certificate ... there is biometric information or a digitized picture of the person ... or some other way of validating that the entity and the certificate are bound together.
The driver's license analogy was frequently used for justifying huge x.509 identity digital certificate business cases.
Note that the digital signature on the certificate/credential is that of the authoritative agency that is trusted for the information of interest. The public key of the trusted/authoritative agency is then used to validate that digital signature. Any public key in the certificate/credential is then used to validate some digital signature generated by the entity of interest. This is somewhat the hierarchical trust model of PKI ... you have to first validate the correctness of the credential/certificate and then validate the binding to the entity that the credential/certificate information is about.
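a minimal sketch of that two-step check (python, assumes the third-party "cryptography" package; ed25519 and a json blob used purely to keep it short ... this is the trust structure only, none of the actual x.509 encoding):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    import json, os

    ca_key = Ed25519PrivateKey.generate()             # the trusted/authoritative agency
    ca_pub = ca_key.public_key()                      # everybody carries this around

    subj_key = Ed25519PrivateKey.generate()           # the entity the credential is about
    subj_pub_raw = subj_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    cert_body = json.dumps({"subject": "some entity",
                            "public_key": subj_pub_raw.hex()}).encode()
    cert_sig = ca_key.sign(cert_body)                 # the agency "issues" the credential

    # step 1: validate the credential itself with the agency's public key
    ca_pub.verify(cert_sig, cert_body)                # raises InvalidSignature if bogus

    # step 2: validate the binding to the entity via challenge/response,
    # using the public key carried inside the now-validated credential
    challenge = os.urandom(16)
    response = subj_key.sign(challenge)
    bound_key = Ed25519PublicKey.from_public_bytes(
        bytes.fromhex(json.loads(cert_body)["public_key"]))
    bound_key.verify(response, challenge)
    print("credential and challenge/response both verify")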
Note that this is all a paradigm developed for the offline world before police had radios, portable computers, and checked real-time databases. Effectively the suggested solution tried to make up for the deficiency of the offline world by creating read-only stale copies of the real-time authoritative information. This was the offline, hardcopy model translated to the offline, electronic world.
However, to some extent the world has moved on ... typically the online connectivity is such that for anything of real importance or value ... if it is electronic ... it is possible to directly query the authoritative agency for the real-time information .... instead of relying on stale, static copies of the information. The driver's license information (and almost all other information) works as offline, stale, static hardcopy ... when there isn't access to electronic and online. However, it is becoming such that if there is a reason for the electronic (rather than the hardcopy) ... and the issue involves anything of importance or value ... then it is electronic&online ... and not electronic&offline. In the driver's license case ... the officer, except for cursory checks ... can check the picture and the number ... and then call in the number for a real-time, online transaction.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Home mainframes Newsgroups: alt.folklore.computers Date: Thu, 24 Oct 2002 20:14:16 GMTjmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
However, if you have a requirement for nearly any sort of automated delivery service that needs to run repeatedly day-in, day-out ... with little or no hands-on required ... things like payroll, check clearing, financial transactions, etc., it is a very dependable workhorse. In that sense it is more like some of the big 18 wheelers on the highway ... people looking for something simple like a small two-seater sports car are going to find a big 18 wheeler with a couple trailers somewhat unsuited.
One of the intersection points in the current environment ... is that a large number of web services have requirements for 7x24, reliable, totally automated operation (even dark room). Lots of users around the world don't care why either the ATM machine is down or their favorite web server is down ... they just want it up and running all the time.
a couple years ago ... one of the large financial services companies claimed a major reason for 100 percent uptime over the previous six years was automated operations in MVS ... aka people effectively were almost never allowed to touch the machine ... because people make mistakes.
Hardware had gotten super reliable ... software was getting super reliable ... but people weren't getting a whole lot better.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help! Good protocol for national ID card? Newsgroups: sci.crypt Date: Thu, 24 Oct 2002 21:27:07 GMTChristopher Browne writes:
then from another facet, it is possible to divide the business and institutional solution space into four quadrants, offline/online and electronic/hardcopy (or electronic/non-electronic):

          offline &       online &
          hardcopy        hardcopy

          offline &       online &
          electronic      electronic

the world prior to the 70s was significantly in the upper left quadrant, offline&hardcopy. During the '70s there started to be a transition to online (at least in the US), either directly to online&electronic (for money/value type transactions with POS terminals) or with a stop-over in online&hardcopy before proceeding to online&electronic (police car 2-way radios, personal 2-way radios, laptops with online connectivity, etc).
basically the business and institutional infrastructures were moving from the offline&hardcopy quadrant to the "online" column ... either directly to the online&electronic quadrant or possibly passing thru the online&hardcopy quadrant.
In the '80s there started to appear in the literature description of solutions that fit in the offline&electronic quadrant ... it wasn't a domain space that any of the business & institutional infrastructures were migrating to but it had potential market niches.
Possibly some market niches driving the literature for solutions in the offline&electronic quadrant were 1) the (no-value) offline email (business process that didn't justify the expense of online connectivity; dial-up, efficiently exchange email, and hangup ... and do the actual processing offline) and 2) potentially various places around the world that had poor, very expensive, little or no online connectivity.
Many of these solutions somewhat circled around the digitally signed credential ... aka information from some trusted database copied into a read-only, distributable copy. This somewhat culminated in the x.509 identity digital certificate standard.
By the early '90s when the x.509 standard was starting to settle out, the potential market niches in the offline&electronic quadrant were starting to shrink rapidly. Internet ISPs were starting to bring the possibility of nearly online, all-the-time connectivity. In many parts of the rest of the world, a combination of deregulation of PTTs and the internet was starting to also bring the possibility of nearly online, all-the-time connectivity. Finally, in parts of the world where the last-mile physical infrastructure had not happened, the whole wireless revolution was turning into a wild-fire. Some places that had not heavily invested in legacy last-mile physical infrastructure had a high uptake of wireless solutions.
It was almost as if by the time digitally signed credentials (copies of information from trusted databases by trusted agencies) started to come into their own for the offline&electronic solution quadrant ... the market was evaporating because the possibility of nearly online, all-the-time connectivity was spreading like wildfire.
In some sense that puts x.509 digitally signed identity certificates in somewhat the same category as OSI&GOSIP. There were huge amounts of stuff written about it, and it is still taught in various academic circles, but it is unrelated to real-life business and institutional requirements, directions, and real live deployments.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VR vs. Portable Computing Newsgroups: alt.folklore.computers Date: Fri, 25 Oct 2002 20:29:46 GMTDavid Powell writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: public-key cryptography impossible? Newsgroups: sci.crypt Date: Fri, 25 Oct 2002 20:19:40 GMT"Ben Mord" writes:
there is always the joke about making extremely secure systems by eliminating all connections and not letting anybody touch and/or use the systems ... the system would be perfectly fine if there weren't any users.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: RFC 2647 terms added to merged security glossary Newsgroups: comp.security.firewalls,comp.security.misc Date: Sun, 27 Oct 2002 23:17:35 GMTI've updated my merged security glossary & taxonomy with 26 terms from RFC 2647 (Benchmarking Terminology for Firewall Performance):
the terms allowed traffic, application proxy, authentication, bit forwarding rate, circuit proxy, concurrent connections, connection, connection establishment, connection establishment time, connection maintenance, connection overhead, connection teardown, connection teardown time, data source, demilitarized zone, firewall, goodput, homed, illegal traffic, logging, network address translation, packet filtering, policy, protected network, proxy, rejected traffic
notes:
Terms merged from: AFSEC, AJP, CC1, CC2, FCv1, FIPS140, IATF V3,
IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA
Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, TCSEC, TDI,
TNI, and misc.
Updated 20020928 with more ISO SC27 definitions. Updated 20020929
with glossary for online security study (www.srvbooks.com). Updated
20021020 with glossary from NSTISSC. Updated 20021027 with RFC2647.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Mon, 28 Oct 2002 14:50:33 GMTjmfbahciv writes:
personal firewalls may do something similar ... i get sporadic requests to re-validate permissions for programs after running disk optimization (and then have duplicate entries for programs in the permissions display).
then there is old int13 bios/software with the 1024-cylinder problem
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Mon, 28 Oct 2002 22:42:32 GMTPete Fenelon writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Tue, 29 Oct 2002 14:26:48 GMTPete Fenelon writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Tweaking old computers? Newsgroups: alt.folklore.computers Date: Wed, 30 Oct 2002 18:28:54 GMTjmfbahciv writes:
the meter ran whenever the cpu executed instructions and/or there was "active" I/O ... and the meter "tic" resolution was 400 milliseconds (aka any activity, no matter how small, caused at least a 400 ms tic).
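my reading of that 400ms behavior, as a toy python illustration (made-up workload, not from any hardware manual):

    TIC_MS = 400   # meter resolution

    def metered_ms(busy_intervals_ms):
        # busy_intervals_ms: (start, end) milliseconds with cpu and/or "active" i/o;
        # any activity inside a 400ms period charges the whole period
        tics = set()
        for start, end in busy_intervals_ms:
            for t in range(start // TIC_MS, max(end - 1, start) // TIC_MS + 1):
                tics.add(t)
        return len(tics) * TIC_MS

    # a terminal poll burning 1ms of cpu every second still charges a full
    # 400ms tic each second ... roughly 40% of the meter for ~0.1% utilization
    busy = [(s * 1000, s * 1000 + 1) for s in range(3600)]
    print("metered %.0f sec out of 3600 sec wall clock" % (metered_ms(busy) / 1000.0))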
ref:
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#31 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#33 why does wait state exist?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: EXCP Newsgroups: bit.listserv.ibm-main Date: Wed, 30 Oct 2002 20:15:16 GMTbblack@FDRINNOVATION.COM (Bruce Black) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: History of HEX and ASCII Newsgroups: comp.lang.asm370 Date: Thu, 31 Oct 2002 05:12:41 GMT"glen herrmannsfeldt" writes:
hex is somewhat orthogonal to ascii/ebcdic. ebcdic is an 8bit extension of 6bit bcd. the 6bit bcd machines had things like six six-bit characters in a 36bit word. ebcdic machines had four eight-bit characters in a 32bit word.
ascii came from things like TTY/ascii terminals.
as an undergraduate when i worked on the original PCM controller ... initially for the TTY ascii environment ... one of the early things that I found out was that ibm terminal controllers had a convention of placing the leading bit in the low-order bit position of a byte ... and ibm terminals worked correspondingly. TTY/ASCII terminals used the convention of the leading bit in the high bit position of the byte (not the low bit position).
so one of the peculiarities of ascii terminal support in an ebcdic mainframe wasn't just that the bit pattern definitions for characters were different ... but when dealing with ascii/tty terminals ... the bits arrived in the storage of the mainframe bit-reversed. The terminal translate tables for tty/ascii terminals were actually bit-reversed ascii; incoming bits were bit-reversed within a byte ... and so the ascii->ebcdic translate tables (like btam's) were appropriately confused. Outgoing ebcdic->ascii translated to bit-reversed ascii ... relying on the IBM controller to turn the bits around before transmitting on the line to the terminal.
this starts to get more confusing later on with ibm/pcs (ascii machines) directly attached to the mainframe and not going thru a traditional terminal controller. now you have two different sets of translate tables ... one for bit-reversed ascii (going thru an ibm terminal controller) and one for direct connected transmission.
all of this is independent of the issue that there are some characters in ascii that have no corresponding character defined in the ebcdic character set ... so a particular (bit-reversed) ascii character gets mapped to some arbitrary ebcdic pattern. similarly there are some ebcdic characters that don't exist in ascii.
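a toy illustration of the bit-reversal business (python; a couple of hand-picked codepoints, nothing like the actual btam translate tables):

    def bit_reverse(byte):
        out = 0
        for i in range(8):
            out = (out << 1) | ((byte >> i) & 1)
        return out

    ASCII_TO_EBCDIC = {ord('A'): 0xC1, ord('B'): 0xC2, ord(' '): 0x40}

    # the table actually used against line input is indexed by *reversed* ascii,
    # since that's how the bits land in mainframe storage
    REVERSED_ASCII_TO_EBCDIC = {bit_reverse(a): e for a, e in ASCII_TO_EBCDIC.items()}

    def from_tty(stored_byte):
        return REVERSED_ASCII_TO_EBCDIC[stored_byte]

    line_byte = bit_reverse(ord('A'))     # what an ascii 'A' looks like once stored
    print(hex(line_byte), '->', hex(from_tty(line_byte)))   # 0x82 -> 0xc1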
misc. pcm controller refs:
https://www.garlic.com/~lynn/submain.html#360pcm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Computing on Demand ... was cpu metering Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Thu, 31 Oct 2002 20:56:30 GMTcpu metering refs:
IBM's Plan: Computing On Demand
Washington Post Staff Writer Thursday, October 31, 2002; Page E01
International Business Machines Corp. chief executive Samuel
J. Palmisano said yesterday that his company is investing $10 billion
in a business strategy aimed at getting corporate customers to pay for
their computing power in much the way they now buy power from
utilities: as they use it.
Palmisano described his vision of "on-demand" computing in a speech to
customers and analysts in New York. It was his first address since the
company announced that he would gain the title of chairman Jan. 1.
IBM, he said, hoped to fashion a computing grid that would allow
services to be shifted from company to company as they are needed. For
instance, a car company might need the computing power of a
supercomputer for a short period as it designs a new model but then
have little need for that added horsepower once production
begins. Other services could be delivered in much the same way,
assuming IBM can pull together the networks, computers and software
needed to manage and automate the chore. Palmisano said the industry
would first need to embrace greater standardization.
Palmisano said the company is pursuing its $10 billion strategy
through acquisitions, marketing and research, much of which has taken
place in the past year.
"No doubt about it, it is a bold bet. Is it a risky bet? I don't think
so," he said.
Analysts regarded the speech as Palmisano's road map for IBM's
future. "We view this as Palmisano's coming-out party," said Thomas
Bittman, an analyst at Gartner Research. "The industry will be
measuring IBM against this as a benchmark for years."
The concepts of grid computing are not entirely new or unique to
IBM. Hewlett-Packard Co. is pursuing similar ideas, for example.
"A lot of the threads we've heard before," said David Schatsky,
research director at Jupiter Research. "But it does represent a new
coalescence of their vision."
Palmisano is to succeed Chairman Louis V. Gerstner Jr., and analysts
are already picking up on differences between the men.
"Gerstner always talked to the CEOs," said Bittman. "Today, Palmisano
was focusing on the [chief information officers] as the executives to
drive change. He's able to do that because he's more of a techie."
Palmisano joined the company in 1973 as a sales representative in
Baltimore and has been the driving force behind many of IBM's
announcements and decisions in recent years, such as the company's
move to adopt and promote the open-source operating system Linux.
Since becoming chief executive in March, Palmisano helped oversee the
acquisition of PricewaterhouseCoopers Consulting -- a purchase that
has shored up IBM's dominance in computer consulting
services. Palmisano also arranged the pending sale of IBM's hard disk
drive business and outsourced its desktop PC manufacturing business.
During his address, Palmisano said he saw signs that the global
economy may have hit bottom and is flattening out. But he also said
the tech sector would be slow to rebound because of the enormous
growth and overinvestment of the late 1990s.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SHARE MVT Project anniversary Newsgroups: alt.folklore.computers Date: Fri, 01 Nov 2002 16:36:39 GMTjmaynard@thebrain.conmicro.cx (Jay Maynard) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SHARE MVT Project anniversary Newsgroups: alt.folklore.computers Date: Mon, 04 Nov 2002 21:31:50 GMT"David Wade" writes:
MTS was on 360/67 ... but UTS (originally code named gold for A.U.) was much later. there was also stuff like wilbur, music, cps, apl\360, etc .. all interactive services platformed on os/360.
at the university, on a MVT18 base ... I had removed the 2780 device support code from HASP and replaced it with a kind of CRJE supporting 2741 and tty terminals. I copied most of the syntax from CMS edit (circa '69).
as an aside ... the lore is that MTS was originally built on LLMPS. LL also had a 360/67 and ran cp/67. Also it was some of the LL people that left and formed one of the early cp/67 time-share service bureaus.
Late in the cp/67 cycle ... before converting over to vm/370 ... cambridge was able to run 70-80 "mixed-mode" users (interactive, program development, compile, test, execute, apl, various kinds of apl analysis packages, batch, etc) on a 768kbyte 360/67 (with three 4mbyte 2301 paging "drums" and 45 2314 disk drives) with subsecond response for interactive use and nearly 100 percent cpu utilization during prime shift.
misc ...
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: ibm time machine in new york times? Newsgroups: alt.folklore.computers,bit.listserv.ibm-main Date: Mon, 04 Nov 2002 22:06:55 GMThaven't seen much comment on the ibm time machine advertisement in the new york times
one other reference to the use of a time machine ... and I once even suggested that some policy or another could only make sense if they had a time machine in the basement of bldg. 12 in the main san jose plant site.
[Hume89] A.G. Hume, "The Use of a Time Machine to Control Software," in Proceedings of the USENIX Software Management Workshop, Usenix, New Orleans, Louisiana, April 1989, pp. 119-124.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: REVIEW: "Internet Security Dictionary", Vir V. Phoha Newsgroups: misc.books.technical,comp.security.misc,alt.computer.security Date: Tue, 05 Nov 2002 17:27:57 GMTas an aside I've got nearly 2600 terms with nearly 4000 definitions in the merged security taxonomy & glossary at
i don't know about printing ... although somebody from SC27 sent me a
converted PDF file from my HTML (and PDF reader does pretty good job
of following the converted HREF links).
Security
Terms merged from: AFSEC, AJP, CC1, CC2, FCv1, FIPS140, IATF V3,
IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA
Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, TCSEC, TDI,
TNI, and misc. Updated 20020928 with more ISO SC27 definitions.
Updated 20020929 with glossary for online security study
(www.srvbooks.com). Updated 20021020 with glossary from
NSTISSC. Updated 20021027 with RFC2647.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SHARE MVT Project anniversary Newsgroups: alt.folklore.computers Date: Wed, 06 Nov 2002 18:16:36 GMT"David Wade" writes:
of course neither their uniprocessor nor dual-processor thruput was as good as cp/67 single processor thruput (unless you are talking about a carefully tuned, processor intensive benchmark comparing tss/360 on a dual processor machine against cp/67 on a single processor machine).
the actual situation was that the tss/360 kernel was fairly bloated and was quite memory constrained on a 1mbyte single processor system. Going to a 2mbyte dual processor system ... with a single copy of the kernel, increased available memory for applications on the order of four times. This was the primary reason for the 3.5 times increase in the benchmark thruput ... but wasn't ever referenced.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM S/370-168, 195, and 3033 Newsgroups: alt.folklore.computers Date: Thu, 07 Nov 2002 05:24:53 GMTlwinson@bbs.cpcn.com (lwin) writes:
The 370/165-3 was a 2.5 to 3 mip machine (depending on cache hit rate, workload, etc).
The 370/158 was a processor engine that was "time-shared" between performing the I/O channel functions and 370 instruction set.
For the 303x, there was a channel director for I/O ... which was effectively the 370/158 processor engine with just the channel I/O microcode (and no 370 microcode).
A 3031 was a 370/158 processor engine running just the 370 instruction set microcode and no channel i/o microcode ... coupled to a channel director (which was a 370/158 processor engine running just the channel i/o microcode and no 370 instruction set microcode).
A 3032 was a 370/168-3 redone to use channel director ... instead of the 168 outboard channels.
A 3033 started out being a 370/168-3 remapped to newer technology. The 168 used 4-circuit/chip logic ... and the 3033 had chips with about ten times the circuit density ... and the chips ran about 20 percent faster. The 3033 started out just being a straight wiring remap to the new technology ... which would have given a straight 20 percent boost ... from 3mips to about 3.6mips. Late in the development cycle, there was some rework of critical logic to take advantage of the higher circuit density ... eventually yielding a 50 percent improvement ... aka 3033s were about 4.5mip machines. For operating system and regular data processing type code ... the 3033 was almost as fast as the 370/195 (however, highly optimized code utilizing the 370/195 pipeline would run twice as fast on the 195).
Following the 3033 was the 3081 ... the initial 3081D was a pair of five mip processors. The later 3081K was a pair of seven mip processors. There was a 3084 which was a pair of 3081s in a 4-way configuration. The 3081 & XA architecture were code-named 811.
370/135 turned into 370/138 and then 4331
370/145 turned into 370/148 and then 4341, and then 4381.
3081s had a UC.5 microprocessor for the service processor.
After the 3081 was the 3090 ... which had a pair of 4331s running a highly modified version of VM/370 release 6 for the service processor function.
When I was doing the RFC1044 support for mainframe tcp/ip ... the standard code could just about saturate a 3090 engine getting 44kbytes/sec thruput using the standard adapter (8232). Tuning the RFC1044 support at cray research ... between a 4341-clone and a cray, it would hit nearly 1mbyte/sec using about 20 percent of the (4341) processor.
from
http://ap01.physik.uni-greifswald.de/~ftp/bench/linpack.html
IBM 370/195            2.5
IBM 3081 K (1 proc.)   2.1
IBM 3033               1.7
IBM 3081 D             1.7
IBM 4381-23            1.3
IBM ES/9000 Model 120  1.2
IBM 370/168 Fast Mult  1.2
IBM 4381 90E           1.2
IBM 4381-13            1.2
IBM 4381-22            .97
IBM 4381 MG2           .96
IBM 4381-12            .95
IBM-486 33MHz          .94
IBM 9370-90            .78
IBM 370/165 Fast Mult  .77
IBM 9377-80            .58
IBM 4381-21            .47
IBM 4381 MG1           .46
IBM 9370-60            .40
IBM 4381-11            .39
IBM 9373-30            .36
IBM 4361 MG5           .30
IBM 370/158            .23
IBM 4341 MG10          .19
IBM 9370-40            .18
IBM PS/2-70 (20 MHz)   .15
IBM 9370-20            .14
IBM PS/2-70 (16 MHz)   .12
IBM 4331 MG2           .038

misc dates from some old list
CDC6600         63-08 64-09    LARGE SCIENTIFIC PROCESSOR
IBMS/360-67     65-08 66-06 10 MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBMPL/I.LANG.   66-?? 6????    MAJOR NEW LANGUAGE (IBM)
IBMS/360-91     66-01 67-11 22 VERY LARGE CPU; PIPELINED
IBMPRICE        67-?? 67???    PRICE INCREASE???
IBMOS/360       67-?? 67-12    MVT - ADVANCED MULTIPROGRAMMED OS
IBMTSS          67??? ??-??    32-BIT VS SCP-MOD 67; COMMERCIAL FAILURE
1Kbit/chip.RAM  68             First commercial semicon memory chip
IBMCP/67        68+?? 68+??    MULTIPLE VIRTUAL MACHINES SCP-MOD 67
IBMSW.UNBUNDLE  69-06 70-01 07 IBM SOFTWARE, SE SERVICES SEP. PRICED
IBMS/360-195    69-08 71-03 20 VERY LARGE CPU; FEW SOLD; SCIENTIFIC
IBMS/370ARCH.   70-06 71-02 08 EXTENDED (REL. MINOR) VERSION OF S/360
IBM3330-1       70-06 71-08 14 DISK: 200MB/BOX, $392/MB
IBMS/370-155    70-06 71-01 08 LARGE S/370
IBMS/370-165    70-06 71-04 10 VERY LARGE S/370
IBMS/370-145    70-09 71-08 11 MEDIUM S/370 - BIPOLAR MEMORY - VS READY
AMHAMDAHL       70-10          AMDAHL CORP. STARTS BUSINESS
Intel,Hoff      71             Invention of microprocessor
IBMS/370-135    71-03 72-05 14 INTERMED. S/370 CPU
IBMS/360-22     71-04 71-07 03 SMALL S/360 CPU
IBMLEASE        71-05 71 06 01 FixTERM PLAN;AVE. -16% FOR 1,2 YR LEASE
IBMPRICE        71-07 71+??    +8% ON SOME CPUS;1.5% WTD AVE. ALL CPU
IBMS/370-195    71-07 73-05 22 V. LARGE S/370 VERS. OF 360-195, FEW SOLD
IBMVM.ASSIST    72+?? 7?-??    MICROCODE ASSIST FOR VM/370
IBMMVS-JES3     72+?? 75-10    LOOSE-COUPLED MP (ASP-LIKE)
IBMMVS-JES2     72-?? 72-08    JOB-ENTRY SUBSYSTEM 2 (HASP-LIKE)
IBMVSAM         72+?? 7?-??    NEW RANDOM ACCESS METHOD
IBM3705         72-03 72-06    COMMS CNTLR: 352 LINES; 56KB/SEC
IBMS/370.VS     72-08 73-08 12 VIRTUAL STORAGE ARCHITECTURE FOR S/370
IBM135-3        72-08 73-08 12 INTERMED. S/370 CPU
IBM145-3        72-08 73-08 12 INTERMED. S/370 CPU
IBM158          72-08 73-04 08 LARGE S/370, VIRTUAL MEMORY
IBM168          72-08 73-08 12 VERY LARGE S/370 CPU, VIRTUAL MEMORY
IBMOS/VS1       72-08 73-??    VIRTUAL STORAGE VERSION OF OS/MFT
IBMOS/VS2(SVS)  72-08 72+??    VIRTUAL STORAGE VERSION OF OS/MVT
IBMOS/VS2(MVS)  72-08 74-08    MULTIPLE VIRTUAL ADDRESS SPACES
IBMVM/370       72-08 72+??    MULTIPLE VIRTUAL MACHINES (LIKE CP/67)
IBM125          72-10 73-04 06 SMALL S/370 CPU
AMHV/6          75-04?75-06 02 FIRST AMDAHL MACHINE, FIRST PCM CPU
AMHV6-2         76-10 77-09 11 (1.05-1.15)V6 WITH 32K BUFFER
AMHV7           77-03 78-09 18 AMDAHL RESP. TO 3033 (1.5-1.7) V6
IBM3033         77-03 78-03 12 VERY LARGE S/370+EF INSTRUCTIONS
IBM3031         77-10 78-03 05 LARGE S/370+EF INSTRUCTIONS
IBM3032         77-10 78-03 05 LARGE S/370+EF INSTRUCTIONS
IBM3033MP       78-03 79-09 18 MULTIPROCESSOR OF 3033
IBM3033MP       78-03 79-09 18 MULTIPROCESSOR OF 3033
AMHPLANT        78-05          AMDAHL OPENS DUBLIN, IRELAND PLANT
AMHV8           78-10 79-09 11 (1.80-2.00)V6, FLD UPGR. FROM V7
IBM3033AP       79-01 80-02 13 ATTACHED PROCESSOR OF 3033 (3042)
IBM3033         79-11 79-11 00 -15% PURCHASE PRICE CUT
IBM3033N        79-11 80-01 04 DEGRADED 3033, 3.9MIPS
IBM3033AP       80-06 80-08 02 3033 ATTACHED PROCESSOR
IBM3033         80-06 81-10 16 Ext. Addr.=32MB REAL ADDR.;MP ONLY
IBMD.Addr.Sp.   80-06 81-06 12 Dual Address Space for 3033
IBM3033XF       80-06 81-06 12 OPTIONAL HW/FW PERF. ENHANCE FOR MVS/SP
AMHUTS          80-09 81-05    UTS=Amdahl Unix Op. System (under VM)
IBM3033.24MB    80-11 81-11 12 24MB REAL MEM. FOR 3033UP, AP
IBM3081D        80-11 81-4Q 12 FIRST H MODEL, 10MIPS IN DP, WATER COOLED
AMH580/5860     80-11 82-09 22 (2V8, 12+ MIPS) UP, NEW,AIR COOLED TECH.
AMH580/5880     80-11 85-05 54 MP OF 5860 AT 21+ MIPS
IBM3033S        80-11 81-01 02 2.2MIPS, DEGRADED 3033 (ENTRY 3033 MODEL)
IBM3033N.UPGR.  80-11 80-11 00 9%-14% PERF. IMPROVE, NO CHARGE
IBM3081K        81-10 82-2Q 08 NEW DP FUM: 1.353081D, 64K BUFFER/OVLAP
IBM370-XA       81-10 83-03 17 NEW ARCH 3081: 31 BIT REAL/VIRT, NEW I/O
IBM3033.PRICE   81-10          10% IN US, 12-20% EUROPE PURCH. ONLY
IBM3033S.PERF.  81-10 82-06 08 NO-CHARGE PERF. BOOST BY 8%-10%
IBM3033         82-03          16% PUR.PRICE CUT, -14%Mem.Price($31K/MB)
IBM3033         82-03          3033 Placed on LIMITED-NEW PRODUCTION
IBM3084         82-09 83-4Q 15 1.93081K Perf., 4 way MP, 3081K upgrade
--
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM S/370-168, 195, and 3033 Newsgroups: alt.folklore.computers Date: Thu, 07 Nov 2002 05:38:51 GMTAnne & Lynn Wheeler writes:
the service processor for the 3090 started out to be 4331, but by the time it shipped ... it had turned into a pair of 4361s (not 4331s).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Follklore Newsgroups: alt.folklore.computers Date: Thu, 07 Nov 2002 16:52:12 GMT"Russ Holsclaw" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Who wrote the obituary for John Cocke? Newsgroups: comp.arch Date: Fri, 08 Nov 2002 00:32:33 GMTDavid Brower writes:
With regard to Unix & RISC ... you could view it from the opposite facet. In the late '70s and early '80s it was possible for lots of start-ups to relatively easily produce inexpensive hardware systems. The earlier genre of everybody having to produce an expensive proprietary operating system to go with their hardware would have made the whole undertaking impossible. Unix represented a "portable" operating system that just about anybody could adapt to their hardware platform offering (regardless of the type). The market was the explosion(?) in lots of different hardware system platforms (not just risc) that needed a relatively easily adaptable operating system (because the undertakings couldn't afford to invent and develop their own from scratch).
In the early '80s, 801 was targeted at "closed" environments ... things like Fort Knox which was going to adopt 801 as the universal microprocessor engine across the corporation (the low & mid-range 370 processors were micro-code running on some microprocessor, and then there was a broad range of "controllers" using one kind or another of microprocessor). The specific 801 project with the ROMP microprocessor that resulted in the PC/RT (and aix) was originally targeted as a (office products division's) displaywriter replacement. When that project got killed ... effectively the group looked around and asked what the hardware could be adapted for. The reverse side of the "portable" Unix market was that it was supposedly relatively hardware platform independent ... aka not only could the hardware vendors deliver their product w/o the expense of developing a proprietary operating system from scratch ... but the customer market place was getting used to the idea that they could get their unix w/o a lot of concern regarding the specifics of the processor architecture. In theory, PC/RT started out as just a matter of hiring the same group that did the PC/IX port for the ibm/pc to do a similar port to ROMP.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PLX Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 08 Nov 2002 02:33:43 GMTtjpo@AIRBORNE.COM (Patrick O'Keefe) writes:
CP/67 and then VM/370 continued using BPS ... or at least the BPS loader. Both the CP and CMS kernel builds, at least into the '80s, would combine the BPS loader followed by all the kernel programs and then do a (real or software simulated) IPL that loaded the BPS loader. The BPS loader, in turn, loaded all the kernel modules into memory. The standard procedure was that when the BPS loader had finished loading everything, it would branch (by default) to the last entry point ... which typically would then write the memory image to disk. Standard operating procedure would then be to load/ipl the memory image from disk.
There was a temporary problem late in the CP/67 days with the BPS loader. The standard BPS loader supported 255 ESD entries ... when the CP kernel grew past 255 ESD entries there were games played attempting to manage the external entries ... while the search for a copy of the BPS loader source went on. A copy was discovered in the attic of 545 tech sq (instead of basement store rooms ... the top floor of 545 tech sq was a store room). CSC had a number of card cabinets stored up there ... one of the card cabinets had a tray containing the assembler source for the BPS loader. This was initially modified to support up to 4095 ESD entries (from 255). A number of further modifications were made for VM/370 ... one being support for a control statement to round the next available address to a 4k page boundary. In any case, the CP and CMS kernels could be considered to be BPS programs ... that happened to have been checkpointed to disk.
A big CP/67 issue was that all problem-mode instructions would run "as is" ... but supervisor state instructions and interrupts went into the CP kernel and had to be simulated. The SIO instruction was a big simulation issue because the "virtual" CCWs had to be copied to scratch storage ... the virtual-address-referenced locations had to have their associated virtual pages fixed/pinned in real storage and the "shadow" CCWs rewritten to use real addresses instead of the original virtual addresses. The routine in CP/67 that did all this was called CCWTRANS (renamed in VM/370 to DMKCCW).
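a very simplified sketch of what CCWTRANS has to do (python; hypothetical page table, no data chaining, TICs, IDALs, or CCWs crossing page boundaries ... nothing like the real DMKCCW):

    PAGE = 4096

    class GuestChannelProgram:
        def __init__(self, page_table):
            self.page_table = page_table      # guest page number -> real frame number
            self.pinned = set()

        def translate(self, virtual_ccws):
            shadow = []                           # scratch-storage copy of the CCWs
            for op, vaddr, count, flags in virtual_ccws:
                vpage, offset = divmod(vaddr, PAGE)
                frame = self.page_table[vpage]    # in real life: fault the page in first
                self.pinned.add(vpage)            # keep it resident while the i/o is active
                raddr = frame * PAGE + offset     # rewrite the data address to a real address
                shadow.append((op, raddr, count, flags))
            return shadow

    prog = GuestChannelProgram({0x10: 0x7d, 0x11: 0x32})
    virt = [(0x02, 0x10000, 80, 0x60),            # read 80 bytes into guest page 0x10
            (0x02, 0x11010, 80, 0x20)]
    print(prog.translate(virt))
    print("pinned guest pages:", sorted(hex(p) for p in prog.pinned))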
In jan. of 1968, three people came out to the university that I was at to install CP/67. This was the 3rd cp/67 location (after cambridge itself and lincoln labs). Then it was officially announced at the spring '68 share meeting in houston. At the time there was some reference to there being something like 1200 people working on tss/360 development (the official operating system for the 360/67) compared to possibly 12 people total on both CP and CMS at cambridge (one of the reasons that the OS/360 simulation environment in CMS was only 32k bytes of instructions).
For the early development work on SVS (VS2 with single virtual storage) they started with a copy of MVT and a copy of CP/67's CCWTRANS cobbled into MVT to perform the virtual to real CCW translation and associated virtual->real page management.
A later effort in this area was in the late '70s for the GPD san jose
disk engineering lab. The operating environment for work under
development was "testcells" which were cabled into mainframes for
development and test. The MTBF for MVS (system failure) with a single
(just one) testcell operating was on the order of 15 minutes ... and
so all development was going on "stand-alone" with a half dozen or more
testcells competing for scheduled time connecting to the CPU. An
effort was launched to move testcell operation into an operating system
environment so that a dozen testcells could be operating concurrently
(improving engineering productivity and eliminating the stand-alone
test time competition). This required a significant redesign and
rewrite of the I/O subsystem to make it much more resilient to all
sorts of faulty operational conditions. random refs:
https://www.garlic.com/~lynn/subtopic.html#disk
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Help me find pics of a UNIVAC please... Newsgroups: alt.folklore.computers Date: Fri, 08 Nov 2002 16:54:49 GMTCharles Richmond writes:
another job that was competing for scheduling on the 370/195 was the
GPD disk head air bearing simulation (for the 3380s). it turns out
that the disk engineering (bldg 14) and product test labs (bldg 15)
had lots of computing power but they were dedicated to testcell
stand-alone testing. when we got the I/O subsystem redone so that
multiple testcells could be tested in an operating system environment
concurrently, lots of processing power became available.
https://www.garlic.com/~lynn/subtopic.html#disk
Shortly after all of this became part of the bldg 14/15 standard process, bldg. 15 got early models of both the 4341 and 3033 (bldg. 15 disk product test got a 4341 before the 4341 product test people got their machine, so we ran some stuff on the bldg 15 machine in san jose for the 4341 product test people in endicott). In any case, the GPD disk head air bearing simulation was able to get lots of time on the bldg 15 3033 that was otherwise at essentially zero percent cpu utilization (instead of long turn around competing with all the other stuff being scheduled on the SJR 370/195 in bldg 28). recent 195 and/or disk posts:
https://www.garlic.com/~lynn/2002n.html#52 Computing on Demand ... was cpu metering
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#60 Follklore
https://www.garlic.com/~lynn/2002n.html#62 PLX
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PLX Newsgroups: bit.listserv.ibm-main Date: Fri, 08 Nov 2002 17:11:41 GMTRick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
it also had the distinction of forming the basis for MTS (there was the official tss/360 operating system for 360/67, CP/67 done by a few people in cambridge, and MTS ... michigan terminal system).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Follklore Newsgroups: alt.folklore.computers Date: Fri, 08 Nov 2002 19:27:08 GMT"Russ Holsclaw" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe Spreadsheets - 1980's History Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 08 Nov 2002 22:16:14 GMTsmithwj@US.IBM.COM (William Smith) writes:
... but cambridge had originally taken apl\360 ... stripped out the monitor stuff and then reworked the interpreter to run under cms ... as well as redoing pieces (like storage allocation) to perform better in a virtual memory environment. Cambridge also put APIs for system call support into the apl language, which offended all the old time APLers. this was released as cms\apl. The palo alto science center then took cms\apl and redid some of the stuff ... including doing the apl microcode for the 370/145 and the support for it in apl. They also redid the system call API stuff as shared variables (which was a lot more palatable to the old time APLers). This was called apl\cms. The 145 apl microcode gave about a ten times performance boost for lots of typical apl stuff (for many things apl\cms on a 145 with microcode assist ran as fast as on a 168).
The APL product was then transferred from the palo alto science center to the STL (not SRL) lab. STL did the changes to allow APL to run both in CMS and MVS ... and it was released as APL\SV and then APL2. The STL lab was originally going to be called the coyote lab (based on the convention of naming after the closest post office) and was to be dedicated at the same time as the smithsonian air & space museum. However a week or two before the dedication, the "coyote union" demonstrated on the steps of the capitol in wash DC ... and the name of the lab was quickly changed to santa teresa (the closest cross street is bailey and santa teresa).
possibly the biggest apl user in the world was "HONE" ... which
provided support world wide for all the sales, marketing, and field
people
https://www.garlic.com/~lynn/subtopic.html#hone
At some point the head of the APL2 group in STL transferred to PASC to head up a new group that was going to port BSD unix to 370. I had been talking to one of the VS/Pascal guys about doing a C front end for the pascal backend code generator. I went on a business trip and when i got back he had disappeared, having gone to work for metaware in santa cruz. In any case, doing some work with the unix 370 group ... I suggested that they might talk to metaware about getting a 370 C compiler ... as part of the BSD to 370 port. Somewhere along the way the PC/RT came on the scene with AIX (an at&t unix port similar to the pc/ix port) ... and the PASC group was redirected to do a BSD port to the PC/RT instead of the 370. They continued to use metaware for the c compiler ... even tho the target machine had changed from 370 to PC/RT. This was done very quickly by a small number of people and released as "AOS".
how 'bout tinycalc that came with borland's pascal.
misc 801, pc/rt refs:
https://www.garlic.com/~lynn/subtopic.html#801
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe Spreadsheets - 1980's History Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 08 Nov 2002 22:29:31 GMTcouple other footnotes
1)
in the late '70s, the multiple HONE locations were all consolidated into a single location in Palo Alto ... eventually operating the largest single-system complex in the world (all VM/CMS based) ... subsets of this were also replicated around the world ... and then for disaster survivability the Palo Alto location was replicated in Dallas and Boulder (with fail-over and load sharing between the three locations).
2)
late '70s ... some of the largest online-service data centers in the world could be found within a couple miles of each other (at least hone, the large vm/cms complex for sales, marketing and field support; tymshare .... a vm/cms service bureau; and dialog ... the world-wide online library catalog and abstracts service).
3)
somewhat in parallel with the BSD effort for the 370, which was retargeted to the PC/RT ... PASC was also working with UCLA on locus ... having locus running on 68k and S/1 machines in the early '80s. Somewhat in the same time frame as the work PASC did for AOS ... they also did AIX/370 and AIX/PS2. AIX/370 and AIX/PS2 were Locus ports (not bsd or at&t) ... even tho they shared the name "AIX" with AIX on the pc/rt (and then rs/6000), which was an AT&T port.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: merged security glossary updated with glossary from CIAO Newsgroups: comp.security.misc,alt.computer.security Date: Sat, 09 Nov 2002 05:48:47 GMTI've updated the merged security glossary with terms from CIAO, i.e.
https://web.archive.org/web/20030210115112/http://www.ciao.gov/CIAO_Document_Library/glossary/A.htm
Security
Terms merged from: AFSEC, AJP, CC1, CC2, CIAO, FCv1, FIPS140, IATF V3,
IEEE610, ITSEC, Intel, JTC1/SC27, KeyAll, MSC, NCSC/TG004, NIAP, NSA
Intrusion, NSTISSC, RFC1983, RFC2504, RFC2647, RFC2828, online
security study, TCSEC, TDI, TNI, and misc. Updated 20021020 with
glossary from NSTISSC. Updated 20021027 with RFC2647. Updated 20021108
with terms from CIAO.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: merged security glossary updated with glossary from CIAO Newsgroups: comp.security.misc,alt.computer.security Date: Sat, 09 Nov 2002 06:00:18 GMToops, and of course the url
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Forrest Curve (annual posting) Newsgroups: comp.arch Date: Sat, 09 Nov 2002 15:54:09 GMTnospam@oddhack.engr.sgi.com (Jon Leech) writes:
the big(?) stuff is grid, babar, and the petabytes of data that will be flying around the world-wide grid. my interpretation of what I heard was that there is so much data and so much computation required to process the data ... that it is being partitioned out to various organizations around the world (slightly analogous to some of the distributed efforts on the crypto challenges/contests ... except that hundreds of mbytes/sec & gbytes/sec will be flying around the world). The economic model seems to be that they can get enuf money for the storage and for some of the processing ... but it seems that world-wide gbyte/sec transfer costs are such that various computational facilities around the world can share in the data analysis (there is also quite a bit of database work supporting all of this stuff).
BaBar home page
http://www.slac.stanford.edu/BFROOT/
sc2002 is coming up in baltimore.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: bps loader, was PLX Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 09 Nov 2002 20:09:57 GMTAnne & Lynn Wheeler writes:
Anyway ... that summer, besides teaching and installing and being responsible for CP/67, I made a number of enhancements:
1) balr linkage 2) pageable kernel 3) dump formatter
so how does this all relate to the BPS loader issue?
Ok, first thing: all of CP's internal kernel calls/linkages were via SVC, where the SVC FLIH would allocate/deallocate the save area that went along with the call. This was for all internal calls. However, I noticed that there were a lot of kernel functions that were on the order of small tens (or fewer) of instructions, and the SVC linkage was a significant percentage of that. These kernel calls also tended to be closed subroutines that were guaranteed to return immediately ... or at most make a call two levels deep. So I carved out two save areas in page 0 that were dedicated to balr routines ... and changed the kernel call macro to use balr for a specific list of functions (instead of svc).
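purely as illustration (not the actual cp/67 code ... all the names here are made up), a small C sketch of the two linkage styles: the general path models the svc flih allocating/freeing a save area on every kernel call, while the fast path calls short, closed leaf routines directly using one of two statically reserved save areas:

#include <stdio.h>
#include <string.h>

/* hypothetical save area: models the register save block the SVC
 * first-level interrupt handler (FLIH) would allocate per call       */
typedef struct savearea {
    unsigned long regs[16];
    struct savearea *next;
} savearea;

#define POOLSIZE 64
static savearea pool[POOLSIZE];      /* free pool the "SVC" path draws from */
static savearea *freelist;

/* two dedicated save areas "carved out of page 0" for the fast path  */
static savearea balr_save[2];

static void init_pool(void) {
    for (int i = 0; i < POOLSIZE - 1; i++) pool[i].next = &pool[i + 1];
    freelist = &pool[0];
}

/* general linkage: every call pays for save-area allocate/free,
 * analogous to going through the SVC FLIH                            */
static void kcall_svc(void (*routine)(savearea *)) {
    savearea *sa = freelist;          /* allocate */
    freelist = sa->next;
    routine(sa);
    sa->next = freelist;              /* deallocate */
    freelist = sa;
}

/* fast-path linkage: short, closed subroutines that return
 * immediately (at most two levels deep) just use a fixed save area,
 * analogous to a direct BALR with a reserved save area               */
static void kcall_balr(void (*routine)(savearea *), int depth) {
    routine(&balr_save[depth]);       /* depth is 0 or 1 */
}

/* a tiny "kernel function" of the sort that is only a few dozen
 * instructions -- SVC overhead would dominate it                     */
static void tiny_kernel_function(savearea *sa) {
    memset(sa->regs, 0, sizeof sa->regs);
}

int main(void) {
    init_pool();
    kcall_svc(tiny_kernel_function);      /* slow, general path */
    kcall_balr(tiny_kernel_function, 0);  /* fast path for leaf routines */
    puts("both linkage styles exercised");
    return 0;
}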
now, part two ... the cp kernel was fixed in memory and ran in real mode. I noticed that it was starting to grow and consume more of real storage ... and that a lot of the kernel was very low usage ... like the console functions (sort of the reverse of the high-usage analysis). I started work on splitting some of the console functions and other low-usage stuff into 4k chunks and then created a dummy address space table for the kernel. I moved all these "4k" chunks to the end of the kernel, above a "pageable" line. Then the linkage was modified to do a trans/lock in the calling sequence (i.e. the svc call routine was entered with a called-to address; it would check whether that address was above the pageable line ... and if so, do a trans/lock ... i.e. translate the address virtual->real using the kernel dummy address space table, lock/pin the real page ... and then call the translated address; on return from "above" the pageable line, it would decrement the lock/pin count on the real page of the from address).
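again just a sketch of the idea (hypothetical names, slots standing in for 4k chunks, and no real paging machinery): the modified calling sequence checks whether the target is above the pageable line; if so it translates through the dummy address space table, pins the chunk, calls the translated address, and unpins on return:

#include <stdio.h>

#define PAGEABLE_LINE 8          /* routine slots >= this are "pageable" */
#define NSLOTS 16

typedef void (*kroutine)(void);

/* hypothetical dummy address-space table for the pageable half of the
 * kernel: tracks whether each 4k chunk is in real storage and pinned  */
typedef struct {
    kroutine entry;              /* stand-in for the translated real address */
    int      present;            /* chunk currently in real storage?   */
    int      pin_count;          /* >0: may not be paged out           */
} dat_entry;

static dat_entry kernel_dat[NSLOTS];

static void page_in(int slot) {  /* stand-in for a real page-in        */
    kernel_dat[slot].present = 1;
}

/* trans/lock: translate, bring in if necessary, and pin               */
static kroutine trans_and_lock(int slot) {
    if (!kernel_dat[slot].present) page_in(slot);
    kernel_dat[slot].pin_count++;
    return kernel_dat[slot].entry;
}

static void unlock(int slot) {   /* on return, allow page-out again    */
    kernel_dat[slot].pin_count--;
}

/* modified kernel-call linkage: resident routines are called directly;
 * pageable ones go translate -> pin -> call -> unpin                  */
static void kernel_call(int slot) {
    if (slot < PAGEABLE_LINE) {
        kernel_dat[slot].entry();         /* fixed kernel: direct call */
        return;
    }
    kroutine real = trans_and_lock(slot);
    real();
    unlock(slot);
}

static void console_function(void) {      /* low-usage routine moved   */
    puts("console function (pageable)");  /* above the pageable line   */
}

int main(void) {
    kernel_dat[12].entry = console_function;   /* a "4k chunk" slot    */
    kernel_call(12);
    return 0;
}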
this could be considered somewhat analogous to os/360's 2k-byte transient SVCs. in any case, the fragmentation into 4k chunks is what initially pushed the number of loader table entries over the 255 limit. At this point, I started investigating the bps loader ... but wasn't able to get access to the source. Eventually I had to play games with keeping the number of ESD entries under the 255 limit.
now, part three ... as part of investigating the ESD table entry problem, I discovered that the BPS loader, at the completion of loading and when it transferred control to the loaded application, passed in registers a pointer to its internal loader table (aka all the ESD entries) and the count of entries. So, looking at the CP routine that got control from the loader ... and was responsible for doing the memory image checkpoint to disk ... I modified the code to sort & copy the BPS loader table to the end of the kernel (which was after all the pageable kernel routines). The save routine just took the end address of the kernel, rounded it up to the next 4k boundary ... and stored the count of entries there, followed by the sorted loader table.
now, part four .... I now had the full ESD loader table (8-byte character name, one-byte flag indicating the ESD type, and three-byte address). Given a copy of the full loader table, I could enhance the dump print routine to do some formatting ... translating absolute addresses into an ESD entry name ... or a module name (ESD 0) plus displacement. The formatter could also play games with executable code above the fixed kernel line ... using the dummy address space table to figure out the "virtual kernel address" and then translating that address using the loader table.
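a C sketch of that (the real thing was of course 360 assembler, and the module names below are invented): sort a copy of the table by address, then resolve an absolute dump address to name+displacement by finding the highest entry at or below it:

#include <stdio.h>
#include <stdlib.h>

/* one loader/ESD entry as described above: 8-character name,
 * one-byte type flag, three-byte (24-bit) address                     */
typedef struct {
    char          name[9];    /* 8 chars plus NUL for printing         */
    unsigned char type;       /* ESD type flag                         */
    unsigned int  addr;       /* 24-bit absolute address               */
} esd_entry;

static int by_addr(const void *a, const void *b) {
    unsigned int x = ((const esd_entry *)a)->addr;
    unsigned int y = ((const esd_entry *)b)->addr;
    return (x > y) - (x < y);
}

/* dump-formatter helper: map an absolute address to the highest
 * entry at or below it, printed as name+displacement                  */
static void format_addr(const esd_entry *tab, int n, unsigned int addr) {
    const esd_entry *best = NULL;
    for (int i = 0; i < n && tab[i].addr <= addr; i++)
        best = &tab[i];
    if (best)
        printf("%06X  %s+%X\n", addr, best->name, addr - best->addr);
    else
        printf("%06X  (no symbol)\n", addr);
}

int main(void) {
    esd_entry tab[] = {               /* invented module names */
        { "CPINIT  ", 0, 0x000000 },
        { "DISPATCH", 0, 0x001200 },
        { "CONSOLE ", 0, 0x041000 },  /* above the pageable line */
    };
    int n = (int)(sizeof tab / sizeof tab[0]);
    qsort(tab, n, sizeof tab[0], by_addr);   /* the sorted copy saved
                                                past the end of the kernel */
    format_addr(tab, n, 0x0012F8);           /* prints DISPATCH+F8 */
    return 0;
}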
I somewhat repeated the above almost 15 years later when I did a dump reader in REX (now REXX) ... sort of as a demonstration exercise that REX could be used for serious programming rather than just simple shell programming.
slightly related
https://www.garlic.com/~lynn/submain.html#dumprx problem determination, zombies, dump readers
only slightly related ... when I did the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
one of the items was making a lot of the virtual machine control tables pageable. For this, I created a dummy address space table for each virtual machine (analogous to the dummy address space table that I had created for the pageable kernel) ... and would copy control blocks out of fixed storage into these dummy address spaces ... where they could then get paged out.
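a very rough sketch of that idea (hypothetical names, and glossing over the actual paging machinery): each virtual machine gets its own dummy address space table, and low-usage control blocks are copied out of fixed storage into slots described by that table, where the normal page replacement code can then page them out:

#include <stdlib.h>
#include <string.h>

/* a slot in a per-virtual-machine dummy address space; the backing
 * storage is ordinary pageable memory rather than fixed storage       */
typedef struct {
    void   *backing;          /* pageable copy of the control block    */
    size_t  len;
} vm_slot;

typedef struct {              /* one dummy address space per virtual machine */
    vm_slot slots[32];
} vm_dummy_aspace;

/* copy a low-usage control block out of fixed storage into the dummy
 * address space and release the fixed copy                            */
static int stash_pageable(vm_dummy_aspace *as, int slot,
                          void *fixed_block, size_t len) {
    void *copy = malloc(len);         /* stand-in for a pageable page  */
    if (!copy) return -1;
    memcpy(copy, fixed_block, len);
    as->slots[slot].backing = copy;
    as->slots[slot].len = len;
    free(fixed_block);                /* give the fixed storage back   */
    return 0;
}

/* fetch a control block when it is actually needed (the real code
 * would do a trans/lock so the page stays put while in use)           */
static void *fetch_pageable(vm_dummy_aspace *as, int slot) {
    return as->slots[slot].backing;
}

int main(void) {
    vm_dummy_aspace as = {0};
    int *blk = malloc(4 * sizeof(int));   /* pretend fixed-storage block */
    if (!blk) return 1;
    blk[0] = 123;
    if (stash_pageable(&as, 0, blk, 4 * sizeof(int)) != 0) return 1;
    int *back = fetch_pageable(&as, 0);
    return back[0] == 123 ? 0 : 1;
}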
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: bps loader, was PLX Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 09 Nov 2002 21:24:50 GMTAnne & Lynn Wheeler writes:
their story was that the day after the 360 announcement, they walked into the local salesman's office (somebody who relatively recently ran for president) and placed a large 360 order. supposedly the salesman at the time hardly knew what 360s were ... but that one order made him the highest paid person in the company that year. This event supposedly instigated the whole invention of quotas and the switch-over the next year from straight commission to quotas. The boeing story was that even on quota ... the 360 orders that boeing placed the next year made the salesman the highest paid person in the company again (the company had a hard time increasing the salesman's quota faster than boeing's growing appetite for 360s).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Home mainframes Newsgroups: alt.folklore.computers Date: Sun, 10 Nov 2002 05:06:22 GMTEric Smith <eric-no-spam-for-me@brouhaha.com> writes:
The CP/67 API was the hardware machine interface as defined in the 360 principles of operation ... which not only allowed CMS to operate in a "virtual" machine ... but also allowed relatively standard operating systems to run ... like mvt, dos, cp itself, etc.
As an undergraduate, I put a lot of inventions into cp/67 that made the cp/cms combination significantly better for interactive services compared to operating systems of a more traditional bent (highly optimized kernel path lengths, fastpath, an optimized page replacement algorithm, fair share scheduling, dynamic adaptive resource management, etc).
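the subtopic pages below describe what was actually done; purely as a generic illustration of what a clock/second-chance style global page replacement algorithm looks like (not the cp/67 code, and the names are mine):

#include <stdio.h>

#define NFRAMES 8

/* one real-storage frame with its hardware reference bit              */
typedef struct {
    int page;          /* virtual page occupying this frame, -1 if free */
    int referenced;    /* set by "hardware" when the page is touched    */
} frame;

static frame frames[NFRAMES];
static int hand;       /* the clock hand sweeps circularly over frames  */

/* pick a frame to steal: skip (and reset) recently referenced frames,
 * take the first one found unreferenced -- classic second chance      */
static int select_victim(void) {
    for (;;) {
        frame *f = &frames[hand];
        int victim = hand;
        hand = (hand + 1) % NFRAMES;
        if (f->page == -1 || !f->referenced)
            return victim;
        f->referenced = 0;     /* give it a second chance */
    }
}

static void touch(int page) {  /* simulate a reference to a resident page */
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i].page == page) { frames[i].referenced = 1; return; }
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) frames[i] = (frame){ i, 0 };
    touch(2); touch(5);                     /* pages 2 and 5 are "active"   */
    int v = select_victim();                /* steals an unreferenced frame */
    printf("replacing page %d in frame %d\n", frames[v].page, v);
    return 0;
}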
In the late cp/67 era, special APIs were developed especially for CMS that allowed it to take advantage of functions of the CP kernel when it was running in a virtual machine, although CMS still retained the capability to operate using the vanilla 360 "POP" interface (and therefore could run on "real" hardware w/o cp/67). In the transition from CP/67 to VM/370, and from the cambridge monitor system to the conversational monitor system, the ability for CMS to operate w/o the custom APIs was removed, with the result that CMS no longer had the ability to run on a "real" machine.
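a sketch of the structural point only (hypothetical names; historically the CP-specific services were reached via the DIAGNOSE instruction, but nothing below is actual CMS code): cp/67-era CMS kept both paths and picked one at run time, while vm/370-era CMS kept only the hypervisor path:

#include <stdio.h>

/* hypothetical probe for whether we are running in a virtual machine  */
static int running_under_cp(void) {
    return 1;                         /* pretend CP answered the probe */
}

/* fast path: hand the whole disk read to the hypervisor in one
 * request (historically a DIAGNOSE-based interface)                   */
static int disk_read_via_cp(int block, void *buf) {
    (void)buf;
    printf("hypervisor call: read block %d\n", block);
    return 0;
}

/* portable path: drive the hardware the way any 360 operating system
 * would on bare metal (build and start a channel program)             */
static int disk_read_via_channel(int block, void *buf) {
    (void)buf;
    printf("channel program: read block %d\n", block);
    return 0;
}

/* cp/67-era CMS style: prefer the hypervisor service but keep the
 * bare-machine path; vm/370-era CMS dropped the second branch, and
 * with it the ability to run on real hardware                         */
static int disk_read(int block, void *buf) {
    if (running_under_cp())
        return disk_read_via_cp(block, buf);
    return disk_read_via_channel(block, buf);
}

int main(void) {
    char buf[4096];
    return disk_read(42, buf);
}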
VM/370 had the ability to operate as a straight hypervisor ... running traditional batch operating systems ... but the CP/CMS combination also provided significantly enhanced interactive services ... as the CERN TSO (&MVS) vs. CMS (&VM/370) share report indicated. This could also be seen from the fact that a number of commercial interactive time-sharing service bureaus were built on the CP/CMS platform.
related refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#545tech
probably the largest such cp/cms interactive time-sharing operation
was the corporate internal HONE system which supported all the
marketing, sales, and field people in the world.
https://www.garlic.com/~lynn/subtopic.html#hone
misc, somewhat related recent postings:
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#35 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#37 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002n.html#48 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#57 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Everything you wanted to know about z900 from IBM Newsgroups: comp.arch,alt.folklore.computers Date: Sun, 10 Nov 2002 05:43:34 GMT"del cecchi" writes:
you can get hardcopy at:
http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US
the following are available for order:
S/360 PRINCIPLES OF OPERATION GA22-6821-08
S/370 PRINCIPLES OF OPERATION GA22-7000-10
370/XA PRINCIPLES OF OPERATION SA22-7085-01
ESA/370 PRINCIPLES OF OPERATION SA22-7200-00
the later POPs are fully online
from z/architecture
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR001/CCONTENTS?SHELF=DZ9ZBK01&DN=SA22-7832-01&DT=20020416112421
1.1 Highlights of z/Architecture
1.1.1 General Instructions for 64-Bit Integers
1.1.2 Other New General Instructions
1.1.3 Floating-Point Instructions
1.1.4 Control Instructions
1.1.5 Trimodal Addressing
1.1.5.1 Modal Instructions
1.1.5.2 Effects on Bits 0-31 of a General Register
1.1.6 Extended-Translation Facility 2
1.1.7 Input/Output
1.2 The ESA/390 Base
1.2.1 The ESA/370 and 370-XA Base
1.3 System Program
1.4 Compatibility
1.4.1 Compatibility among z/Architecture Systems
1.4.2 Compatibility between z/Architecture and ESA/390
1.4.2.1 Control-Program Compatibility
1.4.2.2 Problem-State Compatibility
from esa/390
1.1 Highlights of ESA/390
1.1.1 The ESA/370 and 370-XA Base
1.2 System Program
1.3 Compatibility
1.3.1 Compatibility among ESA/390 Systems
1.3.2 Compatibility among ESA/390, ESA/370, 370-XA, and System/370
1.3.2.1 Control-Program Compatibility
1.3.2.2 Problem-State Compatibility
D.0 Appendix D. Comparison between ESA/370 and ESA/390
D.1 New Facilities in ESA/390
D.1.1 Access-List-Controlled Protection
D.1.2 Branch and Set Authority
D.1.3 Called-Space Identification
D.1.4 Checksum
D.1.5 Compare and Move Extended
D.1.6 Concurrent Sense
D.1.7 Immediate and Relative Instruction
D.1.8 Move-Page Facility 2
D.1.9 PER 2
D.1.10 Perform Locked Operation
D.1.11 Set Address Space Control Fast
D.1.12 Square Root
D.1.13 Storage-Protection Override
D.1.14 String Instruction
D.1.15 Subspace Group
D.1.16 Suppression on Protection
D.2 Comparison of Facilities
E.0 Appendix E. Comparison between 370-XA and ESA/370
E.1 New Facilities in ESA/370
E.1.1 Access Registers
E.1.2 Compare until Substring Equal
E.1.3 Home Address Space
E.1.4 Linkage Stack
E.1.5 Load and Store Using Real Address
E.1.6 Move Page Facility 1
E.1.7 Move with Source or Destination Key
E.1.8 Private Space
E.2 Comparison of Facilities
E.3 Summary of Changes
E.3.1 New Instructions Provided
E.3.2 Comparison of PSW Formats
E.3.3 New Control-Register Assignments
E.3.4 New Assigned Storage Locations
E.3.5 New Exceptions
E.3.6 Change to Secondary-Space Mode
E.3.7 Changes to ASN-Second-Table Entry and ASN Translation
E.3.8 Changes to Entry-Table Entry and PC-Number Translation
E.3.9 Changes to PROGRAM CALL
E.3.10 Changes to SET ADDRESS SPACE CONTROL
E.4 Effects in New Translation Modes
E.4.1 Effects on Interlocks for Virtual-Storage References
E.4.2 Effect on INSERT ADDRESS SPACE CONTROL
E.4.3 Effect on LOAD REAL ADDRESS
E.4.4 Effect on TEST PENDING INTERRUPTION
E.4.5 Effect on TEST PROTECTION
F.0 Appendix F. Comparison between System/370 and 370-XA
F.1 New Facilities in 370-XA
F.1.1 Bimodal Addressing
F.1.2 31-Bit Logical Addressing
F.1.3 31-Bit Real and Absolute Addressing
F.1.4 Page Protection
F.1.5 Tracing
F.1.6 Incorrect-Length-Indication Suppression
F.1.7 Status Verification
F.2 Comparison of Facilities
F.3 Summary of Changes
F.3.1 Changes in Instructions Provided
F.3.2 Input/Output Comparison
F.3.3 Comparison of PSW Formats
F.3.4 Changes in Control-Register Assignments
F.3.5 Changes in Assigned Storage Locations
F.3.6 Changes to SIGNAL PROCESSOR
F.3.7 Machine-Check Changes
F.3.8 Changes to Addressing Wraparound
F.3.9 Changes to LOAD REAL ADDRESS
F.3.10 Changes to 31-Bit Real Operand Addresses
--