From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 03:34:28 GMT

"Stephen Fuld" writes:
corollary was that if you were the only company that got that single most important criterion correct ... it might even be possible that you could do everything else wrong ... and still beat the competition.
peak employment at ibm possibly approached 500k and the mainframes were (and are) the strategic workhorse of almost every industry. given the company's size and its diversification around the world, a lot more things would have to go wrong for a much longer period of time.
repeat:
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/99.html#231 Why couldn't others compete against IBM?
https://www.garlic.com/~lynn/2001j.html#33 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#38 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#39 Big black helicopters
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
even DEC's largest and most successful market position ... is still dwarfed by the overall mainframe market. also, the mini-computer market felt the high-end workstations and then high-end PCs moving upstream into their markets (taking sales as well as cutting profit margins) long before such things started to directly affect mainframes.
random ref:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Gerstner moves over as planned
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 18 Feb 2002 13:59:43 GMT

Stephen Samson writes:
A lot of advanced technology activities (the stuff bridging the gap between research and product deliverables) just evaporated (in part because of the rush to try and back-fill the product gaps that were supposed to have been FS).
random fs refs:
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#43 hollow files in unix filesystems?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need article on Cache schemes.
Newsgroups: comp.arch
Date: Mon, 18 Feb 2002 14:06:19 GMT

Martin Knoblauch writes:
how many posts would that be if they were even just limited to one such question post per student per semester?
misc. replacement algorithm posts (virtual memory as well as other
"caching" implementations)
https://www.garlic.com/~lynn/subtopic.html#wsclock
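for readers new to the territory, here is a minimal "clock" (second-chance) replacement sketch ... a much-simplified relative of the wsclock-style algorithms discussed in the posts above, not any of the actual implementations; the class shape and frame handling are purely illustrative:

```python
# minimal "clock" (second-chance) page replacement -- a much-simplified
# relative of wsclock-style algorithms; frame handling is illustrative.

class Clock:
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = []             # [page, referenced] pairs
        self.index = {}              # page -> frame slot
        self.hand = 0

    def touch(self, page):
        """Reference a page; return the evicted page, or None."""
        if page in self.index:       # hit: just set the reference bit
            self.frames[self.index[page]][1] = True
            return None
        if len(self.frames) < self.nframes:     # free frame available
            self.index[page] = len(self.frames)
            self.frames.append([page, True])
            return None
        while True:                  # sweep, clearing reference bits
            victim_page, referenced = self.frames[self.hand]
            if referenced:
                self.frames[self.hand][1] = False   # second chance
                self.hand = (self.hand + 1) % self.nframes
            else:
                del self.index[victim_page]
                self.frames[self.hand] = [page, True]
                self.index[page] = self.hand
                self.hand = (self.hand + 1) % self.nframes
                return victim_page
```

wsclock adds working-set age tests to the sweep; the clock hand and reference-bit second chance above are the common skeleton.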
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 14:39:49 GMT

Anne & Lynn Wheeler writes:
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 18:23:42 GMT

"Walter Rottenkolber" writes:
remember ... the genesis of a lot of ibm's current (software) product offerings was during the era when everything was open source (hasp, cp/67, vm/370, etc) and some number of them were actually written/developed at customer sites (hasp, cics, etc). It seemed that the industry then went thru a long period of ossification where, rather than needing agile & rapid advances (which open source promotes), it was an era of consolidation and protecting installed turf.
There were lots of vocal customers during the late '70s & early '80s complaining about the transition to OCO (object code only) ... as opposed to the early convention of open and freely distributed source.
I think there was a thread in the ibm mainframe ng about determining where the profit margins are & where vendors can establish product differentiation (a proprietary operating system can be an inhibitor in some of these market segments).
In the 50s, 60s, etc ... there was a lot of attention placed on hardware compatibility. In the 80s that started to move upstream into operating system compatibility and interoperability ... at least for some market segments. For those market segments where the agility to move quickly to different vendors' hardware products matters ... operating system compatibility and interoperability is a significant factor (not necessarily hardware compatibility). Open source is making a "come-back" and playing more & more of a significant role in these market segments.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 19:17:03 GMT

"Walter Rottenkolber" writes:
in a market segment that has significant orientation towards standardized commoditization ... vendors have to look for other places with regard to profit margin.
very thin profit margins may be very attractive to some customer segments ... however if the profit margin gets too thin, the corporate entity may not be able to continue to exist. This can create downside effects to some customers in that market segment. On the other hand, it could also be viewed as darwinism in action.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: medium term future of the human race
Newsgroups: comp.society.futures
Date: Wed, 20 Feb 2002 18:09:52 GMT

Malcolm McMahon writes:
some areas of the world that had been subject to periodic severe starvation got some respite with the green revolution ... until their population growth caught up (again) with production. because of the much larger population base with significantly larger dependency on petro-chemicals for food production, changes in petro availability/prices not only affect general economic stability because of transportation costs but can also have a significant downside effect on food availability (in some situations where there may be little supply elasticity already).
rising prices could put the availability of petro-chemicals out of reach for some uses in various parts of the world (gasoline for car use going from $1-$2/gal to maybe $10/gal or more might put a crimp in some people's recreational transportation use ... but it could also make it totally unavailable for others).
random refs:
https://www.garlic.com/~lynn/2001d.html#25 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#29 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#37 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#39 Economic Factors on Automation
the un population URL
http://www.un.org/popin/
world population trends
http://www.un.org/popin/wdtrends.htm
world population reached 6.1 billion in mid-2000 and is currently
growing at an annual rate of 1.2% or 77 million people per year. Six
countries account for half the annual growth.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Wed, 20 Feb 2002 20:19:16 GMT

"norman" writes:
or are the cards used for authentication for valid transactions?
is it a shared-secret cryptographic key or a non-shared-secret cryptographic key system (i.e. an asymmetric key or public key system).
if a shared-secret cryptographic key, is it the same key for the whole infrastructure .... implying that the compromise of that single key puts the whole infrastructure at risk ... aka systemic risk.
systemic risk failures putting the infrastructure at risk can also apply to some of the asymmetric key implementations like PKIs where there may be certificates issued under the control of a root signing key (either directly or indirectly).
in a per-account-specific transaction authentication scheme (where a cryptographic key is used for valid transactions) ... individual cards per account with unique public/private key pairs can avoid the PKI systemic risk failure modes by just registering the associated public key with each specific account.
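a minimal sketch of that per-account registration idea ... every account registers its own verification key, so compromising one card exposes only that account (no systemic key). note that HMAC with a per-account random key is used here purely as a dependency-free stand-in for a real public/private key pair, and all the names (registry, issue_card, etc) are invented for illustration:

```python
import hashlib
import hmac
import os

# sketch of the account-authority idea: each account registers its own
# verification key.  HMAC with a per-account random key stands in for
# asymmetric signatures purely to keep the sketch dependency-free; a
# deployment would register a public key and verify real signatures.

registry = {}                        # account -> registered verification key

def issue_card(account):
    """Create a unique per-account card key and register its verifier."""
    key = os.urandom(32)             # stand-in for the card's private key
    registry[account] = key          # stand-in for registering a public key
    return key

def sign(card_key, transaction):
    return hmac.new(card_key, transaction, hashlib.sha256).digest()

def verify(account, transaction, signature):
    key = registry.get(account)
    return key is not None and hmac.compare_digest(
        sign(key, transaction), signature)
```

losing one card's key lets an attacker forge transactions against exactly one account ... which is the contrast with a single infrastructure-wide key or root signing key, where one compromise is a systemic failure.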
In the smart-card authentication scheme (assuming elimination of the
systemic risk failure modes as per above), then the issue is it one, two,
or 3-factor authentication ... i.e. one or more of the following:
• something you have
• something you know
• something you are
smartcard represents something you have and (single-account)
infrastructure can be compromised by stealing the card. Stronger
authentication is possible by using something you have in
conjunction with something you know or something you are.
it is possible to find chips where the cost of extracting the private key (in an asymmetric key authentication infrastructure) can approach or exceed your "at risk" value. furthermore, the elapsed time to perform such extraction can exceed the nominal expected interval within which a card is reported lost or stolen (not only costly to extract, but also a race to beat having the use of the private key suspended). In this case, the "system" isn't "totally another matter" because being able to suspend use of a specific card/key is part of the overall system security.
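that race can be sketched as a toy cost comparison ... the attack only pays if extraction finishes before the key is suspended and the at-risk value exceeds the extraction cost. all the figures and the helper name here are invented assumptions, not measured values:

```python
# toy model of the extraction "race": stealing a card and probing out
# the private key only pays if the key is still valid (card not yet
# reported lost/stolen) when extraction finishes, and the at-risk value
# exceeds the extraction cost.  All figures here are invented.

def attack_is_profitable(extraction_cost, extraction_days,
                         report_days, at_risk_value):
    """True if key extraction finishes before suspension and pays off."""
    if extraction_days >= report_days:   # key suspended first: attack moot
        return False
    return at_risk_value > extraction_cost

# extraction slower than the typical reporting interval defeats the
# attack regardless of how valuable the account is:
print(attack_is_profitable(extraction_cost=5_000, extraction_days=10,
                           report_days=3, at_risk_value=1_000_000))   # False
```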
random 3-factor authentication
https://www.garlic.com/~lynn/aadsmore.htm#schneier Schneier: Why Digital Signatures are not Signatures (was Re :CRYPTO-GRAM, November 15, 2000)
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm7.htm#rhose12 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose13 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose14 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose15 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2000f.html#65 Cryptogram Newsletter is off the wall?
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001g.html#1 distributed authentication
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001j.html#44 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2001j.html#49 Are client certificates really secure?
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#34 A thought on passwords
https://www.garlic.com/~lynn/2001k.html#61 I-net banking security
random systemic risk
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmail.htm#parsim parsimonious
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsmail.htm#vbank Statistical Attack Against Virtual Banks (fwd)
https://www.garlic.com/~lynn/aadsm2.htm#risk another characteristic of online validation.
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm3 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech7 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aepay2.htm#fed Federal CP model and financial transactions
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/99.html#156 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 21 Feb 2002 16:07:38 GMT

"Douglas H. Quebbeman" writes:
the cp/67 work had started as cp/40 (on a 360/40), where the group had modified the machine & built their own virtual memory relocation hardware. When a 360/67 became available (which had virtual memory relocation hardware standard) ... cp/40 was ported to 360/67 and renamed cp/67 (the virtual memory hardware for 360/40 was significantly different than the standard virtual memory hardware on the 360/67).
i don't know whether the significantly larger number of people working on multics helped compensate for the "ring-challenged ge645"(?) or not.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Doesn't Make Small MP's Anymore
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 21 Feb 2002 16:40:58 GMT

EBIE@PHMINING.COM (Eric Bielefeld) writes:
370 cache machines, when operating in multiprocessor mode, also had a performance degradation of about 10-15 percent because of cross-cache synchronization effects.
the 3081 was a dyadic (to distinguish it from a multiprocessor that could be configured as two independent uniprocessors) ... a two-processor system that was not partitionable into two independently operating uniprocessors (the two processors came in the same box and shared a lot of common components).
originally the 308x was only going to be a dyadic machine along with two 3081s configurable as a multiprocessor 3084 (i.e. a 3084 was partitionable into two independent 3081s).
The "problem" was that TPF didn't have SMP support, many/most TPF installations were operating at 100 percent cpu utilization and needed maximum sustained CPU processing power. As a result, there was eventually a 3083 uniprocessor (some components of the 2nd 3081 processor disabled and/or not present). With the elimination of the cross-cache synchronization, the single 3083 processor ran at about 15 percent higher mip rate than the individual 3081 processors.
The 158-3 raw MIP rate was approx. 1 mip ... two 158-3 processors, in either MP or AP configuration, had a raw aggregate of 1.8 mips (because of the cross-cache synchronization slow-down).
Because of additional MVS operating system SMP overhead ... the effective delivered thruput was about 1.4-1.5 times that of a uniprocessor (i.e. the combined effects of cross-cache synchronization overhead and operating system cross-machine synchronization overhead), or about equivalent to 1.4-1.5 mips.
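the arithmetic can be worked through directly (a sketch; the 10 percent figure is just the low end of the 10-15 percent range quoted above):

```python
# the 158-3 numbers worked through: ~10% per-processor cross-cache
# synchronization loss gives the 1.8 raw aggregate; the quoted 1.4-1.5
# effective figure then implies how much the MVS SMP software overhead
# costs on top of that.

uni_mips = 1.0
cache_sync_factor = 0.9                  # ~10% cross-cache sync loss
raw_aggregate = 2 * uni_mips * cache_sync_factor
print(raw_aggregate)                     # 1.8

for effective in (1.4, 1.5):             # quoted effective thruput range
    os_overhead = 1 - effective / raw_aggregate
    print(round(os_overhead, 2))         # 0.22 then 0.17
```

so the operating system's own cross-machine synchronization eats roughly another 17-22 percent of the raw aggregate.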
The early VM/370 release 3 SMP support that I installed at HONE (official VM/370 multiprocessor support didn't come out until VM/370 release 4) on a 158-3 AP actually got better than two times the thruput of a uniprocessor 158-3. This was a sleight of hand because of:

1) extremely minimal inline pathlength for SMP support
2) both i/o and all interrupts on a single processor
3) some sleight of hand that tended to keep processes with frequent i/o requests & i/o wait on the processor with channels
The processor with the i/o channels clocked in around .9 mips (i.e. the cross-cache synchronization degradation from 1 mip). The processor w/o the i/o channels clocked in at 1.2-1.5 mips because of improved cache hit effects (i/o interrupts were very detrimental to high cache hit ratios). The careful operating system pathlength implementation for SMP support kept that degradation in the couple-percent range.
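the dispatching sleight of hand in item 3 can be sketched roughly as follows ... classify a ready process by its recent i/o rate and keep i/o-heavy work on the processor with the channels; the threshold and the process-record shape are invented for illustration, not the actual VM/370 code:

```python
# sketch of the i/o-affinity dispatch heuristic: on an asymmetric AP
# only one processor has channels, so i/o-intensive work is kept there
# and compute-bound work goes to the channel-less (and interrupt-free,
# hence cache-friendly) processor.  Threshold is an assumed value.

IO_RATE_THRESHOLD = 5.0        # recent i/o requests/sec (assumed cutoff)

def pick_processor(proc):
    """Return which processor a ready process should be dispatched on."""
    if proc["recent_io_rate"] > IO_RATE_THRESHOLD:
        return "channel"       # near the channels: fast i/o initiation
    return "compute"           # no i/o interrupts: better cache hit ratio
```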
random refs:
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#92 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#62 z/Architecture I-cache
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001f.html#73 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2001n.html#86 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Thu, 21 Feb 2002 17:05:58 GMT

Nicol So writes:
there are chips rated at EAL-4 high or better and FIPS140-2 or higher.
it is also possible to buy no-security chips at lower prices (at least sometimes you get what you pay for).
the issue can be viewed from two perspectives
1) EAL-4high/fips140-2 hardware token implementing EC/DSA for authentication with PIN &/or biometric (2-factor or 3-factor) and the cost to compromise that system vis-a-vis, say, a simple password scheme and the cost to compromise that system. It doesn't mean that it is impossible to compromise either system ... the issue is whether the difference in risk outweighs the difference in expense, aka is the reduction in risk (going from simple password to hardware token) greater than the cost of going from simple password to 2/3-factor authentication.
2) does the overall system reduce risk ... i.e. in a server-oriented environment ... with no shared-secret global keys, unique keys per hardware token, eal-4high/fips140-2 chips, and pin/biometric required for token operation ... does the theft & probing of a chip cost more than the expected gain, given the probability of completing the extraction and performing any reasonable fraudulent transaction in a period less than the typical lost/stolen reporting interval.
there are various kinds of pin-entry exploits ... especially when using a dumb reader and PC keyboard entered PIN. The EU FINREAD standard (european union standard for readers used in financial transactions) addresses many of these issues.
again, it is important that overall system vulnerabilities be investigated. in some cases it is possible to mitigate excessive risk in one area by compensating procedures in another area.
random EU finread refs:
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
https://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 18:02:14 GMT

"Rupert Pigott" writes:
in the resource management area ... a lot of work was done in dynamic adaptive controls. basically the underlying infrastructure managed resource consumption goals. Layered on top of the goal-oriented resource consumption management were various policy oriented facilities ... like "fair share" policy.
two specific cases related to the adaptive nature:
1) prior to the official release of the resource manager ... the implementation was used extensively inside the company. there was also a special deal cut with AT&T longlines to provide them a copy (this was in the days when open source was standard ... before the advent of OCO ... and the current situation making a big deal of providing open source). The copy disappeared into AT&T longlines ... and nearly ten years later somebody responsible for the account tracked me down because longlines was still using it, having migrated it to newer & newer generations of machines. The remarkable thing was that the dynamic adaptive stuff appeared to have adapted not only to a wide range of workloads (pure interactive, mixed interactive & batch, pure batch, etc) ... but had also managed to adapt to evolving hardware that represented nearly two orders of magnitude increase in available resources (real storage size, cpu processing power, etc).
2) for the official release of the resource manager, an official set of carefully calibrated configuration and workload benchmarks was performed that took three months elapsed time (to verify that the dynamic adaptive characteristics were actually working). The first thousand such benchmarks were specified to cover greater than the expected operational configuration & workload space that might be found in a large diverse customer base. However, as part of the automated benchmarking methodology, effectively after the first thousand "specified" benchmarks ... some dynamic adaptive features were put into the benchmarking specification methodology to try and search for anomalous operating points (aka configuration and/or workload). the benchmarks not only validated the dynamic adaptive nature of the implementation but also the ability to implement various policy specifications (fair-share, non-fair-share, multiples/fractions of fair-share for specific processes, absolute percentage, etc) ... across a wide range of hardware configurations and workloads.
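the "search for anomalous operating points" idea can be sketched as: run a specified grid of benchmarks, then repeatedly generate new configurations near the point where measured behavior deviates most from prediction. the toy performance model, the anomaly metric, and the parameter ranges below are all invented for illustration:

```python
import random

def anomaly(config):
    """|measured - predicted| thruput for a toy (load, memory) point."""
    load, mem = config
    predicted = min(load, mem)                  # invented performance model
    # invented "bug": thruput collapses at high load with small memory
    measured = predicted * (0.5 if load > 0.7 and mem < 0.3 else 1.0)
    return abs(measured - predicted)

def adaptive_search(seed_points, rounds=50, rng=random.Random(0)):
    """Each round, probe near the most anomalous point seen so far."""
    def jitter(x):
        return min(1.0, max(0.0, x + rng.uniform(-0.05, 0.05)))
    points = list(seed_points)
    for _ in range(rounds):
        worst = max(points, key=anomaly)
        points.append((jitter(worst[0]), jitter(worst[1])))
    return max(points, key=anomaly)
```

starting from a coarse 5x5 grid over (load, memory), the search piles its probes into the anomalous high-load/low-memory corner that a fixed grid would only graze.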
random refs:
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/95.html#14 characters
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#180 The Watsons vs Bill Gates? (PC hardware design)
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000b.html#74 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001b.html#15 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#16 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#74 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#79 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing
https://www.garlic.com/~lynn/2001e.html#64 Design (Was Re: Server found behind drywall)
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#18 checking some myths.
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#28 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#55 "Fair Share" scheduling
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 18:28:39 GMT

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 20:33:44 GMT

"Rupert Pigott" writes:
policies were specifiable .... but the resource manager did dynamic adaptation based on efficiently measuring lots of things.
prior to shipping, a review of other resource manager products at the time indicated the prevalent use of large numbers of tuning knobs ... and the state of the art at the time was significant random-walk activity, twiddling performance tuning knobs this way and that. Huge numbers of reports from the period were published on the results and recommendations from these random walks.
in any case, marketing decreed that because all other resource manager products had large numbers of tuning knobs ... giving system programmers lots of job security ... this resource manager also needed tuning knobs.
ok, so a number of tuning knobs were implemented. they were fully documented, the algorithms published showing how the math worked ... and of course all source was delivered/published with the product.
ok, what is the joke?
most resource managers at the time were static operations and used (human-managed) tuning knobs to approximate dynamic adaptation to configuration and workload.
So why do you need such tuning knobs if the system is constantly dynamically adapting to actual measured configuration and workload?
Given any dynamics at all, workload variation over time (hours, days, minutes, etc) ... manually set tuning knobs will tend to be least common denominator for average observed workloads over extended periods of time.
ok, so how do you actually implement effective dynamic adaptive controls and also install tuning knobs that appear to do something based on documentation, formulas, and code inspection?
Well, in a 4-space environment with dynamic adaptive controls and extensive feedback operation ... it is possible to set degrees of freedom for different coefficients. If the dynamic adaptive feedback controls have a much greater degree of freedom than the tuning knob coefficients ... it is possible for the system to compensate for human meddling.
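a toy illustration of that compensation effect ... an integral feedback term driving the measured share toward the fair-share target simply absorbs a knob-supplied bias, so the steady state is unchanged. this is entirely an invented model (made-up gains and targets), not the actual resource manager code:

```python
# toy model of knob compensation: integral feedback drives the measured
# resource share toward the fair-share target; a tuning-knob bias enters
# the same sum, so the integrator absorbs it and the steady state is
# unchanged.  Invented model, gains and target are arbitrary.

def run(knob_bias, target=0.5, steps=200):
    correction = 0.0
    share = 0.0
    for _ in range(steps):
        # delivered share = target + human knob meddling + feedback term
        share = max(0.0, min(1.0, target + knob_bias + correction))
        correction += 0.1 * (target - share)    # integral feedback
    return share

print(round(run(knob_bias=0.0), 3))    # 0.5
print(round(run(knob_bias=0.2), 3))    # 0.5 -- the knob has been washed out
```

the feedback term has an unconstrained degree of freedom while the knob is a fixed constant, so the system always wins the tug-of-war.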
Now, since most of the resource managers of the era didn't actually implement tightly controlled resource distribution, most systems appeared to operate with lots of anomalous activity where tuning knobs had little or no observable effect. This resource manager had tightly controlled resource distribution rules ... as calibrated by extensive benchmarking tests over a wide range of workloads and configurations. However, it did share the characteristic that frequently the tuning knobs appeared to have little or no effect .... not because the actual resource allocation controls were not effective ... but because the system decided that the tuning knob values should be compensated for.
While the resource manager had a large customer install-base ... it wasn't exactly an academic hit. I used to build custom modified operating systems for internal corporate distribution (in addition to furnishing code for customer production distribution). I sometimes joke that the number of internal corporate installations that I explicitly built, distributed, and supported was frequently larger than the total customer install-base of some better-known time-sharing systems (i.e. not comparing total numbers of customer installations, but comparing the number of internal corporate installations that I personally supported against some other systems' total numbers of customer installations).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 20:38:08 GMT

"Rupert Pigott" writes:
and of course ... the fight to change how america fights ... the (then) upcoming crop of captains & majors being referred to as boyd's jedi knights.
https://www.garlic.com/~lynn/subboyd.html#boyd
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Thu, 21 Feb 2002 20:54:48 GMT

daw@mozart.cs.berkeley.edu (David Wagner) writes:
We've been working on this on and off for nearly four years. There are chips that have very impressive tamper-resistant characteristics (eal4-high evaluation). somebody made the observation to me 30 years ago that chips go to $.10/chip in quantity ... given sufficiently large quantity. also, a lot of the hardware token/smartcard cost is in the post-fab processing ... not the actual cost of the chip.
so the two places to address "en masse" costs are
1) sufficiently large quantity
2) all post-fab processing
Note that item one has a corollary ... one of the ways to achieve large quantity is to have something that represents wide-spread applicability.
There are two current ways of achieving wide-spread applicability
a) a large general-purpose design, supporting everything including the kitchen sink. This has somewhat dynamic offsetting forces, since "large general purpose" also implies more expensive ... aka the increased complexity has to be increasing market size faster than it is increasing chip cost (although note that there isn't a straight linear relationship between the two).
b) a simple (KISS) operation that addresses a wide-spread, well-defined business requirement. Unlike "a", this approach ("b") has the advantage that simplicity reduces nearly all cost areas while increasing market size by addressing a specific well-defined business requirement.
random strawman discussions:
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm2 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech9 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech10 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aepay3.htm#passwords Passwords don't work
https://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm9.htm#carnivore2 Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/99.html#170 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/2000c.html#2 Financial Stnadards Work group?
https://www.garlic.com/~lynn/2001c.html#73 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001n.html#94 Secret Key Infrastructure plug compatible with PKI
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Thu, 21 Feb 2002 21:18:22 GMTAnne & Lynn Wheeler writes:
So we are riding up the elevator in the HK "tinker-toy" bank building, and some young(er) person in the back says: are you "lynn wheeler" of the "wheeler scheduler"? So what do you say? He then says that he studied it as an undergraduate at xyz university. So what do I say? that nearly 20 years later nobody realized the joke about dynamic adaptive feedback, operating in 4-space, and degrees of freedom?
random hacmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
random scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare
somewhat related virtual memory algorithm
https://www.garlic.com/~lynn/subtopic.html#wsclock
both the original fair share and the clock stuff were done when I was an undergraduate. the wsclock stuff turned out to be an issue over ten years later when somebody was getting a stanford PhD on essentially the same work. The problem was that about the time I had done the original work as an undergraduate, there were a number of papers published on a different technique. The clock stuff I did was significantly better and got wide-spread commercial system deployment; however, as mentioned, it didn't really leak well into the academic world. There was fairly strong opposition to the stanford PhD because of the alternative approach that had been published in the '60s. Whether it was significant or not ... I did manage to dredge up some old A/B comparisons and provide them ... and the PhD was finally awarded.
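the clock approach referred to above can be sketched as follows (a minimal textbook-style illustration, not the actual implementation): a hand sweeps circularly over page frames, and a page whose reference bit is set gets a second chance before being replaced.

```python
# minimal "clock" page-replacement sketch (illustrative names only)
class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # page resident in each frame
        self.refbit = [False] * nframes  # per-frame reference bit
        self.hand = 0                    # clock hand position

    def touch(self, page):
        """simulate a reference; return True on a page fault."""
        if page in self.frames:
            self.refbit[self.frames.index(page)] = True
            return False
        # fault: sweep the hand, clearing reference bits, until an
        # unreferenced frame is found -- that frame gets replaced
        while self.refbit[self.hand]:
            self.refbit[self.hand] = False
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page
        self.refbit[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
        return True

clk = Clock(3)
faults = sum(clk.touch(p) for p in [1, 2, 3, 1, 4, 1, 5])
```

the reference string above yields six faults on three frames; the commercial variants discussed in the text layered additional policy (working-set estimation, scan-rate feedback) on top of this basic sweep.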
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Thu, 21 Feb 2002 22:14:45 GMTCharles Richmond writes:
allowing somebody directly on his tail ... he could reverse the situation in less than 40 seconds.
also 40-second boyd in "genghis john" article:
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Did Intel Bite Off More Than It Can Chew? Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 21 Feb 2002 23:18:12 GMTname99@mac.com (Maynard Handley) writes:
i seem to remember some article from the late 1800s about severe environmental pollution in NYC caused by all the horses. i got the impression that, on a per unit basis (horses vis-a-vis automobiles), a couple hundred thousand horses up & down the streets of some large city would produce significantly more environmental pollution than an equivalent number of automobiles.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Did Intel Bite Off More Than It Can Chew? Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 21 Feb 2002 23:33:20 GMTname99@mac.com (Maynard Handley) writes:
the mini-computer market got severely squeezed from lots of sides: workstations/PCs from below, some mainframes from above, and the whole issue of proprietary/non-proprietary.
the issue in the mini-computer market would have been both component hardware cost structure as well as organizational cost structures (somewhat the line about organizations expanding to fill the available ROI). as/400 in the mini-computer market seems to have done a bit of adapting, using powerpc chips to address various hardware component cost structure issues and presumably doing the various organization and system things needed to deal with changes in the ROI-profile in a much more non-proprietary and price-competitive market.
It seemed like alpha was directed at a vax solution (in somewhat similar way to as/400 with powerpc) as well as workstation and pc solutions. as/400 was significantly aided in its ability to pull off the powerpc hardware transition because it had maintained a significantly higher level application environment abstraction (than vax). This is somewhat a legacy of the s38/as400 approximating FS (future system) architecture (all the folklore about FS, after the company killed it off, continuing to survive in rochester).
recent FS related posting
https://www.garlic.com/~lynn/2002c.html#1 Gerstner moves over as planned
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: medium term future of the human race Newsgroups: comp.society.futures Date: Thu, 21 Feb 2002 23:43:45 GMTMalcolm McMahon writes:
having enough/excess resources would appear to enable more resources & time spent on non-direct-subsistence activities ... like, for instance, schooling.
so does functional multi-party democracy create abundant resources ... or does abundant resources enable functional multi-party democracies?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Opinion on smartcard security requested Newsgroups: sci.crypt Date: Fri, 22 Feb 2002 15:41:51 GMTSebastian_30@lycos.com (Sebastian) writes:
smartcards & cc
https://web.archive.org/web/20020124070419/http://csrc.nist.gov/cc/sc/sclist.htm
fips 140-1 & 140-2 validations
http://csrc.ncsl.nist.gov/cryptval/140-1/1401val.htm
infineon security controller certification
http://www.infineon.com/cmc_upload/documents/029/198/zertifizierungsstatus_0109.pdf
some philips certification
http://www.semiconductors.philips.com/news/publications/content/file_866.html
finread overview
http://www.semiconductors.philips.com/news/publications/content/file_866.html
misc. EAL evaluation (from australia)
https://web.archive.org/web/20020221213202/http://www.dsd.gov.au/infosec/aisep/EPL/ineval.html
note in the cases of hardware token products listed above ... they don't actually mention what chip is being used.
NIAP certification laboratories:
https://web.archive.org/web/20020221012004/http://www.nsa.gov/releases/cctls_08282000.html
cambridge tamper lab:
http://www.cl.cam.ac.uk/Research/Security/tamper/
dpa
http://www.cryptography.com/resources/whitepapers/DPA.html
some overview
https://web.archive.org/web/20020126132220/http://www.geocities.com/ResearchTriangle/Lab/1578/smart.htm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Opinion on smartcard security requested Newsgroups: sci.crypt Date: Fri, 22 Feb 2002 18:49:56 GMTstevei_69@hotmail.com (Steve H) writes:
the current (magstripe) payment cards are authentication/transaction devices. rather than look at the raw magstripe costs ... look at the fully-loaded costs for delivering such a magstripe card to a customer and the incremental cost of adding a chip as part of that delivery.
in effect the current expiration date is a form of "something you know" information requiring frequent card re-issue. aka there are well known algorithms for generating valid payment card account numbers ... the expiration date is in some sense a check-code.
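the well known account-number check referred to here is the Luhn mod-10 check digit (iso/iec 7812); a sketch, with a hypothetical helper showing why the check digit by itself provides no security:

```python
def luhn_valid(pan: str) -> bool:
    """ISO/IEC 7812 Luhn mod-10 check over a card account number."""
    digits = [int(d) for d in pan][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # every second digit from the right is doubled
            d *= 2
            if d > 9:
                d -= 9          # equivalent to summing the doubled value's digits
        total += d
    return total % 10 == 0

def luhn_complete(prefix15: str) -> str:
    # any 15-digit prefix can be completed to a "valid" account number,
    # which is why a check digit is an error-detection code, not a secret
    for d in "0123456789":
        if luhn_valid(prefix15 + d):
            return prefix15 + d
```

this is what makes the expiration date function as an extra (weak) check value over and above the account number itself.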
there is a claim ... that name & expiration date can be eliminated as
a payment card attribute if a chip was added (meeting EU point-of-sale
privacy issues as well as eliminating periodic need for frequent card
re-issue) and the transactions performed as in the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
If adding a chip to an existing magstripe card delivery could result in eliminating just one subsequent card re-issue (because of elimination of the expiration date as an authentication attribute) ... the incremental costs of the chip (even a higher security IC) could easily be less than the fully loaded costs of a subsequent card re-issue. aka a chip could actually save money (when you take into account the overall system and infrastructure issues).
That is independent of the issue of something needing to be done because of the increasing vulnerabilities and exploits in the existing magstripe based payment cards (aka chips reducing risk & fraud costs).
the requirement given the x9a10 working group for x9.59 was to preserve the integrity of the financial infrastructure for all electronic retail payments in all environments (i.e. stored-value, debit, credit, atm, point-of-sale, internet, aka ALL).
random risk/exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud
additional x9.59 privacy & authentication:
https://www.garlic.com/~lynn/subpubkey.html#privacy
some aads chip strawman references at:
https://www.garlic.com/~lynn/x959.html#aads
some related discussions of x9.59 (and hardware token, card, dongle,
etc) with respect to current spectrum of (online) magstripe payment
cards (including current online stored-value magstripe payment cards
usable at existing POS debit/credit terminals):
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm6.htm#digcash IP: Re: Why we don't use digital cash
https://www.garlic.com/~lynn/aadsm6.htm#terror12 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm7.htm#pcards4 FW: The end of P-Cards?
https://www.garlic.com/~lynn/aadsm7.htm#idcard2 AGAINST ID CARDS
https://www.garlic.com/~lynn/aadsm9.htm#cfppki12 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm9.htm#smallpay Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aadsmore.htm#eleccash re:The Law of Digital Cash
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Opinion on smartcard security requested Newsgroups: sci.crypt Date: Fri, 22 Feb 2002 19:20:15 GMTAnne & Lynn Wheeler writes:
adding the magstripe to the payment card moved it from an offline credential/certificate model to an online transaction model (out of the pre-'70s offline era model ... the model that PKI certificates are still being targeted at).
the issues for magstripe payment cards now are supporting non-secure/non-private online networks (existing payment card networks have been private) and the advances in technology supporting card counterfeiting.
x9.59 effectively adds a digital signature to an existing iso 8583 online transaction (w/o requiring any PKI certificate ... certificates are targeted at offline environments where there is no prior relationship between the parties; payment transactions are both online & involve a prior relationship between the consumer and their financial institution).
a hardware token performing the x9.59 digital signature operation, added to an existing iso 8583 online transaction (not only online debit & credit, but various of the stored-value flavors), would address both secure transactions flowing over non-secure (non-private) online networks and the magstripe card counterfeiting issue (a chip being significantly harder to counterfeit than the existing magstripe).
This is effectively the NACHA/Debit network trial:
https://www.garlic.com/~lynn/x959.html#aads
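the certificate-less signature-over-online-transaction pattern described above can be sketched roughly as follows (a toy schnorr-style scheme over a tiny prime group; parameters, names, and the transaction format are illustrative only and not the actual x9.59 specification): the financial institution has the consumer's public key registered with the account, and each online transaction carries a signature verified directly against that registered key.

```python
# toy sketch: registered-public-key authentication of an online
# transaction -- no certificate travels with the transaction.
# (schnorr-style scheme; p, q, g are toy sizes, NOT secure parameters.)
import hashlib, secrets

p, q, g = 1019, 509, 4           # g has prime order q in Z_p*

def h(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)       # (private key, public key on file at the account)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1   # fresh random per signature
    r = pow(g, k, p)
    e = h(r, msg)
    return e, (k + x * e) % q

def verify(y, msg, sig):
    e, s = sig
    r = pow(g, s, p) * pow(y, -e, p) % p   # g^s * y^-e recovers g^k
    return h(r, msg) == e

x, y = keygen()                  # y is registered with the account authority
txn = b"iso8583-style txn: debit acct 1234 amount 42.00"
assert verify(y, txn, sign(x, txn))      # registered key verifies the txn
```

since verification happens at the (online) account authority against the key already on file, no certificate chain is needed; a real deployment would use a standard curve and key sizes rather than these toy values.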
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: the same question was asked in sci.crypt newgroup Newsgroups: alt.technology.smartcards Date: Fri, 22 Feb 2002 21:54:51 GMT"norman" writes:
somewhat related postings here
https://www.garlic.com/~lynn/99.html#224 X9.59/AADS announcement at BAI this week
https://www.garlic.com/~lynn/99.html#229 Digital Signature on SmartCards
https://www.garlic.com/~lynn/2000.html#33 SmartCard with ECC crypto
https://www.garlic.com/~lynn/2000.html#35 SmartCard with ECC crypto
https://www.garlic.com/~lynn/2000.html#65 Cybersafe & Certicom Team in Join Venture (x9.59/aads press release at smartcard forum)
https://www.garlic.com/~lynn/2000b.html#53 Digital Certificates-Healthcare Setting
https://www.garlic.com/~lynn/2000c.html#55 Java and Multos
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2000f.html#77 Reading wireless (vicinity) smart cards
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001n.html#8 Future applications of smartcard.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: economic trade off in a pure reader system Newsgroups: sci.crypt Date: Sat, 23 Feb 2002 00:31:51 GMT"norman" writes:
also, for the standard PC market you could pay somewhat more for a USB dongle hardware token (compared to a chipcard) and eliminate the requirement for a card acceptor device (the dongle plugs directly into a usb port). This still leaves open the issue of secure PIN-entry.
for a new installation ... it is possible to get a keyboard/reader/usb combination ... where the keyboard has a numeric keypad "cut-out" mode, i.e. key entry from the keypad goes directly to the hardware token and bypasses bios/system/etc which could be prone to virus/trojan-horse eavesdropping. note this is 2-factor authentication ... where the PIN represents something you know and affects correct operation of the chip (something that is obviously not possible with a 2d bar-code ... since a 2d bar-code isn't actually executing anything).
the cards/reader approach also raises the question of high traffic activity ... where standard 7816 contacts start to have reliability issues (one of the things driving 14443 contactless).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Sat, 23 Feb 2002 17:04:38 GMTjmfbahciv writes:
this turned into a corporate report with detailed analysis of how i used language, how i communicated, and cmc (computer mediated communication) ... as well as a stanford phd thesis joint between the language department and the computer AI department. the material was also used subsequently in some number of books.
the person had taught esl (english as a second language) for 10-15 years prior to going back to school (england, australia, thailand, etc).
their observation was that i bore all the marks of esl ... even tho i was born and raised in the us, and had little non-english language exposure (except for couple years of latin, french & spanish in high school). there was some slander that i thot & spoke machine language.
random refs:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/99.html#205 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000c.html#1 A note on the culture of database
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001k.html#64 Programming in School (was: Re: Common uses...)
https://www.garlic.com/~lynn/2002b.html#51 "Have to make your bones" mentality
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Sat, 23 Feb 2002 20:43:58 GMTAnne & Lynn Wheeler writes:
besides wanting tuning knobs on the resource manager ... the other thing that marketing wanted was for the resource manager to be the test case for the first charged-for SCP product (i.e. it was an add-on to the basic system control program ... but they were going to charge for it). besides having to put in the tuning knobs ... I also got to spend six months with business, planning, & forecasting people trailblazing all the stuff associated with charging for an SCP product.
another distinction was that csc (cambridge science center, 4th floor, 545 tech sq) had been listed as a data processing division "field" location up until a couple weeks prior to the release of the resource manager. The distinction was that people in "field" locations that released products got 1/12th of the annual license fee for the first two years (as a bonus/incentive to develop charged-for products). A month before the release of the resource manager ... the vs/repack product had been released by CSC and the primary individuals responsible collected the first month's license fee for vs/repack. Between the time vs/repack was released and the time the resource manager was released a month later, csc was reclassified as a hdqtrs site (not a "field" site) and was therefore no longer eligible for the license fee incentive.
market uptake of the resource manager was such that monthly license revenue exceeded $1m within a couple months of FCS (first customer ship).
random vs/repack refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
as an aside, an early development version of vs/repack was used by CSC in helping redo the apl storage manager as part of the apl\360 to cms\apl port (aka the transition from a small 16k/32k real storage orientation to being much more virtual memory friendly).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Page size (was: VAX, M68K complex instructions) Newsgroups: comp.arch Date: Sat, 23 Feb 2002 21:10:53 GMTanton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
the basic implementation would cluster a track's worth of (4k) pages (a "big page"; 10 4k pages on 3380) for transfer to disk as a single track write. a fault on any page in a "big page" would bring in the complete "big page". An advantage over doing straight 40k pages was that the track "cluster" members didn't have to be contiguous virtual memory ... just a collection of pages from the same address space that appeared to have some recent use affinity.
the "big page" paging area allocation/deallocation was done similarly to some journaling file systems ... always writing to a new location closest to the current disk arm position (in part because the actual cluster members of any big page might change/update on any output operation). the performance recommendation was that the total available disk space for big pages be five to ten times the actually allocated big pages. That way, as the cursor allocation/write algorithm swept across the disk surface ... it could almost always do a full cylinder of writes before having to move the arm.
the implementation didn't bother with garbage collection & file compaction (as in most journaling file systems) since it was felt that most allocated data would naturally evaporate when an application eventually got around to (re)touching some member of a big page (requiring it to be read and the associated disk space deallocated).
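the moving-cursor allocation described above can be sketched as follows (illustrative names; slot granularity, geometry, and policy are greatly simplified): writes always go to the free slot nearest the cursor's sweep position, and reading a big page back in frees its slot, so space "evaporates" without a separate garbage-collection pass.

```python
# cursor-sweep allocation sketch for a "big page" paging area
class BigPageArea:
    def __init__(self, nslots):
        self.free = set(range(nslots))   # disk slots, ordered by arm position
        self.where = {}                  # big-page id -> slot
        self.cursor = 0                  # current sweep position
        self.nslots = nslots

    def write(self, bp):
        # re-writing a big page always goes to a NEW slot (its member
        # pages may have changed); the old slot is simply freed
        if bp in self.where:
            self.free.add(self.where[bp])
        # nearest free slot at-or-after the cursor, wrapping at the end
        dist = min((s - self.cursor) % self.nslots for s in self.free)
        slot = (self.cursor + dist) % self.nslots
        self.free.remove(slot)
        self.where[bp] = slot
        self.cursor = (slot + 1) % self.nslots
        return slot

    def fault_in(self, bp):
        # bringing a big page back in deallocates its disk slot
        self.free.add(self.where.pop(bp))

area = BigPageArea(10)
a, b = area.write("bp-a"), area.write("bp-b")
area.fault_in("bp-a")            # slot freed on page-in, no compaction pass
c = area.write("bp-c")
```

keeping total slots well above allocated big pages (the 5-10x recommendation above) is what lets the sweep usually find a nearby free slot without moving the arm.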
random big page postings:
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
some old postings on relative disk "system" performance
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
random 4m page past postings:
https://www.garlic.com/~lynn/2000g.html#38 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#42 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#43 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#44 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#45 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#47 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001h.html#20 physical vs. virtual addresses
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#41 mainframe question
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#34 Does it support "Journaling"?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Sat, 23 Feb 2002 22:25:58 GMTab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
cambridge had taken apl\360 and done a lot of work on it to turn it into cms\apl (as an aside, cms\apl was also licensed "charged-for" and the primary csc people working on it got their part of the first month's license fee). Besides adapting the whole structure to the virtual memory environment, CSC also implemented a lot of system call functions that allowed a bunch of stuff like external file access.
One of the early business-critical apl "modeling" applications was corporate hdqtrs business planning and forecasting. Basically corporate hdqtrs people were given online access to the csc machine in cambridge and they dumped a large part of the corporate economic infrastructure into a large apl model and munged on it extensively. This made for some very interesting security issues. The csc machine room had to have extremely tight security because of some of the data resident on the machine. However, the machine also hosted researchers at CSC doing a wide variety of development and scientific research (even if there were only 30-35 people), onsite mit, bu, & harvard student & other access, employee home terminal access, and even some off-site student access from various universities in the area.
Eventually a vm service closer to corporate hdqtrs was created for those guys ... and also cloned for other hdqtr operations (emea hdqtrs moved from white plains to paris; i hand-carried an installation tape to emea hdqtrs in the then brand-new la defense bldgs).
The whole world wide online field support system (HONE) had just about all of its features delivered on a vm/370 cms\apl (and then apl\cms) platform. The US eventually had a large multi-machine distributed cluster running in palo alto, dallas, and boulder. There were other large HONE installations that sprang up around various other places in the world: Havant in England, Uithoorne on the continent, at various times a couple different places in &/or around paris, toronto, tokyo, etc.
The system call function support in cms\apl caused a big flap with the people at the phili science center that had done apl\360 ... it violated the purity of apl. This led to a strenuous effort to develop an apl paradigm that allowed system function access w/o violating the purity of apl; aka "shared variables". The palo alto science center did much of the work for incorporating shared variables as well as doing the 370/145 apl microcode accelerator, turning cms\apl into apl\cms.
Eventually a group was formed in STL to take over the apl\cms product and enhance it so that it operated in both cms and tso ... since they then could no longer call it apl\cms ... they renamed it vs/apl.
random hone, apl, etc
https://www.garlic.com/~lynn/subtopic.html#hone
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: You think? TOM. Newsgroups: alt.technology.smartcards Date: Sat, 23 Feb 2002 21:56:05 GMT"Wim Ton" writes:
eliminating the ability for shared-secrets to originate fraudulent transactions simplifies everybody's infrastructures (controlling modification of records is simpler than preventing viewing of records or maintaining an audit trail of everybody that might have ever viewed a record).
the issue then becomes key registration .... which can be handled similarly to all the current operations for registering authentication material.
if you are talking digital signature authentication with ec/dsa using a secure chip that provides reasonable protection of the key material ... the accelerator for ec/dsa and des is effectively the same cost in that class of chips. it is only when you get into the no-security class of chips (effectively no key protection) that you might see a little cost difference between ec/dsa and des.
the primary distinction between ec/dsa and des is the requirement in dsa for a high quality random number generator (not present in a straight des implementation). however, the higher security chips have a high quality randomizer as part of other security features (which then effectively eliminates it as a unique cost differentiator between ec/dsa and des or other symmetric key algorithms).
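a small illustration of why the dsa-family requirement for a high quality random number generator matters: if the per-signature random value is ever repeated, the private key leaks. (toy schnorr-style arithmetic with a stand-in, non-cryptographic "hash"; all parameters and values here are illustrative only.)

```python
# per-signature randomness reuse in a dsa-family scheme leaks the key:
# two signatures with the same k give two linear equations in (k, x).
p, q, g = 1019, 509, 4           # toy group: g has prime order q in Z_p*

def h(r, msg):
    return (r * 31 + sum(msg)) % q   # stand-in hash, NOT cryptographic

x = 123                          # private key
k = 77                           # BAD: the "random" k is used twice
r = pow(g, k, p)
e1, s1 = h(r, b"txn one"), (k + x * h(r, b"txn one")) % q
e2, s2 = h(r, b"txn two"), (k + x * h(r, b"txn two")) % q
# attacker observes both signatures and solves
#   s1 - s2 = x * (e1 - e2)  (mod q)   for x
recovered = (s1 - s2) * pow(e1 - e2, -1, q) % q   # needs python 3.8+
assert recovered == x
```

this is why a chip doing ec/dsa needs a trustworthy hardware randomizer (and why that randomizer, once present for other security features, is no longer a unique per-algorithm cost).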
things change if you are talking about rsa signatures ... an rsa signature can be done on a no-security chip because it doesn't directly require a high quality random number generator (especially if keys are injected as opposed to generated on chip). however, rsa signature performance does typically lead to a unique accelerator ... which takes a lot of silicon and increases cost.
I would prefer a high quality hardware randomizer in support of various security features as well as random number generator supporting ec/dsa for authentication (and common accelerator for both des/symmetric and ec/dsa) ... which then also supports on-chip key generation ... and allows for key not being divulged outside the chip (compared to a no-security chip with huge silicon area in support of rsa acceleration).
then w/o compromising security (current security guidelines that require unique password/key/pin for each security domain) the same simple chip/hardware-token with the same public key can be used for authentication in multiple, different security domains.
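the basic idea ... one on-chip private key whose public counterpart can be registered in multiple security domains ... can be sketched with a toy DSA-style scheme. this is purely illustrative: the parameters are deliberately tiny and insecure, and nothing here models any actual chip; it just shows why a good random number generator is needed per signature and why the same public key works across domains.

```python
# Toy DSA-style signature sketch (deliberately tiny, INSECURE parameters)
# illustrating one on-chip keypair authenticating in multiple domains.
import hashlib
import secrets

P, Q = 607, 101              # small primes with Q dividing P-1 (606 = 6*101)
G = pow(2, (P - 1) // Q, P)  # generator of the order-Q subgroup

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key stays "on chip"
    return x, pow(G, x, P)             # public key can be registered widely

def sign(x, msg):
    while True:
        k = secrets.randbelow(Q - 1) + 1   # fresh random per signature
        r = pow(G, k, P) % Q
        s = pow(k, -1, Q) * (h(msg) + x * r) % Q
        if r and s:
            return r, s

def verify(y, msg, sig):
    r, s = sig
    if not (0 < r < Q and 0 < s < Q):
        return False
    w = pow(s, -1, Q)
    v = (pow(G, h(msg) * w % Q, P) * pow(y, r * w % Q, P) % P) % Q
    return v == r

x, y = keygen()
# the SAME public key y registered in two different security domains:
for domain in (b"bank", b"employer"):
    challenge = domain + b":challenge"
    assert verify(y, challenge, sign(x, challenge))
assert not verify(y, b"anything", (0, 0))   # malformed signature rejected
```

note the `k` in `sign` ... that per-signature random value is exactly where the high quality randomizer requirement comes from; a predictable `k` leaks the private key.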
random refs:
https://www.garlic.com/~lynn/x959.html#aads
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Did Intel Bite Off More Than It Can Chew? Newsgroups: comp.arch,alt.folklore.computers Date: Sun, 24 Feb 2002 20:11:38 GMT"Bill Todd" writes:
note that there have been somewhat similar threads in
comp.society.futures. random pieces:
https://www.garlic.com/~lynn/2002c.html#6
https://www.garlic.com/~lynn/2002c.html#20
there was a posting today claiming that various oil extraction operations are now approaching (or have crossed) negative net energy (i.e. the energy needed to extract the oil is greater than the energy available in the extracted oil); somewhat orthogonal to whether or not all such fossil resources need to be restored to standard ecological balance.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: OS Workloads : Interactive etc. Newsgroups: alt.folklore.computers Date: Mon, 25 Feb 2002 04:12:08 GMTab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: TOPS-10 logins (Was Re: HP-2000F - want to know more about it) Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Mon, 25 Feb 2002 16:59:23 GMTfox@crisp.demon.co.uk (Paul D Fox) writes:
slightly related:
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history
random information assurance/security:
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsm3.htm#cstech12 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss4 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm4.htm#01 redundant and superfluous (addenda)
https://www.garlic.com/~lynn/aadsm5.htm#epaym "e-payments" email discussion list is now "Internet-payments"
https://www.garlic.com/~lynn/aadsm5.htm#encryp Encryption article
https://www.garlic.com/~lynn/aadsm6.htm#terror3 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror10 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm10.htm#cfppki13 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aadsm10.htm#cfppki18 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/ansiepay.htm#aadsach NACHA to Test ATM Card Payments for Consumer Internet Purchases
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/aepay3.htm#riskaads AADS & RIsk Management, and Information Security Risk Management (ISRM)
https://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay4.htm#nyesig e-signatures in NY
https://www.garlic.com/~lynn/aepay4.htm#comcert3 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay6.htm#docstore ANSI X9 Electronic Standards "store"
https://www.garlic.com/~lynn/aepay6.htm#gaopki GAO: Government faces obstacles in PKI security adoption
https://www.garlic.com/~lynn/aepay6.htm#cacr7 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay6.htm#cacr7b 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay7.htm#cacr8 8th CACR Information Security Workshop (human face of privacy)
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay7.htm#orst X9.59 paper ... fyi
https://www.garlic.com/~lynn/aepay8.htm#orst2 Project Corvalllis
https://www.garlic.com/~lynn/2001c.html#61 Risk management vs security policy
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#77 Apology to Cloakware (open letter)
https://www.garlic.com/~lynn/2001f.html#31 Remove the name from credit cards!
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#0 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001h.html#45 Article: Future Trends in Information Security
https://www.garlic.com/~lynn/2001h.html#64 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#9 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#5 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#44 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/2002.html#12 A terminology question
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: economic trade off in a pure reader system Newsgroups: sci.crypt Date: Mon, 25 Feb 2002 17:44:34 GMTFrancois Grieu writes:
on the other hand, various authentication/authorization protocols operating over insecure networks have been developed for chipcards ... not only is the reader possibly remote ... but the connection between the reader and the authorization agent is non-private &/or possibly non-secure.
an obvious example is internet e-commerce transactions.
Examples of number 4 are the offline stored-value cards used in europe. there are similar stored-value cards in the us that have extensive deployment and use ... but they are online. The contrast was that (at least at one time) there were significant PTT costs &/or even questions of online connectivity being available in some regions (compared to the US). However the world is significantly changing ... with things like the internet and wireless changing the online/offline considerations in most areas of the world.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?) Newsgroups: comp.arch Date: Mon, 25 Feb 2002 18:14:19 GMTdsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
that had two problems: 1) running in 2k page mode, the machine was restricted to 32kbytes of cache and 2) on any switch between page size modes, the complete cache got flushed.
there were some number of customers that upgraded from a 168-1 to a 168-3 and actually saw a performance degradation (any benefit of the doubled cache size for 4k page mode address spaces was more than offset by the cache flushing any time a page size mode switch occurred).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Wang tower minicomputer Newsgroups: alt.folklore.computers Date: Tue, 26 Feb 2002 14:38:17 GMTBarry OGrady writes:
random ref:
https://www.garlic.com/~lynn/2002c.html#19 Did Intel Bite Off More Than It Can Chew?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?) Newsgroups: comp.arch Date: Tue, 26 Feb 2002 15:58:13 GMTgah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
the 360/67 (along with tss/360) was a major virtual memory effort circa 1966, although not a commercial success. It had 4k pages and 1mbyte segments ... and had both 24bit and 32bit virtual addressing modes.
308x follow-on to 370 introduced 24bit & 31bit virtual addressing circa 1980 (one bit less than the original 360/67 from nearly 15 years earlier).
cambridge science center had been hoping that ibm would win the multics project with the proposed virtual memory machine ... actually the 360/62. The original 360 models were 30, 40, 50, 60, & 70 (with the 360/62 being a virtual memory version of the 360/60). Prior to first customer ship ... the 60, 62, & 70 got enhanced storage (the new memory was 8 bytes wide with 750ns access; I believe the original memory was to have been 1000ns access) and were "re-named" 65, 67, and 75.
note/update
I remember reading an early document about a 360/6x machine with
virtual memory available in one, two, and four processor
configurations. I had a vague recollection that it was a model
number other than 360/67.
however, i've subsequently been told that the 360/60 was with 2mic
memory and the 360/62 was with 1mic memory. neither model ever
shipped; both were replaced with the 360/65 with 750ns memory. the
360/67 then shipped as a 360/65 with virtual memory added ... only
available in one (uniprocessor) and two processor (multiprocessor)
configurations
https://www.garlic.com/~lynn/2006m.html#50 The System/360 Model 20 Wasn't As Bad As All That
In part because of the loss of multics to ge, CSC started on the virtual machine/memory project. They had intended to modify a 360/50 with virtual memory ... but all the available 360/50s were going to the FAA ... so they modified a 360/40 with virtual memory support and developed cp/40. Later, when 360/67s became available, they ported cp/40 to the 360/67 and renamed it cp/67.
CP/67 saw a fairly successful commercial deployment in customer shops. It was also used internally for a lot of online service delivery as well as system development. An early version of OS/VS2 was built by modifying MVT to include CCWTRANS (virtual-to-real CCW/IO translation) from CP/67 (as well as other CP/67 components) and testing on 360/67s.
One of the internal online services that was built originally on cp/67
and later migrated to VM/370 was "HONE". It provided online support and
interactive tools to all the field/sales people in the world (several
tens of thousands):
https://www.garlic.com/~lynn/subtopic.html#hone
in the late '70s HONE developed a large scale cluster support
... initially deployed at a single location in the US (although by
then there were a number of HONE clones spread around the world) and
then the US HONE was expanded into a distributed cluster in three
sites (palo alto, dallas, and boulder) for disaster survivability
purposes. My wife and I used some of that experience when we
were doing the HA/CMP product:
https://www.garlic.com/~lynn/subtopic.html#hacmp
Also, modified versions of CP/67 were done which provided both 360/67 virtual machines as well as 370 virtual-memory virtual machines (i.e. CP/67 running on a 360/67 ... but providing virtual machines that conformed to the 370 virtual memory architecture, which included somewhat different page & segment table formats as well as some new instructions).
I believe that the first and earliest code that paged "page tables"
was part of the resource manager product that I released. It didn't
actually page the page tables themselves ... it paged the disk
backing-store mapping tables for a segment that had no valid
pages. random ref:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
There is the story that virtual 370 virtual memory machines were in production use a year before the first engineering model of a 370 machine with virtual memory was operational. There were two versions of CP/67 with 370 virtual memory modifications: a) a version of CP/67 that ran on the 360/67 architecture and provided 370 virtual memory architecture virtual machines and b) a version of cp/67 that ran on the 370 architecture. When the first 370 virtual memory engineering machine was ready for testing, the 370-modified CP/67 was booted on the machine as a test (this engineering machine had a knife switch for the "boot" button). The boot failed, and after some analysis it turned out that the engineers had implemented one of the new 370 instructions incorrectly. CP/67 was quickly modified to conform to the mis-implemented instruction and was rebooted & ran successfully.
more detailed description of the whole MIT CTSS, CP/40, CP/67,
Multics, Project MAC, 360/67, TSS/360, cambridge science center, etc
history can be found at:
https://www.leeandmelindavarian.com/Melinda#VMHist
as an aside, the cambridge science center was also responsible for the "internal network" (larger than the arpanet/internet until the mid-80s) as well as GML (precursor to SGML, HTML, XML, etc), a lot of the early work transitioning from performance tuning to capacity planning, and various interactive and other tools.
lots of CSC references (4th floor, 545 tech sq, same building as
project mac & multics):
https://www.garlic.com/~lynn/subtopic.html#545tech
random other refs:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/98.html#10 OS with no distinction between RAM a
https://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#13 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#127 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#142 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#177 S/360 history
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#1 Computer of the century
https://www.garlic.com/~lynn/2000.html#43 Historically important UNIX or computer things.....
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#81 Ux's good points.
https://www.garlic.com/~lynn/2000.html#82 Ux's good points.
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#53 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#2 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#21 First OS?
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001e.html#69 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2001l.html#7 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2001n.html#18 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#46 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: using >=4GB of memory on a 32-bit processor Newsgroups: comp.arch Date: Tue, 26 Feb 2002 16:21:08 GMTgah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
the 370 architecture was pure 24 bit addressing and many 3033 configurations were severely real storage constrained at 16mbytes.
turns out that the 370 page table entry (in 4k page mode) had two bits that were undefined. 3033s were built with 26 "real" address lines (for 64mbyte addressing, even tho instructions only had 24bit addressing). On the 3033, the two undefined bits in the PTE were used as extended page number bits (two additional page number bits in addition to the standard 12). The TLB was extended to support 14bit page numbers. The result was a machine that had only 24bit instruction addressing ... but 26bit real storage addressing (the input to the TLB might be a 12bit virtual page number ... but the output was a 14bit real page number).
these machines were 4k page mode only.
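the arithmetic above can be sketched as follows. the bit layout here is simplified and hypothetical (not the actual 370 PTE format); it just shows how 2 extra page-number bits turn 24bit real addressing into 26bit.

```python
# Sketch of the 3033 "extended real addressing" idea: 12 architected
# page-frame bits plus the 2 previously-undefined PTE bits give a
# 14-bit real frame number, i.e. 2^14 frames * 4KB = 64MB of real
# storage, while instruction addresses stay 24-bit.  The bit layout
# is simplified/hypothetical, not the actual 370 PTE format.

PAGE_SHIFT = 12          # 4k pages

def real_address(frame_lo12: int, frame_hi2: int, offset: int) -> int:
    assert 0 <= frame_lo12 < (1 << 12)   # standard PTE page-frame bits
    assert 0 <= frame_hi2 < (1 << 2)     # the two "undefined" PTE bits
    assert 0 <= offset < (1 << PAGE_SHIFT)
    frame = (frame_hi2 << 12) | frame_lo12   # 14-bit real frame number
    return (frame << PAGE_SHIFT) | offset    # 26-bit real address

# largest addressable real byte: 64mbytes - 1
assert real_address(0xFFF, 0b11, 0xFFF) == (1 << 26) - 1
# with the extended bits zero, behavior is plain 24-bit real addressing
assert real_address(0xFFF, 0, 0xFFF) == (1 << 24) - 1
```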
now this is analogous to, but different from, various publications describing ROMP (pc/rt) as having 40bit addressing and POWER (rs/6000) as having 52bit addressing.
801 ROMP had 16 segment registers and "inverted" page tables. The top four bits of a 32bit address were used to select one of the 16 segment registers and the remaining 28bits addressed within a 256mbyte segment. A segment register contained a 12bit "segment id" which was used as part of the TLB lookup. At any one time, an 801 ROMP could have up to 4096 uniquely defined segments. These are somewhat analogous to the number of different, uniquely defined virtual address spaces in some other architectures.
Various documents describe the 28bit addressing (within a 256mbyte segment) plus the 12bit segment identifier as yielding a machine that supported 40bit addressing.
Later 801 RIOS documents extended that description to 52bit addressing because the 801 RIOS segment registers supported 24bit segment identifiers (instead of 12bit).
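the 40bit (and 52bit) figures above are just the concatenation of segment id and segment offset; a minimal sketch (segment register contents here are made-up example values):

```python
# Sketch of how the "40-bit addressing" number for 801 ROMP arises:
# the top 4 bits of a 32-bit address select one of 16 segment
# registers, each holding a 12-bit segment id; segment id (12 bits)
# + segment offset (28 bits) = 40 bits.  With RIOS's 24-bit segment
# ids the same arithmetic gives 24 + 28 = 52 bits.

SEGID_BITS_ROMP = 12

def extended_address(seg_regs, vaddr32, segid_bits=SEGID_BITS_ROMP):
    assert len(seg_regs) == 16
    sr = (vaddr32 >> 28) & 0xF            # which segment register
    segid = seg_regs[sr] & ((1 << segid_bits) - 1)
    offset = vaddr32 & 0x0FFFFFFF         # 28-bit offset within segment
    return (segid << 28) | offset         # 40-bit (ROMP) / 52-bit (RIOS)

regs = list(range(16))                    # hypothetical segment ids 0..15
ea = extended_address(regs, 0x20000004)   # segment register 2, offset 4
assert ea == (2 << 28) | 4
assert extended_address(regs, 0xFFFFFFFF).bit_length() <= 40
```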
However, the actual analogy is really more akin to the number of simultaneous, unique virtual address spaces. For a historical 370 comparison with segment & page tables, the 370/168 had a 7-entry STO (segment table origin, basically unique address space pointer) stack. Whenever the current address space pointer register was changed, the STO stack was checked for a matching value. If no match was found, an entry was scavenged ... and all TLB entries with the corresponding 3bit ID were invalidated. The ROMP 12bit segment id and the RIOS 24bit segment id are analogous to the 370/168 3bit STO stack identifier. The difference between 370 and ROMP/RIOS was that when an address space was changed, instead of changing an address space pointer register (as in 370), all 16 segment registers (normally) needed to be changed.
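the 370/168 STO-stack behavior described above can be modeled as a small cache of address-space tags. the replacement policy and structure below are simplified guesses for illustration, not actual 168 microcode.

```python
# Toy model of a 370/168-style 7-entry STO stack: each address-space
# pointer (STO) gets a 3-bit TLB tag; on a miss an entry is scavenged
# and the TLB entries carrying its tag must be invalidated.

class STOStack:
    def __init__(self, size=7):
        self.size = size
        self.slots = {}        # sto value -> 3-bit TLB tag
        self.order = []        # least-recently-used first
        self.invalidations = 0

    def switch_to(self, sto):
        """Return the TLB tag for this address space."""
        if sto in self.slots:                  # hit: reuse existing tag
            self.order.remove(sto)
            self.order.append(sto)
            return self.slots[sto]
        if len(self.slots) >= self.size:       # miss: scavenge a victim
            victim = self.order.pop(0)
            tag = self.slots.pop(victim)
            self.invalidations += 1            # flush TLB entries w/ tag
        else:
            tag = len(self.slots)              # hand out an unused tag
        self.slots[sto] = tag
        self.order.append(sto)
        return tag

stack = STOStack()
tags = [stack.switch_to(sto) for sto in range(7)]   # fills all 7 slots
assert stack.invalidations == 0
assert stack.switch_to(0) == tags[0]                # hit: no scavenge
stack.switch_to(99)                                 # 8th space: scavenge
assert stack.invalidations == 1
```

with only 7 slots, a workload cycling through more than 7 address spaces thrashes the stack (and the TLB); the ROMP/RIOS segment-id approach makes the equivalent tag space 4096 (or 16m) entries wide.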
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Beginning of the end for SNA? Newsgroups: bit.listserv.ibm-main Date: Tue, 26 Feb 2002 16:33:21 GMTJMckown@HEALTHAXIS.COM (McKown, John) writes:
some claim that the original pu4/pu5, ncp/sscp design for 3705/vtam was a response to the building of the original 360 pcm telecommunications controller that a couple of us did when I was an undergraduate.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Beginning of the end for SNA? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 26 Feb 2002 19:25:26 GMTMike.O'Neill@53.COM (O'Neill, Mike) writes:
when I made the following presentation to the SNA ARB in raleigh, oct
'86
https://www.garlic.com/~lynn/99.html#66 System/1
https://www.garlic.com/~lynn/99.html#67 System/1
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1?)
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
during the presentation the executive responsible for ncp asked how so few people could have done all that work (basically a superset of both pu4 & pu5 with peer-to-peer networking support implemented on series/1, with sna emulation only at the necessary boundary nodes) when Raleigh had so many people supporting NCP (somewhere between ten and a hundred times more).
one of the issues appeared to be that the core NCP kernel was only about 6000 lines of uc.5 code; as a result the drivers and other features had to implement a large amount of their own ROI services (instead of being able to rely on common services provided by the kernel).
there was subsequent agitated reaction ... though possibly not as
much as to the original project implementing the first 360 pcm
controller (when I was an undergraduate)
https://www.garlic.com/~lynn/submain.html#360pcm
in any case, besides the base uc.5 engine needing emulation, there would be extensive line-scanner and other i/o hardware peculiar to the 37xx boxes.
a significantly better base would be the above-mentioned project in which we were looking at doing a s/1 to 801/rios port.
random other refs:
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/2000.html#50 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2000b.html#78 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#48 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#51 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#58 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001.html#10 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001.html#72 California DMV
https://www.garlic.com/~lynn/2001b.html#49 PC Keyboard Relics
https://www.garlic.com/~lynn/2001b.html#63 Java as a first programming language for cs students
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#38 Flash and Content address memory
https://www.garlic.com/~lynn/2001e.html#8 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#55 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#21 checking some myths.
https://www.garlic.com/~lynn/2001h.html#56 Blinkenlights
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001k.html#42 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#23 mainframe question
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#53 A request for historical information for a computer education project
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#41 Beginning of the end for SNA?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Beginning of the end for SNA? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 26 Feb 2002 19:29:42 GMTedjaffe@PHOENIXSOFTWARE.COM (Edward E. Jaffe) writes:
the position was that SNA and APPN were in absolutely no way related.
note, however, that by most traditional standards SNA is a telecommunications control protocol ... totally lacking a "network" layer. One of the ways APPN violated basic SNA architecture was that APPN at least had a real network layer.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: cp/67 (cross-post warning) Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 26 Feb 2002 22:01:24 GMTprune@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
some previous postings:
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
In the following, by the time I graduated and joined CSC, Creasy had transferred to the Palo Alto Science Center (where he was manager of various projects including apl\cms & the 145 microcode assist), Comeau had transferred to g'burg (in FS, he was in charge of advanced I/O and system interconnect, and my future wife reported to him ... this was before she went to pok to be in charge of loosely coupled architecture; later he retired & returned to boston where he was the "C" in CLaM aka C, L, & M; who we subcontracted a lot of HA/CMP development to), and Bayles had left to be one of the founders of NCSS (a cp/67 service bureau in stamford, conn). That actually happened the summer of '68, the friday before a one-week cp/67 class that IBM hosted in an IBM office in Hollywood; instead of attending the class I got roped into teaching some amount of it ... somewhat as a Bayles backfill. Bayles and a couple of others had visited the univ. the last week of jan. '68 to do a cp/67 install, and it was turned over to me as a hobby.
melinda's paper
https://www.leeandmelindavarian.com/Melinda#VMHist
various things extracts:
CP-40 and CMS
In the Fall of 1964, the folks in Cambridge suddenly found themselves
in the position of having to cast about for something to do next. A
few months earlier, before Project MAC was lost to GE, they had been
expecting to be in the center of IBM's time-sharing activities. Now,
inside IBM, ''time-sharing'' meant TSS, and that was being developed
in New York State. However, Rasmussen was very dubious about the
prospects for TSS and knew that IBM must have a credible time-sharing
system for the S/360. He decided to go ahead with his plan to build a
time-sharing system, with Bob Creasy leading what became known as the
CP-40 Project. The official objectives of the CP-40 Project were the
following:
1. The development of means for obtaining data on the
operational characteristics of both systems and application programs;
2. The analysis of this data with a view toward more efficient machine
structures and programming techniques, particularly for use in
interactive systems;
3. The provision of a multiple-console computer
system for the Center's computing requirements; and
4. The investigation of the use of associative memories in the control
of multi-user systems.
The project's real purpose was to build a time-sharing system, but the
other objectives were genuine, too, and they were always emphasized in
order to disguise the project's ''counter-strategic'' aspects.
Rasmussen consistently portrayed CP-40 as a research project to ''help
the troops in Poughkeepsie'' by studying the behavior of programs and
systems in a virtual memory environment. In fact, for some members of
the CP-40 team, this was the most interesting part of the project,
because they were concerned about the unknowns in the path IBM was
taking. TSS was to be a virtual memory system, but not much was really
known about virtual memory systems. Les Comeau has written: Since the
early time-sharing experiments used base and limit registers for
relocation, they had to roll in and roll out entire programs when
switching users....Virtual memory, with its paging technique, was
expected to reduce significantly the time spent waiting for an
exchange of user programs.
...
Creasy and Comeau were soon joined on the CP-40 Project by Dick
Bayles, from the MIT Computation Center, and Bob Adair, from
MITRE. Together, they began implementing the CP-40 Control Program,
which sounds familiar to anyone familiar with today's CP. Although
there were a fixed number (14) of virtual machines with a fixed
virtual memory size (256K), the Control Program managed and isolated
those virtual machines in much the way it does today. The Control
Program partitioned the real disks into minidisks and controlled
virtual machine access to the disks by doing CCW translation. Unit
record I/O was handled in a spool-like fashion. Familiar CP console
functions were also provided.
This system could have been implemented on a 360/67, had there been
one available, but the Blaauw Box wasn't really a measurement
tool. Even before the design for CP-40 was hit upon, Les Comeau had
been thinking about a design for an address translator that would give
them the information they needed for the sort of research they were
planning. He was intrigued by what he had read about the associative
memories that had been built by Rex Seeber and Bruce Lindquist in
Poughkeepsie, so he went to see Seeber with his design for the
''Cambridge Address Translator'' (the ''CAT Box''), which was based
on the use of associative memory and had ''lots of bits'' for
recording various states of the paging system. Seeber liked the idea,
so Rasmussen found the money to pay for the transistors and engineers
and microcoders that were needed, and Seeber and Lindquist implemented
Comeau's translator on a S/360 Model 40.
Comeau has written:
Virtual memory on the 360/40 was achieved by placing a 64-word
associative array between the CPU address generation circuits and the
memory addressing logic. The array was activated via mode-switch logic
in the PSW and was turned off whenever a hardware interrupt occurred.
The 64 words were designed to give us a relocate mechanism for each 4K
bytes of our 256K-byte memory. Relocation was achieved by loading a
user number into the search argument register of the associative
array, turning on relocate mode, and presenting a CPU address. The
match with user number and address would result in a word selected in
the associative array. The position of the word (0-63) would yield the
high-order 6 bits of a memory address. Because of a rather loose cycle
time, this was accomplished on the 360/40 with no degradation of the
overall memory cycle.
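Comeau's description maps onto a few lines of code. A toy model (hypothetical python; the real thing was transistors and microcode) of the associative lookup, where the position of the matching word supplies the high-order 6 bits of the real address:

```python
# Toy model of the Cambridge Address Translator ("CAT box"):
# a 64-word associative array matches on (user number, virtual 4K page);
# the POSITION of the matching word yields the frame number, i.e. the
# high-order 6 bits of the 256K-byte memory address (64 frames x 4K).

PAGE = 4096

class CATBox:
    def __init__(self):
        # one word per 4K frame of the 256K memory; each holds
        # (user, virtual_page) or None if unassigned
        self.array = [None] * 64

    def load(self, user, vpage, frame):
        self.array[frame] = (user, vpage)

    def translate(self, user, addr):
        # associative search: present user number + virtual page;
        # the matching word's index is the real frame number
        vpage, offset = divmod(addr, PAGE)
        for frame, entry in enumerate(self.array):
            if entry == (user, vpage):
                return frame * PAGE + offset
        raise LookupError("paging exception")  # no match -> page fault

cat = CATBox()
cat.load(user=7, vpage=3, frame=42)
print(hex(cat.translate(7, 3 * PAGE + 0x10)))  # frame 42, offset 0x10
```

The hardware did the 64-way match in parallel in one memory cycle; the sequential loop here is only for illustration.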
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: cp/67 addenda (cross-post warning) Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 26 Feb 2002 23:20:36 GMTprune@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
I had started doing "hand built" os/360 stage-II sysgens with careful disk location placement of data as well as other optimizations ... that got about a three times thruput speed-up in various workloads at the university.
After CP/67 was installed at the university in jan. '68 ... i also did a lot of cp/67 rewrite ... the initial version of the goal-oriented scheduler with fair-share policy support, redo of the page replacement algorithm, the initial idea of wsclock workingset-like algorithm, fastpath invention, and a bunch of other pathlength reductions ... for both virtual os/360 as well as cms online intensive environments.
part of a presentation that I made at the fall '68 SHARE meeting,
on both MFT14 enhancements as well as CP/67 enhancements:
https://www.garlic.com/~lynn/94.html#18
Between the above presentation and the time I graduated and joined CSC, I had also done extensive additional pathlength reduction rewrites of critical components as well as early version of "paging" portions of the CP kernel (many of the changes were released as part of the standard CP/67 product, but the kernel paging changes didn't get out until vm/370).
I had also done cp/67 TTY/ascii support (which ibm released) with a
peculiar feature (one byte arithmetic for calculating incoming line
length ... which worked until somebody modified the code to support
devices that could have 400-500 byte "input") ... see cp/67 story at:
https://www.multicians.org/thvv/
also
https://www.garlic.com/~lynn/2002b.html#62 TOPS-10 logins ...
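A minimal sketch of the one-byte length arithmetic (hypothetical python standing in for the actual 360 code): correct for any terminal line that fits in a byte, silently wrong once a device can present 400-500 byte input:

```python
# One-byte (modulo-256) arithmetic for incoming line length: fine
# while terminal lines fit in a byte, but the length wraps once
# devices can send 400-500 byte "input".

def one_byte_len(n):
    return n & 0xFF   # length computed in a single byte

print(one_byte_len(80))    # 80  -- typical TTY line, correct
print(one_byte_len(500))   # 244 -- 500 mod 256, garbled length
```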
Doing tty/ascii support, I found a design feature of the 360
telecommunication controller that in part led to a project at the
university to build the first non-ibm controller (and we got blamed
for originating the 360 pcm controller business):
https://www.garlic.com/~lynn/submain.html#360pcm
A lot of the performance stuff was:
1) extensive data gathering of cp/67 performance operation ... typically every five minutes every possible system and process specific counter was dumped to tape (and there were a lot of counters). CSC had archived that data since the time CP/67 first booted, so by the mid-70s there was nearly ten years' worth of production performance monitoring data across all the system and hardware changes ... as well as the switchover to vm/370 running on 370.
2) csc had done the port of apl\360 to cms\apl (opening up workspace
restriction from 32kbytes to 16mbytes, as well as adding functions to
do system calls so it was possible to do things like read/write
external files). There was then a lot of performance modeling and
analysis programs written in APL. This was used to analyse the
extensive performance history information. A lot of this led to the
early work in the paradigm change from performance tuning to capacity
planning. The performance predictor, an APL-based model, was then
also made available on HONE as a sales tool (i.e. sales people could
characterize their current hardware configuration and workload, and
then ask what-if questions about changes to configuration and/or
workload). This wasn't so much the result of doing extensive
monitoring of an operating system in a virtual machine, as of the
extensive performance history of the production CP/67 and VM/370
operation.
https://www.garlic.com/~lynn/subtopic.html#hone
3) In the mid-70s, I got to do the "resource manager"
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#12 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
as part of that we combined some stuff done in #2 with some automated benchmarking stuff, workload profiling, and artificial workload generators that we had developed, and set out to validate the resource manager operation across a wide range of configurations and workloads. Something over 2000 benchmarks were eventually run, taking three months elapsed time. Basically a configuration "envelope" and workload "envelopes" were defined. The first 1000 benchmarks or so were somewhat manually defined configurations and workloads that were pretty uniformly distributed across the envelopes, with some specific "outliers" ... aka workloads five to ten times heavier than seen in normal operation. After the first thousand or so benchmarks ... we started using an APL model to analyse the benchmarks done up to that point and to define new benchmark workloads &/or configurations that it thought should be run (in part attempting to look for discontinuities or other anomalous conditions).
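The model-driven benchmark selection can be illustrated with a toy heuristic (hypothetical python; the actual analysis was an APL model, and this farthest-point rule is only a stand-in for it): pick the next (configuration, workload) point whose nearest already-run benchmark is farthest away:

```python
# Sketch of the "what to benchmark next" idea: after a first batch of
# (configuration, workload) points spread over the envelope, pick the
# next trial in the least-explored region -- here, the candidate whose
# nearest already-run point is farthest away. Purely illustrative;
# the real APL model also hunted for discontinuities and anomalies.
import itertools, math

def next_benchmark(done, candidates):
    def nearest(p):
        return min(math.dist(p, q) for q in done)
    return max(candidates, key=nearest)

done = [(0.2, 0.3), (0.8, 0.7), (0.5, 0.5)]   # normalized (users, io-intensity)
grid = [(x / 10, y / 10) for x, y in itertools.product(range(11), repeat=2)]
print(next_benchmark(done, grid))   # corner farthest from all runs so far
```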
4) The VS/repack product grew out of a CSC research project to see if
it was possible to improve application virtual memory
operation. Basically it started as a full instruction trace that was
typically run in a cp/67 virtual machine (basically all i-fetch,
storage-fetch and storage-stores). One of the first things it was used
for was in the apl\360 to cms\apl port. apl\360 had a small
real-storage work space orientation with a storage allocation
algorithm that always assigned the next available storage location
(starting low) for every assign statement (even ones involving
previously assigned variables). Eventually the algorithm would reach
the end of "memory" and then do garbage collection, moving/compacting
all allocated storage at lowest memory address. This wasn't bad in a
32kbyte real storage workspace operation ... but it was guaranteed to
touch every virtual page in a 16mbyte virtual address space
... whether it needed to or not. This garbage collection scheme was
completely rewritten to be virtual memory friendly ... and vs/repack
was used to do detailed analysis of the changes. The "official" vs/repack
product was used to do detailed traces of example application
execution and then was provided a "module" map of the individual
modules making up the application. vs/repack would then do various
kinds of cluster analysis to come up with optimum module order/packing
for application execution in a virtual memory (aka minimum avg working
set, minimum page faults for given storage, etc). random vs/repack
refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
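As a rough illustration of the repacking idea (hypothetical python; vs/repack's actual cluster analysis was considerably more sophisticated than this greedy affinity ordering):

```python
# Hypothetical sketch of the vs/repack idea: given traces of which
# modules an application touches together, greedily place modules with
# strong co-use affinity next to each other, so execution touches
# fewer virtual 4K pages (smaller average working set).
from collections import Counter
import itertools

PAGE = 4096

def repack(sizes, trace_windows):
    # affinity = how often two modules appear in the same trace window
    affinity = Counter()
    for window in trace_windows:
        for a, b in itertools.combinations(sorted(set(window)), 2):
            affinity[(a, b)] += 1
    order, remaining = [], set(sizes)
    cur = max(sorted(remaining), key=lambda m: sizes[m])  # seed with largest
    while remaining:
        order.append(cur)
        remaining.discard(cur)
        if not remaining:
            break
        # next module: strongest affinity with everything already placed
        cur = max(sorted(remaining), key=lambda m: sum(
            affinity[tuple(sorted((m, p)))] for p in order))
    # lay the order down and note which 4K page each module starts in
    pages, offset = {}, 0
    for m in order:
        pages[m] = offset // PAGE
        offset += sizes[m]
    return pages

sizes = {"A": 3000, "B": 1000, "C": 2000, "D": 2000}
windows = [["A", "B"], ["A", "B"], ["C", "D"]]
print(repack(sizes, windows))   # A and B (used together) share page 0
```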
in support of the vs/repack operation, i developed a modification to the standard kernel virtual memory support. This allowed a user to set an artificial limit on the number of simultaneously "valid" virtual pages. The application would start at zero, & page fault up to the limit. When the limit was reached, the virtual page numbers of valid pages were dumped to an analysis file, all the pages were invalidated (but left in memory), and the application restarted. It would then restart the page fault sequence. It turned out that the sequence of virtual page number sets, when the "max" virtual pages was reasonably chosen, provided effectively the same quality of information to the vs/repack process (as a full instruction trace) at significantly reduced overhead. This kernel mod. we used extensively, but it wasn't made available to customers as part of the vs/repack product (they had to rely on the full instruction trace method).
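The sampling scheme, in toy form (hypothetical python; the real modification lived in the cp/67 kernel and manipulated the hardware valid bits):

```python
# Sketch of the reduced-overhead trace: cap the number of
# simultaneously valid pages; on reaching the cap, record the set of
# valid page numbers, invalidate everything (pages stay resident), and
# let the program fault them back in. The sequence of page-number sets
# stands in for a full instruction trace.

def sample_page_sets(reference_string, max_valid):
    valid, samples = set(), []
    for page in reference_string:
        if page not in valid:                  # "page fault"
            if len(valid) >= max_valid:
                samples.append(sorted(valid))  # dump set to analysis file
                valid.clear()                  # invalidate, leave in memory
            valid.add(page)
    if valid:
        samples.append(sorted(valid))
    return samples

refs = [1, 2, 1, 3, 2, 4, 5, 4, 6, 1]
print(sample_page_sets(refs, max_valid=3))  # [[1, 2, 3], [4, 5, 6], [1]]
```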
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: cp/67 addenda (cross-post warning) Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 26 Feb 2002 23:35:20 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Moving big, heavy computers (was Re: Younger recruits versus experienced ve Newsgroups: alt.folklore.computers Date: Wed, 27 Feb 2002 21:03:58 GMT"Charlie Gibbs" writes:
There were times on the weekend when things were powered off for various reasons and, when you went to do power-on, the sequence wouldn't complete. Rather than call in field maintenance, first go around and put everything you could into CE mode and then hit the front panel power-on button ... which would typically bring the processor up.
Then go around to each individual unit, hit its power-on button ... and then take that unit out of CE mode and go to the next unit (until everything had been manually sequenced). For some installations this could be dozens of units (although the probability of getting dedicated weekend time on a really large configuration diminished toward zero).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Swapper was Re: History of Login Names Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers Date: Wed, 27 Feb 2002 21:36:29 GMTJ Ahlstrom writes:
some demand paging systems had various kinds of optimized block page out/in. many of them weren't referred to as swappers because of the earlier definition/use of the term. The block page out/in was coupled into the standard demand paging in various ways ... aka for entities that weren't members of a standard block page in/out set.
block page out/in implementations may or may not have also implemented contiguous allocation for all members of a specific page group/set.
There are couple issues:
w/o contiguous allocation,
1) a block page out/in still reduces latency and
2) also tends to throw a group of pages at the page device driver, which can be organized for optimal device operation (as opposed to treating the requests as random, one at a time).
contiguous allocation
can further improve block page out/in I/O efficiency over non-contiguous operation.
========================================================
"big pages" was an attempt to maximize both. For page out operation, clusters of pages were grouped in full track units, where members of a track cluster tended to be pages in use together (not contiguous or sequential) ... somewhat related to members of working set. A suspended process could have all of its resident pages re-arranged into multiple track clusters and all queued simultaneously for write operation. When a task was re-activated, fetch requests for some subset of the pages previously paged out was queued (instead of waiting for individual demand page faults). Subsequent demand page faults would not only fetch the specific page, but all pages in the same track cluster.
A tricky part was when real storage was fully commuted and a demand page fault occurred, there was a trade-off decision regarding attempting to build a single "big page" (track cluster) on the fly, or to select individual pages for page out. If individual pages are selected for replacement, then there becomes two classes of pages on secondary storage, singlet pages, and track cluster pages (which potentially also needs different allocations strategy).
Other optimization issues:
simultaneous scheduling of write I/O for all track clusters on task suspension or placing them on a pending queue and only performing the writes as required
dynamic allocation of disk location of a track cluster at the moment of the write operation to the first available place closest to the current disk arm location
simultaneous scheduling of read I/O for all track clusters on task activation, only demand page fault members of track clusters (a demand page fault of any member of a track cluster is the equivalent of a demand page fault for all members of the same track cluster), or some hybrid of the two.
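A minimal sketch of the track-cluster mechanics (hypothetical python; the 4-pages-per-track geometry and the names here are assumptions, not the actual implementation):

```python
# Sketch of "big pages": on suspension, group a task's resident pages
# into full-track clusters of pages used together, one write per
# cluster; a later demand fault on ANY member fetches the whole
# cluster. Cluster size of 4 pages/track is an assumed geometry.

TRACK = 4  # pages per track cluster (hypothetical)

def make_clusters(resident_pages):
    # resident_pages ordered by recency of use, so cluster members
    # tend to be pages in use together (working-set-like grouping)
    return [resident_pages[i:i + TRACK]
            for i in range(0, len(resident_pages), TRACK)]

def fault(page, clusters):
    # a demand fault on one member brings in the whole track cluster
    for cluster in clusters:
        if page in cluster:
            return cluster
    return [page]  # singlet page, not part of any cluster

clusters = make_clusters([9, 2, 14, 7, 3, 11])
print(clusters)            # [[9, 2, 14, 7], [3, 11]]
print(fault(3, clusters))  # [3, 11] -- fault on 3 also fetches 11
```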
misc. big page refs:
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Swapper was Re: History of Login Names Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers Date: Thu, 28 Feb 2002 02:20:51 GMTJ Ahlstrom writes:
Later at the Cambridge Science Center, I made several additional enhancements.
In the early '70s, the grenoble science center took essentially the same cp/67 kernel and did a "straight" working set implementation ... very close to the '68 acm paper. grenoble published an acm paper on their effort (cacm 16, apr73). The grenoble & cambridge machines, workload mix, and configurations were similar except:
the grenoble 67 was 1m machine (154 4k pageable pages after fixed kernel requirements)
the cambridge 67 was 768k machine (104 4k pageable pages after fixed kernel requirements)
the grenoble had 30-35 users
the cambridge had a similar workload mix but twice the number of users, 70-75 (except there was probably somewhat more cms\apl use on the cambridge machine ... making the avg. of the various kinds of transaction/workload types somewhat more processor intensive).
both machines provided subsecond response for the 90th percentile of trivial interactive transactions ... however, cambridge response was slightly better than the grenoble response (even with twice the users).
The differences early '70s:
                grenoble        cambridge
machine         360/67          360/67
# users         30-35           70-75
real store      1mbyte          768k
p'pages         154 4k          104 4k
replacement     local LRU       "clock" global LRU
thrashing       working-set     dynamic adaptive
priority        cpu aging       dynamic adaptive

misc. refs
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Swapper was Re: History of Login Names Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers Date: Thu, 28 Feb 2002 06:29:27 GMT"John Keiser" writes:
In cp/67 it was possible to fix/pin/lock virtual memory pages in real storage. I used the lock command to lock specific virtual pages of an idle process ... leaving a specific amount of pageable, unlocked pages available for other tasks. I then ran a large number of different tasks.
I included a simple example of that in a presentation I made at the SHARE user group meeting in fall '68.
I also used the technique when evaluating different paging techniques that I was developing, as well as modifications/improvements to the code executing in the virtual address space.
much of the presentation was previously posted
https://www.garlic.com/~lynn/94.html#18
MODIFIED CP/67

OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.

CORE (pages)    OS with Hasp        OS w/o HASP
104             1.35 (435 sec)
 94             1.37 (445 sec)
 74             1.38 (450 sec)      1.49 (480 sec)
 64             1.89 (610 sec)      1.49 (480 sec)
 54             2.32 (750 sec)      1.81 (585 sec)
 44             4.53 (1450 sec)     1.96 (630 sec)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: cp/67 addenda (cross-post warning) Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 28 Feb 2002 17:09:35 GMTjcmorris@mitre.org (Joe Morris) writes:
Since I was putting job cards on all the exec steps, i started re-ordering the steps to help optimize disk position (i.e. you couldn't specify disk position, but since there was sequential allocation ... you could affect placement by the execution ordering).
then i re-arranged the iehmove/iebcopy statements for the most obvious 100 or so members of sys1.linklib & sys1.svclib. When I started this, stage-ii was about a box (2000) of physical cards and we would send them off to be "interpreted" ... i.e. read the holes and print the characters. A standard keypunch would print one character per column at the top of the card (80 punched cols., one line of 80 chars). The interpreter (this had an IBM machine model number, but I can't remember it) printed about one character per two columns, resulting in two lines of text printed at the top of the card (first 40 chars on the top line). After I got CP/67 in jan68, I would run stage-1 under cp/67 and "punch" the stage-ii deck to a CMS virtual machine and use the CMS editor to munge around with the virtual cards.
after my first SHARE presentation on results of production system sysgens, somebody got me a little thing from pok that traced every load and produced a report giving counts of things loaded. i used that to further refine the ordering (the application was possibly what was used for articles in installation newsletters, although without specific customer workload traces, such recommendations would tend to be very generic).
I had also complained about not being able to put the vtoc in the middle of the pack and radiate allocation out from the center. In release 15/16 (normally you got nice sequential release numbers every six months or so ... except for things like release 9.5 ... aka 9 was so bad that it needed to be immediately fixed, and release 15/16, aka 15 was so bad/late that it was combined with 16) you got the option to specify vtoc cylinder location as part of device format (you still needed stand-alone time to format a 2314 pack for the new system generation).
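A back-of-the-envelope illustration (hypothetical python) of why mid-pack vtoc placement matters: the arm keeps seeking back to the vtoc, and the average |cylinder - vtoc| distance is roughly halved by centering it:

```python
# Why put the VTOC mid-pack and radiate allocation outward: the arm
# keeps returning to the VTOC, so average seek distance is roughly the
# mean |cylinder - vtoc| over allocated cylinders -- minimized when
# the VTOC sits in the middle of the allocated band. (A 200-cylinder
# pack is an assumed, illustrative geometry.)

def avg_seek_from_vtoc(vtoc_cyl, allocated):
    return sum(abs(c - vtoc_cyl) for c in allocated) / len(allocated)

allocated = list(range(200))               # fully allocated 200-cyl pack
print(avg_seek_from_vtoc(0, allocated))    # 99.5 -- VTOC at the edge
print(avg_seek_from_vtoc(100, allocated))  # 50.0 -- VTOC mid-pack
```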
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Swapper was Re: History of Login Names Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers Date: Thu, 28 Feb 2002 17:40:18 GMT"John Keiser" writes:
when i was rewriting much of the cp/67 code in the '60s, I believed that everything needed to be dynamic adaptive, and that you didn't do anything that you didn't need to do. With things like process suspension (like an interactive task waiting for an event, or a scheduling decision under contention for real memory), if dynamic adaptive indicated high enuf real storage contention, the suspension code would gather all the task's pages and queue them for (effectively) block page out ... but possibly wouldn't actually start the writes (unless real storage contention was at a higher level, since there would likely be some probability of reclaim). If dynamic adaptive indicated a much lower level of real storage contention, it would do even less ... so there was a very high probability of re-use/reclaim.
In the very late '70s (probably '79), somebody from the MVS group (that had just gotten a big award) contacted me about the advisability of changing VM/370 to correspond to his change to MVS; which was: at task suspension, don't actually write out all pages ... but queue them on the possibility that the writes wouldn't actually have to be done, because the pages could be reclaimed. My reply was that I never could figure out why anybody would do it any other way ... and that was the way that I had always done it since I first became involved w/computers as an undergraduate.
This was not too long after another MVS gaffe was fixed. When POK was first working on putting virtual memory support into OS/370 (aos2, svs, vs2), several of us from cambridge got to go to POK and talk to various groups. One of the groups was the POK performance modeling people that were modeling page replacement algorithms. One of the things that their (micro) model had uncovered was that if you select non-changed pages for replacement before changed pages, you did less work (because changed pages first had to be written out, while for non-changed pages there was some chance of re-using the copy already on disk, avoiding the write). I argued that page replacement algorithms were primarily based on approximating LRU-type methodology ... and choosing non-changed before changed pages violated any reasonable algorithm approximation. In any case, VS2/SVS was shipped with that implementation. Well into the MVS cycle, somebody discovered that the page replacement algorithm was choosing shared, high-use, resident, linklib program pages before simple application data pages for replacement (even tho the simple application data pages had much lower use than the high-use shared resident linklib, a.k.a. system, pages).
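The argument can be made concrete with a two-frame toy (hypothetical python): a replacement policy that prefers clean pages evicts the hot shared linklib page, while a plain LRU approximation evicts the cold application data page:

```python
# Small simulation of the VS2/SVS gaffe: preferring non-changed
# (clean) pages for replacement over changed (dirty) ones ends up
# evicting high-use, shared, read-only code pages before low-use
# application data pages, violating the LRU approximation.

def pick_victim_prefer_clean(frames):
    # frames: list of (name, last_used, dirty); lower last_used = colder
    clean = [f for f in frames if not f[2]]
    pool = clean if clean else frames
    return min(pool, key=lambda f: f[1])[0]

def pick_victim_lru(frames):
    return min(frames, key=lambda f: f[1])[0]

frames = [
    ("linklib-code", 99, False),  # hot, shared, never written -> clean
    ("app-data",      5, True),   # cold, private, written -> dirty
]
print(pick_victim_prefer_clean(frames))  # evicts hot linklib-code
print(pick_victim_lru(frames))           # evicts cold app-data
```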
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?) Newsgroups: comp.arch Date: Thu, 28 Feb 2002 18:11:21 GMThack@watson.ibm.com (hack) writes:
the synchronous paging paradigm was better than an asynchronous electronic disk model ... since a lot of operating system gorp could be bypassed.
the third characteristic was that if you were having trouble building an integrated page management & replacement algorithm that dealt effectively with very large memories ... this paradigm forced the designers into a layered approach, which could result in a better overall resource management solution than attempting to design & deliver a single integrated solution (this is somewhat akin to the use of LPARs today with respect to management of mainframe processor resources).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Swapper was Re: History of Login Names Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers Date: Thu, 28 Feb 2002 20:54:57 GMTAnne & Lynn Wheeler writes:
i had reached a working hypothesis that the people typically responsible for kernel programming spent almost all of their time dealing with binary yes/no, on/off, true/false situations resulting in a fairly entrenched mind set. dynamic adaptive was a significant paradigm shift which was more characteristic of the OR crowd implementing fortran and apl models.
To dynamically adapt programming style ... even within a span of a couple machine instructions ... didn't seem to be a common occurrence. In fact, some number of people complained that they couldn't understand how some of the resource manager was able to work ... there was a sequence of a few machine instructions flowing along a traditional kernel programming paradigm dealing with true/false states ... and then all of a sudden the machine instruction programming paradigm completely changed. In some cases I had replaced a couple thousand instructions implementing n-way state comparisons with some values that were calculated someplace else, a sorted queue insert, and some simple value compares and possibly a FIFO or LIFO pop off the top of a queue (although I do confess to having also periodically rewritten common threads thru the kernel ... not only significantly reducing the aggregate pathlength but also sometimes making certain effects automagically occur as a side-effect of the order that other things were done; my joke about doing zero pathlength implementations).
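The contrast in styles can be sketched (hypothetical python; the names and values are illustrative): the couple-thousand-instruction n-way state comparison collapses to a precomputed value, a sorted insert, and a pop:

```python
# Sketch of the style shift described above: instead of long chains of
# binary true/false state comparisons, compute a single priority value
# elsewhere, do a sorted queue insert, and dispatch by popping the head.
import bisect

run_queue = []  # kept ordered by (deadline, task) -- smallest first

def schedule(task, deadline):
    bisect.insort(run_queue, (deadline, task))  # sorted queue insert

def dispatch():
    return run_queue.pop(0)[1]  # pop the top -- no state comparisons

schedule("editor", 3.0)
schedule("batch", 9.0)
schedule("trivial-interactive", 0.5)
print(dispatch())  # -> trivial-interactive (earliest deadline)
```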
boyd, performance envelopes, and ability to rapidly adapt:
https://www.garlic.com/~lynn/subboyd.html#boyd
recent dynamic adaptive related thread (check for the "feed-back" joke now 25 years old):
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#12 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
virtual memory posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/