From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jun 2002 12:35:01 GMT

"Rostyslaw J. Lewyckyj" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: User 2-factor authentication on laptops
Newsgroups: comp.security.misc
Date: Fri, 28 Jun 2002 12:40:01 GMT

Ali-Reza Anghaie writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jun 2002 15:45:48 GMT

jcmorris@mitre.org (Joe Morris) writes:
The CP "loader" programmer was original the BPS (very early 360) card deck loader with some number of subsequent modifications.
In any case, instead of looping thru memory setting it to zeros ... the loader was changed to use MVCL. All the original 360 instructions were defined to pretest all the memory access and protection information before starting the instruction. 370 introduced a couple of instructions (mvcl, clcl) that were defined to execute incrementally ... a byte at a time. They could be interrupted and restarted ... but they also could incrementally work up to a page boundary (w/o requiring all virtual pages to be resident at one time) or other conditions that would terminate the instruction.
In any case, the 370/125 initially shipped with a bug in the microcode where the instruction arguments were pretested and the instruction never executed if the pretest failed. As a result the loader, instead of getting storage cleared and an idea of the machine size ... got an indication that there was no storage.
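a little python sketch of the difference (purely illustrative toy model ... the storage size and the probe loop are my made-up stand-ins, not the actual CP loader or microcode logic):

STORAGE_SIZE = 256 * 1024          # pretend the machine has 256K of real storage

def mvcl_incremental(start, length):
    """370 semantics: clear bytes one at a time, stop at the first
    addressing exception, report how far we got."""
    cleared = 0
    for addr in range(start, start + length):
        if addr >= STORAGE_SIZE:   # addressing exception partway through
            break
        cleared += 1
    return cleared

def mvcl_125_bug(start, length):
    """buggy 125 microcode: pretest the whole operand first (360 style);
    if any byte would fault, do nothing at all."""
    if start + length > STORAGE_SIZE:
        return 0                   # pretest fails -> nothing cleared
    return length

# loader probe: try to clear more storage than any model could have
probe = 16 * 1024 * 1024
print(mvcl_incremental(0, probe))  # 262144 -> loader learns the real size
print(mvcl_125_bug(0, probe))      # 0      -> loader concludes "no storage"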
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Fri, 28 Jun 2002 19:33:29 GMT

Terje Mathisen writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Fri, 28 Jun 2002 23:09:33 GMT

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Sat, 29 Jun 2002 09:11:46 GMT

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
very early on, the CP/67 system supported full dump and auto-reboot/restart, so that it was possible to diagnose various kernel failures and also keep running (although there was a service interruption).
tale from the multics crowd and justification for fast file system
(because of comparison between cp/67 restart times and multics restart
times .... both projects were done in the same bldg. at 545 tech sq).
https://www.multicians.org/thvv/360-67.html
the other case that I'm fairly familiar with is when i redid the i/o
supervisor to make it absolutely bullet proof (initially for the disk
engineering lab). a common failure mode was a whole series of tight
kernel loops related to i/o operations ... which were all done based
on retrying an operation ... because that was what the i/o
specification said to do (frequently some sort of busy condition, it
wasn't a counter loop ... it was a retry operation that would never
stop). unfortunately, various kinds of hardware glitches could result
in things not quite conforming to the i/o
specification/architecture. i had to do an almost total rewrite,
bracketing all sorts of retry operations (with things like retry limit
counts). no explicit i/o error actually occurred ... but the bracketing
code would treat the situation as if an i/o error had occurred, logged
it, aborted the operation and then kept going.
https://www.garlic.com/~lynn/subtopic.html#disk
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to set up a computer system
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jun 2002 20:05:25 GMT

Joe Yuska writes:
in some cases you might run into room loading limits and have to figure out some solution other than dumping high-volume water flow straight into the sewer.
some past related discussions on pdus, water chillers and tanks:
https://www.garlic.com/~lynn/2000b.html#82 write rings
https://www.garlic.com/~lynn/2000b.html#85 Mainframe power failure (somehow morphed from Re: write rings)
https://www.garlic.com/~lynn/2000b.html#86 write rings
https://www.garlic.com/~lynn/2001.html#61 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001m.html#40 info
https://www.garlic.com/~lynn/2002g.html#62 ibm icecube -- return of watercooling?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 01:41:55 GMT

mschaef@eris.io.com (MSCHAEF.COM) writes:
          158          3031         4341
Rain      45.64 secs   37.03 secs   36.21 secs
Rain4     43.90 secs   36.61 secs   36.13 secs

also times approx:

          145          168-3        91
          145 secs.    9.1 secs     6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in 35.77 secs.
--
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to set up a computer system
Newsgroups: alt.folklore.computers
Date: Mon, 01 Jul 2002 13:59:07 GMT

Howard S Shubs writes:
the issue was that there was this existing elevated section coming down from the north that was four lanes .... with traffic having a strong y-split at north station/garden. the i93 elevated with four lanes would feed into the same elevated section about 100 yards(?) before the traffic y-split pattern ... creating a strong x-traffic pattern ... in addition to merging from 8 lanes to 4.
When somebody realized that the architect had messed up the design and "in theory" you would have two streams of cars traveling at 55 MPH effectively crossing each other in a very short physical space ... they realized they would have to dump all the i93 traffic off into the streets before the elevated merge and the last couple-mile section of i93 would never be used. The issue that was investigated was whether, since the last couple-mile section was never going to be opened, it was worthwhile actually building that section. The analysis in the press was that it would cost the state something like $50m in construction penalties if it canceled the building of the remaining section ... and $200m to finish building the section. However since it was supposedly an interstate (even tho it would never meet any interstate standards .... even if they ever did open the section to traffic ... the speed limit ... because of the upcoming merge ... and strong "X" traffic pattern would fail interstate standards), the federal gov. paid 90 percent (it would only actually cost the state $20m to complete).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More about SUN and CICS
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 Jul 2002 14:22:33 GMT

g.goersch@WORLDNET.ATT.NET (Bo Goersch) writes:
recent thread with respect to storage size issues and IPL:
https://www.garlic.com/~lynn/2002i.html#2 Where did text file line ending characters begin?
the 115 & 125 were effectively the same machine, both had the same engines .... basically up to nine microprocessors with appropriate microcode loaded to perform the various control/io functions ... about an 800kip microprocessor. The difference was that the 115 used the standard 800kip engine for the 370 microcode load (at 10:1 emulation giving effectively about an 80kip 370 instruction rate). The 125 used all the same engines .... except the engine that the 370 emulation code ran on was about a 1mip processor (which with 10:1 emulation yielded about 100kip 370 thruput).
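back-of-the-envelope python for the numbers above (just the arithmetic from the text):

for model, native_kips in (("370/115", 800), ("370/125", 1000)):
    emulation_ratio = 10       # ~10 native micro-instructions per 370 instruction
    print(model, native_kips / emulation_ratio, "KIPS of 370")
# 370/115 80.0 KIPS of 370
# 370/125 100.0 KIPS of 370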
I once worked on a project that would populate up to five of the microprocessor bus positions with 125 microprocessor engines with the 370 microcode load .... creating a 5-way multiprocessing configuration. The microcode was also tweaked to put the majority of tasking & SMP management into the microcode. The disk controller microcode was also tweaked to offload some amount of the paging supervisor.
random refs:
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#10 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000e.html#7 Ridiculous
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#19 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Signing email using a smartcard
Newsgroups: alt.technology.smartcards
Date: Mon, 01 Jul 2002 14:39:00 GMT

"Bernt Skjemstad" writes:
nominally a public/private key pair is generated. A signing consists of calculating the SHA-1 of the data (20 bytes) and then "encrypting" that SHA-1 hash with the private key, yielding the digital signature (in the case of the FIPS186 federal digital signature standard, a 40 byte signature).
Given the original message and the public key, the recipient can verify the signature.
A certificate is one of the methods for transporting the public key to the recipient .... in order for the recipient to be able to perform the signature verification. Basically a certificate contains something like your name and your public key ... and is digitally signed with a thawte or verisign private key.
In effect, a certificate is also a signed message. Before the recipient can use a public key in a certificate to verify your message ... they must "verify" the signature on the certificate .... say thawte's or verisign's ... by already having thawte's/verisign's public key sitting around somewhere (i.e. they've used some other method of obtaining the public key used to sign certificates ... and have that public key stored locally).
Note in the case of something like PGP .... the use of certificates basically isn't necessary ... all public keys are acquired and stored locally by recipients (basically the method in the certificate model of acquiring public keys for "certification authorities" and storing them locally is used for all public keys .... doing away with the requirement for having separate "certification authority" public keys and certificates).
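a minimal python sketch of the sign/verify flow using the pyca "cryptography" package (my choice of library, purely for illustration ... the smartcard and the certificate/CA machinery are left out; also note the library hands back a DER-encoded (r,s) pair rather than the raw 40 bytes mentioned above):

# sketch only: generate a DSA keypair, sign the SHA-1 of a message, verify it.
# in the smartcard case the private key would live on-card and a certificate
# (or PGP-style local store) would carry the public key to the recipient.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

private_key = dsa.generate_private_key(key_size=1024)   # classic FIPS 186 size
public_key = private_key.public_key()

message = b"the text of the email being signed"
signature = private_key.sign(message, hashes.SHA1())     # hash with SHA-1, then sign

try:
    public_key.verify(signature, message, hashes.SHA1())
    print("signature verifies")
except InvalidSignature:
    print("signature does NOT verify")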
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 20:38:50 GMT

kent@nettally.com (Kent Olsen) writes:
In the same bldg (545 tech sq) some other people that had worked on CTSS were working on Multics. Both CMS and unix trace some common heritage back to CTSS.
here is page giving command correspondence between cms, vax, pc-dos,
and unix:
https://web.archive.org/web/20020213071156/http://www.cc.vt.edu/cc/us/docs/unix/cmd-comp.html
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 21:05:00 GMT

"Douglas H. Quebbeman" writes:
bunch of stuff from above
Computer                               N=100 (Mflops)
------------------------------------   --------------
Cray T916 (1 proc. 2.2 ns)                522
Hitachi S-3800/180(1 proc 2 ns)           408
Cray-2/4-256 (1 proc. 4.1 ns)              38
IBM RISC Sys/6000-580 (62.5MHz)            38
IBM ES/9000-520 (1 proc. 9 ns)             38
SGI CHALLENGE/Onyx (6.6ns, 2 proc)         38
DEC 4000-610 Alpha AXP(160 MHz)            36
NEC SX-1                                   36
FPS 510S MCP707 (7 proc. 25 ns)            33
CDC Cyber 2000V                            32
Convex C-3430 (3 proc.)                    32
NEC SX-1E                                  32
SGI Indigo2 (R4400/200MHz)                 32
Alliant FX/2800-200 (14 proc)              31
IBM RISC Sys/6000-970 (50 MHz)             31
IBM ES/9000-511 VF(1 proc 11ns)            30
DEC 3000-500 Alpha AXP(150 MHz)            30
Alliant FX/2800-200 (12 proc)              29
HP 9000/715 (75 MHz)                       29
Sun Sparc 20 90 MHz, (1 proc)              29
Alliant FX/2800 210 (1 proc)               25
ETA 10-P (1 proc. 24 ns)                   27
Convex C-3420 (2 proc.)                    27
Cray-1S (12.5 ns)                          27
DEC 2000-300 Alpha AXP 6.7 ns              26
IBM RISC Sys/6000-950 (42 MHz)             26
SGI CHALLENGE/Onyx (6.6ns, 1 proc)         26
Alliant FX/2800-200 (8 proc)               25
NAS AS/EX 60 VPF                           25
HP 9000/750 (66 MHz)                       24
IBM ES/9000-340 VF (14.5 ns)               23
Meiko CS2 (1 proc)                         24
Fujitsu M1800/20                           23
DEC VAX 9000 410VP(1 proc 16 ns)           22
IBM ES/9000-320 VF (1 proc 15 ns)          22
IBM RISC Sys/6000-570 (50 MHz)             22
Multiflow TRACE 28/300                     22
Convex C-3220 (2 proc.)                    22
Alliant FX/2800-200 (6 proc)               21
Siemens VP400-EX (7 ns)                    21
IBM ES/9221-211 (16 ns)                    21
FPS Model 522                              20
Fujitsu VP-400                             20
IBM RISC Sys/6000-530H(33 MHz)             20
Siemens VP200-EX (7 ns)                    20
Amdahl 1400                                19
Convex C-3410 (1 proc.)                    19
IBM ES/9000 Model 260 VF (15 ns)           19
IBM RISC Sys/6000-550L(42 MHz)             19
Cray S-MP/11 (1 proc. 30 ns)               18
Fujitsu VP-200                             18
HP 9000/720 (50 MHz)                       18
IBM ES/9221-201 (16 ns)                    18
NAS AS/EX 50 VPF                           18
SGI 4D/480(8 proc) 40MHz                   18
Siemens VP100-EX (7 ns)                    18
Sun 670MP Ross Hypersparc(55Mhz)           18
Alliant FX/2800-200 (4 proc)               17
Amdahl 1100                                17
CDC CYBER 205 (4-pipe)                     17
CDC CYBER 205 (2-pipe)                     17
Convex C-3210 (1 proc.)                    17
Convex C-210 (1 proc.)                     17
Cray XMS (55 ns)                           17
Hitachi S-810/20                           17
IBM ES/9000 Model 210 VF (15 ns)           17
Siemens VP50-EX (7 ns)                     17
Multiflow TRACE 14/300                     17
Hitachi S-810/10                           16
IBM 3090/180J VF (1 proc, 14.5 ns)         16
Fujitsu VP-100                             16
Amdahl 500                                 16
Hitachi M680H/vector                       16
SGI Crimson(1 proc 50 MHz R4000)           16
FPS Model 511                              15
Hitachi M680H                              15
IBM RISC Sys/6000-930 (25 MHz)             15
Kendall Square (1 proc)                    15
NAS AS/EX 60                               15
SGI 4D/440(4 proc) 40MHz                   15
Siemens H120F                              15
Cydrome CYDRA 5                            14
Fujitsu VP-50                              14
IBM ES/9000 Model 190 VF(15 ns)            14
IBM POWERPC 250 (66 MHz)                   13
IBM 3090/180E VF                           13
SGI 4D/340(4 proc) 33MHz                   13
CDC CYBER 990E                             12
Cray-1S (12.5 ns, 1983 run)                12
Gateway 2000 P5-100XL                      12
IBM RISC Sys/6000-520H(25 MHz)             12
SGI Indigo 4000 50MHz                      12
Stardent 3040                              12
CDC 4680InfoServer (60 MHz)                11
Cray S-MP/MCP101 (1 proc. 25 ns)           11
FPS 510S MCP101 (1 proc. 25 ns)            11
IBM ES/9000 Model 340                      11
Meiko Comp. Surface (1 proc)               11
Gateway 2000 P5-90(90 MHz Pentium)         11
SGI Power Series 50MHz R4000               11
Stardent 3020                              11
Sperry 1100/90 ext w/ISP                   11
Multiflow TRACE 7/300                      11
DEC VAX 6000/410 (1 proc)                   1.2
ELXSI 6420                                  1.2
Gateway 2000/Micronics 486DX/33             1.2
Gateway Pentium (66HHz)                     1.2
IBM ES/9000 Model 120                       1.2
IBM 370/168 Fast Mult                       1.2
IBM 4381 90E                                1.2
IBM 4381-13                                 1.2
MIPS M/800 (12.5MHz)                        1.2
Prime P6350                                 1.2
Siemans 7580-E                              1.2
Amdahl 470 V/6                              1.1
Compaq Deskpro 486/33l-120 w/487            1.1
SUN 4/260                                   1.1
ES1066 (1 proc. 80 ns Russian)              1.0
CDC CYBER 180-840                            .99
Solbourne                                    .98
IBM 4381-22                                  .97
IBM 4381 MG2                                 .96
ICL 3980 w/FPU                               .93
IBM-486 33MHz                                .94
Siemens 7860E                                .92
Concurrent 3280XP                            .87
MIPS M800 w/R2010 FP                         .87
Gould PN 9005                                .87
VAXstation 3100-76                           .85
IBM 370/165 Fast Mult                        .77
Prime P9955II                                .72
DEC VAX 8530                                 .73
HP 9000 Series 850                           .71
HP/Apollo DN4500 (68030 + FPA)               .60
Mentor Graphics Computer                     .60
MIPS M/500 ( 8.3HHz)                         .60
Data General MV/20000                        .59
IBM 9377-80                                  .58
Sperry 1100/80 w/SAM                         .58
CDC CYBER 930-31                             .58
Russian PS-2100                              .57
Gateway 486DX-2 (66HHz)                      .56
Harris H1200                                 .56
HP/Apollo DN4500 (68030)                     .55
Harris HCX-9                                 .50
Pyramid 9810                                 .50
HP 9000 Series 840                           .49
DEC VAX 8600                                 .48
Harris HCX-7 w/fpp                           .48
CDC 6600                                     .48
IBM 4381-21                                  .47
SUN-3/260 + FPA                              .46
CDC CYBER 170-835                            .44
HP 9000 Series 840                           .43
IBM RT 135                                   .42
Harris H1000                                 .41
microVAX 3200/3500/3600                      .41
Apple Macintosh IIfx                         .41
Apollo DN5xxT FPX                            .40
microVAX 3200/3500/3600                      .40
IBM 9370-60                                  .40
Sun-3/160 + FPA                              .40
Prime P9755                                  .40
Ridge 3200 Model 90                          .39
IBM 4381-11                                  .39
Gould 32/9705 mult acc                       .39
NORSK DATA ND-570/2                          .38
Sperry 1100/80                               .38
Apple Mac IIfx                               .37
CDC CYBER 930-11                             .37
Sequent Symmetry (386 w/fpa)                 .37
CONCEPT 32/8750                              .36
Celerity C1230                               .36
IBM RT PC 6150/115 fpa2                      .36
IBM 9373-30                                  .36
CDC 6600                                     .36
IBM 370/158                                  .22
IBM PS/2-70 (16 MHz)                         .12
IBM AT w/80287                               .012
IBM PC w/8087                                .012
IBM PC w/8087                                .0069
Apple Mac II                                 .0064
Atari ST                                     .0051
Apple Macintosh                              .0038
--
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 21:12:50 GMT

Larry__Weiss writes:
thornton, after working on the 6600, left cdc and founded NSC, which built HYPERchannel.
random past stuff
https://www.garlic.com/~lynn/99.html#119 Computer, supercomputers & related
https://www.garlic.com/~lynn/2001.html#19 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#20 Disk caching and file systems. Disk history...people forget
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Mon, 01 Jul 2002 23:19:40 GMT

it_hjw@JUNO.COM (Harold W.) writes:
you could possibly get a p/390 card running mvs in an rs/6000 .... i have no idea whether you could get a p/390 card running in an as/400 or not. This isn't a case of running MVS on a power/pc chipset ... it is a case of running MVS on a p/390 card .... which can fit in such a box (apple, rs/6000, or as/400).
here is discussion of P/390 card in an rs/6000
https://web.archive.org/web/20010309161535/http://tech-beamers.com/r390new.htm
a possible question is whether any of the as/400 boxes support a PCI bus that would take a P/390 card, and whether you can get software on the as/400 that talks to the p/390 card. Even tho aix, apple, and as/400 all run on power/pc chips .... the software/programming environments are different.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jul 2002 02:06:47 GMT

Floyd Davidson writes:
There could be a case made that if commercial/industry interests hadn't been so heavily involved in the NSFNET and regionals .... then left to "purely" government influence ... everything would have been migrated to the morass of OSI and we wouldn't have any internet at all today.
misc. past discussions about OSI & GOSIP (and other things)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
some gosip, nren, etc discussions:
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
other gosip specific mentions:
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
I have copies of gosip-v2.txt and gosip-v2.ps. Misc. from "gosip-order-info.txt" (9/91):
GOSIP Version 1.
----------------
GOSIP Version 1 (Federal Information Processing Standard 146) was
published in August 1988.  It became mandatory in applicable federal
procurements in August 1990.  Addenda to Version 1 of GOSIP have been
published in the Federal Register and are included in Version 2 of
GOSIP.  Users should obtain Version 2.

GOSIP Version 2.
----------------
Version 2 became a Federal Information Processing Standard (FIPS) on
April 3, 1991 and will be mandatory in federal procurements initiated
eighteen months after that date, for the new functionality contained
in Version 2.  The Version 1 mandate continues to be in effect.
Version 2 of GOSIP supersedes Version 1 of GOSIP.  Version 2 of GOSIP
makes clear what protocols apply to the GOSIP Version 1 mandate and
what protocols are new for Version 2.
--
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 02:16:39 GMT

SEYMOUR.J.METZ@CUSTOMS.TREAS.GOV (Shmuel Metz , Seymour J.) writes:
The 2301 held about 4mbytes of data. TSS & CP/67 formatted the 2301 with 9 4k pages on a pair of 2301 "tracks" (eight physical "2303" tracks).
old comparison of a 360/67 with three 2301s to a 3081K with six 2305s
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 02:19:52 GMT

efinnell@SEEBECK.UA.EDU (Edward J. Finnell, III , Ed) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 09:31:48 GMT

IBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
what the group came back with was that i had slightly understated the problem; if you took into account RPS-miss .... the relative disk system thruput had declined by more than a factor of 10 (aka cpu/memory goes up by a factor of 50+, disk thruput goes up by a factor of 5 or less, so relative disk system thruput declines by a factor of more than 10). There was also a big issue with a significant increase in 3880 processing overhead, as well as additional overhead anytime the 3880 had to switch channel interfaces.
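the "factor of 10" arithmetic, spelled out in a couple lines of python:

cpu_memory_growth = 50        # systems got ~50x more cpu/memory
disk_throughput_growth = 5    # disks got only ~5x more thruput (less, with RPS-miss)
relative_decline = cpu_memory_growth / disk_throughput_growth
print(f"relative disk system thruput declined ~{relative_decline:.0f}x")   # ~10x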
I got nailed on this because I had built a bullet proof I/O supervisor
.... so the disk engineering labs could run the testcells in an
operating system environment. The first time they switched a string of
16 3350s from 3830 controller to first 3880 (over a weekend), they
first suggested that I had made software changes over the weekend that
resulted in the big performance hit. Fortunately this was six months
prior to FCS of 3880s .... and there was time to do some tweaking of
the m'code. misc. refs:
https://www.garlic.com/~lynn/subtopic.html#disk
the result was reformulated into a recommended system configuration
presentation and initially given at Share 63, presentation B874
(8/18/84). Summary from the paper:
DASD subsystems have been crucial to the success of time-sharing
systems for over twenty years. Hardware has evolved and components
get bigger and faster at differing rates. Faster CPUs are now
available with parallel processing capabilities that exceed the
traditional notions of IO requirements. Bigger external storage as
well as larger and faster memories are coming on line that will
require even more effective storage and performance management. If
the full system potentials are to be realized, the effectiveness of
the user IO is going to have to be improved.
Configuration of DASD subsystems for availability and performance is
accomplished by using many dedicated channel paths and keeping strings
short. The requirement for high path availability to an arm to
support good response leads to the less than 25% busy channel
guidelines, etc. Where this is too expensive or impractical for
space, cost, or configuration reasons, compromises must be made. DASD
capabilities, quantified by reliability, throughput, response time and
cost, can make an application successful or unsuccessful. Equally
important are the effects of the application environment. An
understanding of this environment as well as the DASD parameters
usually is required for successful application management. An
extensive data base cataloging the systems past performance, coupled
with a calibrated model provides what is effectively an expert or
knowledge based system for exploring these compromises.
Storage management, the system centered management of capacity and
performance, is required to deal with the complexities of active and
inactive data. Because of the large number of DASD and connections
involved, the effects also are difficult to simulate and measure
precisely. More attention to the IO subsystem, in particular, the
user data base IO, is required to realize the potential of current and
future technologies.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Tue, 02 Jul 2002 20:03:21 GMT

eugene@cse.ucsc.edu (Eugene Miya) writes:
"fast double precision" was introduced for 168-3 (not on initial 168-1 machines) ... and so the 9.1 secs should be for RAIN ... as was the 6.77 secos for the 91.
the interesting numbers are the 3031 and 158 numbers. The processor engine in the 3031 and 158 was the same; however in the case of the 158 .... there were "integrated channels" ... aka there were two sets of microcode running on the same 158 processor engine .... the microcode that implemented the 370 instruction set ... and the microcode that implemented the I/O support ... and the processor engine basically had to time-slice between the two sets of microcode.
For the 3031, there were two "158" processor engines ... one processor engine dedicated to the 370 function (i.e. the 3031) and a second "158" processor engine (i.e. the "channel director") that implemented all the I/O function outboard.
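some back-of-the-envelope python on the rain numbers above ... this is just my inference from the ratios, not anything from the original measurements:

t_158, t_3031 = 45.64, 37.03      # rain elapsed times from the table below
share_lost_to_channels = 1 - t_3031 / t_158
print(f"~{share_lost_to_channels:.0%} of the shared 158 engine apparently went "
      "to integrated-channel microcode, even on a cpu-bound benchmark")
# ~19%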
The dates for some of the machines (note 4341 and 3031 were about the same time):
CDC 6600         63-08  64-09      LARGE SCIENTIFIC PROCESSOR
IBM S/360-67     65-08  66-06  10  MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBM S/360-91     66-01  67-11  22  VERY LARGE CPU; PIPELINED
AMH=AMDAHL       70-10             AMDAHL CORP. STARTS BUSINESS
IBM S/370 ARCH.  70-06  71-02  08  EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-145    70-09  71-08  11  MEDIUM S/370 - BIPOLAR MEMORY - VS READY
IBM S/370-195    71-07  73-05  22  V. LARGE S/370 VERS. OF 360-195, FEW SOLD
Intel, Hoff      71                Invention of microprocessor
Intel DRAM       73                4Kbit DRAM Chip
IBM 168-3        75-03  76-06  15  IMPROVED MOD 168
IBM 3031         77-10  78-03  05  LARGE S/370+EF INSTRUCTIONS

and to repeat the numbers for rain/rain4:
          158          3031         4341
Rain      45.64 secs   37.03 secs   36.21 secs
Rain4     43.90 secs   36.61 secs   36.13 secs

also times approx:

          145          168-3        91
          145 secs.    9.1 secs     6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in 35.77 secs.
--
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 6600 Console was Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Tue, 02 Jul 2002 20:13:38 GMT

"Russell P. Holsclaw" writes:
in the late '60s, somebody at the science center ported spacewars from pdp to the 1130/2250m4 (my kids played it in the mid '70s).
lincoln labs had one or more 2250m1s attached to a 360/67 and somebody there wrote a fortran graphics package for CMS to drive the screen.
the university i was at also had a 2250m1 .... and I hacked the CMS editor with the 2250m1 support code from lincoln labs to create a full screen editor ... circa fall '68.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jul 2002 20:51:29 GMT

ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:13:46 GMT

Charles Shannon Hendrix writes:
Basically the processor hardware engineers got the first machine built and the disk engineering/product-test labs got the second machine built, aka in addition to developing new disk drives they validated existing disks against new processors as they became available. The processors in the disk engineering lab had been running "stand-alone" applications (FRIEND, a couple others) for the testing .... the problem was that the testcell disks under development tended to sometimes deviate from normal operational characteristics (MTBF for a standard MVS when operating a single testcell was on the order of 15 minutes).
As something of a hobby, i rewrote the I/O supervisor to make it absolutely bullet proof, aka no kind of i/o glitches could make the system crash. As a result it was installed in all the "test" processors in the disk engineering and product test labs .... and they were able to do concurrent, simultaneous testing of 6-12 testcells (instead of scheduling stand-alone time for one testcell at a time) on each processor (as needed).
I then got the responsibility of doing system support on all those machines and periodically would get blamed when things didn't work correctly, and so had to get involved in debugging their hardware (as part of proving that the software wasn't at fault). One such situation was the weekend they replaced the 3830 control unit for a 16-drive string of 3350s (production timesharing) with a "new" 3880 control unit and performance went into the can on that machine. Fortunately this was six months before first customer ship of the 3880 controller so there was time to make some hardware adjustments (I joke about at one point working 1st shift at research, 2nd shift in the disk labs, and 3rd shift down at STL, plus a couple times a month supporting the operating system for the HONE complex in palo alto).
In any case, at that particular point there were two 4341s in existence, one in endicott and one at san jose disk. Since I supported the operating system for san jose disk ... and since, while the machines might be i/o intensive, the workload rarely exceeded 5 percent cpu utilization ... they had 145, 158, 3031, 3033, 4341, etc. machines that I had to worry about and that I had some freedom to do other types of things with.
So i ran the rain/rain4 benchmarks for the endicott performance engineers and got 4341 times (aka they couldn't get time on the machine in endicott because it was booked solidly for other things), 3031 times, and 158 times. They previously had collected numbers for the 168-3 and 91 times for rain/rain4 ... and of course rain had been run on 6600 (numbers they sent to me along with the benchmarks to run).
There may have been other benchmark runs made by other people ... but I didn't do the runs and didn't have the data sitting around conveniently. I posted some numbers that I had conveniently available.
misc. disk engineer related posts:
https://www.garlic.com/~lynn/subtopic.html#disk
misc. hone related posts:
https://www.garlic.com/~lynn/subtopic.html#hone
random post about working 4-shift work weeks (24hrs, 7days):
https://www.garlic.com/~lynn/2001h.html#29 checking some myths
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:27:15 GMT

Charles Shannon Hendrix writes:
The instructions for the 360/370 i/o processors were, in fact, called "channel programs". You could write a channel program ... and signal one of the asynchronous "channel processors" to begin asynchronous execution of that channel program. The channel program could cause asynchronous interrupts back to the instruction processor signaling various kinds of progress/events.
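a hypothetical python helper for what a channel program looks like (illustration only ... the 8-byte CCW layout and the DASD command codes are the classic ones, but the addresses and the whole "program" here are made up; a real channel program also needs the data areas and the TIC target address set up properly):

import struct

# classic S/360-format CCW: 1-byte command code, 3-byte data address,
# flag byte, unused byte, 2-byte count
CD, CC, SLI = 0x80, 0x40, 0x20      # chain-data, chain-command, suppress-length-indication

def ccw(cmd, addr, flags, count):
    return struct.pack(">B3sBBH", cmd, addr.to_bytes(3, "big"), flags, 0, count)

# toy "channel program": SEEK, SEARCH ID EQUAL, TIC (loop), READ DATA
program = b"".join([
    ccw(0x07, 0x1000, CC, 6),       # SEEK, command-chained to the next CCW
    ccw(0x31, 0x1006, CC, 5),       # SEARCH ID EQUAL
    ccw(0x08, 0x0008, 0,  0),       # TIC back to the SEARCH until it matches (placeholder address)
    ccw(0x06, 0x2000, SLI, 4096),   # READ DATA into a 4K buffer
])
print(len(program), "bytes of channel program")  # 32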
On some of the machines, the i/o processors were real, honest to goodness independent asynchronous processors. On other machines, a common microcode engine was used to emulate both the instruction processor and multiple i/o (channel) processors. Machines where a common processor engine was used to emulate multiple processors (cpus, channels, etc) were typically described as having "integrated" channels.
158s, 135, 145, 148, 4341, etc ... were "integrated" channel machines (aka the native microcode engine had microcode both for emulating 370 processing and for performing the channel process function and executing "channel programs"). 168 machines had outboard channels (independent hardware boxes that implemented the processing of channel programs). Channel processors and instruction processors had common access to the same real storage (in much the same way that multiple instruction processors have common access to the same real storage).
For the 303x line of machines .... they took a 158 integrated channel machine .... and eliminated the 370 instruction emulation microcode ... creating a dedicated channel program processing machine called a "channel director". The "channel director" was then a common component used for 3031, 3032, and 3033 machines ... aka they were all "outboard channel" machines (having dedicated hardware processing units for executing channel programs) ... as opposed to "integrated channel" machines.
A 3031 was then a 158 with just the 370 instruction emulation microcode, reconfigured for "outboard channel" operation rather than "integrated channel" operation. A 3032 was then a 168 reconfigured to use the "channel director" for outboard channels (rather than the 168 outboard channel hardware boxes).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:56:15 GMT

Anne & Lynn Wheeler writes:
At one time the US HONE system in Palo Alto had grown into the largest single-system-image complex in the world ... at that point, I knew it had something over 40,000 defined "userids".
The US HONE system was also cloned for a number of country and/or regional centers around the world.
Also, in the early '80s, the Palo Alto complex was extended with redundant centers in Dallas and Boulder for disaster survivability (my wife and I later coined the terms disaster survivability and geographic survivability when we were doing HA/CMP) ... online workload was spread across the three datacenters, but if one failed the remaining two could pick up.
Nearly all of the applications delivered to branch & field people were written in APL ... running under CMS. Among the most important were the "configurator" applications. Starting with the 370/125 (& 115), it was no longer possible for a salesman to manually fill out a mainframe machine order .... they all had to be done interacting with the HONE configurator.
random ha/cmp refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:57:13 GMT

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: : Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Wed, 03 Jul 2002 16:47:18 GMT

Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 17:10:31 GMT

Charles Shannon Hendrix writes:
the 4381 linpack entries from that posting
IBM 4381 90E    1.2
IBM 4381-13     1.2
IBM 4381-22      .97
IBM 4381 MG2     .96
IBM 4381-21      .47
IBM 4381-11      .39

ref linpack posting
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Wed, 03 Jul 2002 19:27:03 GMT

eugene@cse.ucsc.edu (Eugene Miya) writes:
slightly related:
https://www.garlic.com/~lynn/2002d.html#32 Farm kids
it was quite an adjustment moving to boston. I drove cross country in the winter .... crossing into mass., i observed that there were county two-lane mountain roads in the west better built than the mass pike.
there were two claims by various long-time mass. residents/natives .... 1) the frost heaves caused all the problems on the pike (frost heaves are a problem out west also ... but there they build the road bed appropriately) & 2) road repair was a thriving lobby in the state, which had become dependent on doing major road repairs every year (somebody joked about water-soluble asphalt being used for mass. roads).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 19:32:22 GMT

Charles Shannon Hendrix writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 20:45:38 GMT

Anne & Lynn Wheeler writes:
note as per:
https://www.garlic.com/~lynn/2002f.html#0
the total world-wide vax shipments as of the end of 1982 came to 14,508 and the total as of the end of 1983 was 25,070.
from
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html
4341 announced 1/79 and fcs 11/79
4381 announced 9/83 and fcs 1q/84
workstation and PCs were starting to come on strong in the departmental server market by the time 4381s started shipping in quantity.
also per:
https://www.garlic.com/~lynn/2002f.html#0
while the total number of vax shipments kept climbing thru '87 ... the growth was in micro-vaxes. combined 11/750 & 11/780 world-wide shipments thru 1984 were 35,540 ... and then dropped to a combined total of 7,600 for 1985 and 1,660 for 1986.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: : Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Wed, 03 Jul 2002 20:56:15 GMT

vbandke@BSP-GMBH.COM (Volker Bandke) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:23:01 GMT

eugene@cse.ucsc.edu (Eugene Miya) writes:
Maybe what I did was run sample programs, not benchmarks. Sometimes when I didn't have direct access to the hardware I would ask people at other locations to run some sample program .... I got some good/different numbers from places like BNR.
There was a situation that created some ambivalence. I wanted the
dynamic adaptive stuff to take into account not just workload and the
different machine models of the day ... but also future ones. So for
the Resource Manager .... I put in some timing code at boot/ipl and
dynamically adjusted various factors that had previously been pulled
out of a static table based on cpuid.
https://www.garlic.com/~lynn/subtopic.html#fairshare
Part of this was that you couldn't assume what possible model numbers might be invented in the future. Probably the weirdest story was AT&T longlines getting an early pre-release version and it disappearing into their corporate structure (actually they got source for a heavily modified kernel including some other stuff also). Ten years later somebody handling the AT&T longlines account came tracking me down. Turns out that AT&T longlines was still running the same source and it had propagated out into the corporate infrastructure ... and they just kept moving the kernel to the latest machine models. Over a period of 10 years some of the things had changed by factors of fifty to a hundred, but that little ol Resource Manager just kept chugging along.
In any case, the change not only eliminated needing to have model numbers (and corresponding values) in a predefined static table (and therefore needing preknowledge about the future) ... there were also these things called clones that had their own convention for cpuids (which the little ol resource manager chugged along on also).
clones ... plug compatible manufacturer cpus (or PCM cpus).
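going back to the boot-time calibration ... a minimal python sketch of the idea (not the actual Resource Manager code; the reference rate, base time slice and timing loop below are made-up placeholders):

import time

def calibrate(loops=1_000_000):
    """time a known chunk of work at ipl and report an approximate rate."""
    start = time.perf_counter()
    x = 0
    for i in range(loops):
        x += i & 7                      # stand-in for the timing loop
    return loops / (time.perf_counter() - start)

REFERENCE_RATE = 5_000_000              # assumed rate the base constants were tuned for
BASE_TIME_SLICE = 0.050                 # assumed 50ms slice on that reference machine

rate = calibrate()
time_slice = BASE_TIME_SLICE * REFERENCE_RATE / rate
print(f"measured {rate:,.0f} ops/sec -> scaled time slice {time_slice*1000:.1f} ms")

the point being that nothing in the scaling depends on recognizing a particular cpuid ... a brand-new (or clone) model just measures faster or slower and the constants follow.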
Another PCM story ... one that they couldn't really hold against me
because it was done while I was still an undergraduate ... was building the
first PCM controller ... and getting blamed for helping start the
PCM controller business.
https://www.garlic.com/~lynn/submain.html#360pcm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Mass Storage System"
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:28:59 GMT

mikekingston@cix.co.uk (Michael J Kingston) writes:
I think this had some discussion in this n.g. within the past year ... with one of them being at LLNL (which i only heard about ... didn't actually see).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:32:37 GMT

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop density was: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:46:35 GMT

eugene@cse.ucsc.edu (Eugene Miya) writes:
the other slight connection was that mass. residents claimed that the name of the major paving/road company and the name of a certain federal sec-DOT were the same (there is also a federal bldg. in cambridge by that name).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop density was: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:55:05 GMT

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 15:14:39 GMT

Anne & Lynn Wheeler writes:
also when I did this particular benchmark it was still in the '70s (2nd 4341 built ... before any shipped to customers) ... and the issues about benchmarking results weren't as strict at that time (including possibly some functional specifications still having meaningful data).
the worst "benchmarking" scenario that i'm aware of is the MVS/CMS backoff done by CERN (on the same hardware). Even tho it was a SHARE report .... some internal organization had it stamped "Confidential, Restricted" (only available on need-to-know basis ... as least so far as employees were concerned).
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:30:15 GMT

"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:40:11 GMT

Steve O'Hara-Smith writes:
http wasn't so much an issue of the number of concurrent sessions, it was the number of session terminations per second and the length of the dangling finwait queue (high activity web servers spending 98 percent of total cpu running the finwait list). the tcp finwait problem hadn't been seen before, even with relatively high numbers of concurrent sessions, because tcp was somewhat presumed to be a connection protocol where connections lasted for some time. http1.0 is effectively a connectionless protocol being driven over a connection protocol (as a result http would drive the number of tcp session terminations per second thru the roof, as well as causing a large amount of session setup/tear-down packet chatter).
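a little python model of why that hurts (the finwait lingering time and per-entry scan cost are assumed numbers, purely to illustrate the quadratic blow-up):

fin_wait_secs = 60            # how long a closed connection lingers on the list
scan_cost = 1e-6              # assumed seconds to examine one list entry

for closes_per_sec in (10, 100, 1000):
    queue_len = closes_per_sec * fin_wait_secs          # entries lingering at once
    cpu_fraction = closes_per_sec * queue_len * scan_cost  # linear scan per termination
    print(f"{closes_per_sec:5d} closes/sec -> queue {queue_len:6.0f} entries, "
          f"~{min(cpu_fraction, 1):.0%} of a cpu scanning the finwait list")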
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:51:39 GMT

"Douglas H. Quebbeman" writes:
now, back with cp/67 on a 360/67 single processor (say maybe comparable to a large AT w/80287 co-processor) we supported 75-80 "active" 2741 terminal "mixed-mode" users with subsecond trivial response (of course this had line interrupts not character interrupts) ... mixed-mode ... apl, program development, document preparation, source editing, compilation, program debug & test, etc.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 09:37:25 GMT

Anne & Lynn Wheeler writes:
that was the functional characteristics manuals ... for instance for the 360/67
A27-2719 IBM System/360 Model 67: Functional Characteristics
gave detailed instruction timings
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 10:00:30 GMT

Anne & Lynn Wheeler writes:
over the next six months I significantly reduced general pathlengths
and introduced "fastpaths" (fastpath is methodology for optimized
pathlength for the most common case(s) .... in contrast to just
straight-forward optimized pathlength for all cases). ref to
report I gave at fall '68 SHARE meeting as to the result of some
of the pathlength work for guest operating systems:
https://www.garlic.com/~lynn/94.html#18 CP/67 and OS MFT14
Over the next 18 months at the university I also rewrote dispatch/schedule and implemented fair share scheduling, rewrote the paging subsystem and implemented a clock replacement algorithm, and rewrote the "DASD" support .... implementing ordered-seek queueing for 2311 and 2314 disks and chained scheduling for 2301 drums.
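a minimal python sketch of the ordered-seek idea (purely illustrative ... cylinder numbers are made up; this is the generic elevator/sweep technique, not the actual cp/67 code):

import bisect

class OrderedSeekQueue:
    """keep pending requests sorted by cylinder and service them in sweep
    order (wrapping to the lowest cylinder when the sweep passes the top),
    instead of first-come-first-served."""
    def __init__(self):
        self.pending = []                      # cylinders, kept sorted

    def add(self, cylinder):
        bisect.insort(self.pending, cylinder)

    def next_request(self, arm_position):
        if not self.pending:
            return None
        i = bisect.bisect_left(self.pending, arm_position)
        if i == len(self.pending):             # nothing above the arm: wrap
            i = 0
        return self.pending.pop(i)

q = OrderedSeekQueue()
for cyl in (190, 12, 85, 101, 7):
    q.add(cyl)

arm = 80
while (nxt := q.next_request(arm)) is not None:
    print("seek to cylinder", nxt)             # 85, 101, 190, then wrap: 7, 12
    arm = nxt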
The ordered seek queueing reduced latency and increased thruput of both 2311 and 2314 disk drives for both paging and user i/o. The 2301 was a fixed-head device; doing one i/o operation at a time, it was subject to an average rotational latency delay for every i/o operation. In this mode, the 2301 had a peak sustained paging rate of about 80 page i/os per second. With chained scheduling, multiple requests were ordered into a single i/o ... so rotational latency typically applied to just the first operation .... given a dedicated channel the 2301 would then see peak sustained thruput of 300 page i/os per second.
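rough python arithmetic behind the 80 vs 300 numbers (the revolution and transfer times are my assumptions for a 2301-class drum, not official specs):

rev_ms = 17.5
avg_rot_latency_ms = rev_ms / 2          # wait half a turn, on average
xfer_ms = 2.0                            # assumed time to move one 4K page

one_at_a_time = 1000 / (avg_rot_latency_ms + xfer_ms)
print(f"one i/o per request:  ~{one_at_a_time:.0f} pages/sec")     # ~90

# with chained scheduling, queued requests are ordered rotationally and
# serviced back-to-back in one channel program, so the latency is paid
# roughly once rather than per page
queued = 8                               # assumed pages handled per chained i/o
chained = 1000 * queued / (avg_rot_latency_ms + queued * xfer_ms)
print(f"chained ({queued} queued): ~{chained:.0f} pages/sec")       # ~320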
The other feature affecting performance that I did as an undergraduate was to implement support for being able to page selected portions of the kernel ... reducing the total fixed storage requirements ... freeing up more space for user execution. The machine at the university had 768k of memory ... 192 4k pages ... but that could be reduced to as little as 104 "available" 4k pages under heavy load. Some simple kernel paging could easily pick up 10 percent more real storage for application execution.
Eventually all of the above changes (except for kernel paging) were picked up and distributed as part of the standard release (they also distributed various other stuff that I had done, like the TTY/ascii terminal support).
Much of the above was carried over in the CP/67 to VM/370 port
... except for the paging and dispatch/scheduling algorithm changes.
I was able to re-introduce those in the resource manager product:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
which went out initially as PRPQ (special offering) but was shortly changed into standard product status.
misc. general refs:
https://www.garlic.com/~lynn/subpubkey.html#technology
including
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 10:32:39 GMT

Charles Shannon Hendrix writes:
3270s did have some number of drawbacks. The 3274 controller (for 3278, 3279, etc. terminals) was slower than the original 3272 controller for 3277 terminals, in part because a lot of "head" electronics was moved out of the terminal and back into the 3274 controller (making the cost of each 3278/3279 terminal cheaper).
With some local electronic tinkering it was possible to adjust the key repeat delay and key repeat speed to just about any value you wanted (on the 3277) ... improving the ability to navigate the cursor around the screen. You could also get a keystroke FIFO box for the 3277. The 3270 had a characteristic, if you were a fast typist ... that if you happened to hit a key at just the same instant that something was being transferred to the screen ... the keyboard would lock and you would have to hit the reset button. For fast typists ... that just about threw you into half-duplex mode of operation to avoid having to deal with the interruption of the keyboard lockup and having to hit reset.
All of that was lost in upgrade to 3274/3278/3279 (since all the required electronics were now back in the controller and not in the terminal). It never really came back until you got ibm/pc with 3278/3279 terminal emulation.
The other impact was that while local 3274s (direct channel attach) exhibited significantly better human factor response than remote 3274 controllers (i.e. connected over some sort of 9.6kbit telco line ... with multiple attached terminals all sharing the line rate) ... the 3274 command processing time on the channel was excessive.
A fully configured system tended to have a lot of disk controllers and a lot of 3274 controllers all spread out on the available channels (with some disk controllers and some 3274 controllers attached to each channel). It turned out that 3274 slowness was causing high channel busy time and typically interfering with disk thruput. This wasn't immediately recognized .... however I was doing a project implementing kernel support for the HYPERchannel Remote Device Adapter (basically a channel extender over a T1 telco line) for large numbers of local 3274 controllers. You would remove the 3274 controllers from all the channels and put them in a remote building. You would then attach a HYPERchannel A22x to the channel and set up a HYPERchannel network to some number of A51x boxes at the remote site. The A51x boxes emulated mainframe channels and you could attach "local" controllers (like 3274s) to the A51x boxes and they would think they were talking to a real channel.
The result was you could put a couple hundred people and their terminals at a remote site and they typically couldn't tell the difference between being local or remote. This is in contrast to "remote" 3274s where the response and human factor degradation was significant.
A side effect of this project was that it appeared that total disk thruput (and therefore total system thruput) went up by about 10-15 percent. Further analysis showed that the HYPERchannel A22x boxes that were directly attached to a mainframe channel had significantly lower channel "overhead" for doing the same exact operation compared to the configuration with all the local 3274 controllers attached to the real channels. This discovery resulted in some number of presentations and configuration advisories about not mixing 3274 controllers on the same channels with anything else of significant thruput importance (like disk controllers).
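a toy python channel-busy model of the point (the per-operation busy times and rates below are invented placeholders, not measured 3274/A22x numbers):

def channel_util(ops_per_sec, busy_ms_per_op):
    return ops_per_sec * busy_ms_per_op / 1000.0

terminal_ops = 200          # screen writes/sec from a building full of terminals
disk_ops = 100              # disk i/os/sec sharing the same channel

slow_ctl = channel_util(terminal_ops, 3.0)      # sluggish 3274 holding the channel
fast_ctl = channel_util(terminal_ops, 0.3)      # A22x-style adapter doing the same work
disk = channel_util(disk_ops, 2.0)

print(f"3274s on the channel: {slow_ctl + disk:.0%} busy")   # terminals crowd out disk
print(f"A22x on the channel:  {fast_ctl + disk:.0%} busy")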
slightly related discussion with respect to some disk thruput & controller
issues
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
some past HYPERchannel postings
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Slightly related HYPERchannel issue ... mentioned in some of the above postings. I had also done the standard mainframe product support for RFC1044 (aka hyperchannel) and did some tuning at Cray Research ... where we got sustained thruput between a cray machine and a 4341-clone that was nearly equal to the 1.5mbyte/sec hardware channel speed ... with only relatively modest cpu utilization. By comparison the "standard" base support (non-rfc1044) had trouble getting 44kbyte/sec while nearly saturating a 3090 cpu.
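the rfc1044 comparison above as python arithmetic:

rfc1044_bytes_per_sec = 1.5e6     # ~channel speed, sustained, modest cpu on a 4341-clone
base_bytes_per_sec = 44e3         # base (non-rfc1044) path, nearly a full 3090 cpu
print(f"~{rfc1044_bytes_per_sec / base_bytes_per_sec:.0f}x more thruput "
      "for the tuned rfc1044 path")   # ~34x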
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers,comp.sys.unisys
Date: Fri, 05 Jul 2002 11:27:24 GMT

jmfbahciv writes:
starting in the '80s a lot of the activity moved to various mailing lists .... first on bitnet/earn and then on the internet (bitnet was the vm-based corporate funded educational network in the US and earn was a similar corporate funded educational network in europe).
random bitnet/earn refs:
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#39 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#22 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#39 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#19 What is "IBM-MAIN"
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#25 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001h.html#65 UUCP email
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002d.html#33 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
from above vmshare ref:
Welcome to the VMSHARE Archives
About VMSHARE
VMSHARE has been the conferencing system of the VM Cluster of SHARE
since August 1976. After VMSHARE was closed down in August 1998 it was
decided that the database should be kept available for reference. Read
here the announcement of that by Ross Patterson. The best way to get
a feeling for what VMSHARE meant to its users is probably by browsing
through the VMSHARE Archives where you will find appends like this.
It may also be helpful to read Melinda Varian's History of VM to get a
better understanding of the community that has developed around VM and
VMSHARE.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 11:49:51 GMT

Brian Inglis writes:
Basically a channel-attached hyperchannel A22x box on the ibm mainframe connected to a hyperchannel network. i wrote all the code ... the device driver and various optimized paths for the box in the base product.
The original mainframe TCP/IP product ... in the '80s initially only supported the 8232 ... basically a pc/at with a channel attach card and some number of LAN cards. The 8232 wasn't really a tcp/ip box .... but a channel to LAN gateway ... so all the TCP/IP to LAN/MAC level stuff had to be done in the mainframe (which accounted for a lot of the cpu overhead processing). The channel attached HYPERchannel box was a real TCP/IP router ... which allowed a lot of the processing for the 8232 to be bypassed.
This was also the basis for what we used for the mainframe part of our
internal corporate highspeed backbone. This internal corporate
highspeed backbone is what the NSF audit claimed was five years ahead
of all bid submissions for the NSFNET1 backbone. For the internal
backbone I had done some additional stuff that didn't appear in the
product (like rate-based pacing ... which the audit claimed was
included in the five years ahead .... and 15 years later it still
looks to be five years ahead ... internet2 is looking at it tho).
random refs:
https://www.garlic.com/~lynn/internet.htm
misc. 8232 refs:
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
misc. stuff about hsdt ... high speed data transport:
https://www.garlic.com/~lynn/subnetwork.html#hsdt
in the following ref:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)
somebody in the SNA group had posted an announcement for a new newsgroup. The contrast was significant.
tale slightly out of school:
https://www.garlic.com/~lynn/2000c.html#58
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: AS/400 and MVS - clarification please Newsgroups: bit.listserv.ibm-main Date: Fri, 05 Jul 2002 10:54:09 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: AS/400 and MVS - clarification please Newsgroups: bit.listserv.ibm-main Date: Fri, 05 Jul 2002 10:57:24 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Fri, 05 Jul 2002 12:14:36 GMTcbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Then there was an east research center that was really proud of the fact that they had .22 second response under heavy load and provided one of the best time-sharing services in the world. We then pointed out that we had a .11 second response with effectively the same load and the same hardware. There was then some discussion whether less than .20 second response really had any meaning (i.e. could humans tell the difference between .11 second response and .20 second response).
the sna group were less than thrilled about various of these
activities (also see the recent posting mentioning tcp/ip and
vtam/sna). they were really not thrilled that i was part of the four
person group that created the first non-ibm controller and started the
pcm controller business. another slightly related sna/hyperchannel
comparison:
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Fri, 05 Jul 2002 16:05:30 GMTcbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
prior ref:
https://www.garlic.com/~lynn/2001h.html#48 Whom Do Programmers Admire Now???
a later study in the early '80s found that predictability was also important .... if people could perceive the delay, they could get into a pattern anticipating it ... if the delay was longer than anticipated, it interrupted the (human) pattern and it then took the person twice as long to "recover". If the delay was about two seconds, and it went to four seconds .... a human took an additional two seconds (six seconds total) to recover. The theory was that the person's attention started to wander when their expectation failed and the attention "recovery" time was equal to the amount of time spent "wandering". A supporting example was that if the delay extended into the minutes, you might even leave your desk to do something else.
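as a toy illustration only (python, not the study's actual model), the claim above amounts to: once a response runs past what the user anticipates, the recovery cost is roughly equal to the overrun:

def perceived_cost(actual_delay, expected_delay):
    # toy model of the observation described above (an illustrative assumption,
    # not the study's formula): attention wanders for about the length of the
    # overrun, and recovering costs about that much again
    overrun = max(0.0, actual_delay - expected_delay)
    return actual_delay + overrun

print(perceived_cost(4.0, 2.0))    # 6.0 seconds total, matching the example above
print(perceived_cost(0.11, 0.20))  # faster than anticipated: no recovery penalty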
other random refs:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#24 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#25 How many Megaflops and when?
https://www.garlic.com/~lynn/2000c.html#64 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Fri, 05 Jul 2002 16:22:15 GMTcbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Fri, 05 Jul 2002 21:42:23 GMTBrian Inglis writes:
one of the "new" releases saw a 10-15 percent degradation compared to a previous release .... which raised a lot of attention as to performance in general. One of my things that had been lying around had to do with the internal processing of full-screen i/o transactions. If you relied on the standard product internal response numbers, they were based on the elapsed time between certain physical events ... actually 3 different response events .... with the avg. time for each one going into the calculation.
In the above reference, I had collapsed the three separate events into a single event .... resulting in higher aggregate thruput because of better optimization processing ... but the internal avg. response calculation might look worse (since I had a single event that was three times longer than the previous three single events, each one-third as long). Until my changes were merged into the standard product ... a system with my changes in this area with .1 second response was actually three times better than an unmodified system that produced a calculation of .1 second response.
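a toy python illustration of the reporting artifact (numbers made up except the .1 second figure): both systems report ".1 second response" from their internal event averages, but the unmodified system is spending three times the elapsed time per transaction:

# unmodified system: three physical events per transaction, each averaging .1 sec
unmodified_events = [0.1, 0.1, 0.1]
# modified system: the same work collapsed into a single event of .1 sec
modified_events = [0.1]

reported_unmod = sum(unmodified_events) / len(unmodified_events)  # reports .1 sec "response"
reported_mod = sum(modified_events) / len(modified_events)        # also reports .1 sec "response"

print(reported_unmod, reported_mod)                  # identical internal response numbers
print(sum(unmodified_events), sum(modified_events))  # .3 sec vs .1 sec actually elapsed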
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: AFP really was 2000: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers,comp.sys.cdc Date: Sat, 06 Jul 2002 09:42:12 GMT"John Keeney" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: wrt code first, document later Newsgroups: alt.folklore.computers,comp.lang.forth Date: Sat, 06 Jul 2002 09:59:09 GMTSteve O'Hara-Smith writes:
there is the joke(?) in the resource manager. I was doing all this stuff to make it dynamic adaptive based on load and configuration. I got told by some marketing people that the "state of the art" was implementations with manual tuning parameters for system programmers ... and it was necessary to add additional tuning parameters to the resource manager or it wouldn't be considered modern.
So i implemented some tuning parameters. I then wrote the documentation and manuals as part of shipping the resource manager as a product.
now the "real" test of the resource manager was the 2000 automated benchmarks that took three months to run and validated the resource manager's ability to dynamically adapt to a broad range of configurations and workloads. in some sense there were some heuristics in the automated benchmarking manager that looked at the results of previous benchmarks and then chose the load/workload for the next benchmark (aka not only was there dynamic adaption developed for the resource manager ... but there was also dynamic adaption developed for the benchmarking methodology used to validate the resource manager).
So the resource manager ships with documentation, formulas for the dynamic adaption, formulas for the tuning parameters, and all the source code. It is possible to read the documentation and formulas and validate them against the source code. Note however, the tuning parameters have very limited effect ... which wasn't ever discovered/realized.
Part of the dynamic adaptive operation of the resource manager implementation was iterative feedback/feedforward operation. One of the things that you can do in an iterative cycle is control the degrees of freedom of the various items. Giving the dynamic adaptive items more degrees of freedom than the tuning parameters meant that the dynamic adaptive code could compensate for any manual fiddling of the tuning parameters.
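a minimal sketch (python, nothing to do with the actual vm/370 code) of why extra degrees of freedom in the feedback term swamp a manual knob: whatever the knob contributes, the iterative correction converges the combined control value to the same target:

def adaptive_control(target, knob=0.0, gain=0.5, steps=50):
    # iterative feedback: the correction term adjusts the control value toward
    # the target, absorbing whatever the manual knob added
    correction = 0.0
    for _ in range(steps):
        output = knob + correction          # stand-in for the measured system response
        correction += gain * (target - output)
    return knob + correction

print(round(adaptive_control(10.0, knob=0.0), 3))  # ~10.0
print(round(adaptive_control(10.0, knob=3.0), 3))  # ~10.0 ... the knob setting washes out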
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unisys A11 worth keeping? Newsgroups: alt.folklore.computers,comp.sys.unisys Date: Sat, 06 Jul 2002 16:14:35 GMTjmfbahciv writes:
dec & ibm had jointly funded/backed project athena (kerberos, X, palladium, etc). IBM had funded cmu (mach, camelot, andrew stuff). IBM had also funded UCLA (aka aix/370/ps2 was locus .... as opposed to pc/rt's aix ... which was an at&t system v).
OSF Announcement Press Conference, Tuesday, May 17, 1988.
The Auditorium at Equitable Center, Boston
7 Speakers; John Young acted as moderator and master of ceremonies.

-- John Young, Hewlett-Packard - Introduction and Moderator
- We 7 are competitors: Apollo, Digital, H-P, Groupe Bull, IBM, Nixdorf, Siemens.
- Knock-down, drag-out battle, but we've taken the gloves off to re-define the rules for the next round.
- Have jointly funded $90 million Open Software Foundation
- OSF will develop a software environment, including application interfaces, advanced systems extensions, and a new operating system, using POSIX definitions as the starting point.

-- Jacques Stern, Chairman and CEO, Groupe Bull
- Four major customer needs/wants:
  - To easily use applications software on computers from multiple vendors
  - To integrate or unify distributed applications and resources across systems from different vendors and in geographically diverse locations. ("Interoperability")
  - To use the same operating system on many classes of computers, from a workstation to a supercomputer.
  - To have a voice in the formation of standards.
- OSF will address these needs with an open application environment specification.

-- Ken Olson, President of Digital Equipment Corporation
- Industry has improved its products through technology, creativity, good business sense. Vendors have independently chosen the "best" architecture.
- Different operating systems have been created to match the architecture on which they run.
- The consequence is that users must tailor applications to operating systems, and applications can't easily be moved.
- True applications portability requires an international standardized application environment -- a software platform that provides clear specifications for how all applications can interface.
- Vendors then provide the tailoring to match their systems to this applications environment. Can also add enhancements and features that add value to the open system, but don't affect compliance.
- Truly open systems require a kind of public trusteeship in which many players have access to the system and a voice in determining its future.
- OSF will ensure that such a trusteeship is in effect.

-- John Doyle, Executive VP, Hewlett-Packard, and Chairman of the OSF Board
- Truly open systems require software built with an open decision and development process. OSF addresses these needs.
- Impetus was widespread and deep concern about the future of open operating systems.
- But a bigger idea emerged: to make it easier for users to mix and match computers and software from different vendors.
- Specifications and products developed by OSF will meet all of the needs identified by (earlier speakers) today:
  - Application Portability
  - Easier integration of distributed applications and resources from different vendors.
  - Run on a wide range of processors, from workstations to supercomputers.
  - Live up to the name: OPEN Software Foundation: pursue a vendor-neutral process
- 7 Guiding Principles will guide OSF's operation:
  - Seek best technologies from a wide range of sources. Membership open to all for-profit and non-profit organizations, worldwide. All members able to provide inputs on their needs.
  - Ensure openness by supporting accepted international industry standards. Build on existing standards, rather than starting from scratch. POSIX will be starting point; X/Open will be used as well.
  - Work closely with university and industry research organizations to obtain innovative technologies. Have established a Research Institute to fund and oversee relevant research.
  - Decision making process will be visible to foundation members. The results will be publicly available.
  - At various stages, licensees will have timely access to the source code for ease in designing their own applications or porting to their own hardware.
  - Consistent and straightforward procedures for licensing source code. Non-members may obtain source code licenses.
  - Offering will not favor any given hardware architecture.
- Offerings will be phased to give lead time to application developers:
  - OSF Application Environment Specification, Level 0, being released today. Includes POSIX, X/Open Portability Guide, X Windows.
  - OSF AES, Level 1, will expand to areas such as interoperability and user interfaces. OSF will produce an operating system consistent with the Level 1 specifications.
- OSF will provide validation test suites for members and customers to verify conformance.
- Lots more to come.

-- Tom Vanderslice, Chairman and CEO of Apollo Computer
-- Viable international organization: more than $90 Million in initial funding.
-- Membership fees provide additional support:
  - $25000 annually for profit-making organizations
  - $5000 annually for non-profits.
-- Yesterday (5/16) sent out hundreds of invitations, worldwide, to hardware and software suppliers and universities. Membership open to all.
-- Will receive licensing fees from those who choose to adopt its software.
-- Management and technical know-how. Borrowed experts from sponsors. Will begin hiring immediately. Expect to attract "best and brightest" because should be an interesting place to work.
-- OSF has access to some technological assets from members. Will base its development efforts on its own research as well as on technologies licensed from members. Those under consideration are:
  - Apollo's Network Computing System
  - Bull's UNIX system-based multiprocessing architecture
  - Digital's user interface toolkit and style guides for X Windows
  - Hewlett-Packard's National Language Support
  - Nixdorf's relational database technology
  - Siemen's OSI protocol support
  Will include features to support current System V-based and Berkeley-based UNIX applications. Operating System will use core technology from a future version of IBM's AIX as a development base.

-- Claus Kessler, President, CEO and Chairman of Siemens
-- University research has always played a key role in advancement of operating systems technology.
-- Impressive results:
  - MIT's X Windows
  - Berkeley's utilities, tools and virtual memory support for UNIX
  - University of Karlsberg's work on OSI and large networks
  - University of Wisconsin's contributions to TCP/IP and OSI
-- OSF will sponsor research on open software and technology that contribute to its goals.
-- OSF has created a research institute to build relations and interfaces with university and research organizations worldwide.
-- Will be structured by a formation committee. Members so far:
  - Dr. Lynn Conway, University of Michigan
  - Professor Michael Dertouzos, MIT
  - Dean James F. Gibbons, Stanford
  - Professor Roger Needham, University of Cambridge
  - Dr. Raj Reddy, Carnegie-Mellon
  - Professor George Turin, University of California, Berkeley

-- Klaus Luft, Chairman of the Executive Board, Nixdorf Computer
-- OSF is unusual: right from the start, launched worldwide.
-- No standard can be a true standard unless it is an international standard. No open standard is genuinely open unless it is open worldwide.
-- All the major computer vendors are international. But more important is the fact that many of their customers operate internationally.
-- OSF is committed to international standards. OSF's OS will conform, right from the start, with the X/Open specification. Will work with X/Open and ISO to advance new standards.
-- OSF development will be carried out on an international basis. Goal is to access the widest possible range of talents and technologies. There will be more than one research center.
-- OSF will work closely with universities and research laboratories throughout the world. OSF management will be an international team.

-- John Akers, Chairman and CEO, IBM
-- Computer industry has grown and prospered because its products serve a wide range of customer needs. We expect that to continue.
-- But we must be responsive to many different customer requirements. In particular, those customers currently using UNIX want:
  - Ability to select from a wide range of application software, and to use that software on a variety of systems from different vendors;
  - To choose hardware and software that meets their needs and solves their problems, with the expectation that it will all work together.
  - To be able to choose a software environment that spans a wide range of processors.
-- We've concluded that these customers can be best served if an independent body, beholden to no one vendor but benefiting from the expertise and support of many, can create a common set of specifications for a POSIX and X/Open-based software environment.
-- We believe the OSF products will complement the many unique architectures our industry will continue to offer, and that our customers will be the winners.
-- OSF participants are all in a race for customer preference and loyalty. We will all be adding value to differentiate our products.

Q & A, moderated by John Young, Hewlett-Packard
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Application Environment Specification, Level 0

Operating System
  POSIX Standards: ANSI, ISO, FIPS
  X/Open XPG3, base level
Languages
  C: ANSI X3J11
  FORTRAN: ANSI X3.9-1978, ISO 1539-1980(E), FIPS 069-1
  Pascal: ANSI X3J9, ISO 7185-1983, FIPS 109
  Ada: ANSI/MIL 1815A-1983, FIPS 119
  Basic:
    Minimal Basic: ANSI X3.60-1978, FIPS 068-1
    Full Basic: ANSI X3.113-1987, FIPS 068-2
  Cobol: ANSI X3.23-1985 High Level, FIPS 021-2
  LISP: Common LISP, ANSI X3J13
User Interface
  X Window System Version 11, ANSI X3H3
  Libraries: X language bindings, ANSI X3H3
Graphics Libraries
  GKS, ANSI X3.124-1985, FIPS 120
  PHIGS, ANSI X3H3.1
Network Services
  Selected ARPA/BSD Services
    TCP (MIL-STD-1778), IP (MIL-STD-1777)
    SMTP (MIL-STD-1781), TELNET (MIL-STD-1782)
    FTP (MIL-STD-1780)
  Selected OSI Protocols
Database Management
  SQL: ANSI X3.135-1986 (with 10/87 addendum), levels 1 & 2, FIPS 127
--
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: wrt code first, document later Newsgroups: alt.folklore.computers,comp.lang.forth Date: Sat, 06 Jul 2002 16:43:08 GMTjmfbahciv writes:
the resource manager that i did for vm/370 had an enhanced "#CP IND"
command which displayed recent running avgs for total system cpu
utilization, paging rate, various q-lengths/size and
"response". Response was actually the avg. "interactive" queue service
time ... which is similar but could be significantly different as per
https://www.garlic.com/~lynn/2002i.html#52
https://www.garlic.com/~lynn/2001m.html#19
aka ... the resource manager couldn't do a good job unless it accurately measured everything (with no overhead).
you could sort of follow your own progress via the "q time". The "indicate" command was a sensitive issue in some circles since it provided information about total system activity. Most places thought informed users aided in the intelligent use of the system. But there were some places that thought that users should be as isolated/insulated as possible from what was happening concurrently on the system.
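for flavor, a sketch (python) of the sort of cheap exponentially smoothed running average an "indicate"-style display can maintain with negligible overhead (the smoothing weight and sample intervals here are made up, not the actual vm/370 values):

class RunningAvg:
    # exponentially smoothed running average; one multiply-add per sample
    def __init__(self, weight=0.25):
        self.weight = weight
        self.value = None
    def update(self, sample):
        if self.value is None:
            self.value = float(sample)
        else:
            self.value += self.weight * (sample - self.value)
        return self.value

cpu = RunningAvg()
for util in (20, 35, 90, 85, 40):   # per-interval cpu utilization samples
    cpu.update(util)
print(round(cpu.value, 1))          # recent running average, as the display might report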
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: wrt code first, document later Newsgroups: alt.folklore.computers,comp.lang.forth Date: Sat, 06 Jul 2002 16:46:23 GMTjmfbahciv writes:
... oh yes, CMS from the start also had the blip command which you could turn off/on. with "blip on" .... for every two seconds of virtual execution time (didn't include cp kernel time) cms would "wiggle" the 2741 type-ball (i.e. not actually type anything).
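a rough analogue in python (process cpu time standing in for virtual cpu time; the real blip was driven off the virtual machine's cpu usage, not wall clock, and "wiggled" the type-ball rather than printing anything):

import time

BLIP_INTERVAL = 2.0   # seconds of cpu time between blips, per the description above

def run_with_blips(step, blip=lambda: print("*", end="", flush=True)):
    # call step() repeatedly; emit a blip for every BLIP_INTERVAL of cpu time consumed
    next_blip = time.process_time() + BLIP_INTERVAL
    while step():
        if time.process_time() >= next_blip:
            blip()
            next_blip += BLIP_INTERVAL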
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Sat, 06 Jul 2002 19:10:26 GMTCharles Shannon Hendrix writes:
I've conjectured that possibly part of the reason for trying to leverage window-based pacing to address congestion issues is that many of the machines & systems used in the '80s had very poor hardware and/or software support for time-based facilities.
rate-based pacing was one of the features for HSP (high speed protocol), a standards activity in the late '80s. However, part of the problem was that HSP was being worked on in ANSI (x3s3.3) targeted for ISO ... and ISO had this strong OSI bent .... aka if it violated the OSI model it wouldn't get very far. Much of HSP had to do with taking the level 4 (transport) interface directly to the LAN interface ... approximately the middle of level 3 (network). ISO sort of tried to squint real hard and ignore that LANs had collapsed into a single layer ... the bottom half of layer 3, and all of layers 2 and 1 (aka not only did LANs collapse several layers into one ... but the LAN boundary didn't correspond to any defined OSI boundary ... being in the middle of layer 3). HSP was to cut directly from the layer 4 interface to the LAN interface. Again the ISO forces had to squint a lot when objecting to HSP .... it bypassed the level 3 boundary interface and talked directly to the LAN boundary interface ... but at least it had a boundary interface that corresponded to a defined OSI layer (level 4) ... which LANs didn't.
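a minimal sketch of the distinction (python; the function names and the fixed rate are illustrative): a rate-paced sender spaces packets to a target rate with a timer instead of bursting a full window and waiting on acknowledgements to clock the next burst:

import time

def rate_paced_send(packets, send, bytes_per_sec):
    # space transmissions so the aggregate rate stays near bytes_per_sec;
    # a real implementation would adapt the rate from congestion feedback
    next_time = time.monotonic()
    for pkt in packets:
        delay = next_time - time.monotonic()
        if delay > 0:
            time.sleep(delay)                        # pacing timer, not ack clocking
        send(pkt)
        next_time += len(pkt) / bytes_per_sec        # gap proportional to packet size

rate_paced_send([b"x" * 1500] * 5, lambda p: None, bytes_per_sec=1_500_000)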
random rate-based pacing, slow-start, congestion, and hsp postings.
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#0 Early tcp development?
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#1 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#5 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#9 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2001b.html#57 I am fed up!
https://www.garlic.com/~lynn/2001e.html#24 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#25 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#27 Unpacking my 15-year old office boxes generates memory refreshes
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#26 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#46 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#49 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#50 Why did OSI fail compared with TCP-IP?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Sat, 06 Jul 2002 19:21:01 GMTCharles Shannon Hendrix writes:
sna had a mainframe api (like lu6.2), vtam (sscp, or pu5) and 37xx
(ncp, or pu4). there was some conjecture that the driving factor in the
pu5/pu4 evolution/definition was largely a project that i
worked on as an undergraduate that created the PCM controller business.
misc. refs:
https://www.garlic.com/~lynn/submain.html#360pcm
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: wrt code first, document later Newsgroups: alt.folklore.computers,comp.lang.forth Date: Sat, 06 Jul 2002 19:33:47 GMTTim Shoppa writes:
having learned to drive on a non-synchromesh truck ... the starter was a pedal on the floor and you had to listen to the sound of the engine and double clutch just right to shift gears; none of this sissy tachs & synchromesh stuff.
https://www.garlic.com/~lynn/38yellow.jpg
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unisys A11 worth keeping? Newsgroups: alt.folklore.computers,comp.sys.unisys Date: Sun, 07 Jul 2002 13:24:35 GMTjmfbahciv writes:
I did get involved in advocating doing a revised kernel ... but the proposal was to take bits and pieces of existing things and with a small group of people cobble them all together and then incrementally revise using new programming methodologies. Unfortunately, it got a lot of interest and had lots of resources assigned to it. Towards the end it had 300 people writing documentation for a year ... before it was decided to terminate. This was pre-OSF and pre-HA/CMP.
this was further borne out by (at least) pink and spring.
random rewrite postings:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2001l.html#25 mainframe question
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
old fs postings:
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#43 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002c.html#1 Gerstner moves over as planned
https://www.garlic.com/~lynn/2002c.html#19 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002d.html#10 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
misc. pink
https://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
spring posting:
https://www.garlic.com/~lynn/2001j.html#32 Whom Do Programmers Admire Now???
and of course ha/cmp (which wasn't a new operating system):
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: BIOMETRICS Newsgroups: alt.computer.security Date: Sun, 07 Jul 2002 13:32:21 GMT"John Stadelman" writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: subjective Q. - what's the most secure OS? Newsgroups: comp.security.unix Date: Sun, 07 Jul 2002 14:58:18 GMTdavid20 writes:
Multics is held up as one example.
The other examples are a couple of the virtual machine implementations where the basic kernel is relatively compact and there is a compact, easy to understand, strongly enforced API. Part of the issue is that complexity itself tends to contribute to failures (security or otherwise); in part because with complexity come mistakes (whether by code developers, sysadmins, sysprogs, application developers, etc).
KISS can be considered a prime principle of security.
Sometimes there are references to security by obfuscation thru complexity. The problem with complexity is that over time it increases the probability that the developers themselves will make mistakes creating exploit exposures. Any contribution to security that complexity might make is frequently extremely transient.
One of the highest security ratings went to a VMS kernel that was wrappered with some form of virtual machine monitor. There have also been some references to uses of ibm virtual machine implementations in high integrity installations (in some sense various of the time-sharing service bureaus needed extremely high strength integrity/security because of the open access by a diverse community ... and real revenue was at stake). I once had to help support a system that had fairly open access by MIT and BU students.
Faulty paradigms and obfuscation can also lead to mistakes. Probably the most noted recently (over the past ten years) is the implicit length paradigm in common C string handling, being one of the single largest causes of security exploits (so by implication one might conclude that anything involving c programming might be eliminated as a candidate).
random buffer overflow refs:
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001c.html#32 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001c.html#38 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#72 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#76 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#19 Buffer overflow
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#27 Buffer overflow
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#29 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#33 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2002.html#35 Buffer overflow
https://www.garlic.com/~lynn/2002.html#37 Buffer overflow
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Hercules and System/390 - do we need it? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 07 Jul 2002 23:04:02 GMTIBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
there were at least three major cp-based service bureaus .... all with heavily modified versions of CP & CMS ... NCSS, IDC, and Tymshare. Both NCSS & IDC were formed in the CP/67 time-frame, both including former IBMers that had worked on CP/67 (I have a tale about, as an undergraduate, having to backfill teaching a cp/67 class because the designated IBMer had left to form NCSS).
Tymshare started an OS rewrite from scratch called GNOSIS. When M/D bought Tymshare ... a number of things were spun off ... including GNOSIS into something called KeyKos. There were Keykos efforts within the past 5-6 years or so with benchmarks showing a significant amount of MVS functional emulation but with transaction thruput comparable to or better than TPF (although gnosis/keykos were done by different people than RASP/aspen .... there were some similarities between the RASP/aspen design point and at least the keykos design point ... the original gnosis objective was somewhat different than what evolved into keykos).
Another from the period that was somewhat similar was MTS at the univ. of mich. ... which also had a significant OS/360 simulation, allowing the execution of numerous OS/360 applications.
slightly misthreaded ... i think the reference to OS/360 under CP/67 ... wasn't to the running of MFT or MVT in a virtual machine ... i believe the reference was to the os/360 simulation provided in CMS so that numerous OS/360 applications could execute (like compilers). There used to be a joke that the 32kbyte code-size implementation of OS/360 (simulation) in cms was significantly more efficient and frugal than the implementation of OS/360 in MVT.
random gnosis/keykos postings:
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
misc. MTS postings:
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
misc. uts & apsen refs:
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Hercules and System/390 - do we need it? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 07 Jul 2002 23:17:43 GMTAnne & Lynn Wheeler writes:
remember in the tale below .... both Tymshare & NCSS are cp/cms based
time-sharing services:
http://www.decosta.com/Nomad/tales/history.html
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: privileged IDs and non-privileged IDs Newsgroups: comp.security.unix Date: Mon, 08 Jul 2002 18:25:35 GMTnoname writes:
a basic tenet is that there never be any ambiguity when performing authentication for an ID .... that there is never more than one person associated with the authentication. that doesn't preclude one person having more than one ID (but it does preclude two or more people sharing the same ID). Multiple people may share the same role ... but never should there be people sharing the same role-id.
Even if different authentication requirements aren't currently supported, sometimes multiple role-ids might be assigned to a person in anticipation of migration to a more sophisticated security environment.
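a trivial sketch of checking that tenet against an id registry (python; the data shape is an assumption for illustration):

def shared_id_violations(id_to_people):
    # every ID (including role-ids) must map to exactly one person;
    # one person holding several IDs is fine, two people sharing an ID is not
    return {i: people for i, people in id_to_people.items() if len(set(people)) != 1}

assignments = {
    "lynn":      ["lynn"],
    "backup-op": ["lynn"],          # same person holding an extra role-id: ok
    "oper":      ["alice", "bob"],  # two people sharing one ID: flagged
}
print(shared_id_violations(assignments))   # {'oper': ['alice', 'bob']}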
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unisys A11 worth keeping? Newsgroups: alt.folklore.computers,comp.sys.unisys Date: Tue, 09 Jul 2002 16:17:33 GMTcbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
some of the unix async i/o issues have to do with the trade-off between the application having to support various serialization paradigms in the use of data vis-a-vis doing some number of buffer copies (some async i/o implementations totally eliminate buffer copies, doing i/o directly in/out of application buffer memory).
random past buffer copy/async i/o threads
https://www.garlic.com/~lynn/93.html#32 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#00 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/2000b.html#5 "Mainframe" Usage
https://www.garlic.com/~lynn/2001c.html#26 The Foolish Dozen or so in This News Group
https://www.garlic.com/~lynn/2001d.html#59 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2002e.html#34 Lisp Chips
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does Diffie-Hellman schema belong to Public Key schema family? Newsgroups: sci.crypt Date: Tue, 09 Jul 2002 15:53:27 GMT"nntp.lucent.com" writes:
The common business processes associated with the public/private key convention are 1) digital signature and 2) encryption.
In the digital signature business process, the hash of the message is "encrypted" with the private key. Anybody with the public key can verify the message signature by computing the message hash and comparing it with the value of the "decrypted" digital signature (aka the original hash). The business convention specifies that only the owner of the private key ever has access to (use of) its value and therefore only the owner can generate digital signatures with that private key. Arbitrary populations have access to the public key and therefore can validate the digital signature. The digital signature convention works based on the business process private key specification that there is only one owner that has access to a particular private key.
The other common use of asymmetric keys is to address the key distribution problems found in typical secret key encryption paradigms. Public keys can be freely distributed and anybody can encrypt messages using the public keys ... but only the entities with access to the corresponding private keys can decode the message.
Note that the business process scenario of encryption may have fewer privacy requirements regarding the private key (exclusivity to a single individual) than the digital signature business process scenario. The use of public/private keys in the digital signature business process scenario tends to get into issues about individual authentication and other business issues like non-repudiation.
However, that doesn't preclude relaxing business process privacy requirements in the digital signature case to purely message authentication, aka a "private key" is secured but not necessarily limited to a single individual.
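a minimal sketch of the two business processes in python using the pyca/cryptography package (the library choice, key size, and padding choices are illustrative, not part of the above):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"sample transaction"

# 1) digital signature: the hash is transformed with the private key; any holder
#    of the public key can verify (verify() raises InvalidSignature on mismatch)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())
public_key.verify(signature, message, pss, hashes.SHA256())

# 2) encryption: anybody can encrypt with the freely distributed public key;
#    only the private key holder can recover the plaintext
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message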
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: CDC6600 - just how powerful a machine was it? Newsgroups: alt.folklore.computers Date: Tue, 09 Jul 2002 19:06:54 GMTtmm@spamfilter.asns.tr.unisys.com (Tim McCaffrey) writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Hercules and System/390 - do we need it? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 09 Jul 2002 19:25:21 GMTAnne & Lynn Wheeler writes:
In addition to the focus, nomad, etc, also note that System/R (the
original relational database) was totally developed on the VM platform and
then there was technology transfer from SJR to Endicott for the SQL/DS
product. Later one of the people in the referenced meeting below
handled the SQL/DS technology transfer from Endicott to STL for DB2
(long way around since SJR and STL are only about 10 miles apart).
https://www.garlic.com/~lynn/95.html#13 SSA
further aside, the tymshare complex and the us hone complex were only a couple miles apart.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does Diffie-Hellman schema belong to Public Key schema family? Newsgroups: sci.crypt Date: Tue, 09 Jul 2002 19:01:35 GMTDaniel Mehkeri writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: TCPA Newsgroups: sci.crypt Date: Tue, 09 Jul 2002 21:47:44 GMT"Maciek" writes:
they have pointers to specifications on the above page.
i was a panelist in the assurance session in what (I believe) was the tcpa track
at the intel developer's conference last year. i mentioned some more
details of the AADS strawman ... and made the claim that it should be able
to perform all the stated duties of a TPM (packaged on a motherboard
instead of in a dongle or card) w/o changing its implementation. see
old reference to aads chip strawman at:
https://www.garlic.com/~lynn/x959.html#aads
note that the idea for a TPM isn't new ... i was involved in looking at the possibility of a tpm-like chip on a PC motherboard 20 years ago.
random past posting on tpm
https://www.garlic.com/~lynn/aadsm5.htm#asrn4 assurance, X9.59, etc
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Lesson In Security Newsgroups: sci.crypt,alt.hackers.malicious Date: Wed, 10 Jul 2002 02:26:37 GMTashwood@msn.com (Joseph Ashwood) writes:
.... or security proportional to risk
note that merchants are typically held financially responsible (this isn't the consumer centric view of fraud ... this is the fraud centric view of potential exposure).
one of the things that x9.59 is targeted at addressing is removing the
cc# as an exploitable secret (aka capturing an existing transaction log
with cc#s is not sufficient to execute fraudulent transactions). misc refs:
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subintegrity.html#fraud
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Unisys A11 worth keeping? Newsgroups: alt.folklore.computers Date: Wed, 10 Jul 2002 15:08:52 GMTPete Fenelon writes:
misc. refs:
http://burks.bton.ac.uk/burks/foldoc/17/80.htm
http://www.channelu.com/NeXT/NeXTStep/3.3/nd/DevTools/14_MachO/MachO.htmld/index.html
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: A Lesson In Security Newsgroups: sci.crypt Date: Wed, 10 Jul 2002 15:30:54 GMTMichael Sierchio writes:
there are still things like the waiter swipe fraud that has been written up. a waiter in nyc had a little magstripe reader inside their jacket and when they took the card ... they also swiped the card inside their jacket and recorded the information in a PDA. That night the contents of the PDA were mailed over the internet to somebody across the country or on the other side of the world. Within a couple hrs, counterfeit cards were on the street being used. Since the waiter swipe fraud recorded tracks 1&2 ... it would pass the electronic auth.
Other types are to create counterfeit cards just using transaction information (card number, expiration date, etc) ... and then scratch the magstripe. The scratch means that it fails the magnetic swipe ... and the merchant then decides to take it with a physical impression (and manually enters the number on the POS terminal for the electronic auth).
There are also skimming & counterfeit cards with respect to debit (not just credit).
general
https://www.garlic.com/~lynn/subintegrity.html#fraud
some specifics.
https://www.garlic.com/~lynn/aadsm6.htm#pcards The end of P-Cards?
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm6.htm#pcards3 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aepay6.htm#ccfraud2 "out of control credit card fraud"
https://www.garlic.com/~lynn/aepay9.htm#skim High-tech Thieves Snatch Data From ATMs (including PINs)
https://www.garlic.com/~lynn/aepay10.htm#3 High-tech Thieves Snatch Data From ATMs (including PINs)
https://www.garlic.com/~lynn/2001f.html#40 Remove the name from credit cards!
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does Diffie-Hellman schema belong to Public Key schema family? Newsgroups: sci.crypt Date: Wed, 10 Jul 2002 15:43:06 GMTDavid Hopwood writes:
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: HONE was .. Hercules and System/390 - do we need it? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 10 Jul 2002 16:18:24 GMTone of the things that I liked about HONE was that they had an enormous appetite for pushing the technology envelope. They had an extremely demanding service and clientele (all the company's salesmen and field support people in the world). I supplied them a heavily customized kernel and other functions.
The basic service delivery was a large CMS\APL (and then APL\CMS) application that provided a "padded cell" time-sharing environment with lots of customized functions that the user could invoke. The total dependency on APL gave HONE a very large appetite for CPU cycles. The major APL HONE application was code-named Sequoia (most of the time, a HONE user saw little or no native CMS user interface).
Starting with the 370/125 ... it was no longer possible for a salesman to place a machine order without the use of HONE (i.e. in the 360 days, a salesman could fill out an order form for a customer and get a machine, starting with 370/115&125 ... order specifications were generated by HONE thru a "configurator" application).
In the VM/370 release 2 time-frame ... I also provided HONE "shared modules" and PAM (CMS paged mapped filesystem support). Shared modules was a mechanism by which CMS executables could be identified as containing "shareable" segments. When the CMS supervisor went to load CMS executable code, it would look in the control information for the "shareable" option and then invoke CP kernel options to "load" the appropriate segments in "shared mode". The "Shared Modules" feature was dependent on the executables being resident in a paged mapped filesystem (not a normal CMS filesystem).
The base CP system had a method of defining shared segments ... but it used a mechanism that involved a kernel resident module that defined the memory space (and the segments of the memory that were "shareable") and the place in the CP filesystem where the memory image was to be located. Changes required rebuilding & rebooting the kernel. Furthermore, the invocation of the memory image was only available thru the virtual IPL/boot command.
"Shared Modules" had the advantage that there were no CP kernel processes involved (no rebuilding the kernel, etc). For VM/370 release 3, a subset of the CMS changes were merged into the product .... and a new CP interface to the standard CP saved memory images was created. This allowed an APL or GML processor to be loaded as shared segments within the CMS environment without having to reboot the virtual machine. However, it continued to have enormous drawbacks compared to the shared module implementation. The full-blown paged mapped filesystem only saw limited release in the XT/370 product.
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does Diffie-Hellman schema belong to Public Key schema family? Newsgroups: sci.crypt Date: Wed, 10 Jul 2002 18:28:48 GMTdjohn37050@aol.com (DJohn37050) writes:
1) in the original posting both the term encrypt and decrypt were
double quoted. in many instances this is used to alert the audience to
a possible use of the term different (or broader &/or narrower) than
what they might be expecting. In a more formal presentation ... there
might have been a reference number and a detailed explanation in the
appendix about what might be the audiences' expected use of the term
vis-a-vis the particular use of the term:
https://www.garlic.com/~lynn/2002i.html#67
2) in the attached, i would claim that even the DSA private key "operation" is covered. The intent of doing the DSA private key "operation" is not so much to hide the data but to guarantee the integrity of the data. Note however there have been some mechanisms that do "encrypt" a whole message solely for integrity purposes (not for privacy reasons) ... although the "use" semantics might possibly be used to infer integrity (as opposed to "known" semantics which presumably refer to secrecy and privacy).
https://www.garlic.com/~lynn/secure.htm
encryption
(I) Cryptographic transformation of data (called 'plaintext') into
form (called 'ciphertext') that conceals the data's original meaning
to prevent it from being known or used. If the transformation is
reversible, the corresponding reversal process is called 'decryption',
which is a transformation that restores encrypted data to its original
state. (C) Usage note: For this concept, ISDs should use the verb 'to
encrypt' (and related variations: encryption, decrypt, and
decryption). However, because of cultural biases, some international
usage, particularly ISO and CCITT standards, avoids 'to encrypt' and
instead uses the verb 'to encipher' (and related variations:
encipherment, decipher, decipherment). (O) 'The cryptographic
transformation of data to produce ciphertext.' (C) Usually, the
plaintext input to an encryption operation is cleartext. But in some
cases, the plaintext may be ciphertext that was output from another
encryption operation. (C) Encryption and decryption involve a
mathematical algorithm for transforming data. In addition to the data
to be transformed, the algorithm has one or more inputs that are
control parameters: (a) key value that varies the transformation and,
in some cases, (b) an initialization value that establishes the
starting state of the algorithm. [RFC2828] (Reversible) transformation
of data by a cryptographic algorithm to produce ciphertext, i.e. to
hide the information content of the data. [ISO/IEC WD 18033-1
(12/2001)] [SC27] The process of making information indecipherable to
protect it from unauthorized viewing or use, especially during
transmission or storage. Encryption is based on an algorithm and at
least one key. Even if the algorithm is known, the information cannot
be decrypted without the key(s). [AJP]
misc for completeness ... encryption is the "cryptographic
transformation" and digital signature is a "value computed with a
cryptographic algorithm".
digital signature
(I) A value computed with a cryptographic algorithm and appended to a
data object in such a way that any recipient of the data can use the
signature to verify the data's origin and integrity. (I) 'Data
appended to, or a cryptographic transformation of, a data unit that
allows a recipient of the data unit to prove the source and integrity
of the data unit and protect against forgery, e.g. by the recipient.'
(C) Typically, the data object is first input to a hash function, and
then the hash result is cryptographically transformed using a private
key of the signer. The final resulting value is called the digital
signature of the data object. The signature value is a protected
checksum, because the properties of a cryptographic hash ensure that
if the data object is changed, the digital signature will no longer
match it. The digital signature is unforgeable because one cannot be
certain of correctly creating or changing the signature without
knowing the private key of the supposed signer. (C) Some digital
signature schemes use a asymmetric encryption algorithm (e.g., see:
RSA) to transform the hash result. Thus, when Alice needs to sign a
message to send to Bob, she can use her private key to encrypt the
hash result. Bob receives both the message and the digital
signature. Bob can use Alice's public key to decrypt the signature,
and then compare the plaintext result to the hash result that he
computes by hashing the message himself. If the values are equal, Bob
accepts the message because he is certain that it is from Alice and
has arrived unchanged. If the values are not equal, Bob rejects the
message because either the message or the signature was altered in
transit. (C) Other digital signature schemes (e.g., see: DSS)
transform the hash result with an algorithm (e.g., see: DSA, El Gamal)
that cannot be directly used to encrypt data. Such a scheme creates a
signature value from the hash and provides a way to verify the
signature value, but does not provide a way to recover the hash result
from the signature value. In some countries, such a scheme may improve
exportability and avoid other legal constraints on usage. [RFC2828] A
cryptographic method, provided by public key cryptography, used by a
message's recipient and any third party to verify the identity of the
message's sender. It can also be used to verify the authenticity of
the message. A sender creates a digital signature for a message by
transforming the message with his or her private key. A recipient,
using the sender's public key, verifies the digital signature by
applying a corresponding transformation to the message and the
signature. [AJP] Data appended to, or a cryptographic transformation
of, a data unit that allows a recipient of the data unit to prove the
origin and integrity of the data unit and protect the sender and the
recipient of the data unit against forgery by third parties, and the
sender against forgery by the recipient. [ISO/IEC 11770-3: 1999] Data
appended to, or a cryptographic transformation of, a data unit that
allows the recipient of the data unit to prove the origin and
integrity of the data unit and protect against forgery, e.g. by the
recipient. [ISO/IEC FDIS 15946-3 (02/2001)] A cryptographic
transformation of a data unit that allows a recipient of the data unit
to prove the origin and integrity of the data unit and protect the
sender and the recipient of the data unit against forgery by third
parties, and the sender against forgery by the recipient. NOTE -
Digital signatures may be used by end entities for the purposes of
authentication, of data integrity, and of non-repudiation of creation
of data. The usage for non-repudiation of creation of data is the most
important one for legally binding digital signatures. [ISO/IEC 15945:
2002] [SC27] A digital signature is created by a mathematical computer
program. It is not a hand-written signature nor a computer-produced
picture of one. The signature is like a wax seal that requires a
special stamp to produce it, and is attached to an Email message or
file. The origin of the message or file may then be verified by the
digital signature (using special tools). [RFC2504] A method for verifying that a
message originated from a principal and that it has not changed en
route. Digital signatures are typically generated by encrypting a
digest of the message with the private key of the signing
party. [IATF][misc] A non-forgeable transformation of data that allows
the proof of the source (with non-repudiation) and the verification of
the integrity of that data. [FIPS140] Data appended to, or a
cryptographic transformation of, a data unit that allows the recipient
of the data unit to prove the origin and integrity of the data unit
and protect against forgery, e.g. by the recipient. [ISO/IEC 9798-1:
1997] [SC27]
as an aside, with regard to non-repudiation in the above .... see the
detailed definition involving the requirement for non-repudiation
service(s).
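as another purely illustrative aside ... the "alice signs / bob verifies" flow in the RFC2828 text above (hash the message, transform the hash with the private key, verifier applies the public key and compares) can be sketched with a toy RSA-style scheme. the textbook parameters (p=61, q=53) and the absence of any padding are my own illustration-only choices ... this is not how a real implementation would do it:

import hashlib

# hypothetical toy RSA key pair: p=61, q=53 => n=3233, e=17, d=2753
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # hash the data object, then transform the hash with the private key
    h = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # apply the public key to the signature and compare against a fresh hash
    h = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"pay to the order of somebody"
sig = sign(msg)
assert verify(msg, sig)                  # intact message & signature verify
assert not verify(msg, (sig + 1) % n)    # a tampered signature is rejected
# altering the message instead would change the hash and likewise fail to verify

the deterministic transform of the hash is exactly the "protected checksum" idea in the definition ... real schemes add padding and use far larger keys.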
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Does Diffie-Hellman schema belong to Public Key schema family? Newsgroups: sci.crypt Date: Wed, 10 Jul 2002 19:16:36 GMToh, and i forgot ... also from
FIPS186-2 ... it doesn't actually make the statement that the specified algorithms are all "cryptographic" algorithms. However, there is fairly common use of the definition of "encrypt" or "encryption" under which the DSS algorithmic transformation of the SHA-1 hash would be considered a cryptographic transformation (as per its use in the common definitions for DSA & DSS above).
The FIPS186-2 does mention some of the business processes involved for
"private" keys ... aka "never shared" and "can be performance only by
the possessor of the user's private key". Even with common defintion
of both DSA & DSS mentioning cryptographic transformation ... I still
felt that I might make use of the quotations around "encrypt" and
"decrypt" in an attempt to avoid any knee-jerk reaction to the
particular use.
https://www.garlic.com/~lynn/2002i.html#67
FIPS186-2 can be found at:
http://csrc.nist.gov/publications/fips/
from fips186-2
Explanation: This Standard specifies algorithms appropriate for
applications requiring a digital, rather than written, signature. A
digital signature is represented in a computer as a string of binary
digits. A digital signature is computed using a set of rules and a set
of parameters such that the identity of the signatory and integrity of
the data can be verified. An algorithm provides the capability to
generate and verify signatures. Signature generation makes use of a
private key to generate a digital signature. Signature verification
makes use of a public key which corresponds to, but is not the same
as, the private key. Each user possesses a private and public key
pair. Public keys are assumed to be known to the public in
general. Private keys are never shared. Anyone can verify the
signature of a user by employing that user's public key. Signature
generation can be performed only by the possessor of the user's
private key. A hash function is used in the signature generation
process to obtain a condensed version of data, called a message digest
(see Figure 1). The message digest is then input to the digital
signature (ds) algorithm to generate the digital signature. The
digital signature is sent to the intended verifier along with the
signed data (often called the message). The verifier of the message
and signature verifies the signature by using the sender's public
key. The same hash function must also be used in the verification
process. The hash function is specified in a separate standard, the
Secure Hash Standard (SHS), FIPS 180-1. FIPS approved ds algorithms
must be implemented with the SHS. Similar procedures may be used to
generate and verify signatures for stored as well as transmitted data.
=====
however, FIPS186-2 does specifically refer to both the RSA and ECDSA ds algorithms as FIPS approved cryptographic algorithms:
=====
1. INTRODUCTION
This publication prescribes three algorithms suitable for digital
signature (ds) generation and verification. The first algorithm, the
Digital Signature Algorithm (DSA), is described in sections 4-6 and
appendices 1-5. The second algorithm, the RSA ds algorithm, is
discussed in section 7 and the third algorithm, the ECDSA algorithm,
is discussed in section 8 and recommended elliptic curves in appendix
6.
7. RSA DIGITAL SIGNATURE ALGORITHM
The RSA ds algorithm is a FIPS approved cryptographic algorithm for
digital signature generation and verification. This is described in
ANSI X9.31.
8. ELLIPTIC CURVE DIGITAL SIGNATURE ALGORITHM (ECDSA)
The ECDSA ds algorithm is a FIPS approved cryptographic algorithm for
digital signature generation and verification. ECDSA is the elliptic
curve analogue of the DSA. ECDSA is described in ANSI X9.62. The
recommended elliptic curves for Federal Government use are included in
Appendix 6.
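and again purely as an illustrative aside ... the generation/verification flow in the FIPS explanation quoted above (private key signs, corresponding public key verifies, same hash function on both ends) might be sketched like this; the python "cryptography" package is my assumption, not anything named in FIPS186-2, and the SHA-1 / 1024-bit parameters are used only to mirror the era of the quoted text, not current practice:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

private_key = dsa.generate_private_key(key_size=1024)  # "private keys are never shared"
public_key = private_key.public_key()                  # "assumed to be known to the public"

message = b"the signed data (often called the message)"
signature = private_key.sign(message, hashes.SHA1())   # message digest -> ds algorithm

try:
    # "the same hash function must also be used in the verification process"
    public_key.verify(signature, message, hashes.SHA1())
    print("signature verifies: origin & integrity check out")
except InvalidSignature:
    print("signature does not verify")
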
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Fw: HONE was .. Hercules and System/390 - do we need it? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 10 Jul 2002 20:33:43 GMTwmklein@IX.NETCOM.COM (William M. Klein) writes:
random past posts:
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#60 Living legends
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: HONE Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 10 Jul 2002 21:48:34 GMTAnne & Lynn Wheeler writes:
It turns out the resource manager was a lot of code that had been dropped in the CP/67 to VM/370 rewrite ... and the resource manager code was SMP sensitized (from cp/67 smp support). I was also using the code as part of a 5-way SMP project called VAMPS (that never shipped as a product, but I got to put a lot of stuff down into the micro-code, more than had been in either VMA or ECPS). In any case, when SMP support was incorporated into the standard product with VM/370 release 4, something like 80 percent of the Resource Manager code migrated into the base (non-charged-for SCP code).
Prior to that, because of HONE's APL affinity and hunger for CPU cycles, I built a VM/370 release 3 system for HONE with SMP support, and they upgraded all the processors to 2-cpu units (at least at the US HONE complex in california).
misc. microcode discussions
https://www.garlic.com/~lynn/submain.html#360mcode
misc. smp discussions
https://www.garlic.com/~lynn/subtopic.html#smp
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: McKinley Cometh... Newsgroups: comp.os.vms,comp.arch Date: Thu, 11 Jul 2002 02:29:02 GMTDavid Gay writes:
Then the 801 ROMP (Research OPD Micro Processor) project was started to be an office products displaywriter follow-on with CPr as the base operating system (and the PL.8 language). I believe there was then some analysis that while a ROMP-based displaywriter was cost effective given enuf seats sharing the same machine ... the least expensive ROMP-based displaywriter was still more expensive than the most expensive acceptable displaywriter configuration. That spawned the morphing of ROMP/CPr into the PC/RT workstation, using the company that had been hired to do the PC/IX port .... doing the port to the VRM machine abstraction layer (which retained some amount of the original PL.8 technology & technicians).
Then came RIOS/POWER (and RS/6000, as follow-on to PC/RT) ... and then the somerset project (joint with motorola, also involving apple) for power/pc (aka 601). Up until somerset/powerpc, a basic premise of 801 chip designs had been no cache-coherent shared-memory multiprocessing (actually no multiprocessing at all, except for a special 4-way RSC aka "RIOS single chip" implementation that didn't support cache coherency). RS/6000 workstations continued on with both RIOS/POWER & POWER/PC chipsets (for some time you could tell power from power/pc based on whether they supported multiprocessor configurations or not).
With the as/400 moving to a power/pc chipset ... some things sort of came full circle back to Fort Knox.
motorola bought out somerset in '98 ... and ibm came out with a chipset that was a rios/powerpc merge.
27 years of IBM risc:
http://www.rootvg.net/column_risc.htm
note the above leaves out the CPr & PL.8 work. it also leaves out fort knox. it also leaves out the PC/RT using ROMP, which was targeted as an office products division (aka OPD) displaywriter. also there is a comment about aix/ps2 in the above. aix for the pc/rt was an at&t system v port by the same people that had done the pc/ix implementation. AIX/PS2 (and its companion AIX/370) was a Locus implementation from UCLA. There was also a BSD port to the pc/rt from ibm called AOS (that was to the bare metal w/o any vrm).
random url with locus refs:
http://plasmid.dyndns.org:81/plasmidweb/joehopfield.htm
random past ROMP, somerset, fort knox postings:
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#28 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001j.html#37 Proper ISA lifespan?
https://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
https://www.garlic.com/~lynn/2002g.html#12 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#63 Sizing the application
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: HONE Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 11 Jul 2002 02:57:01 GMTedgould@AMERITECH.NET (Edward Gould) writes:
Any thot of a maximum at that time was more than two ... charlie had previously worked on the only 360/67 triplex (and I remember seeing a 360/62 system reference that called for 4-way smp).
There were two custom 370 SMP projects (predating VM/370 release 4) ... one was VAMPS, which I worked on, that would support up to five-way (because of a hardware limitation), and the other was something called "logical machines" that I worked on with Charlie and a couple other people in cambridge (logical machines was a 16-way SMP using 158 processor technology). Of course we had engineers in KGN & POK for the actual hardware stuff. Neither VAMPS nor logical machines shipped as a product.
random ref:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: HONE Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 11 Jul 2002 03:11:55 GMTSEYMOUR.J.METZ@CUSTOMS.TREAS.GOV (Shmuel Metz , Seymour J.) writes:
HONE (us hone in california) in the late '70s was an 8-way cluster of 2-processor smp machines. The closest were possibly some airline res TPF systems ... but TPF didn't have SMP support ... so it was purely single processors. As an aside topic drift ... IBM mainframes tended to "suffer" a 10 percent thruput slowdown in 2-way configurations (compared to uniprocessor) to allow for cross-cache chatter. The 3081 was not a full IBM SMP ... being two processors sharing some number of hardware components. The 3083 was something of an afterthot for the TPF industry .... disabling the 2nd processor allowed the cycle delay for the cross-cache chatter to be eliminated for straight single-processor thruput.
Later, when we were doing HA/CMP ... we also participated in SCI and FCS standards activities .... looking at both 256-machine clusters (with FCS) as well as 256-machine shared memory. Both Sequent and DG produced 256-processor intel SMPs using the dolphin SCI chip. Convex produced a 256-processor hp/risc SMP using custom SCI hardware.
random refs:
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001l.html#16 Disappointed
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/