List of Archived Posts

2002 Newsgroup Postings (06/28 - 07/11)

Where did text file line ending characters begin?
User 2-factor authentication on laptops
Where did text file line ending characters begin?
DCAS [Was: Re: 'atomic' memops?]
DCAS [Was: Re: 'atomic' memops?]
DCAS [Was: Re: 'atomic' memops?]
how to set up a computer system
CDC6600 - just how powerful a machine was it?
how to set up a computer system
More about SUN and CICS
Signing email using a smartcard
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
AS/400 and MVS - clarification please
Al Gore and the Internet
AS/400 and MVS - clarification please
AS/400 and MVS - clarification please
AS/400 and MVS - clarification please
CDC6600 - just how powerful a machine was it?
6600 Console was Re: CDC6600 - just how powerful a machine was
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
AS/400 and MVS - clarification please
CDC6600 - just how powerful a machine was it?
trains was: Al Gore and the Internet
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
: Re: AS/400 and MVS - clarification please
IBM was: CDC6600 - just how powerful a machine was it?
"Mass Storage System"
IBM was: CDC6600 - just how powerful a machine was it?
pop density was: trains was: Al Gore and the Internet
pop density was: trains was: Al Gore and the Internet
IBM was: CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
IBM was: CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
Unisys A11 worth keeping?
CDC6600 - just how powerful a machine was it?
AS/400 and MVS - clarification please
AS/400 and MVS - clarification please
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
AFP really was 2000: CDC6600 - just how powerful a machine was
wrt code first, document later
Unisys A11 worth keeping?
wrt code first, document later
wrt code first, document later
CDC6600 - just how powerful a machine was it?
CDC6600 - just how powerful a machine was it?
wrt code first, document later
Unisys A11 worth keeping?
BIOMETRICS
subjective Q. - what's the most secure OS?
Hercules and System/390 - do we need it?
Hercules and System/390 - do we need it?
privileged IDs and non-privileged IDs
Unisys A11 worth keeping?
Does Diffie-Hellman schema belong to Public Key schema family?
CDC6600 - just how powerful a machine was it?
Hercules and System/390 - do we need it?
Does Diffie-Hellman schema belong to Public Key schema family?
TCPA
A Lesson In Security
Unisys A11 worth keeping?
A Lesson In Security
Does Diffie-Hellman schema belong to Public Key schema family?
HONE was .. Hercules and System/390 - do we need it?
Does Diffie-Hellman schema belong to Public Key schema family?
Does Diffie-Hellman schema belong to Public Key schema family?
Fw: HONE was .. Hercules and System/390 - do we need it?
HONE
McKinley Cometh
HONE
HONE

Where did text file line ending characters begin?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jun 2002 12:35:01 GMT
"Rostyslaw J. Lewyckyj" writes:
A couple of consequences of this, I believe, are: You begin building a hierarchy of escapes similar to the idea of set of sets, etc. in math. You introduce history into the processing of the data stream. i.e. you have to remember whether you are processing data of part of an escape, etc.

another consequence of making length implicit .... is that programmers start to ignore some of its more practical issues, resulting in a huge explosion in the number of length-related vulnerabilities & exploits
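one way to see the length-implicit hazard is a copy routine that trusts a terminator instead of an explicit count; a minimal python simulation (the field layout and names are made up purely for illustration, not real C memory):

```python
# toy illustration: an implicit-length, NUL-terminated string copied into a
# fixed 8-byte field overruns into the adjacent field -- the classic shape
# of a length-related vulnerability

def copy_implicit_length(dest: bytearray, offset: int, src: bytes) -> None:
    """copy bytes until the NUL terminator, with no bounds check (the bug)"""
    i = 0
    while src[i] != 0:              # length is implicit in the terminator
        dest[offset + i] = src[i]
        i += 1

# an 8-byte name field followed by a 4-byte field standing in for a
# return address or other adjacent state
record = bytearray(b"\x00" * 8 + b"\xde\xad\xbe\xef")

copy_implicit_length(record, 0, b"AAAAAAAAAAAA\x00")   # 12 bytes into 8

assert record[8:12] == b"AAAA"      # the adjacent field was clobbered
```

with an explicit length carried alongside the data, the copy could have been bounded at 8 bytes and the overrun rejected instead of silently corrupting the neighbor.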

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

User 2-factor authentication on laptops

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: User 2-factor authentication on laptops
Newsgroups: comp.security.misc
Date: Fri, 28 Jun 2002 12:40:01 GMT
Ali-Reza Anghaie writes:
Or you can rig up something by which a person would have to present their private key via removable media. Say business-card sized CD-R.

i've seen this done with secure filesystem (i.e. the whole harddisk is encrypted) ... and the system needs to boot off the (business-card sized) CD-R ... which then asks for password to unlock the key file ... in order to have any access to the hard disk.
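the "asks for password to unlock the key file" step is typically a password-based key derivation unwrapping the disk key; a minimal stdlib sketch (the XOR unwrap is a stand-in for a real cipher, and all names and parameters here are illustrative assumptions, not any particular product):

```python
import hashlib
import os

def unlock_key_file(password: str, salt: bytes, wrapped_key: bytes) -> bytes:
    """derive a key-encrypting key from the password (PBKDF2) and unwrap
    the disk key; XOR stands in for a real cipher -- toy, illustration only"""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              100_000, dklen=len(wrapped_key))
    return bytes(a ^ b for a, b in zip(wrapped_key, kek))

# demo: wrap a random 16-byte disk key, then unwrap it with the password
salt = os.urandom(16)
disk_key = os.urandom(16)
kek = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000, dklen=16)
wrapped = bytes(a ^ b for a, b in zip(disk_key, kek))

assert unlock_key_file("hunter2", salt, wrapped) == disk_key
assert unlock_key_file("wrong", salt, wrapped) != disk_key
```

the point of the design in the post is that without both the removable media (holding the wrapped key file) and the password, the encrypted disk is unreadable.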

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Where did text file line ending characters begin?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where did text file line ending characters begin?
Newsgroups: alt.folklore.computers
Date: Fri, 28 Jun 2002 15:45:48 GMT
jcmorris@mitre.org (Joe Morris) writes:
A slightly different problem in this class occurred in the IPL (IBMese for "boot") program used by OS/360. The S/360 architecture has no mechanism to directly obtain the memory size, so the IPLTXT program (loaded into memory by the hardware) wrote zeros through memory until it gets a program interruption for an invalid memory address.

The joker was that the S/360 architecture provided for exactly 24 bits of addressing for memory. When the first machines with 16 MB of memory (oh ye gods, the cost of that memory) arrived, the IPLTXT program cheerfully ran through memory looking for the address where an interruption occurred -- and never found it, since the addressing logic quietly looped the effective address from 2^24-1 to 0.


something similar but different happened on the 370/125. somebody had changed the CP "loader" to use the MVCL instruction to clear memory and test for memory size (the "loader" booted, read in the information for a new kernel, and wrote it to disk in load format .... so it could be booted from disk).

The CP "loader" program was originally the BPS (very early 360) card deck loader with some number of subsequent modifications.

In any case, instead of looping thru memory setting it to zeros ... it was replaced with MVCL. All the original 360 instructions were defined to pretest all the memory access and protection information before starting the instruction. 370 introduced a couple instructions (mvcl, clcl) that were defined to incrementally execute ... a byte at a time. They could be interrupted and restarted ... but they also could incrementally work up to a page boundary (w/o requiring all virtual pages to be resident at one time) or other forms of conditions that would terminate the instruction.

In any case, the 370/125 initially shipped with a bug in the microcode where the instruction arguments were pretested and the instruction never executed if the pretest failed. As a result, instead of getting storage cleared and an idea of the machine size, the loader got an indication that there was no storage.
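the OS/360 "joker" quoted above -- probe storage by storing through ascending addresses until an addressing exception, which never comes when the address space is exactly 2^24 bytes and the hardware silently wraps -- can be simulated in a few lines (page-sized steps and the step bound are assumptions to keep the simulation finite):

```python
# simulate the IPL-time storage-size probe: store through ascending
# addresses until an addressing exception; with exactly 24 address bits
# and a full 16MB installed, the effective address wraps from 2^24-1 to 0
# and the exception never occurs

ADDR_MASK = (1 << 24) - 1           # S/360: exactly 24 bits of addressing
PAGE = 4096                         # probe in page-size steps (illustrative)

def probe_storage(installed: int, max_steps: int):
    """return detected storage size, or None if the probe loops forever
    (max_steps bounds the simulation so it halts either way)"""
    addr = 0
    for _ in range(max_steps):
        effective = addr & ADDR_MASK    # hardware truncates to 24 bits
        if effective >= installed:      # addressing exception: size found
            return effective
        addr += PAGE                    # store zeros, step onward
    return None                         # wrapped: probe would never end

assert probe_storage(installed=1 << 20, max_steps=8192) == 1 << 20  # 1MB: works
assert probe_storage(installed=1 << 24, max_steps=8192) is None     # 16MB: hangs
```

the 370/125 MVCL variant of the story is the mirror image: there the instruction's (buggy) pretest failed up front, so the probe reported no storage at all rather than looping forever.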

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DCAS [Was: Re: 'atomic' memops?]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Fri, 28 Jun 2002 19:33:29 GMT
Terje Mathisen writes:
The limited spin loop is to avoid the risk of pegging a cpu when something bad has happened, and the OS probably have to go in and clean something up, or just let some other process finish whatever it wants to do before it can release the contested lock.

no serious system or application delivery code would ever have uncontrolled loops. simple desktop throwaway applications, under the assumption that there is a human attending, might default to a system or human kill for one reason or another (with the human expected to perform remediation actions). system and application delivery code (including dim/dark room web farm apps) should always bracket all loops/conditions and have a couple levels (or more) of automated back-off/remediation strategy; this is applicable both to system level services as well as to higher level, more sophisticated application services .... aka somewhat the difference between programs and business critical services.
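the bracketed-loop-with-escalation pattern can be sketched as follows (a sketch under assumed names and policy -- the bound, back-off schedule, and escalation hook are all illustrative choices, not anything prescribed in the post):

```python
import time

def call_with_backoff(op, *, max_attempts=5, base_delay=0.01, escalate=None):
    """bracketed retry loop for service code: every loop has a bound, and
    exhausting the bound escalates to the next automated remediation level
    instead of spinning forever"""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except Exception as exc:            # broad catch: sketch only
            if attempt == max_attempts:
                if escalate is not None:
                    return escalate(exc)    # next-level automated remediation
                raise                       # last resort: surface the failure
            time.sleep(delay)               # back off before retrying
            delay *= 2                      # exponential back-off

# demo: an op that is "busy" twice, then succeeds
def _flaky_factory():
    state = {"n": 0}
    def op():
        state["n"] += 1
        if state["n"] < 3:
            raise RuntimeError("busy")
        return "ok"
    return op

assert call_with_backoff(_flaky_factory(), base_delay=0.0) == "ok"
```

a real service would stack several such levels (retry, fail over, restart, alert), each one bounded, rather than relying on a human to notice a pegged cpu.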

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DCAS [Was: Re: 'atomic' memops?]

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Fri, 28 Jun 2002 23:09:33 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Similarly, I consciously and positively do NOT put loop limits in cases where there is no obvious upper bound, the user kill action is equivalent to the one I would take if I detected an excessive loop, and the real criterion for killing the program is likely to be that the user has specified too large a task for his resource limits (or patience).

the original post tried to make a specific distinction between a program being run by a user and a service possibly being operated in a dim/dark room with nobody around .... the former can have human factors considerations .... the latter might be something like supporting ATM cash machines (or various flavors of webservers in a large webhosting farm). they have totally different design point assumptions .... although one could make a case that a lot of the web farm stuff being deployed may have evolved from the desktop environment and possibly has numerous design point assumptions about desktop operation (as opposed to the totally different design point used for service operation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

DCAS [Was: Re: 'atomic' memops?]

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DCAS [Was: Re: 'atomic' memops?]
Newsgroups: comp.arch
Date: Sat, 29 Jun 2002 09:11:46 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Yes, but my point stands even there. If the only reasonable action on a loop limit exceeded is to abort and let the controlling system (whether operating system, human or other) sort out the mess, and there is no natural limit to put on loop counts, then there is no point in having them. While this is a fairly specific circumstance, it is a very common one.

This argument was used by some people who ought to have known better talking about embedded programming and numerical errors. They claimed that not allowing the OPTION of aborting on an error was correct, as failure during embedded running is unacceptable. I pointed out that (a) this made debugging MUCH harder and hence increases the chance of failure and (b) aborting is generally better than giving wildly wrong answers (e.g. reversed signs).


original code can be bad .... and the remediation can be bad.

very early on, the CP/67 system supported full dump and auto-reboot/restart so that it was possible to diagnose various kernel failures and also keep running (although there was a service interruption).

tale from the multics crowd and the justification for the fast file system (because of the comparison between cp/67 restart times and multics restart times .... both projects were done in the same bldg. at 545 tech sq):
https://www.multicians.org/thvv/360-67.html

the other case that I'm fairly familiar with is when i redid the i/o supervisor to make it absolutely bulletproof (initially for the disk engineering lab). a common failure mode was a whole series of tight kernel loops related to i/o operations ... which were all done based on retrying an operation ... because that was what the i/o specification said to do (frequently some sort of busy condition; it wasn't a counter loop ... it was a retry operation that would never stop). unfortunately, various kinds of hardware glitches could result in things not quite conforming to the i/o specification/architecture. i had to do an almost total rewrite, bracketing all sorts of retry operations (with things like retry limit counts). no explicit i/o error actually occurred ... but the bracketing code would treat the situation as if an i/o error had occurred, log it, abort the operation, and then keep going.
https://www.garlic.com/~lynn/subtopic.html#disk
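the "bracket the spec-mandated retry, and on exhaustion synthesize an i/o error" idea looks roughly like this (a sketch: the limit, status strings, and device object are hypothetical, not the actual supervisor interfaces):

```python
import logging

RETRY_LIMIT = 16    # assumed bound; the actual limits used are not in the post

def start_io(channel_program, device):
    """bracketed version of the spec's unbounded 'retry on busy': count the
    retries and, if the limit trips, treat it as if a hard i/o error had
    occurred -- log it, abort this operation, and keep the system running"""
    for _ in range(RETRY_LIMIT):
        status = device.start(channel_program)
        if status != "busy":
            return status            # done (ok, or a real error handled upstream)
    logging.error("retry limit hit on %s: treating as i/o error, aborting op",
                  device)
    return "io-error"                # synthesized error; caller continues

# demo device that reports "busy" a fixed number of times, then "ok"
class FlakyDevice:
    def __init__(self, busy_count):
        self.busy = busy_count
    def start(self, channel_program):
        if self.busy > 0:
            self.busy -= 1
            return "busy"
        return "ok"

assert start_io(None, FlakyDevice(2)) == "ok"          # glitch clears: fine
assert start_io(None, FlakyDevice(999)) == "io-error"  # glitch persists: bounded
```

the key property is the last line: a hardware glitch that never clears produces a logged, aborted operation instead of a tight kernel loop.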

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

how to set up a computer system

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to set up a computer system
Newsgroups: alt.folklore.computers
Date: Sun, 30 Jun 2002 20:05:25 GMT
Joe Yuska writes:
You forgot the 35-ton Liebert for the peripherals, and the 400 Hz motor-generators and water chiller system for the mainframe.

in addition to the PDUs and the water chillers ... these days in many places you have to have a water tank on the roof of the building for water recycling, instead of just having a 4-inch or larger water pipe dumping water straight into the sewer.

in some cases you might run into room loading limits and have to figure out some other solution than high volume water flow straight into the sewer.

some past related discussions on pdus, water chillers and tanks:
https://www.garlic.com/~lynn/2000b.html#82 write rings
https://www.garlic.com/~lynn/2000b.html#85 Mainframe power failure (somehow morphed from Re: write rings)
https://www.garlic.com/~lynn/2000b.html#86 write rings
https://www.garlic.com/~lynn/2001.html#61 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001m.html#40 info
https://www.garlic.com/~lynn/2002g.html#62 ibm icecube -- return of watercooling?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 01:41:55 GMT
mschaef@eris.io.com (MSCHAEF.COM) writes:
It's not the CDC 6600, but this site has information on one of its almost contemporaries:

original long ago ... reposting from
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe:

rain/rain4
                  158               3031              4341

Rain              45.64 secs       37.03 secs         36.21 secs
Rain4             43.90 secs       36.61 secs         36.13 secs

also times approx:
                   145                168-3              91
                   145 secs.          9.1 secs          6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

how to set up a computer system

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how to set up a computer system
Newsgroups: alt.folklore.computers
Date: Mon, 01 Jul 2002 13:59:07 GMT
Howard S Shubs writes:
So -that's- what that was for! You're talking about the big rectangular building -right- next to I93, yes? Just south of the on-ramp?

remember when they had a momentary aberration and considered not completing i93.

the issue was that there is this existing elevated section coming down from the north that was four lanes .... with traffic having a strong y-split at north station/garden. i93 elevated, with four lanes, would feed into the same elevated section about 100 yards(?) before the traffic y-split .... creating a strong x-traffic pattern ... in addition to merging from 8 lanes to 4.

When somebody realized that the architects had messed up the design and "in theory" you would have two streams of cars traveling at 55 MPH effectively crossing each other in a very short physical space ... they realized they would have to dump all the i93 traffic off into the streets before the elevated merge, and the last couple-mile section of i93 would never be used. The issue investigated was, since that last section was never going to be opened .... whether it was worthwhile actually building it at all. The analysis in the press was that it would cost the state something like $50m in construction penalties if it canceled the building of the remaining section ... and $200m to finish building it. However, since it was supposedly an interstate (even tho it would never meet interstate standards .... even if they ever did open the section for traffic, the speed limit ... because of the upcoming merge and strong "X" traffic pattern ... would fail interstate standards), the federal gov. paid 90 percent (so it would only actually cost the state $20m to complete).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

More about SUN and CICS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More about SUN and CICS
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 01 Jul 2002 14:22:33 GMT
g.goersch@WORLDNET.ATT.NET (Bo Goersch) writes:
Meanwhile, the package was re-evaluated and it was decided to proceed with the new release on either VSE or MVS. No word from the powers that be on where it will sit yet although a good case for MVS is being made based upon the pre-req of a smaller CPU and no extras (FILESERV, OPC, etc..) which leaves me with mixed emotions. The company is not considering SUN equipment due to a large AIX investment. It seems SUN loses out and we do too, running MVS on a small box brings back fears of trying to run MVS on a 370-115/2, it ran, but nothing else did, and of course the alternative is VSE.

ran VM on the 125 ... but in doing so uncovered the 125 m'code bug with the MVCL instruction. I had done some tuning of CP/67 kernel size for a 256k 360/67 ... which included adding an enhancement to be able to make part of the kernel "pageable". The pageable stuff was picked up for the VM/370 kernel .... but the fixed kernel still got quite a bit bloated. It took a bit of re-arranging code out of the fixed kernel to make operating on a 256k machine reasonable.

recent thread with respect to storage size issues and IPL:
https://www.garlic.com/~lynn/2002i.html#2 Where did text file line ending characters begin?

the 115 & 125 were effectively the same machine; both had the same engines .... basically up to nine microprocessors with appropriate microcode loaded to perform the various control/io functions ... about an 800kip microprocessor. The difference was that the 115 used the standard 800kip engine for the 370 microcode load (at 10:1 emulation, giving effectively about an 80kip 370 instruction rate). The 125 used all the same engines .... except the engine that the 370 emulation code ran on was about a 1mip processor (10:1 emulation yielding about 100kip 370 thruput).

I once worked on a project that would populate up to five of the microprocessor bus positions with 125 microprocessor engines with the 370 microcode load .... creating a 5-way multiprocessing configuration. The microcode was also tweaked to put the majority of tasking & SMP management into the microcode. The disk controller microcode was also tweaked to offload some amount of the paging supervisor.

random refs:
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#10 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000e.html#7 Ridiculous
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#19 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Signing email using a smartcard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Signing email using a smartcard
Newsgroups: alt.technology.smartcards
Date: Mon, 01 Jul 2002 14:39:00 GMT
"Bernt Skjemstad" writes:
Has anyone out there succeeded in signing email using a certificate (digital ID) on a smartcard?

I have a smartcard reader which is tested and working (Omnikey Cardman 2020 USB), and a smartcard containing a certificate for signing email. The problem is how do I get an email client to use the certificate on the smartcard? Up to now I only get up the software certificates (which I have downloaded from Thawte and VeriSign).

Anyone who have done this before me? Or someone with any ideas?

Bernt


note that the certificate and the private key are two completely different things.

nominally a public/private key pair is generated. Signing consists of calculating the SHA-1 of the data (20 bytes) and then "encrypting" the SHA-1 with the private key, yielding a 20 byte signature (or, in the case of the FIPS186 federal digital signature standard, a 40 byte signature).

Given the original message and the public key, the recipient can verify the signature.

A certificate is one of the methods for transporting the public key to the recipient .... in order for the recipient to be able to perform the signature verification. Basically a certificate contains something like your name and your public key ... and is digitally signed by thawte or verisign private key.

In effect, a certificate is also a signed message. Before the recipient can use a public key in a certificate to verify your message ... they must "verify" the signature on the certificate .... say thawte's or verisign's ... by already having thawte's/verisign's public key sitting around somewhere (i.e. they've used some other method of obtaining the public key used to sign certificates ... and have that public key stored locally).

Note that in the case of something like PGP .... the use of certificates isn't necessary ... all public keys are acquired and stored locally by recipients (basically, the method that the certificate model uses to acquire and locally store the "certification authority" public keys is used for all public keys .... doing away with the requirement for having separate "certification authority" public keys and certificates).
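the hash-then-sign / verify-with-public-key flow described above can be sketched with toy textbook-RSA numbers (tiny primes, digest reduced mod n -- utterly insecure and purely to show the shape; real signatures, and FIPS186/DSA in particular, use different math and sizes):

```python
import hashlib

# toy RSA key: p=61, q=53, n=p*q, with e*d = 1 mod lcm(60,52)
n, e, d = 3233, 17, 2753

def digest(msg: bytes) -> int:
    # SHA-1 gives the 20-byte value mentioned above; reduced mod n only
    # because the toy modulus is tiny
    return int.from_bytes(hashlib.sha1(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # "encrypt" the hash with the private key
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # recover the hash with the public key and compare
    return pow(sig, e, n) == digest(msg)

sig = sign(b"attack at dawn")
assert verify(b"attack at dawn", sig)
assert not verify(b"attack at dawn", (sig + 1) % n)   # forged sig fails
```

a certificate, as described above, is just this same sign/verify applied to a message containing a name and a public key, with the CA's key playing the private-key role.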

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 20:38:50 GMT
kent@nettally.com (Kent Olsen) writes:
They were so much easier to use than VM, MVS, CICS, etc... Depending on the application, the 6000s outran the IBMs, sometimes the IBMs outran the 6000s. They were definitely the two biggest horses in the race.

note that CMS on VM ... had the copyfile command ... effectively inherited from CTSS (aka some of the CMS people had worked on CTSS) ... although a huge number of parameters were added to copyfile over time .... eventually endowing it with all sorts of extra capability (as opposed to just simply doing a file copy).

In the same bldg (545 tech sq) some other people that had worked on CTSS were working on Multics. Both CMS and unix trace some common heritage back to CTSS.

here is page giving command correspondence between cms, vax, pc-dos, and unix:
https://web.archive.org/web/20020213071156/http://www.cc.vt.edu/cc/us/docs/unix/cmd-comp.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 21:05:00 GMT
"Douglas H. Quebbeman" writes:
Although the Wheelers refer to RAIN/RAIN4, LINPACK was also used for relative speed comparisons, and has been ported to modern platforms and other languages. You should be able to find any number of different versions all over the net...

linpack numbers (including 6600):
https://web.archive.org/web/20020718000903/http://ap01.physik.uni-greifswald.de/~ftp/bench/linpack.html

bunch of stuff from above


Computer                                   N=100(Mflops)
------------------------------------  ---  -------------
Cray T916 (1  proc. 2.2 ns)                          522
Hitachi S-3800/180(1 proc 2 ns)                      408
Cray-2/4-256 (1 proc. 4.1 ns)                         38
IBM RISC Sys/6000-580 (62.5MHz)                       38
IBM ES/9000-520 (1 proc. 9 ns)                        38
SGI CHALLENGE/Onyx (6.6ns,  2 proc)                   38
DEC 4000-610 Alpha AXP(160 MHz)                       36
NEC SX-1                                              36
FPS 510S MCP707 (7 proc. 25 ns)                       33
CDC Cyber 2000V                                       32
Convex C-3430 (3 proc.)                               32
NEC SX-1E                                             32
SGI Indigo2 (R4400/200MHz)                            32
Alliant FX/2800-200 (14 proc)                         31
IBM RISC Sys/6000-970 (50 MHz)                        31
IBM ES/9000-511 VF(1 proc 11ns)                       30
DEC 3000-500 Alpha AXP(150 MHz)                       30
Alliant FX/2800-200 (12 proc)                         29
HP 9000/715 (75 MHz)                                  29
Sun Sparc 20 90 MHz, (1 proc)                         29
Alliant FX/2800 210 (1 proc)                          25
ETA 10-P (1 proc. 24 ns)                              27
Convex C-3420 (2 proc.)                               27
Cray-1S (12.5 ns)                                     27
DEC 2000-300 Alpha AXP 6.7 ns                         26
IBM RISC Sys/6000-950 (42 MHz)                        26
SGI CHALLENGE/Onyx (6.6ns,  1 proc)                   26
Alliant FX/2800-200 (8 proc)                          25
NAS AS/EX 60 VPF                                      25
HP 9000/750 (66 MHz)                                  24
IBM ES/9000-340 VF (14.5 ns)                          23
Meiko CS2 (1 proc)                                    24
Fujitsu M1800/20                                      23
DEC VAX 9000 410VP(1 proc 16 ns)                      22
IBM ES/9000-320 VF (1 proc 15 ns)                     22
IBM RISC Sys/6000-570 (50 MHz)                        22
Multiflow TRACE 28/300                                22
Convex C-3220 (2 proc.)                               22
Alliant FX/2800-200 (6 proc)                          21
Siemens VP400-EX (7 ns)                               21
IBM ES/9221-211 (16 ns)                               21
FPS Model 522                                         20
Fujitsu VP-400                                        20
IBM RISC Sys/6000-530H(33 MHz)                        20
Siemens VP200-EX (7 ns)                               20
Amdahl 1400                                           19
Convex C-3410 (1 proc.)                               19
IBM ES/9000 Model 260 VF (15 ns)                      19
IBM RISC Sys/6000-550L(42 MHz)                        19
Cray S-MP/11 (1 proc. 30 ns)                          18
Fujitsu VP-200                                        18
HP 9000/720 (50 MHz)                                  18
IBM ES/9221-201 (16 ns)                               18
NAS AS/EX 50 VPF                                      18
SGI 4D/480(8 proc) 40MHz                              18
Siemens VP100-EX (7 ns)                               18
Sun 670MP Ross Hypersparc(55Mhz)                      18
Alliant FX/2800-200 (4 proc)                          17
Amdahl 1100                                           17
CDC CYBER 205 (4-pipe)                                17
CDC CYBER 205 (2-pipe)                                17
Convex C-3210 (1 proc.)                               17
Convex C-210 (1 proc.)                                17
Cray XMS (55 ns)                                      17
Hitachi S-810/20                                      17
IBM ES/9000 Model 210 VF (15 ns)                      17
Siemens VP50-EX (7 ns)                                17
Multiflow TRACE 14/300                                17
Hitachi S-810/10                                      16
IBM 3090/180J VF (1 proc, 14.5 ns)                    16
Fujitsu VP-100                                        16
Amdahl 500                                            16
Hitachi M680H/vector                                  16
SGI Crimson(1 proc 50 MHz R4000)                      16
FPS Model 511                                         15
Hitachi M680H                                         15
IBM RISC Sys/6000-930 (25 MHz)                        15
Kendall Square (1 proc)                               15
NAS AS/EX 60                                          15
SGI 4D/440(4 proc) 40MHz                              15
Siemens H120F                                         15
Cydrome CYDRA 5                                       14
Fujitsu VP-50                                         14
IBM ES/9000 Model 190 VF(15 ns)                       14
IBM POWERPC 250 (66 MHz)                              13
IBM 3090/180E VF                                      13
SGI 4D/340(4 proc) 33MHz                              13
CDC CYBER 990E                                        12
Cray-1S (12.5 ns, 1983 run)                           12
Gateway 2000 P5-100XL                                 12
IBM RISC Sys/6000-520H(25 MHz)                        12
SGI Indigo 4000 50MHz                                 12
Stardent 3040                                         12
CDC 4680InfoServer (60 MHz)                           11
Cray S-MP/MCP101 (1 proc. 25 ns)                      11
FPS 510S MCP101 (1 proc. 25 ns)                       11
IBM ES/9000 Model 340                                 11
Meiko Comp. Surface (1 proc)                          11
Gateway 2000 P5-90(90 MHz Pentium)                    11
SGI Power Series 50MHz R4000                          11
Stardent 3020                                         11
Sperry 1100/90 ext w/ISP                              11
Multiflow TRACE 7/300                                 11
DEC VAX 6000/410 (1 proc)                            1.2
ELXSI 6420                                           1.2
Gateway 2000/Micronics 486DX/33                      1.2
Gateway Pentium  (66HHz)                             1.2
IBM ES/9000 Model 120                                1.2
IBM 370/168 Fast Mult                                1.2
IBM 4381 90E                                         1.2
IBM 4381-13                                          1.2
MIPS M/800  (12.5MHz)                                1.2
Prime P6350                                          1.2
Siemans 7580-E                                       1.2
Amdahl 470 V/6                                       1.1
Compaq Deskpro 486/33l-120 w/487                     1.1
SUN 4/260                                            1.1
ES1066 (1 proc. 80 ns Russian)                       1.0
CDC CYBER 180-840                                    .99
Solbourne                                            .98
IBM 4381-22                                          .97
IBM 4381 MG2                                         .96
ICL 3980 w/FPU                                       .93
IBM-486 33MHz                                        .94
Siemens 7860E                                        .92
Concurrent 3280XP                                    .87
MIPS M800 w/R2010 FP                                 .87
Gould PN 9005                                        .87
VAXstation 3100-76                                   .85
IBM 370/165 Fast Mult                                .77
Prime P9955II                                        .72
DEC VAX 8530                                         .73
HP 9000 Series 850                                   .71
HP/Apollo DN4500 (68030 + FPA)                       .60
Mentor Graphics Computer                             .60
MIPS M/500  ( 8.3MHz)                              .60
Data General MV/20000                                .59
IBM 9377-80                                          .58
Sperry 1100/80 w/SAM                                 .58
CDC CYBER 930-31                                     .58
Russian PS-2100                                      .57
Gateway 486DX-2  (66MHz)                           .56
Harris H1200                                         .56
HP/Apollo DN4500 (68030)                             .55
Harris HCX-9                                         .50
Pyramid 9810                                         .50
HP 9000 Series 840                                   .49
DEC VAX 8600                                         .48
Harris HCX-7 w/fpp                                   .48
CDC 6600                                             .48
IBM 4381-21                                          .47
SUN-3/260 + FPA                                      .46
CDC CYBER 170-835                                    .44
HP 9000 Series 840                                   .43
IBM RT 135                                           .42
Harris H1000                                         .41
microVAX 3200/3500/3600                              .41
Apple Macintosh IIfx                                 .41
Apollo DN5xxT FPX                                    .40
microVAX 3200/3500/3600                              .40
IBM 9370-60                                          .40
Sun-3/160 + FPA                                      .40
Prime P9755                                          .40
Ridge 3200 Model 90                                  .39
IBM 4381-11                                          .39
Gould 32/9705 mult acc                               .39
NORSK DATA ND-570/2                                  .38
Sperry 1100/80                                       .38
Apple Mac IIfx                                       .37
CDC CYBER 930-11                                     .37
Sequent Symmetry (386 w/fpa)                         .37
CONCEPT 32/8750                                      .36
Celerity C1230                                       .36
IBM RT PC 6150/115 fpa2                              .36
IBM 9373-30                                          .36
CDC 6600                                             .36
IBM 370/158                                          .22
IBM PS/2-70 (16 MHz)                                 .12
IBM AT w/80287                                      .012
IBM PC w/8087                                       .012
IBM PC w/8087                                      .0069
Apple Mac II                                       .0064
Atari ST                                           .0051
Apple Macintosh                                    .0038

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Mon, 01 Jul 2002 21:12:50 GMT
Larry__Weiss writes:
Does anyone know when UT Austin retired their CDC-6600 ?

don't know about campus ... but balcones research had a cray and my wife and I managed to donate a bunch of HYPERchannel equipment to them for interconnecting various stuff.

thornton, after working on the 6600, left cdc, founded NSC, and built HYPERchannel.

random past stuff
https://www.garlic.com/~lynn/99.html#119 Computer, supercomputers & related
https://www.garlic.com/~lynn/2001.html#19 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#20 Disk caching and file systems. Disk history...people forget

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Mon, 01 Jul 2002 23:19:40 GMT
it_hjw@JUNO.COM (Harold W.) writes:
In a couple weeks, I will be venturing into new territory (i.e., I will do a security assessment at a new client organization for the first time). I'm a bit confused, because on their initial background submission, they state that they run MVS on their AS/400. I called the primary contact (who is not in the IS division), and he insists that it is how things are. Based on my knowledge, this is not supported, and I have never even heard of it happening. Maybe I'm overlooking something; I've only been doing this 4 years.

Has anyone ever heard of MVS running on an AS/400, if it is even possible?


note that while the as/400, some years ago, moved from a cisc hardware architecture to a power/pc chipset .... it is just a power/pc chipset. Running MVS on an as/400 is like talking about running MVS on any other kind of hardware with a power/pc chipset (apple, rs/6000, as/400, etc).

you could possibly get a p/390 card running mvs in an rs/6000 .... i have no idea whether you could get a p/390 card running in an as/400 or not. This isn't a case of running MVS on a power/pc chipset ... it is a case of running MVS on a p/390 card .... which can fit in such a box (apple, rs/6000, or as/400).

here is discussion of P/390 card in an rs/6000
https://web.archive.org/web/20010309161535/http://tech-beamers.com/r390new.htm

a possible question is whether any of the as/400 boxes support a PCI bus that would take a P/390 card, and whether you can get software in the as/400 that talks to the p/390 card. Even tho aix, apple, and as/400 all run on power/pc chips .... the software/programming environments are different.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Al Gore and the Internet

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jul 2002 02:06:47 GMT
Floyd Davidson writes:
Gingrich was very prominent on the national political scene yet his activity in support of DARPA (which admittedly is significantly second to Al Gore's) is unknown by the public. Newt Gingrich at the time didn't think it was politically worth much. That does suggest a very basic difference in his view of the Internet compared to Al Gore's view. Gore actually understood how significant the Internet was going to be, long before it was. He may well be the only member of Congress who had a clue until it was all but a done deal.

total aside .... from possibly '88 to possibly '92 .... while there was some NSF funding for NSFNET backbone (but substantial amount of the resources were in excess of the NSF funding and "donated" by commercial/industry sources) ... the "official?" government and darpa strategy was OSI/GOSIP.

There could be a case made that if commercial/industry interests hadn't been so heavily involved in the NSFNET and regionals .... then, left to "purely" government influence, everything would have been migrated to the morass of OSI and we wouldn't have any internet at all today.

misc. past discussions about OSI & GOSIP (and other things)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

some gosip, nren, etc discussions:
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private

other gosip specific mentions:
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?

I have copies of gosip-v2.txt and gosip-v2.ps. Misc. from "gosip-order-info.txt" (9/91):


GOSIP Version 1.
----------------

GOSIP Version 1 (Federal Information Processing Standard 146) was
published in August 1988.  It became mandatory in applicable federal
procurements in August 1990.

Addenda to Version 1 of GOSIP have been published in the Federal
Register and are included in Version 2 of GOSIP.  Users should obtain
Version 2.

GOSIP Version 2.
----------------

Version 2 became a Federal Information Processing Standard (FIPS) on
April 3, 1991 and will be mandatory in federal procurements initiated
eighteen months after that date, for the new functionality contained
in Version 2.  The Version 1 mandate continues to be in effect.
Version 2 of GOSIP supersedes Version 1 of GOSIP.  Version 2 of GOSIP
makes clear what protocols apply to the GOSIP Version 1 mandate and
what protocols are new for Version 2.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 02:16:39 GMT
SEYMOUR.J.METZ@CUSTOMS.TREAS.GOV (Shmuel Metz , Seymour J.) writes:
1. The 2305 was a fixed-head disk

2. The drums were 2301 and 2303.


basically the 2303 and 2301 were the same physical device. The 2303 was used with a standard disk controller. The 2301 was essentially a 2303 that read/wrote four heads in parallel (and had four times the data transfer rate of the 2303) and had a high-speed controller.

2301 held about 4mbytes of data. TSS & CP/67 formatted the 2301 with 9 4k pages on a pair of 2301 "tracks" (eight physical "2303" tracks).
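
the 2301/2303 relationship above is simple arithmetic; a back-of-the-envelope sketch (figures taken from the post, nothing here is from actual hardware manuals):

```python
# 2301 vs 2303 drum arithmetic, using the figures quoted in the post.

HEADS_IN_PARALLEL = 4        # 2301 read/wrote four heads at once
PAGES_PER_TRACK_PAIR = 9     # CP/67 format: 9 x 4k pages per 2301 "track" pair
PAGE_BYTES = 4096

# a 2301 "track" pair covered eight physical 2303 tracks
physical_2303_tracks_per_pair = 2 * HEADS_IN_PARALLEL

# same rotation speed, four heads streaming at once -> 4x the data rate
speedup_vs_2303 = HEADS_IN_PARALLEL

# pages needed to fill the ~4 mbyte drum
drum_bytes = 4 * 1024 * 1024
pages_on_drum = drum_bytes // PAGE_BYTES

print(speedup_vs_2303)   # 4
print(pages_on_drum)     # 1024
```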

old comparison of 360/67 with 3 2301s compared to 3081k with six 2305
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 02:19:52 GMT
efinnell@SEEBECK.UA.EDU (Edward J. Finnell, III , Ed) writes:
Don't think we ever ran the 2305, MMC was humongous, but did have some of the STK SSDs that emulated them. I was scared to death of the things and would only keep the MIX primary and SWAP datasets on them. Blow off a few TSO users, just have to log back on..... then we got suckered into some of the Amdahl 6880's and entered the world of continual EC change.

anybody out there run any 1655s from intel that emulated the 2305? I think there were something like 500 built ... and all went to internal corporate installations.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Jul 2002 09:31:48 GMT
IBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
I also remember an IBMer trying desperately to convince me that a fully populated 3380 string didn't have exactly the same string switch path busy problem as four full strings of 3350. He came back several times - even tried to get me browbeaten by management. Then when they got (a form of) dual port, the foils made exactly that point.

i got all sorts of grief for making the statement that the relative disk system thruput had declined by a factor of 10 over a 15 year period. GPD (disk division) assigned the performance modeling group to prove the statements wrong. previous ref:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the door

what the group came back with was that i had slightly understated the problem; if you took into account RPS-miss .... the relative disk system thruput had declined by more than a factor of 10 (aka cpu/memory goes up by a factor of 50+, disk thruput goes up by a factor of 5-, relative disk system thruput declines by a factor of more than 10). There was also a big issue with significant increase in 3880 processing overhead, as well as a hit anytime the 3880 had to switch channel interfaces.
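
the "factor of 10" claim is just ratio arithmetic over the factors quoted above (the factors themselves come from the post):

```python
# relative disk system thruput: cpu/memory speedup divided by disk speedup
# (factors are the ones quoted in the post, not independently measured)

cpu_mem_speedup = 50    # cpu/memory up by a factor of 50+
disk_speedup = 5        # disk thruput up by a factor of 5 or less

relative_decline = cpu_mem_speedup / disk_speedup
print(relative_decline)   # 10.0 -- disks 10x slower *relative* to the system
```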

I got nailed on this because I had built a bulletproof I/O supervisor .... so the disk engineering labs could run the testcells in an operating system environment. The first time they switched a string of 16 3350s from a 3830 controller to the first 3880 (over a weekend), they first suggested that I had made software changes over the weekend that resulted in the big performance hit. Fortunately this was six months prior to FCS of the 3880s .... and there was time to do some tweaking of the m'code. misc. refs:
https://www.garlic.com/~lynn/subtopic.html#disk

the result was reformulated into recommended system configuration presentation and initially given at Share 63, presentation B874 (8/18/84). Summary from the paper:
DASD subsystems have been crucial to the success of time-sharing systems for over twenty years. Hardware has evolved and components get bigger and faster at differing rates. Faster CPUs are now available with parallel processing capabilities that exceed the traditional notions of IO requirements. Bigger external storage as well as larger and faster memories are coming on line that will require even more effective storage and performance management. If the full system potentials are to be realized, the effectiveness of the user IO is going to have to be improved.

Configuration of DASD subsystems for availability and performance is accomplished by using many dedicated channel paths and keeping strings short. The requirement for high path availability to an arm to support good response leads to the less than 25% busy channel guidelines, etc. Where this is too expensive or impractical for space, cost, or configuration reasons, compromises must be made. DASD capabilities, quantified by reliability, throughput, response time and cost, can make an application successful or unsuccessful. Equally important are the effects of the application environment. An understanding of this environment as well as the DASD parameters usually is required for successful application management. An extensive data base cataloging the systems past performance, coupled with a calibrated model provides what is effectively an expert or knowledge based system for exploring these compromises.

Storage management, the system centered management of capacity and performance, is required to deal with the complexities of active and inactive data. Because of the large number of DASD and connections involved, the effects also are difficult to simulate and measure precisely. More attention to the IO subsystem, in particular, the user data base IO, is required to realize the potential of current and future technologies.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Tue, 02 Jul 2002 20:03:21 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Be warned with Lynn and the other IBM folk, (I like Lynn), but they have a tendency to quote 32-bit benchmarks against 60 and 64-bit CDC and Cray benchmarks, and this is why Bailey's first rule in benchmark is the way it is.

I don't have proof anymore, but i believe the RAIN4 numbers were for 32 bit and the "RAIN" numbers were for "double precision" 64bit.

"fast double precision" was introduced for 168-3 (not on initial 168-1 machines) ... and so the 9.1 secs should be for RAIN ... as was the 6.77 secos for the 91.

the interesting numbers are the 3031 and 158 numbers. The processor engine in the 3031 and 158 was the same; however, in the case of the 158 .... there were "integrated channels" ... aka there were two sets of microcode running on the same 158 processor engine .... the microcode that implemented the 370 instruction set ... and the microcode that implemented the I/O support ... and the processor engine basically had to time-slice between the two sets of microcode.

For the 3031, there were two "158" processor engines ... one processor engine dedicated to the 370 function (i.e. the 3031) and a second "158" processor engine (i.e. the "channel director") that implemented all the I/O function outboard.

The dates for some of the machines (note 4341 and 3031 were about the same time):


CDC 6600          63-08 64-09     LARGE SCIENTIFIC PROCESSOR
IBM S/360-67      65-08 66-06 10  MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBM S/360-91      66-01 67-11 22  VERY LARGE CPU; PIPELINED
AMH=AMDAHL        70-10           AMDAHL CORP. STARTS BUSINESS
IBM S/370 ARCH.   70-06 71-02 08  EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-145     70-09 71-08 11  MEDIUM S/370 - BIPOLAR MEMORY - VS READY
IBM S/370-195     71-07 73-05 22  V. LARGE S/370 VERS. OF 360-195, FEW SOLD
Intel, Hoff       71              Invention of microprocessor
Intel DRAM        73              4Kbit DRAM Chip
IBM 168-3         75-03 76-06 15  IMPROVED MOD 168
IBM 3031          77-10 78-03 05  LARGE S/370+EF INSTRUCTIONS

and to repeat the numbers for rain/rain4:

                  158               3031              4341

Rain              45.64 secs       37.03 secs         36.21 secs
Rain4             43.90 secs       36.61 secs         36.13 secs

also times approx:
                   145                168-3              91
                   145 secs.          9.1 secs          6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

6600 Console was Re: CDC6600 - just how powerful a machine was

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 6600 Console was Re: CDC6600 - just how powerful a machine was
it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Tue, 02 Jul 2002 20:13:38 GMT
"Russell P. Holsclaw" writes:
The earlier support for the 2250 as a console, in OS/360, was actually cooler than the "device independent" console software later built into the system to support other display-type devices. One of the neat features was a "Help" facility that could be selected via light-pen. It displayed syntax diagrams graphically for the various console operator commands.

there were a number of 2250 "models" ... a 2250m1, a direct channel attach with its own controller; a 2250m4, which came with its own 1130; and some others.

in the late '60s, somebody at the science center ported spacewars from pdp to the 1130/2250m4 (my kids played it in the mid '70s).

lincoln labs had one or more 2250m1 attached to 360/67 and somebody there wrote fortran graphics package for CMS to drive the screen.

the university i was at also had 2250m1 .... and I hacked the CMS editor with the 2250m1 support code from lincoln labs to generate a full screen editor ... circa fall '68.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Tue, 02 Jul 2002 20:51:29 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Betcha the 158 was microprogrammed by the 360/50 crowd. ("Looks like a duck, walks like a duck...")

So what was the story behind the 3033? A 168 on steroids? I recall a service bureau using one of those as a primary workhorse.


the story is that a 3031 was a 158 with a channel director and a 3032 was a 168 with a channel director. The 3033 started out as the 168 wiring diagram mapped to chips that were about 20 percent faster than 168 chips ... and with about ten times the circuits/chip. The initial remapping resulted in the 3033 being about 20 percent faster than the 168/3032. Before any machines shipped, there was then an effort to redo the logic and utilize the higher on-chip density, which got the 3033 up to about 50 percent faster than the 168/3032.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:13:46 GMT
Charles Shannon Hendrix writes:
Why do I see so many notes about the 4341, but not the 4381. I programmed a 4381 in 1986-1989 in college.

Was the '81 much faster than the '41? I ask because the '81 I used felt a lot faster than what I've been told about the CDC 6600, but then again, maybe it was just I/O speed.


these particular notes (that i happened to have laying around) came from some work that i was doing for some endicott 4341 performance engineers. they wanted a benchmark run between 3031 and 4341 (this is pre-4381 ... aka after the endicott engineers produced the 4341 ... they later went on to produce the 4381). The endicott performance engineers were having trouble getting machine time to do the benchmark (also at the time, rain/rain4 were one of the few widely run benchmarks).

Basically the processor hardware engineers got the first machine built and the disk engineering/product-test labs got the second machine built; aka, in addition to developing new disk drives, they validated existing disks against new processors as they became available. The processors in the disk engineering lab had been running "stand-alone" applications (FRIEND, couple others) for the testing .... the problem was that the testcell disks under development tended to sometimes deviate from normal operational characteristics (MTBF for a standard MVS when operating a single testcell was on the order of 15 minutes).

As something of a hobby, i rewrote the I/O supervisor to make it absolutely bulletproof, aka no kind of i/o glitches could make the system crash. As a result it was installed in all the "test" processors in the disk engineering and product test labs .... and they were able to do concurrent, simultaneous testing of 6-12 testcells (instead of scheduling stand-alone time for one testcell at a time) on each processor (as needed).

I then got the responsibility of doing system support on all those machines and periodically would get blamed when things didn't work correctly, and so had to get involved in debugging their hardware (as part of proving that the software wasn't at fault). One such situation was the weekend they replaced the 3830 control unit for a 16-drive string of 3350s (production timesharing) with a "new" 3880 control unit and performance went into the can on that machine. Fortunately this was six months before first customer ship of the 3880 controller, so there was time to make some hardware adjustments (I joke that at one point I was working 1st shift at research, 2nd shift in the disk labs, and 3rd shift down at STL, while also a couple times a month supporting the operating system for the HONE complex in palo alto).

In any case, at that particular point there were two 4341s in existence, one in endicott and one at san jose disk. I supported the operating system for san jose disk ... and while the machines might be i/o intensive, the workload rarely exceeded 5 percent cpu utilization. They had 145, 158, 3031, 3033, 4341, etc. machines that I could worry about and had some freedom in doing other types of things with.

So i ran the rain/rain4 benchmarks for the endicott performance engineers and got 4341 times (aka they couldn't get time on the machine in endicott because it was booked solidly for other things), 3031 times, and 158 times. They previously had collected numbers for the 168-3 and 91 times for rain/rain4 ... and of course rain had been run on 6600 (numbers they sent to me along with the benchmarks to run).

There may have been other benchmark runs made by other people ... but I didn't do those runs and didn't have that data sitting around. I posted the numbers that I had conveniently available.

misc. disk engineer related posts:
https://www.garlic.com/~lynn/subtopic.html#disk

misc. hone related posts:
https://www.garlic.com/~lynn/subtopic.html#hone

random post about working 4-shift work weeks (24hrs, 7days):
https://www.garlic.com/~lynn/2001h.html#29 checking some myths

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:27:15 GMT
Charles Shannon Hendrix writes:
How did the two stay in sync? I mean, I have vague and fuzzy recollections of doing assembler on a 4381 and since the programs are a mix of I/O and calculation, I assume this is very important.

basically instruction processors and I/O processors were architected to be asynchronous .... effectively in much the same way that machines with multiple instruction processors (SMP) are typically architected so that the multiple instruction processors operate asynchronously.

The instructions for the 360/370 i/o processors were, in fact, called "channel programs". You could write a channel program ... and signal one of the asynchronous "channel processors" to begin asynchronous execution of that channel program. The channel program could cause asynchronous interrupts back to the instruction processor signaling various kinds of progress/events.
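
the asynchronous split above can be sketched as a toy model -- an analogy only; real 360/370 channel programs are chains of packed binary CCWs, not python lists, and the command strings here are made up:

```python
# Toy model: the "instruction processor" hands a channel program to a
# "channel processor" thread and keeps computing; the channel side posts
# an interrupt (here, a queue message) when the program completes.

import queue
import threading

interrupts = queue.Queue()

def channel_processor(channel_program):
    # "execute" each channel command in order, then interrupt the CPU
    done = [f"done:{cmd}" for cmd in channel_program]
    interrupts.put(done)

# CPU side: start the I/O (think SIO) and continue with other work
program = ["seek cyl 5", "search id", "read 4096 bytes"]
threading.Thread(target=channel_processor, args=(program,)).start()

busy_work = sum(range(1000))   # instruction processor computes while I/O runs

result = interrupts.get()      # field the "interrupt" when it arrives
print(result)
```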

On some of the machines, the i/o processors were real, honest-to-goodness independent asynchronous processors. On other machines, a common microcode engine was used to emulate both the instruction processor and multiple i/o (channel) processors. Machines where a common processor engine was used to emulate multiple processors (cpus, channels, etc) were typically described as having "integrated" channels.

158, 135, 145, 148, 4341, etc ... were "integrated" channel machines (aka the native microcode engine had microcode both for emulating 370 processing and for performing the channel process function and executing "channel programs"). 168 machines had outboard channels (independent hardware boxes that implemented the processing of channel programs). Channel processors and instruction processors had common access to the same real storage (in much the same way that multiple instruction processors have common access to the same real storage).

For the 303x line of machines .... they took a 158 integrated channel machine .... and eliminated the 370 instruction emulation microcode ... creating a dedicated channel program processing machine called a "channel director". The "channel director" was then a common component used for 3031, 3032, and 3033 machines ... aka they were all "outboard channel" machines (having dedicated hardware processing units for executing channel programs) ... as opposed to "integrated channel" machines.

A 3031 was then a 158 with just the 370 instruction emulation microcode and reconfigured for "outboard channel" operation rather than "integrated channel" operation. A 3032 was then a 168 reconfigured to use the "channel director" for outboard channels (rather than the 168 outboard channel hardware boxes).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:56:15 GMT
Anne & Lynn Wheeler writes:
shift in the disk labs, and 3rd shift down at STL, and also couple times a month supporting the operating system for the HONE complex in palo alto).

HONE was the system infrastructure that supported the people in the field, salesmen, marketing people, branch office people, etc.

At one time the US HONE system in Palo Alto had grown into the largest single-system-image complex in the world; I knew it had something over 40,000 defined "userids".

The US HONE system was also cloned for a number of country and/or regional centers around the world.

Also, in the early '80s, the Palo Alto complex was extended with redundant centers in Dallas and Boulder for disaster survivability (my wife and I later coined the terms disaster survivability and geographic survivability when we were doing HA/CMP) ... online workload was spread across the three datacenters, but if one failed the remaining two could pick up.

Nearly all of the applications delivered to branch & field people were written in APL ... running under CMS. Among the most important were the "configurator" applications. Starting with the 370/125 (& 115), it was no longer possible for a salesman to manually fill out a mainframe machine order .... they all had to be done interacting with the HONE configurator.

random ha/cmp refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 16:57:13 GMT
Anne & Lynn Wheeler writes:
basically instruction processors and I/O processors were architected to be asynchronous .... effectively in much the same way that machines with multiple instruction processors (SMP) are typically architected so that the multiple instruction processors operate asynchronously.

also some smp related postings:
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

: Re: AS/400 and MVS - clarification please

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: : Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Wed, 03 Jul 2002 16:47:18 GMT
Rick.Fochtman@BOTCC.COM (Rick Fochtman) writes:
OK...am I the only one who "knew" the 2321 data cell famous early "mass storage" we used in developing a large (really) application .... Finest day was when an executive was touring, looking to see if the application (ALIS if anyone was there.. Advanced Life Insurance System) was on target... AND HE TURNED THE DATA CELL OVER... OOPS - there is your master file spread in little strips all over the floor... Now that was an interesting "failure" to recover from! Janice Winchell

some random old 2321 postings ...
https://www.garlic.com/~lynn/2000.html#9 Computer of the century
https://www.garlic.com/~lynn/2000b.html#41 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2001.html#17 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001.html#51 Competitors to SABRE?
https://www.garlic.com/~lynn/2001e.html#42 OT: Ever hear of RFC 1149? A geek silliness taken wing
https://www.garlic.com/~lynn/2001e.html#50 "IP Datagrams on Avian Carriers" tested successfully
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2001n.html#61 Google Archive
https://www.garlic.com/~lynn/2002.html#16 index searching
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002f.html#3 Increased Paging in 64-bit
https://www.garlic.com/~lynn/2002g.html#84 Questions on IBM Model 1630

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 17:10:31 GMT
Charles Shannon Hendrix writes:
Why do I see so many notes about the 4341, but not the 4381? I programmed a 4381 in 1986-1989 in college.

Was the '81 much faster than the '41? I ask because the '81 I used felt a lot faster than what I've been told about the CDC 6600, but then again, maybe it was just I/O speed.


oh yes, i did do a post referencing a URL for somebody's linpack table ... and extracted several of the entries for the posting (including a number of 4381 entries ... besides the 6600). Note the original table at the referenced URL includes information about the compiler and options used .... i scrubbed that info ... trying to reduce the size of the posting ... people wanting to see the full information should go to the referenced URL

the 4381 linpack entries from that posting


IBM 4381 90E                                         1.2
IBM 4381-13                                          1.2
IBM 4381-22                                          .97
IBM 4381 MG2                                         .96
IBM 4381-21                                          .47
IBM 4381-11                                          .39

ref linpack posting
https://www.garlic.com/~lynn/2002i.html#12

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

trains was: Al Gore and the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Wed, 03 Jul 2002 19:27:03 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Talking with Europeans (I just got back from London yesterday after also being in Scotland and Iceland on this trip): they have practically no place to park, their roads are much smaller, they have different land uses than the US. Take a European to Nevada outside the cities and they practically never seen open land like that.

when I was in high school, i lived in a county that was the same size as the state of massachusetts but only had a total population of 50,000.

slightly related:
https://www.garlic.com/~lynn/2002d.html#32 Farm kids

it was quite an adjustment moving to boston. I drove cross country in the winter .... crossing into mass. i observed that there were county two-lane mountain roads in the west better built than the mass pike.

there were two claims by various long-time mass. residents/natives .... 1) the frost heaves caused all the problems on the pike (frost heaves are a problem out west also ... but they build the road bed appropriately) & 2) road repair was a thriving lobby in the state that had become dependent on doing major road repairs every year (somebody joked about water-soluble asphalt being used for mass. roads).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 19:32:22 GMT
Charles Shannon Hendrix writes:
Mostly I just notice that I see a lot more references to the 4341 instead of the 4381, and I mean in general, not just in your posts.

Was the '81 a less popular machine?

I remember the one I used in Richmond, VA, and there was one in city hall in Newport News, VA. But most of what I saw was 3xxx or old 370 machines, not counting things like the baby-mainframes and the AS/400s.


4341 was possibly one of the best price/performance machines for its time. slightly related
https://www.garlic.com/~lynn/2001m.html#15 departmentl servers

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Wed, 03 Jul 2002 20:45:38 GMT
Anne & Lynn Wheeler writes:
4341 was possibly one of the best price/performance machines for its time. slightly related
https://www.garlic.com/~lynn/2001m.html#15 departmentl servers


the talk referenced in the above posting about 11,000+ vax machines ... was given in 1983 (spring?)

note as per:
https://www.garlic.com/~lynn/2002f.html#0

the total world-wide vax shipments as of the end of 1982 were 14,508 and the total as of the end of 1983 was 25,070.

from
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html

4341 announced 1/79 and fcs 11/79
4381 announced 9/83 and fcs 1q/84

workstation and PCs were starting to come on strong in the departmental server market by the time 4381s started shipping in quantity.

also per:
https://www.garlic.com/~lynn/2002f.html#0

while the total number of vax shipments kept climbing thru '87 ... they were micro-vax. combined 11/750 & 11/780 world wide shipments thru 1984 was 35,540 ... and then dropped to a combined total of 7,600 for 1985 and 1660 for 1986.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

: Re: AS/400 and MVS - clarification please

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: : Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Wed, 03 Jul 2002 20:56:15 GMT
vbandke@BSP-GMBH.COM (Volker Bandke) writes:
Spaghetti : type of noodles

Schleuder: Tumbler, Twister

Noodle snatcher sounds nice to me

Thanks

Now - How do I support that under Hercules <b.s.g>


the "BB" in "BBCCHHR" ccw convention is for addressing the 2321 "bins". i always thot of the 2321 action more like a washing machine as the bins rotated. The sound of 2321 at boot/ipl was distinctive ... a ker-chunk, whirl, ker-chunk, whirl, ker-chunk, whirl, ... as the volser for each bin was read (aka read the strip, put it back, rotate).
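The bin-addressed layout can be sketched as a plain byte packing — a minimal Python illustration, assuming big-endian two-byte BB/CC/HH fields and a one-byte R (the field widths and function names here are illustrative, not a transcription of the actual CCW search argument):

```python
import struct

def pack_bbcchhr(bb, cc, hh, r):
    """Pack a 2321-style record address: two-byte bin (BB), two-byte
    cylinder (CC), two-byte head (HH), one-byte record (R), big-endian."""
    return struct.pack(">HHHB", bb, cc, hh, r)

def unpack_bbcchhr(buf):
    """Split a packed address back into its named fields."""
    bb, cc, hh, r = struct.unpack(">HHHB", buf)
    return {"bin": bb, "cyl": cc, "head": hh, "rec": r}

addr = pack_bbcchhr(3, 10, 4, 1)
print(unpack_bbcchhr(addr))
```

The point of the extra "BB" field is visible in the layout: ordinary disks only needed CCHHR, while the 2321 needed the leading bin number to pick which cell of strips the mechanism should fetch first.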

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM was: CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:23:01 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
I thought IBM had a strict no benchmark release policy.

just a.f.c. now.


i get to invoke the 5 year expiration rule ... or maybe the machine-no-longer-exists rule. Machine "Function Specification" documents used to include instruction timings and details like differences depending on whether base &/or index registers were used, etc.

Maybe what I did was run sample programs, not benchmarks. Sometimes when I didn't have direct access to the hardware I would ask people at other locations to run some sample programs .... I got some good/different numbers from places like BNR.

There was a situation that created some ambivalence. I wanted the dynamic adaptive code to take into account not just workload and different machine models but also future models. So for the Resource Manager .... I put in some timing code at boot/ipl and dynamically adjusted various factors that had previously been pulled out of a static table based on cpuid.
https://www.garlic.com/~lynn/subtopic.html#fairshare
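A minimal sketch of the idea: run a small timing loop at boot and scale tuning factors from the measured rate, instead of looking them up in a static table keyed by cpu model id (everything here — the loop body, `REFERENCE_RATE`, the time-slice formula — is hypothetical, just illustrating the calibrate-instead-of-table approach):

```python
import time

def calibrate(loop_count=1_000_000):
    """Run a fixed instruction mix once at boot/ipl and return the
    measured rate (iterations/second) for this machine."""
    start = time.perf_counter()
    x = 0
    for i in range(loop_count):
        x += i & 7          # stand-in for a representative instruction mix
    elapsed = time.perf_counter() - start
    return loop_count / elapsed

# Hypothetical reference-machine rate; real values on current hardware
# will be orders of magnitude different -- only the ratio matters.
REFERENCE_RATE = 1e7

speed_factor = calibrate() / REFERENCE_RATE
# Scale a scheduler constant by measured speed rather than by cpuid:
time_slice = 0.05 / max(speed_factor, 1e-6)
```

The payoff is exactly the one described above: an unknown future (or clone) cpuid needs no table entry, because the machine measures itself.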

Part of this was that you couldn't assume what possible model numbers might be invented in the future. Probably the weirdest story was AT&T longlines getting an early pre-release version and it disappearing into their corporate structure (actually they got source for a heavily modified kernel including some other stuff also). Ten years later somebody handling the AT&T longlines account came tracking me down. Turns out that AT&T longlines was still running the same source and it had propagated out into the corporate infrastructure ... and they just kept moving the kernel to the latest machine models. Over a period of 10 years some of the things had changed by factors of fifty to a hundred, but that little ol Resource Manager just kept chugging along.

In any case, not only did the change eliminate needing to have model numbers (and corresponding values) in a predefined static table ... and therefore didn't need preknowledge about the future ... there were also these things called clones that had their own convention for cpuids (which the little ol resource manager chugged along on also).

clones ... plug-compatible manufacturer cpus (or PCM cpus).

Another PCM story ... one they couldn't really hold against me because it was done while I was still an undergraduate ... was building the first PCM controller ... and getting blamed for helping start the PCM controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"Mass Storage System"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Mass Storage System"
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:28:59 GMT
mikekingston@cix.co.uk (Michael J Kingston) writes:
Film chip 35 by 70 millimetres - capacity approx five times ten to the sixth data bits Plastic cells each hold 32 chips A cell file can rapidly select any of its 2250 cells for delivery to a chip reader

there recently has been a new running thread on 2321s in bit.listserv.ibm-main

I think this had some discussion in this n.g. within the past year ... including one being at LLNL (which i only heard about ... didn't actually see).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM was: CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:32:37 GMT
Anne & Lynn Wheeler writes:
There was a situation that created some ambivalence. I wanted dynamic adaptive to not take into account workload and different machine models but the future. So for the Resource Manager .... I put in some timing code at boot/ipl and dynamically adjusted various factors that had been previously pulled out of a static table based on cpuid.
https://www.garlic.com/~lynn/subtopic.html#fairshare


oh, and there was a general user command that would yield up at least one of the values; the operating system just ran the "benchmark" for you automagically.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

pop density was: trains was: Al Gore and the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop density was: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:46:35 GMT
eugene@cse.ucsc.edu (Eugene Miya) writes:
Some what. Still most mtn. roads aren't paved. I know people in the Silicon Valley who still have 2 miles of dirt road to get to their house.

... i thot there was a slight connection since I started the trip over (paved) country mountain roads that were subject to frost heaves (and in much better condition) ... and ended on the masspike which is part of the interstate highway system .... and which failed much of the time to meet interstate highway system standards.

the other slight connection was that mass. residents claimed that the name of the major paving/road company and the name of a certain federal sec-DOT was the same (there is also a federal bldg. in cambridge by that name).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

pop density was: trains was: Al Gore and the Internet

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop density was: trains was: Al Gore and the Internet
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 14:55:05 GMT
Anne & Lynn Wheeler writes:
... i thot there was a slight connection since I started the trip over (paved) country mountain roads that were subject to frost heaves (and in much better condition) ... and ended on the masspike which is part of the interstate highway system .... and which failed much of the time to meet interstate highway system standards.

... driving from out west to mass in the winter ... the road on that trip that was in the absolutely worst condition (including some mountain county roads subject to severe frost heaves) was the (interstate) masspike (and the mass residents claimed that it was a regular and well-expected problem that needed major fixing every year ... and implied that it could possibly have been averted with better construction ... i don't remember for sure ... but something about the depth of the road bed and the materials used; the depth of the masspike roadbed thru frost heave areas was something like half that used on some county roads out west in similar situations).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM was: CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 15:14:39 GMT
Anne & Lynn Wheeler writes:
i get to invoke the 5 year expiration rule ... or maybe the machine no longer exists rule. Machine "Function Specification" documents used to include instruction timings and details like difference between whether base &/or index register was used, etc.

ok, maybe some 4341s ... or 158s, or 3031s, might possibly still exist ... but I could invoke the no-longer-sold-by-ibm rule.

also when I did this particular benchmark it was still in the '70s (2nd 4341 built ... before any shipped to customers) ... and the issues about benchmarking results weren't as strict at that time (including possibly some functional specifications still having meaningful data).

the worst "benchmarking" scenario that i'm aware of is the MVS/CMS bakeoff done by CERN (on the same hardware). Even tho it was a SHARE report .... some internal organization had it stamped "Confidential, Restricted" (only available on a need-to-know basis ... at least so far as employees were concerned).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:30:15 GMT
"Rupert Pigott" <dark.try-eating-this.b00ng@btinternet.com> writes:
How about adding 25,000 remote terminals, and maybe 1000 disk drives out there pumping 250MB/s through the system, doing something useful with most bytes and all in 64MB?

redoing "routes" for a bit more than 25k terminal/users:
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001k.html#26 microsoft going poof [was: HP Compaq merger, here we go again.]

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:40:11 GMT
Steve O'Hara-Smith writes:
The PC that used to take up to 5000 ftp sessions at once and one day kicked out two and a quarter terabytes of data (30MB/s average) was rather slow by todays standards.

remember when the netscape browser download had machines 1-19? that was somewhat alleviated when they got the netscape20 machine ... a large sequent that was configured to handle 20,000 sessions at once and I believe was the first kernel that had a serious fix for the finwait problem for webservers.

http wasn't so much an issue of the number of concurrent sessions, it was the number of session endings per second and the length of the dangling finwait queue (high-activity web servers spending 98 percent of total cpu running the finwait list). the tcp finwait problem hadn't been seen before, even with relatively high numbers of concurrent sessions, because tcp was somewhat presumed to be a connection protocol that lasted for some time. http1.0 is effectively a connectionless protocol being driven over a connection protocol (as a result http would drive the number of tcp session terminations per second thru the roof, as well as causing a large amount of session setup/tear-down packet chatter).
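The arithmetic behind that is just Little's law: the number of entries lingering on the finwait list is roughly the close rate times how long each entry stays there. A quick sketch (the 60-second linger value is illustrative, not a protocol constant):

```python
def dangling_closes(closes_per_sec, wait_seconds=60.0):
    """Little's law: average list length = arrival rate x residence time.
    Each closed connection sits on the FIN-WAIT/TIME-WAIT list for
    wait_seconds before it can be reclaimed."""
    return closes_per_sec * wait_seconds

# Long-lived telnet-style sessions: a few closes/sec, short list.
print(dangling_closes(5))      # 300.0 entries
# http1.0, one tcp connection per object fetched: huge close rate.
print(dangling_closes(1000))   # 60000.0 entries
```

Concurrency barely enters into it — a server with thousands of long-lived sessions closes few per second, while an http1.0 server with modest concurrency can still generate an enormous termination rate, which is why a linear scan of that list could eat most of the cpu.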

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Thu, 04 Jul 2002 18:51:39 GMT
"Douglas H. Quebbeman" writes:
Actually, a CDC 6600 with 60 terminals hanging off it was terribly bogged down, giving crappy response time to all 60 users. I can't imagine trying to hang more _character mode_ terminals off such a system.

But the comparisons often made to IBM mainframes with 1000s of terminals isn't fair; unless I'm mistaken, those 1000s of terminals were usually block-mode terminals, or ones which were at least operating in block mode. Not having to process an interrupt for every daggone character allows you to hang a lot more of them onto the muxes, and more muxes on the channels.

CDC also provided for block-mode terminals and transaction- oriented subsystems, but I personally never saw those in use.


in the early '90s the configuration referenced in the previous post about routes was actually a cluster of SMP mainframes .... but still having avg. peak loading of 3500 transactions per second .... 1500 or so weren't real transactions ... they were device interrupts from the ticket & boarding pass printers around the world; human-operated terminals were only seeing about 2000 "transactions" a second ... about 1/4th of the 2000, or around 500/second, were requests to find a route between two airports (i.e. flight segments).

now, back with cp/67 on a 360/67 single processor (say maybe comparable to a large AT w/80287 co-processor) we supported 75-80 "active" 2741 terminal "mixed-mode" users with subsecond trivial response (of course this had line interrupts not character interrupts) ... mixed-mode ... apl, program development, document preparation, source editing, compilation, program debug & test, etc.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM was: CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM was: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 09:37:25 GMT
Anne & Lynn Wheeler writes:
(including possibly some functional specifications still having meaningful data).

oops ... finger/brain check

that was functional characteristics manuals ... for instance for the 360/67

A27-2719 IBM System/360 Model 67: Functional Characteristics

gave detailed instruction timings

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 10:00:30 GMT
Anne & Lynn Wheeler writes:
now, back with cp/67 on a 360/67 single processor (say maybe comparable to a large AT w/80287 co-processor) we supported 75-80 "active" 2741 terminal "mixed-mode" users with subsecond trivial response (of course this had line interrupts not character interrupts) ... mixed-mode ... apl, program development, document preparation, source editing, compilation, program debug & test, etc.

there was a lot of work to get cp/67 up to that level of performance. when I first got a copy of cp/67 at the university in jan. '68 ... it did a good job supporting 30 mixed-mode users and/or concurrent mixed-mode users and a guest operating system like MFT.

over the next six months I significantly reduced general pathlengths and introduced "fastpaths" (fastpath is methodology for optimized pathlength for the most common case(s) .... in contrast to just straight-forward optimized pathlength for all cases). ref to report I gave at fall '68 SHARE meeting as to the result of some of the pathlength work for guest operating systems:
https://www.garlic.com/~lynn/94.html#18 CP/67 and OS MFT14

Over the next 18 months at the university I also rewrote dispatch/schedule and implemented fair share scheduling, rewrote the paging subsystem and implemented a clock replacement algorithm, and rewrote "DASD" support .... implementing ordered-seek queueing for 2311 and 2314 disks and chained scheduling for 2301 drums.

The ordered-seek queueing reduced latency and increased thruput of both 2311 and 2314 disk drives for both paging and user i/o. The 2301 was a fixed-head device that, doing one i/o operation at a time, was subject to an average rotational latency delay for every i/o operation. In this mode, the 2301 had a peak sustained paging rate of about 80 page i/os per second. With chained scheduling, multiple requests were ordered in a single i/o ... so rotational latency typically applied to just the first operation .... given a dedicated channel the 2301 would then see peak sustained thruput of 300 page i/os per second.
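Ordered-seek queueing is essentially the elevator algorithm: keep pending requests sorted by cylinder and serve the next one in the direction of arm travel, wrapping around at the end rather than serving strictly first-come-first-served. A minimal one-direction (C-SCAN-style) sketch — the class and method names are mine, not from the CP/67 source:

```python
import bisect

class OrderedSeekQueue:
    """Keep pending requests sorted by cylinder; always serve the next
    request at or beyond the current arm position, wrapping to the
    lowest cylinder when nothing lies ahead."""
    def __init__(self):
        self.pending = []                    # sorted cylinder numbers

    def add(self, cyl):
        bisect.insort(self.pending, cyl)     # insert keeping sort order

    def next_request(self, arm_pos):
        if not self.pending:
            return None
        i = bisect.bisect_left(self.pending, arm_pos)
        if i == len(self.pending):           # nothing ahead of the arm
            i = 0                            # wrap to the lowest cylinder
        return self.pending.pop(i)

q = OrderedSeekQueue()
for cyl in (180, 20, 95, 60):
    q.add(cyl)
order, pos = [], 50
while (nxt := q.next_request(pos)) is not None:
    order.append(nxt)
    pos = nxt
print(order)  # [60, 95, 180, 20] -- one sweep up, then wrap
```

Compared with FIFO order (180, 20, 95, 60 from cylinder 50, total arm travel 360 cylinders), the single sweep travels 130 cylinders before wrapping — the latency reduction the post describes.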

The other feature affecting performance that I did as an undergraduate was to implement support for being able to page selected portions of the kernel ... reducing the total fixed storage requirements ... freeing up more space for user execution. The machine at the university had 768k memory ... 192 4k pages ... but that would be reduced to as little as 104 "available" 4k pages under heavy load. Some simple kernel paging could easily pick up 10 percent more real storage for application execution.

Eventually all of the above changes (except for kernel paging) were picked up and distributed as part of the standard release (they also distributed various other stuff that I had done, like the TTY/ascii terminal support).

Much of the above was carried over in the CP/67 to VM/370 port ... except for the paging and dispatch/scheduling algorithm changes. I was able to re-introduce those in the resource manager product:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

which went out initially as PRPQ (special offering) but was shortly changed into standard product status.

misc. general refs:
https://www.garlic.com/~lynn/subpubkey.html#technology
including
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 10:32:39 GMT
Charles Shannon Hendrix writes:
The WWW is what basically amounts to a graphical, markup-based virtual 3270.

one could claim that html .... was cambridge's GML by way of all the ibm systems at CERN (aka GML was done by "G", "M", & "L" at cambridge, before becoming SGML ... and then morphing into stuff like HTML, XML, etc).

3270s did have some number of drawbacks. The 3274 controller (for 3278, 3279, etc. terminals) was slower than the original 3272 controller for 3277 terminals, in part because a lot of "head" electronics was moved out of the terminal and back into the 3274 controller (making each 3278/3279 terminal cheaper).

With some local electronic tinkering it was possible to adjust the key-repeat delay and key-repeat speed to just about any value you wanted (on the 3277) ... improving the ability to navigate the cursor around the screen. You could also get a keystroke FIFO box for the 3277. The 3270 had a characteristic, if you did fast typing, that if you happened to hit a key at just the same instant that something was being transferred to the screen ... the keyboard would lock and you would have to hit the reset button. For fast typists ... that just about threw you into half-duplex mode of operation to avoid having to deal with the interruption of the keyboard lockup and having to hit reset.

All of that was lost in upgrade to 3274/3278/3279 (since all the required electronics were now back in the controller and not in the terminal). It never really came back until you got ibm/pc with 3278/3279 terminal emulation.

The other impact was that while local 3274s (direct channel attach) exhibited significantly better human-factors response than remote 3274 controllers (i.e. connected over some sort of 9.6kbit telco line ... with multiple attached terminals all sharing the controller's line rate) ... the 3274 command-processing time on the channel was excessive.

A fully configured system tended to have a lot of disk controllers and a lot of 3274 controllers all spread out on the available channels (with some disk controllers and some 3274 controllers attached to each channel). It turned out that 3274 slowness was causing high channel-busy time and typically interfering with disk thruput. This wasn't immediately recognized .... however I was doing a project implementing kernel support for the HYPERchannel Remote Device Adapter (basically channel extender over T1 telco line) for large numbers of local 3274 controllers. You would remove the 3274 controllers from all the channels and put them in a remote building. You would then attach a HYPERchannel A22x to the channel and set up a HYPERchannel network to some number of A51x boxes at the remote site. The A51x boxes emulated mainframe channels, and you could attach "local" controllers (like 3274s) to the A51x boxes and they would think they were talking to a real channel.

The result was you could put a couple hundred people and their terminals at a remote site and they typically couldn't tell the difference between being local or remote. This is in contrast to "remote" 3274s where the response and human factor degradation was significant.

A side effect of this project was that total disk thruput (and therefore total system thruput) appeared to go up by about 10-15 percent. Further analysis showed that the HYPERchannel A22x boxes that were directly attached to the mainframe channel had significantly lower channel "overhead" for doing the same exact operation, compared to the configuration with all the local 3274 controllers attached to the real channels. This discovery resulted in some number of presentations and configuration advisories about not mixing 3274 controllers on the same channels with anything else of significant thruput importance (like disk controllers).

slightly related discussion with respect to some disk thruput & controller issues
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please

some past HYPERchannel postings
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Slightly related HYPERchannel issue ... mentioned in some of the above postings. I had also done the standard mainframe product support for RFC 1044 (aka hyperchannel) and did some tuning at Cray Research ... where we got sustained thruput between a cray machine and a 4341-clone that was nearly equal to the 1.5mbyte/sec hardware channel speed ... with only relatively modest cpu utilization. By comparison the "standard" base support (non-rfc1044) had trouble getting 44kbyte/sec while nearly saturating a 3090 cpu.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unisys A11 worth keeping?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers,comp.sys.unisys
Date: Fri, 05 Jul 2002 11:27:24 GMT
jmfbahciv writes:
I think that's why a lot of people liked TOPS-10. No matter who or what you were, you were guaranteed to have some knowledge that others didn't have about the system. Even the most lowly drudge could be important and help somebody floundering on the terminal next door.

one of the things with vm ... besides being a time-sharing service ... was that tymshare started hosting a vm-related newsgroup for the share community in the mid-70s (hosted on their vm-based time-sharing service). the archive:
http://vm.marist.edu/~vmshare/

starting in the '80s a lot of the activity started to move to various mailing lists .... first on bitnet/earn and then on the internet (bitnet was the vm-based corporate-funded educational network in the US and earn was a similar corporate-funded educational network in europe).

random bitnet/earn refs:
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#39 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#22 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#39 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#19 What is "IBM-MAIN"
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#25 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001h.html#65 UUCP email
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002d.html#33 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?

from above vmshare ref:
Welcome to the VMSHARE Archives

About VMSHARE

VMSHARE has been the conferencing system of the VM Cluster of SHARE since August 1976. After VMSHARE was closed down in August 1998 it was decided that the database should be kept available for reference. Read here the announcement of that by Ross Patterson. The best way to get a feeling for what VMSHARE meant to its users is probably by browsing through the VMSHARE Archives where you will find appends like this. It may also be helpful to read Melinda Varian's History of VM to get a better understanding of the community that has developed around VM and VMSHARE.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 11:49:51 GMT
Brian Inglis writes:
What kind of config was that? AT a PPOE we managed to get 30KB/s sustained TCP/IP throughput over twin 128kbps SNA links (Canada-UK) using TCP over SNA tunnelling with no significant loading, although IIRC both the TCP and SNA setups were patched up to date and tweaked according to SE instructions.

see RFC1044 ... if nothing else find pointer to the RFC from my rfc index at:
https://www.garlic.com/~lynn/rfcietff.htm

Basically a channel attached hyperchannel A22x box on ibm mainframe connected to a hyperchannel network. i wrote all the code ... device driver and various optimized paths for the box in the base product.

The original mainframe TCP/IP product ... in the '80s initially only supported the 8232 ... basically a pc/at with a channel attach card and some number of LAN cards. The 8232 wasn't really a tcp/ip box .... but a channel to LAN gateway ... so all the TCP/IP to LAN/MAC level stuff had to be done in the mainframe (which accounted for a lot of the cpu overhead processing). The channel attached HYPERchannel box was a real TCP/IP router ... which allowed a lot of the processing required for the 8232 to be bypassed.

This was also the basis for what we used for the mainframe part of our internal corporate highspeed backbone. This internal corporate highspeed backbone is what the NSF audit claimed was five years ahead of all bid submissions for the NSFNET1 backbone. For the internal backbone I had done some additional stuff that didn't appear in the product (like rate-based pacing ... which the audit claimed was included in the five years ahead .... and 15 years later it still looks to be five years ahead ... internet2 is looking at it tho). random refs:
https://www.garlic.com/~lynn/internet.htm

misc. 8232 refs:
https://www.garlic.com/~lynn/99.html#36 why is there an "@" key?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?

misc. stuff about hsdt ... high speed data transport:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

in the following ref:
https://www.garlic.com/~lynn/94.html#33b High Speed Data Transport (HSDT)

somebody in the SNA group had posted an announcement for a new newsgroup. The contrast was significant.

tale slightly out of school:
https://www.garlic.com/~lynn/2000c.html#58

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Fri, 05 Jul 2002 10:54:09 GMT
Anne & Lynn Wheeler writes:
the result was reformulated into recommended system configuration presentation and initially given at Share 63, presentation B874 (8/18/84). Summary from the paper:

slightly related to recommendations configuring disk controllers on channels (with respect to 327x controllers):
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AS/400 and MVS - clarification please

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AS/400 and MVS - clarification please
Newsgroups: bit.listserv.ibm-main
Date: Fri, 05 Jul 2002 10:57:24 GMT
Anne & Lynn Wheeler writes:
basically 2303 and 2301 were the same physical device. 2303 was used with standard disk controller. 2301 was essentially a 2303 that read/wrote four heads in parallel (and had four times the data transfer rate of the 2303) and had a high-speed controller.

2301 held about 4mbytes of data. TSS & CP/67 formatted the 2301 with 9 4k pages on a pair of 2301 "tracks" (eight physical "2303" tracks).
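
The format arithmetic above can be sketched as a back-of-envelope check. The track-pair count below is an assumption (the post only gives the ~4mbyte total and the 9-pages-per-pair format), so treat it as illustrative rather than the actual device geometry:

```python
PAGE = 4096          # 4k page size, from the post
PAGES_PER_PAIR = 9   # pages per pair of 2301 "tracks", from the post
TRACK_PAIRS = 100    # assumption: ~200 logical tracks -> ~100 pairs

# pageable capacity implied by the paging format
capacity = TRACK_PAIRS * PAGES_PER_PAIR * PAGE
print(capacity)      # 3686400 bytes, consistent with "about 4mbytes"
```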


slightly related to programming i/o support for 2301:
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 12:14:36 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Oh yes. I remember using a 3274 (well, that and a herd of UNIX minis which emulated them and connected via an X.25 card hacked to talk SNA- style SDLC) which was connected to the remote mainframe via a piece of wet string which wobbled at 9600bps. This was over a 3-year period and it was painful, especially after a visit to the site where the mainframes lived and I got to see what the response should really be like (much < 1 second as opposed to several seconds)

there were a couple levels of "arguments". there were a lot of the TSO crowd claiming that sub-second response wasn't required. Some number of detailed human factors studies were then done that showed that sub-second response was significant. Those studies significantly aided the case for having local 327x controllers at remote locations using hyperchannel as a channel extender (as opposed to using sna-based remote 327x controllers).

Then there was an east research center that was really proud of the fact that they had .22 second response under heavy load and provided one of the best time-sharing services in the world. We then pointed out that we had .11 second response with effectively the same load and the same hardware. There was then some discussion whether less-than-.20-second response really had any meaning (i.e. could humans tell the difference between .11 second response and .20 second response).

the sna group were less than thrilled about various of these activities (also see the recent posting mentioning tcp/ip and vtam/sna). they were really not thrilled that i was part of the four person group that created the first non-ibm controller and started the pcm controller business. another slightly related sna/hyperchannel comparison:
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 16:05:30 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
I'm sure I heard somewhere (may have even been from yourself?) that it's difficult to notice the difference in response times below 100ms, but any increments above that are quite obvious.

shortly after getting hired .... i got to go to an internal technical conference held in DC at the old marriott. mills gave a talk on superprogrammer. somebody from ykt gave a talk on event time perception threshold; across a wide range of people it seemed to vary between 100ms and 200ms.

prior ref:
https://www.garlic.com/~lynn/2001h.html#48 Whom Do Programmers Admire Now???

a later study in the early '80s found that predictability was also important .... if people could perceive the delay, they could get into a pattern anticipating it ... if the delay was longer than anticipated, it interrupted the (human) pattern and it then took the person twice as long to "recover". If the delay was about two seconds, and it went to four seconds .... a human took an additional two seconds (six seconds total) to recover. The theory was that the person's attention started to wander when their expectation failed and the attention "recovery" time was equal to the amount of time spent "wandering". A supporting example was that if the delay extended into the minutes, you might even leave your desk to do something else.
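
The recovery rule described above can be written as a tiny model, a sketch of the theory rather than anything from the original study: a predictable delay costs only itself, while an overrun costs roughly double:

```python
def effective_delay(expected, actual):
    """Model of the predictability result: attention 'wanders' for the
    overrun, and recovery costs about as long as the wandering."""
    overrun = max(0.0, actual - expected)
    return actual + overrun

# the example from the text: expecting 2s but getting 4s costs ~6s total
print(effective_delay(2.0, 4.0))   # 6.0
# a delay matching expectation carries no recovery penalty
print(effective_delay(2.0, 2.0))   # 2.0
```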

other random refs:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#24 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#25 How many Megaflops and when?
https://www.garlic.com/~lynn/2000c.html#64 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#40 360 CPU meters (was Re: Early IBM-PC sales proj..
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 16:22:15 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Interesting that even on our somewhat rickety setup, TSO (and TSO alone) still managed sub-second responses where a significant screen update wasn't involved. Anything which used CICS or COMIS was dreadful to use, though. Never got much of a chance to find out how the VM response was, though, in spite of trying to beg, steal or borrow a CMS or PROFS ID for the best part of a year (I eventually gave up; unfortunately account alloc and charging was a bit anal, so I couldn't get one even though I was part of the MIS group!)

cms subsecond response (.2 or less) was measured even with complete screen refresh ... although that got harder to do when 3274s were first introduced, since worst-case 3274 controller latency could be upwards of a half second. It was unusual for TSO avg. response to be subsecond. past discussion on 3274 latencies:
https://www.garlic.com/~lynn/2001m.html#19

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Fri, 05 Jul 2002 21:42:23 GMT
Brian Inglis writes:
My VM HPO systems always gave less than .1 second trivial interactive response -- even if it meant spending a weekend moving hot and cold spots around between packs. Page down does not seem quite as fast on a local Windows PC editor app as those systems. OTOH most Unix xterm and editor windows seem to be fast enough, even running with a Windows X server.

exchange from last year on slightly related topic
https://www.garlic.com/~lynn/2001f.html#57

one of the "new" releases saw a 10-15 percent degradation compared to a previous release .... which raised a lot of attention to performance in general. One of my things that had been lying around had to do with the internal processing of full-screen i/o transactions. The standard product internal response numbers were based on elapsed time between certain physical events ... actually 3 different response events .... with the avg. time for each one going into the calculation.

In the above reference, I had collapsed the three separate events into a single event .... resulting in higher aggregate thruput because of better optimization processing ... but the internal avg. response calculation might look worse (since I had a single event that was three times longer than the previous three events, each one-third as long). Until my changes were merged into the standard product ... a system with my changes in this area showing .1 second response was actually three times better than an unmodified system that produced a calculation of .1 second response.
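
The measurement artifact can be sketched numerically (the 0.1s figures are illustrative, not measured data): averaging three sub-events independently reports the same "response" as one collapsed event, even though the user's real wait differs threefold:

```python
from statistics import mean

# unmodified system: three separately timed sub-events per transaction
old_events = [0.1, 0.1, 0.1]
old_reported = mean(old_events)   # internal calc reports 0.1s "response"
old_elapsed = sum(old_events)     # but the user actually waited ~0.3s

# modified system: the work collapsed into a single 0.1s event
new_events = [0.1]
new_reported = mean(new_events)   # also reports 0.1s
new_elapsed = sum(new_events)     # and the user really waited 0.1s

# identical reported numbers, threefold difference in real elapsed time
print(old_reported == new_reported, round(old_elapsed / new_elapsed, 6))
```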

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

AFP really was 2000: CDC6600 - just how powerful a machine was

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: AFP really was 2000: CDC6600 - just how powerful a machine was
 it?
Newsgroups: alt.folklore.computers,comp.sys.cdc
Date: Sat, 06 Jul 2002 09:42:12 GMT
"John Keeney" writes:
Yes, Doug, but since the octal was there it was still readable with minimal effort. I confess I probably would have felt compelled to...

with enuf practice it becomes as easy to read the octal/hex as it is to read characters (this is somewhat analogous to learning a foreign language: the first stage is being able to do the translation in real time, the next stage is not needing to do translation ... the alternate representation has equivalent meaning). Besides dumps ... i learned something similar dealing a lot with punched cards ... being able to read the holes as easily as characters or hex.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

wrt code first, document later

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: wrt code first, document later
Newsgroups: alt.folklore.computers,comp.lang.forth
Date: Sat, 06 Jul 2002 09:59:09 GMT
Steve O'Hara-Smith writes:
Do I have to go through this thread and dig out quotes - some people in this thread seem to expect the design to match the implementation or find error in the implementation. This started with the idea that the user manual could be written first and still be expected to be correct when the code was delivered.

i would say ... it depends ... if it is a user interface ... then it is possible that design work on the interface could contribute to the understanding and context of the implementation. However, for some amount of implementation stuff i found that a specification in english didn't contribute anything to the implementation (at the time, possibly because there was no adequate english to represent the context of the implementation).

there is the joke(?) in the resource manager. I was doing all this stuff to make it dynamic adaptive based on load and configuration. I got told by some marketing people that the "state of the art" was implementations with manual tuning parameters for system programmers ... and it was necessary to add additional tuning parameters to the resource manager or it wouldn't be considered modern.

So i implemented some tuning parameters. I then wrote the documentation and manuals as part of shipping the resource manager as a product.

now the "real" test of the resource manager was the 2000 automated benchmarks that took three months to run and validated the resource manager's ability to dynamically adapt to a broad range of configurations and workloads. in some sense there were some heuristics in the automated benchmarking manager that looked at the results of previous benchmarks and then chose the load/workload for the next benchmark (aka not only was there dynamic adaption developed for the resource manager ... but there was also dynamic adaption developed for the benchmarking methodology used to validate the resource manager).

So the resource manager ships with documentation, formulas for the dynamic adaption, formulas for the tuning parameters, and all the source code. It is possible to read the documentation and formulas and validate them against the source code. Note however, the tuning parameters have very limited effect ... which wasn't ever discovered/realized.

Part of the dynamic adaptive operation of the resource manager implementation was iterative feedback/feedforward operation. One of the things that you can do in an iterative cycle is control the degrees of freedom of the various items. Giving the dynamic adaptive items more degrees of freedom than the tuning parameters meant that the dynamic adaptive code could compensate for any manual fiddling of the tuning parameters.
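
A minimal sketch of that degrees-of-freedom point (this is not the actual resource-manager code; the controller and its gain are invented for illustration): an iterative feedback term with a wider adjustment range than the manual knob settles on the same target no matter how the knob is set:

```python
def settle(target, knob, iters=200):
    """Iterate a simple feedback loop: each pass nudges the adaptive
    term toward whatever closes the gap -- knob offset included."""
    adaptive = 0.0
    for _ in range(iters):
        output = knob + adaptive
        adaptive += 0.5 * (target - output)   # feedback with more freedom
    return knob + adaptive

# the manual "tuning parameter" is irrelevant to where the system settles
print(round(settle(1.0, knob=0.3), 6), round(settle(1.0, knob=-2.0), 6))
```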

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unisys A11 worth keeping?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers,comp.sys.unisys
Date: Sat, 06 Jul 2002 16:14:35 GMT
jmfbahciv writes:
No. He worked on that architecture after the Unix SMP. IIRC, the Unix SMP was based on System V and done in (guessing) 1986 or 1987. His OSF consulting happened after that. OSF was not supposed to be yet another Unix flavor when he started that project.

OSF had dec, mit, cmu afs, ibm dfs, and some ucla locus for the distributed computing environment (locus did both process migration and partial file caching).

dec & ibm had jointly funded/backed project athena (kerberos, X, palladium, etc). IBM had funded cmu (mach, camelot, andrew stuff). IBM had also funded UCLA (aka aix/370/ps2 was locus .... as opposed to pc/rt's aix ... which was an at&t system v).


OSF Announcement Press Conference, Tuesday, May 17, 1988.
The Auditorium at Equitable Center, Boston

7 Speakers; John Young acted as moderator and master of ceremonies.

-- John Young, Hewlett-Packard
   - Introduction and Moderator
- We 7 are competitors: Apollo, Digital, H-P, Groupe Bull, IBM, Nixdorf,
         Siemens.
- Knock-down, drag-out battle, but we've taken the gloves off to re-define
the rules for the next round.
- Have jointly funded $90 million Open Software Foundation
   - OSF will develop a software environment, including application interfaces,
         advanced systems extensions, and a new operating system, using
         POSIX definitions as the starting point.

-- Jacques Stern, Chairman and CEO, Groupe Bull
- Four major customer needs/wants:
       - To easily use applications software on computers from multiple
vendors
       - To integrate or unify distributed applications and resources
across systems from different vendors and in geographically
diverse locations.  ("Interoperability")
- To use the same operating system on many classes of computers,
         from a workstation to a supercomputer.
- To have a voice in the formation of standards.
   - OSF will address these needs with an open application environment
specification.

-- Ken Olson, President of Digital Equipment Corporation
   - Industry has improved its products through technology, creativity,
good business sense.   Vendors have independently chosen the
         "best" architecture.
- Different operating systems have been created to match the architecture
on which they run.
- The consequence is that users must tailor applications to operating
         systems, and applications can't easily be moved.
- True applications portability requires an international standardized
         application environment -- a software platform that provides
clear specifications for how all applications can interface.
- Vendors then provide the tailoring to match their systems to this
applications environment.  Can also add enhancements and features
         that add value to the open system, but don't affect compliance.
- Truly open systems require a kind of public trusteeship in which many
         players have access to the system and a voice in determining its
future.
- OSF will ensure that such a trusteeship is in effect.

-- John Doyle, Executive VP, Hewlett-Packard, and Chairman of the OSF Board
- Truly open systems require software built with an open decision and
         development process.  OSF addresses these needs.
- Impetus was widespread and deep concern about the future of open
operating systems.
- But a bigger idea emerged: to make it easier for users to mix and match
         computers and software from different vendors.
- Specifications and products developed by OSF will meet all of the needs
         identified by (earlier speakers) today:
- Application Portability
- Easier integration of distributed applications and resources from
different vendors.
      - Run on a wide range of processors, from workstations to supercomputers.
- Live up to the name: OPEN Software Foundation: pursue a
         vendor-neutral process
- 7 Guiding Principles will guide OSF's operation:
- Seek best technologies from a wide range of sources.  Membership
open to all for-profit and non-profit organizations, worldwide.
         All members able to provide inputs on their needs.
- Ensure openness by supporting accepted international industry
         standards.  Build on existing standards, rather than
starting from scratch.  POSIX will be starting point; X/Open
will be used as well.
- Work closely with university and industry research organizations to
         obtain innovative technologies.  Have established a Research
Institute to fund and oversee relevant research.
      - Decision making process will be visible to foundation members.  The
results will be publicly available.
- At various stages, licensees will have timely access to the source
code for ease in designing their own applications or porting to their
         own hardware.
- Consistent and straightforward procedures for licensing source code.
         Non-members may obtain source code licenses.
- Offering will not favor any given hardware architecture.
- Offerings will be phased to give lead time to application developers:
- OSF Application Environment Specification, Level 0, being released
         today.  Includes POSIX, X/Open Portability Guide, X Windows.
- OSF AES, Level 1, will expand to areas such as interoperability and
         user interfaces.  OSF will produce an operating system consistent
with the Level 1 specifications.
- OSF will provide validation test suites for members and customers
to verify conformance.
      - Lots more to come.

-- Tom Vanderslice, Chairman and CEO of Apollo Computer
-- Viable international organization: more than $90 Million in initial
funding.
-- Membership fees provide additional support:
       - $25000 annually for profit-making organizations
- $5000 annually for non-profits.
   -- Yesterday (5/16) sent out hundreds of invitations, worldwide, to
hardware and software suppliers and universities.   Membership
open to all.
-- Will receive licensing fees from those who choose to adopt its
         software.
-- Management and technical know-how.  Borrowed experts from sponsors.
         Will begin hiring immediately.  Expect to attract "best and brightest"
because should be an interesting place to work.
-- OSF has access to some technological assets from members.   Will base
its development efforts on its own research as well as on
         technologies licensed from members.  Those under consideration are:
- Apollo's Network Computing System
           - Bull's UNIX system-based multiprocessing architecture
- Digital's user interface toolkit and style guides for X Windows
- Hewlett-Packard's National Language Support
- Nixdorf's relational database technology
           - Siemens' OSI protocol support
Will include features to support current System V-based and
         Berkeley-based UNIX applications.  Operating System will use core
technology from a future version of IBM's AIX as a development base.

-- Claus Kessler, President, CEO and Chairman of Siemens
   -- University research has always played a key role in advancement of
operating systems technology.
   -- Impressive results:
- MIT's X Windows
- Berkeley's utilities, tools and virtual memory support for UNIX
- University of Karlsberg's work on OSI and large networks
      - University of Wisconsin's contributions to TCP/IP and OSI
-- OSF will sponsor research on open software and technology that
         contribute to its goals.
-- OSF has created a research institute to build relations and
interfaces with university and research organizations worldwide.
-- Will be structured by a formation committee.  Members so far:
      - Dr. Lynn Conway, University of Michigan
- Professor Michael Dertouzos, MIT
      - Dean James F. Gibbons, Stanford
- Professor Roger Needham, University of Cambridge
- Dr. Raj Reddy, Carnegie-Mellon
- Professor George Turin, University of California, Berkeley

-- Klaus Luft, Chairman of the Executive Board, Nixdorf Computer
   --OSF is unusual: right from the start, launched worldwide.
-- No standard can be a true standard unless it is an international
standard.  No open standard is genuinely open unless it is open
worldwide.
   -- All the major computer vendors are international.  But more important
is the fact that many of their customers operate internationally.
   -- OSF is committed to international standards.   OSF's OS will conform,
right from the start, with the X/Open specification.   Will work
with X/Open and ISO to advance new standards.
-- OSF development will be carried out on an international basis.  Goal
         is to access the widest possible range of talents and technologies.
There will be more than one research center.
   --  OSF will work closely with universities and research laboratories
throughout the world.  OSF management will be an international team.

-- John Akers, Chairman and CEO, IBM
   -- Computer industry has grown and prospered because its products serve
a wide range of customer needs.  We expect that to continue.
   -- But we must be responsive to many different customer requirements.
In particular, those customers currently using UNIX want:
- Ability to select from a wide range of application software, and to
use that software on a variety of systems from different vendors;
      - To choose hardware and software that meets their needs and solves
their problems, with the expectation that it will all work together.
      - To be able to choose a software environment that spans a wide range
of processors.
-- We've concluded that these customers can be best served if an
independent body, beholden to no one vendor but benefiting from
         the expertise and support of many, can create a common set of
specifications for a POSIX and X/Open-based software environment.
   -- We believe the OSF products will complement the many unique architectures
our industry will continue to offer, and that our customers will be
the winners.
-- OSF participants are all in a race for customer preference and
         loyalty.  We will all be adding value to differentiate our
products.

Q & A, moderated by John Young, Hewlett-Packard
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Application Environment Specification, Level 0

Operating System
  POSIX Standards: ANSI, ISO, FIPS
X/Open XPG3, base level

Languages
  C: ANSI X3J11
FORTRAN: ANSI X3.9-1978, ISO 1539-1980(E), FIPS 069-1
  Pascal: ANSI X3J9, ISO 7185-1983, FIPS 109
Ada: ANSI/MIL 1815A-1983,FIPS 119
Basic:
Minimal Basic: ANSI X3.60-1978, FIPS 068-1
      Full Basic: ANSI X3.113-1987, FIPS 068-2
Cobol: ANSI X3.23-1985 High Level, FIPS 021-2
  LISP: Common LISP, ANSI X3J13

User Interface
X Window System Version 11, ANSI X3H3
  Libraries: X language bindings, ANSI X3H3

Graphics Libraries
GKS, ANSI X3.124-1985, FIPS 120
PHIGS, ANSI X3H3.1

Network Services
Selected ARPA/BSD Services
  TCP (MIL-STD-1778), IP (MIL-STD-1777)
SMTP (MIL-STD-1781), TELNET (MIL-STD-1782)
FTP (MIL-STD-1780)
Selected OSI Protocols

Database Management
  SQL: ANSI X3.135-1986 (with 10/87 addendum), levels 1 & 2, FIPS 127

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

wrt code first, document later

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: wrt code first, document later
Newsgroups: alt.folklore.computers,comp.lang.forth
Date: Sat, 06 Jul 2002 16:43:08 GMT
jmfbahciv writes:
OH, YEAH! Implementing the ^T was a god-send. <CTRL>T was command that could be typed at any time and would display the incremental runtime of your job since the last ^T. Before we had that, people would go out to the machine room and look at the SYSDPY display to see if his/her job was getting any attention at all. Before the SYSDPY trick, we'd look at the lights.

starting with CP/67, "#CP Q TIME" gave the application ("running", not incremental) both "virtual cpu" used and "total cpu" used for that virtual machine (total was the virtual time spent in the application process as well as time spent in the kernel on behalf of the virtual machine). The 360/67 had a 13-point-something microsecond timer ... and typical systems ran with total accounted-for time within a couple percent of wall clock (when wait time accounting was added in). Very little of the instructions executed in cp/67 happened "unmeasured". About the only place that wasn't directly accounted for against some process (besides wait state) was the part in dispatch when attempting to select the next task and some scheduling process. This "overhead" was originally hitting 15 percent or more under heavy load (the system delivered to me in jan. '68). At least by the start of 1970 ... and probably by the end of 1968, I had reduced this "overhead" to one or two percent.
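
The accounting identity described above can be sketched as follows (the numbers are illustrative, not from any actual CP/67 system): everything not charged to a virtual machine or to wait state is the dispatch/scheduling "overhead":

```python
def unaccounted_pct(wall, per_vm, wait):
    """Percent of wall-clock time not charged to any virtual machine
    (virtual + kernel time) or to wait state -- i.e. the dispatcher/
    scheduler residue discussed in the post."""
    return 100.0 * (wall - sum(per_vm) - wait) / wall

# illustrative hour-long interval: nearly everything charged out,
# leaving a couple percent of dispatch/scheduling residue
print(unaccounted_pct(3600, [2000, 1200, 300], wait=28))  # 2.0
```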

the resource manager that i did for vm/370 had an enhanced "#CP IND" command which gave recent running avgs for total system cpu utilization, paging rate, various q-lengths/sizes and "response". Response was actually the avg. "interactive" queue service time ... which is similar but could be significantly different as per
https://www.garlic.com/~lynn/2002i.html#52
https://www.garlic.com/~lynn/2001m.html#19

aka ... the resource manager couldn't do a good job unless it accurately measured everything (with no overhead).

you could sort of follow your own progress via the "q time" command. The "indicate" command was a sensitive issue in some circles since it provided information about total system activity. Most places thot informed users aided in the intelligent use of the system, but there were some places that thot that users should be as isolated/insulated as possible from what was happening concurrently on the system.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

wrt code first, document later

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: wrt code first, document later
Newsgroups: alt.folklore.computers,comp.lang.forth
Date: Sat, 06 Jul 2002 16:46:23 GMT
jmfbahciv writes:
OH, YEAH! Implementing the ^T was a god-send. <CTRL>T was a command that could be typed at any time and would display the incremental runtime of your job since the last ^T. Before we had that, people would go out to the machine room and look at the SYSDPY display to see if his/her job was getting any attention at all. Before the SYSDPY trick, we'd look at the lights.

... oh yes, CMS from the start also had the blip command, which you could turn off/on. with "blip on" .... for every two seconds of virtual execution time (which didn't include cp kernel time) cms would "wiggle" the 2741 type-ball (i.e. not actually type anything).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Sat, 06 Jul 2002 19:10:26 GMT
Charles Shannon Hendrix writes:
What is rate-based pacing?

This sounds a lot like rate-limiting, where either the application or the TCP/IP stack limits throughput.


rate-based pacing can be used for rate-limiting. one of the reasons for rate-based pacing is that window-based pacing is non-stable, as is slow-start. in fact, window-based pacing (in part what slow-start uses as a base implementation) is a misuse of a paradigm. window-based pacing was originally developed for direct end-to-end connections, to avoid overrunning buffers at the end-point. Window-based pacing is possibly 2-3 levels of indirection from having anything to do with network congestion.

I've conjectured that part of the reason for trying to leverage window-based pacing to address congestion issues is that many of the machines & systems used in the '80s had very poor hardware and/or software support for time-based facilities.

rate-based pacing was one of the features of HSP (high speed protocol), a standards activity in the late '80s. However, part of the problem was that HSP was being worked on in ANSI (x3s3.3) targeted for ISO ... and ISO had this strong OSI bent .... aka if it violated the OSI model it wouldn't get very far. Much of HSP had to do with taking the level 4 (transport) interface directly to the LAN interface ... approximately the middle of level 3 (network). ISO sort of tried to squint real hard and ignore that LANs had collapsed several layers into a single one ... the bottom half of layer 3, plus all of layers 2 and 1 (aka not only did LANs collapse several layers into one ... but the LAN boundary didn't correspond to any defined OSI boundary ... being in the middle of layer 3). HSP was to cut directly from the layer 4 interface to the LAN interface. Again the ISO forces had to squint a lot when objecting to HSP .... it bypassed the level 3 boundary interface and talked directly to the LAN boundary interface ... but at least it had a boundary interface that corresponded to a defined OSI layer (level 4) ... which LANs didn't.
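
the rate-based vs. window-based distinction above can be illustrated with a minimal token-bucket pacer: transmission is scheduled against elapsed time rather than against returning acknowledgements. this is a hypothetical sketch (all names made up), not the HSP specification:

```python
import time

class RateBasedPacer:
    """Token-bucket sketch of rate-based pacing: a packet may be sent
    whenever a token is available, and tokens accrue with elapsed time
    at a target rate -- independent of any acknowledgement window."""

    def __init__(self, rate_pkts_per_sec, burst):
        self.rate = rate_pkts_per_sec   # steady-state send rate
        self.burst = burst              # max tokens banked (burst size)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_send(self):
        """Return True if a packet may be sent now, consuming a token."""
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False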

random rate-based pacing, slow-start, congestion, and hsp postings.
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#0 Early tcp development?
https://www.garlic.com/~lynn/99.html#114 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/99.html#115 What is the use of OSI Reference Model?
https://www.garlic.com/~lynn/2000b.html#1 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#5 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#9 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2001b.html#57 I am fed up!
https://www.garlic.com/~lynn/2001e.html#24 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#25 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#27 Unpacking my 15-year old office boxes generates memory refreshes
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#26 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#46 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#49 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#50 Why did OSI fail compared with TCP-IP?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Sat, 06 Jul 2002 19:21:01 GMT
Charles Shannon Hendrix writes:
SNA seems more like an umbrella of support from IBM to make it all work than a communications protocol.

When I last worked with it, every "SNA" connection we had was a different medium and protocol.


well SNA was primarily an infrastructure for managing communication with large terminal populations ... a mainframe application talking to upwards of 60,000 to 100,000 terminals (maybe more). in that sense it included a communication protocol (but not a networking protocol). Another example might be the data processing infrastructure that talks to the head-ends of the majority of cable tv systems, using LU0 for addressing every set-top box.

sna had a mainframe api (like lu6.2), vtam (sscp, or pu5) and 37xx (ncp, or pu4). there was some conjecture that the driving factor in the pu5/pu4 evolution/definition was largely a project that i worked on as an undergraduate that created the PCM controller business. misc. refs:
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

wrt code first, document later

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: wrt code first, document later
Newsgroups: alt.folklore.computers,comp.lang.forth
Date: Sat, 06 Jul 2002 19:33:47 GMT
Tim Shoppa writes:
I think this is a well-known phenomenon when designing user interfaces - you gotta give the user something to click on or watch or twiddle while the machinery is doing the real job. So we have elevator buttons you can push multiple times to "make the elevator come faster", parameters you can twiddle that shouldn't need twiddling (has the vast game in twiddling Window's networking MTU dissipated yet?), tachometers in cars with automatic transmissions, etc.

but this took it slightly further ... it was just another pretty button to push ... the system programmers and performance tuners were supplied with detailed documentation, formulas and all the source code for examination in detail. I've even been told about people being taught the details in CS classes.

having learned to drive on a non-synchromesh truck ... the starter was a pedal on the floor and you had to listen to the sound of the engine and double-clutch just right to shift gears; none of this sissy tachs & synchromesh stuff.

https://www.garlic.com/~lynn/38yellow.jpg

38chevy?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unisys A11 worth keeping?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers,comp.sys.unisys
Date: Sun, 07 Jul 2002 13:24:35 GMT
jmfbahciv writes:
Ah, I never saw this; thanks :-). It was a dream of JMF to take what he learned not to do (based on dabbling with a number of OSes) and design an OS from the ground up. When he was assigned the OSF work, he thought he'd have a chance to fulfill that dream. Never happened and now I know why ;-).

watching what happened in FS .... i realized that a large group of people (re-)developing an OS from scratch had a very low probability of being successful ... especially if anybody thot it was strategic ... the more important people thot it was, the more resources would be put on it ... and the less likely it was to happen.

I did get involved in advocating a revised kernel ... but the proposal was to take bits and pieces of existing things, cobble them together with a small group of people, and then incrementally revise using new programming methodologies. Unfortunately, it got a lot of interest and had lots of resources assigned to it. Towards the end it had 300 people writing documentation for a year ... before it was decided to terminate it. This was pre-OSF and pre-HA/CMP.

this was further borne out by (at least) pink and spring.

random rewrite postings:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2001l.html#25 mainframe question
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390

old fs postings:
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#43 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002c.html#1 Gerstner moves over as planned
https://www.garlic.com/~lynn/2002c.html#19 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002d.html#10 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002f.html#42 Blade architectures
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past

misc. pink
https://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP

spring posting:
https://www.garlic.com/~lynn/2001j.html#32 Whom Do Programmers Admire Now???

and of course ha/cmp (which wasn't a new operating system):
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

BIOMETRICS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: BIOMETRICS
Newsgroups: alt.computer.security
Date: Sun, 07 Jul 2002 13:32:21 GMT
"John Stadelman" writes:
anyone aware of any research studies going on with regard to any type of biometrics?

x9 web site currently has reference to x9.84 standard (note that main x9 web site announcements tend to change relatively frequently):
http://www.x9.org

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

subjective Q. - what's the most secure OS?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: subjective Q. - what's the most secure OS?
Newsgroups: comp.security.unix
Date: Sun, 07 Jul 2002 14:58:18 GMT
david20 writes:
A secure OS should provide division of privilege features. The all (root) or nothing approach is inherently insecure.

A secure OS should minimise the possibilities and effects of buffer overflows eg APIs to kernel mode routines should always use counted rather than null terminated strings.


most of the OSes considered really secure (lots of break-in attempts and detailed source review) have either been small compact monitors (very little to break) or implementations with components broken into small compact pieces and extremely strongly enforced partitions/barriers between the components.

Multics is held up as one example.

The other examples are a couple of the virtual machine implementations where the basic kernel is relatively compact and there is a compact, easy to understand, strongly enforced API. Part of the issue is that complexity itself tends to contribute to failures (security or otherwise); in part because with complexity come mistakes (whether by code developers, sysadmins, sysprogs, application developers, etc).

KISS can be considered a prime principle of security.

Sometimes there are references to security by obfuscation thru complexity. The problem with complexity is that over time it increases the probability that the developers themselves will make mistakes, creating exploit exposures. Any contribution to security that complexity might make is frequently extremely transient.

One of the highest security ratings went to a VMS kernel that was wrappered with some form of virtual machine monitor. There have also been some references to ibm virtual machine implementations being used in high integrity installations (in some sense various of the time-sharing service bureaus needed extremely high strength integrity/security because of the open access by a diverse community ... and real revenue was at stake). I once had to help support a system that had fairly open access by MIT and BU students.

Faulty paradigms and obfuscation can also lead to mistakes. Probably the most noted recently (over the past ten years) is the implicit length paradigm in common C string handling, one of the single largest causes of security exploits (so by implication one might conclude that anything involving c programming would be eliminated as a candidate).
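
the implicit-length vs. counted-string distinction can be sketched as follows. this is a deliberately artificial Python model (the function names are made up): the first routine mimics a C strcpy, scanning for a terminator with no idea of the destination's size, while the second takes explicit lengths and clamps the copy:

```python
def copy_implicit_length(dst: bytearray, src: bytes) -> None:
    """strcpy-style: scan src for a NUL terminator, with no knowledge of
    dst's capacity. Here the overrun raises IndexError; in C it would
    silently overwrite adjacent memory -- the classic buffer overflow."""
    i = 0
    while src[i] != 0:
        dst[i] = src[i]   # no bounds check against dst's real size
        i += 1

def copy_counted(dst: bytearray, src: bytes, src_len: int) -> int:
    """Counted-string style: the caller passes explicit lengths and the
    copy is clamped to the destination's capacity. Returns bytes copied."""
    n = min(src_len, len(dst))
    dst[:n] = src[:n]
    return n
```

a kernel API built on the counted style can at worst truncate; the implicit-length style depends on every caller getting the terminator and sizing right.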

random buffer overflow refs:
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001c.html#32 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001c.html#38 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#72 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#76 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#19 Buffer overflow
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002.html#26 Buffer overflow
https://www.garlic.com/~lynn/2002.html#27 Buffer overflow
https://www.garlic.com/~lynn/2002.html#28 Buffer overflow
https://www.garlic.com/~lynn/2002.html#29 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#33 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2002.html#35 Buffer overflow
https://www.garlic.com/~lynn/2002.html#37 Buffer overflow
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Hercules and System/390 - do we need it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules and System/390 - do we need it?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 07 Jul 2002 23:04:02 GMT
IBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
There have been alternative OSes on System/370 - UTS and Aspen come to mind.

UTS is still running in at least two sites.


there were some stories about source code audits of Aspen with regard to whether or not it contained any RASP code.

there were at least three major cp-based service bureaus .... all with heavily modified versions of CP & CMS ... NCSS, IDC, and Tymshare. Both NCSS & IDC were formed in the CP/67 time-frame, both including former IBMers who had worked on CP/67 (I have a tale about, as an undergraduate, having to backfill teaching a cp/67 class because the designated IBMer had left to form NCSS).

Tymshare started an OS rewrite from scratch called GNOSIS. When M/D bought Tymshare ... a number of things were spun off ... including GNOSIS into something called KeyKos. There were KeyKos efforts within the past 5-6 years or so, with benchmarks showing a significant amount of MVS functional emulation but with transaction thruput comparable to or better than TPF (although gnosis/keykos were done by different people than RASP/aspen .... there were some similarities between the RASP/aspen design point and at least the keykos design point ... the original gnosis objective was somewhat different from what evolved into keykos).

Another somewhat similar system from the period was MTS at the univ. of mich. ... which also had a significant OS/360 simulation allowing the execution of numerous OS/360 applications.

slightly misthreaded ... i think the reference to OS/360 under CP/67 wasn't to the running of MFT or MVT in a virtual machine ... i believe the reference was to the os/360 simulation provided in CMS so that numerous OS/360 applications (like compilers) could execute. There used to be a joke that the 32kbyte code-size implementation of OS/360 in cms was significantly more efficient and frugal than the implementation of OS/360 in MVT.

random gnosis/keykos postings:
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2002f.html#59 Blade architectures
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?

misc. MTS postings:
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090

misc. uts & aspen refs:
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Hercules and System/390 - do we need it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules and System/390 - do we need it?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 07 Jul 2002 23:17:43 GMT
Anne & Lynn Wheeler writes:
there were at least three major cp-based service bureaus .... all with heavily modified versions of CP & CMS ... NCSS, IDC, and Tymshare. Both NCSS & IDC were formed in the CP/67 time-frame, both including former IBMers that had worked on CP/67 (I have a tale about as an undergraduate having to backfill in teaching a cp/67 class because the designated IBMer had left to form NCSS).

also still around from NCSS is nomad .... misc. urls
http://www.decosta.com/Nomad/famewall.html
http://www.decosta.com/Nomad/

remember in the tale below .... both Tymshare & NCSS are cp/cms based time-sharing services:
http://www.decosta.com/Nomad/tales/history.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

privileged IDs and non-privileged IDs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: privileged IDs and non-privileged IDs
Newsgroups: comp.security.unix
Date: Mon, 08 Jul 2002 18:25:35 GMT
noname writes:
I wonder if it is a popular and good practice to create separate IDs for different purposes? More specifically, if I am a system administrator of a system, do I create 2 IDs for myself: One is for normal (non-privileged) use, the other to administer the system? (The root or superuser ID is not to be used as far as possible.)

in the past, IDs have been associated with "user-ids" ... but as things have gotten more complex, they are migrating towards "role-ids". Part of the issue is that role-ids may have different authentication requirements (some may be a simple password; others may require 3-factor authentication, including something like a fingerprint) and might even require things like a specific physical location.

a basic tenet is that there never be any ambiguity when performing authentication for an ID .... that there is never more than one person associated with the authentication. That doesn't preclude one person having more than one ID (but it does preclude two or more people sharing the same ID). Multiple people may share the same role ... but there should never be people sharing the same role-id.

Even if different authentication requirements aren't currently supported, sometimes multiple role-ids might be assigned to a person in anticipation of migration to a more sophisticated security environment.
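
the one-person-per-role-id rule with per-role authentication requirements can be sketched as a small table. this is a hypothetical illustration (table contents and names invented): each role-id maps to exactly one person but carries its own required factors:

```python
# Each role-id belongs to exactly one person; one person may hold
# several role-ids, each with its own authentication requirements.
ROLE_IDS = {
    "jdoe":       {"person": "J. Doe",
                   "factors": {"password"}},
    "jdoe-admin": {"person": "J. Doe",
                   "factors": {"password", "token", "fingerprint"}},
}

def may_assume(role_id: str, presented_factors: set) -> bool:
    """A role-id may be assumed only when every factor it requires
    has been presented (subset test on the required factors)."""
    role = ROLE_IDS.get(role_id)
    return role is not None and role["factors"] <= presented_factors
```

note that the same person needs the stronger factor set to assume the admin role-id, which is exactly the point of splitting the IDs.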

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unisys A11 worth keeping?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers,comp.sys.unisys
Date: Tue, 09 Jul 2002 16:17:33 GMT
cbh@ieya.co.REMOVE_THIS.uk (Chris Hedley) writes:
Going off topic (what, in a.f.c? :) I recall that earlier versions of SysV (at least the ones I used) had much of the networking stuff living in userland, and the progression to more integrated networking in later versions of SVR3 saw a large chunk of it moving into kernel mode; subsequently people have wanted to move it back to user space, just like it was originally... Then there's the subject of intelligent controllers. When I started out the various interfaces to discs, tape, network etc had their own CPU so did their own processing and DMAd the results; then someone said "hey, we now have so much CPU power and we can't possibly use it all, let's take the processors off the controllers and use our hugely fast main CPU cluster to control them" (must admit I was pretty sceptical at the time regarding this move); soon afterward we see intelligent controllers appearing and being touted as "the next big thing." I suppose it can't be long before history repeats itself yet again and we see everything in kernel space and dumb interfaces replacing controllers...

for some high-speed networking it wasn't so much where the processing was done but how many buffer copies were being done. Even in a case where relatively few instructions were being executed on the main processor ... it might still be dragged to a crawl if there were a large number of buffer copies (especially on cache machines, where both the source & target storage locations might be dragged thru the cache, resulting in a large number of processing cycles being consumed ... along with possible invalidation of large portions of in-use data ... which then has to be reloaded).

some of the unix async i/o issues have to do with the trade-off between the application having to support various serialization paradigms in its use of data vis-a-vis doing some number of buffer copies (some async i/o implementations totally eliminate buffer copies, doing i/o directly in/out of application buffer memory).
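
the copy vs. no-copy distinction can be illustrated at small scale in Python (a rough analogy only, not a model of any particular unix implementation): a bytes slice materializes a separate copy of the data, while a memoryview hands out a window onto the application's own buffer:

```python
# Pretend this is an application I/O buffer.
buf = bytearray(1 << 20)

# A bytes slice materializes a copy -- the "buffer copy" whose extra
# cache traffic the post is talking about.
copied = bytes(buf[:4096])

# A memoryview slice is a window onto buf itself: the zero-copy style,
# where I/O happens directly in/out of application buffer memory.
view = memoryview(buf)[:4096]
view[0] = 0x41   # writes land directly in buf; no second copy exists
```

the serialization burden mentioned above follows directly: with the zero-copy view, the application must not reuse buf while I/O on the view is in flight, whereas the copy decouples them at the cost of moving the data twice.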

random past buffer copy/async i/o threads
https://www.garlic.com/~lynn/93.html#32 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#00 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/98.html#34 ... cics ... from posting from another list
https://www.garlic.com/~lynn/2000b.html#5 "Mainframe" Usage
https://www.garlic.com/~lynn/2001c.html#26 The Foolish Dozen or so in This News Group
https://www.garlic.com/~lynn/2001d.html#59 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2002e.html#34 Lisp Chips
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does Diffie-Hellman schema belong to Public Key schema family?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does Diffie-Hellman  schema belong to Public Key schema family?
Newsgroups: sci.crypt
Date: Tue, 09 Jul 2002 15:53:27 GMT
"nntp.lucent.com" writes:
the Public key mates with the private key. Each one is the only for the other.

as an aside note .... the technology is asymmetric key; the business process use of asymmetric key technology is public/private key. It is the business process that establishes the convention that some asymmetric keys are "public" and some asymmetric keys are "private" (aka the handling of the asymmetric keys establishes whether they are public or private).

The common business processes associated with the public/private key convention are 1) digital signature and 2) encryption.

In the digital signature business process, the hash of the message is "encrypted" with the private key. Anybody with the public key can verify the message signature by computing the message hash and comparing it with the value of the "decrypted" digital signature (aka the original hash). The business convention specifies that only the owner of the private key ever has access to (use of) its value and therefore only the owner can generate digital signatures with that private key. Arbitrary populations have access to the public key and therefore can validate the digital signature. The digital signature convention works based on the business process specification that there is only one owner with access to a particular private key.

The other common use of asymmetric keys is to address the issue of key distribution problems found in typical secret key encryption paradigms. Public keys can be freely distributed and anybody can encrypt messages using the public keys ... but only the entities with access to the corresponding private keys can decode the message.

Note that the encryption business process scenario may have fewer privacy requirements regarding the private key (exclusivity to a single individual) than the digital signature business process scenario. The use of public/private keys in the digital signature scenario tends to get into issues of individual authentication and other business issues like non-repudiation.

However, that doesn't preclude relaxing business process privacy requirements in the digital signature case to purely message authentication, aka a "private key" is secured but not necessarily limited to a single individual.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

CDC6600 - just how powerful a machine was it?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CDC6600 - just how powerful a machine was it?
Newsgroups: alt.folklore.computers
Date: Tue, 09 Jul 2002 19:06:54 GMT
tmm@spamfilter.asns.tr.unisys.com (Tim McCaffrey) writes:
Well, you didn't have it set up correctly :). MSU had a 6500 running up to 100 terminals (response was good at 30, tolerable at 60, and very bad at 100). Of course, most of the interactive traffic load was handled by an Interdata 7/32 (or 8/32?) and the Argus PP. Most terminal responses were handled by Manager, a single process on the 6500 (so, no swapping needed). MSU spent a lot of effort customizing SCOPE to handle interactive usage. It paid off when the Cyber 170/750 showed up. That box could handle 200 users easily.

i worked on the original terminal controller that was built with the interdata/3 ... later enhanced to the interdata/4 using multiple interdata/3s as linescanners. That, along with the ibm channel interface, started the PCM controller business. Later perkin/elmer bought them up. Not more than a couple years ago, i ran into a perkin/elmer terminal controller doing some pretty heavy duty work in a large transaction shop.
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Hercules and System/390 - do we need it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules and System/390 - do we need it?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 09 Jul 2002 19:25:21 GMT
Anne & Lynn Wheeler writes:
remember in the tale below .... both Tymshare & NCSS are cp/cms based time-sharing services:
http://www.decosta.com/Nomad/tales/history.html


as an aside ... the largest time-sharing installation of them all was the internal IBM HONE system supporting all the branch office people around the world. Just the US-HONE complex in california (before being replicated in dallas & boulder for load balancing and disaster survivability) was probably larger than IDC, NCSS, & Tymshare combined. HONE was also cloned in EMEA and AFE as well as some number of subsidiary clones in some individual countries (I hand carried parts of a clone when emea hdqtrs first moved to paris and set it up). Just the EMEA HONE in Paris was probably as large as IDC, NCSS, & Tymshare combined. The UK HONE system in Havant was also pretty good size. misc HONE refs:
https://www.garlic.com/~lynn/subtopic.html#hone

In addition to the focus, nomad, etc, also note that System/R (the original relational database) was totally developed on VM platform and then there was technology transfer from SJR to Endicott for the SQL/DS product. Later one of the people in the referenced meeting below handled the SQL/DS technology transfer from Endicott to STL for DB2 (long way around since SJR and STL are only about 10 miles apart).
https://www.garlic.com/~lynn/95.html#13 SSA

further aside, the tymshare complex and the us hone complex were only a couple miles apart.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does Diffie-Hellman schema belong to Public Key schema family?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does Diffie-Hellman  schema belong to Public Key schema family?
Newsgroups: sci.crypt
Date: Tue, 09 Jul 2002 19:01:35 GMT
Daniel Mehkeri writes:
Eh? No. Not always. And certainly not in the discrete-log based systems we're talking about. There's a definite difference between DL-based public and private keys. They are not interchangeable.

sorry ... didn't mean to imply for the general case that keys were either interchangeable or not interchangeable ... i attempted to state that asymmetric key is a technology issue .... the semantics of the terms "public" & "private" is a business issue (and as you pointed out, to satisfy the business semantics of "private", an asymmetric key technology might not have an interchangeable key implementation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TCPA

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TCPA
Newsgroups: sci.crypt
Date: Tue, 09 Jul 2002 21:47:44 GMT
"Maciek" writes:
Is anyone familiar with this specification ?? Regards, Maciek

their website
https://web.archive.org/web/20030210092326/http://www.trustedcomputing.org/tcpaasp4/index.asp

they have pointers to specifications on the above page.

i was a panelist in the assurance session in what (I believe) was the tcpa track at the intel developer's conference last year. i mentioned some more details of the AADS strawman ... and made the claim that it should be able to perform all the stated duties of a TPM (packaged on a motherboard instead of in a dongle or card) w/o changing its implementation. see old reference to aads chip strawman at:
https://www.garlic.com/~lynn/x959.html#aads

note that the idea for TPM isn't new ... i was involved in looking at possibility of a tpm-like chip on PC motherboard 20 years ago.

random past posting on tpm
https://www.garlic.com/~lynn/aadsm5.htm#asrn4 assurance, X9.59, etc

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A Lesson In Security

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Lesson In Security
Newsgroups: sci.crypt,alt.hackers.malicious
Date: Wed, 10 Jul 2002 02:26:37 GMT
ashwood@msn.com (Joseph Ashwood) writes:
Expected value of secret: credit card probably < $100,000, and in US maximum of $50 provided fraud is reported in a timely manner
Lifetime of secret: 3 years
System lifetime: 40 years
Acceptable insurance against leakage: 1:2^32, or insurance = 2^32 seconds
Expected decrease in breaking cost: beat Moore's law, double power every year

somewhat different way of looking at part of the credit card issue:
https://www.garlic.com/~lynn/2001h.html#61 Net banking, is it safe????

.... or security proportional to risk

note that merchants are typically held financially responsible (this isn't the consumer centric view of fraud ... this is the fraud centric view of potential exposure).

one of the things that x9.59 is targeted at addressing is removing the cc# from an exploit as a secret (aka capturing an existing transaction log with cc# is not sufficient to execute fraudulent transactions). misc refs:
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Unisys A11 worth keeping?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Unisys A11 worth keeping?
Newsgroups: alt.folklore.computers
Date: Wed, 10 Jul 2002 15:08:52 GMT
Pete Fenelon writes:
That NeXT look and feel is seductive. Very clean and functional. I used AfterStep, Bowman and WindowMaker for quite a while -- but nothing quite matches the feel of NeXTstep on "black hardware". I was always amazed at how fast my little NeXTstation used to fling windows and graphics round the screen - 25MHz 68040 consistently outperforming my P233 with accelerated graphics card, and often as nippy as my P3/500!

side note, next platform was mach out of CMU ... aka
https://www.garlic.com/~lynn/2002i.html#54

misc. refs:
http://burks.bton.ac.uk/burks/foldoc/17/80.htm
http://www.channelu.com/NeXT/NeXTStep/3.3/nd/DevTools/14_MachO/MachO.htmld/index.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A Lesson In Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Lesson In Security
Newsgroups: sci.crypt
Date: Wed, 10 Jul 2002 15:30:54 GMT
Michael Sierchio writes:
Merchants are held responsible for mail order transactions. Any purchase in which the card is not physically presented is a mail order transaction, and that includes online purchases.

merchants are held responsible for just about any chargebacks. the rules of proof can be different with "card present" and "cardholder present" and whether tracks 1&2 were readable in the original auth and whether the merchant has the original signed receipt.

there are still things like the waiter swipe fraud that has been written up. a waiter in nyc had a little magstripe reader inside their jacket and when they took the card ... they also swiped it inside the jacket and recorded the information in a PDA. That night the contents of the PDA were mailed over the internet to somebody across the country or on the other side of the world. Within a couple hrs, counterfeit cards were on the street being used. Since the waiter swipe fraud recorded tracks 1&2 ... it would pass the electronic auth.

Other schemes create counterfeit cards using just transaction information (card number, expiration date, etc) ... and then scratch the magstripe. The scratch means that the card fails the magnetic swipe ... and the merchant then decides to take it with a physical impression (and manually enters the number on the POS terminal for the electronic auth).

There are also skimming & counterfeit cards with respect to debit (not just credit).

general
https://www.garlic.com/~lynn/subintegrity.html#fraud

some specifics.
https://www.garlic.com/~lynn/aadsm6.htm#pcards The end of P-Cards?
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm6.htm#pcards3 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aepay6.htm#ccfraud2 "out of control credit card fraud"
https://www.garlic.com/~lynn/aepay9.htm#skim High-tech Thieves Snatch Data From ATMs (including PINs)
https://www.garlic.com/~lynn/aepay10.htm#3 High-tech Thieves Snatch Data From ATMs (including PINs)
https://www.garlic.com/~lynn/2001f.html#40 Remove the name from credit cards!

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does Diffie-Hellman schema belong to Public Key schema family?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does Diffie-Hellman  schema belong to Public Key schema family?
Newsgroups: sci.crypt
Date: Wed, 10 Jul 2002 15:43:06 GMT
David Hopwood writes:
Can people please stop saying this? It is not true in general, and it is not even true for RSA once encoding methods are taken into account. I'm sure this "explanation" contributes to some of the confused ideas I've heard from newbies about signatures.

or i would claim that the degree of approximation to "reality" in the previous statement is possibly an order of magnitude closer than many of the descriptions I've heard about public/private key technology serving any useful business purpose (aka for instance any of the SSL operations that go on in the world).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HONE was .. Hercules and System/390 - do we need it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: HONE was .. Hercules and System/390 - do we need it?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 10 Jul 2002 16:18:24 GMT
one of the things that I liked about HONE was that they had an enormous appetite for pushing the technology envelope. They had an extremely demanding service and clientele (all the company's salesmen and field support people in the world). I supplied them with a heavily customized kernel and other functions.

The basic service delivery was a large CMS\APL (and then APL\CMS) application that provided a "padded cell" time-sharing environment with lots of customized functions that the user could invoke. The total dependency on APL gave HONE a very large appetite for CPU cycles. The major APL HONE application was code-named Sequoia (most of the time, a HONE user saw little or no native CMS user interface).

Starting with the 370/125 ... it was no longer possible for a salesman to place a machine order without the use of HONE (i.e. in the 360 days, a salesman could fill out an order form for a customer and get a machine, starting with 370/115&125 ... order specifications were generated by HONE thru a "configurator" application).

In the VM/370 release 2 time-frame ... I also provided HONE "shared modules" and PAM (CMS paged mapped filesystem support). Shared modules was a mechanism by which CMS executables could be identified as containing "shareable" segments. When the CMS supervisor went to load CMS executable code, it would look in the control information for the "shareable" option and then invoke CP kernel options to "load" the appropriate segments in "shared mode". The "shared modules" feature was dependent on the executables being resident in a paged mapped filesystem (not a normal CMS filesystem).

The base CP system had a method of defining shared segments ... but it used a mechanism that involved a kernel resident module that defined the memory space (and the segments of the memory that were "shareable") and the place in the CP filesystem where the memory image was to be located. Changes required rebuilding & rebooting the kernel. Furthermore, the invocation of the memory image was only available thru the virtual IPL/boot command.

"Shared Modules" had the advantage that there was no CP kernel processes involved (no rebuilding the kernel, etc). For VM/370 release 3, a subset of the CMS changes were merged into the product .... and a new CP interface to the standard CP saved memory images was created. This allowed an APL or GML processor to be loaded as shared segments within the CMS environment with out having to reboot the virtual machine. However, it continued to have enormous drawbacks compared to the shared module implementation. The full-blown paged mapped filesystem only saw limited release in the XT/370 product.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does Diffie-Hellman schema belong to Public Key schema family?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does Diffie-Hellman  schema belong to Public Key schema family?
Newsgroups: sci.crypt
Date: Wed, 10 Jul 2002 18:28:48 GMT
djohn37050@aol.com (DJohn37050) writes:
Yes, perhaps a better term for signing is perform an operation with the private key.

note

1) in the original posting both the term encrypt and decrypt were double quoted. in many instances this is used to alert the audience to a possible use of the term different (or broader &/or narrower) than what they might be expecting. In a more formal presentation ... there might have been a reference number and a detailed explanation in the appendix about what might be the audiences' expected use of the term vis-a-vis the particular use of the term:
https://www.garlic.com/~lynn/2002i.html#67

2) in the attached, i would claim that even the DSA private key "operation" is covered. The intent of doing the DSA private key "operation" is not so much to hide the data but to guarantee the integrity of the data. Note however there have been some mechanisms that do "encrypt" a whole message solely for integrity purposes (not for privacy reasons) ... although the "use" semantics might possibly be used to infer integrity (as opposed to "known" semantics which presumably refer to secrecy and privacy).

https://www.garlic.com/~lynn/secure.htm
encryption
(I) Cryptographic transformation of data (called 'plaintext') into form (called 'ciphertext') that conceals the data's original meaning to prevent it from being known or used. If the transformation is reversible, the corresponding reversal process is called 'decryption', which is a transformation that restores encrypted data to its original state.

(C) Usage note: For this concept, ISDs should use the verb 'to encrypt' (and related variations: encryption, decrypt, and decryption). However, because of cultural biases, some international usage, particularly ISO and CCITT standards, avoids 'to encrypt' and instead uses the verb 'to encipher' (and related variations: encipherment, decipher, decipherment).

(O) 'The cryptographic transformation of data to produce ciphertext.'

(C) Usually, the plaintext input to an encryption operation is cleartext. But in some cases, the plaintext may be ciphertext that was output from another encryption operation.

(C) Encryption and decryption involve a mathematical algorithm for transforming data. In addition to the data to be transformed, the algorithm has one or more inputs that are control parameters: (a) key value that varies the transformation and, in some cases, (b) an initialization value that establishes the starting state of the algorithm. [RFC2828]

(Reversible) transformation of data by a cryptographic algorithm to produce ciphertext, i.e. to hide the information content of the data. [ISO/IEC WD 18033-1 (12/2001)] [SC27]

The process of making information indecipherable to protect it from unauthorized viewing or use, especially during transmission or storage. Encryption is based on an algorithm and at least one key. Even if the algorithm is known, the information cannot be decrypted without the key(s). [AJP]
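
The reversibility called out in the definition above (if the transformation is reversible, decryption restores encrypted data to its original state) can be sketched with a toy XOR keystream (a hypothetical construction for exposition; it shows reversibility, not meaningful security):

```python
from itertools import cycle

def xor_transform(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key: applying the same transformation
    # twice with the same key restores the original plaintext
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"k3y"

ciphertext = xor_transform(plaintext, key)   # 'encrypt'
recovered = xor_transform(ciphertext, key)   # 'decrypt' is the same operation

assert recovered == plaintext
assert ciphertext != plaintext   # the original meaning is concealed (weakly, here)
```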


misc for completeness ... encryption is the "cryptographic transformation" and digital signature is a "value computed with a cryptographic algorithm".
digital signature
(I) A value computed with a cryptographic algorithm and appended to a data object in such a way that any recipient of the data can use the signature to verify the data's origin and integrity.

(I) 'Data appended to, or a cryptographic transformation of, a data unit that allows a recipient of the data unit to prove the source and integrity of the data unit and protect against forgery, e.g. by the recipient.'

(C) Typically, the data object is first input to a hash function, and then the hash result is cryptographically transformed using a private key of the signer. The final resulting value is called the digital signature of the data object. The signature value is a protected checksum, because the properties of a cryptographic hash ensure that if the data object is changed, the digital signature will no longer match it. The digital signature is unforgeable because one cannot be certain of correctly creating or changing the signature without knowing the private key of the supposed signer.

(C) Some digital signature schemes use an asymmetric encryption algorithm (e.g., see: RSA) to transform the hash result. Thus, when Alice needs to sign a message to send to Bob, she can use her private key to encrypt the hash result. Bob receives both the message and the digital signature. Bob can use Alice's public key to decrypt the signature, and then compare the plaintext result to the hash result that he computes by hashing the message himself. If the values are equal, Bob accepts the message because he is certain that it is from Alice and has arrived unchanged. If the values are not equal, Bob rejects the message because either the message or the signature was altered in transit.

(C) Other digital signature schemes (e.g., see: DSS) transform the hash result with an algorithm (e.g., see: DSA, El Gamal) that cannot be directly used to encrypt data. Such a scheme creates a signature value from the hash and provides a way to verify the signature value, but does not provide a way to recover the hash result from the signature value. In some countries, such a scheme may improve exportability and avoid other legal constraints on usage. [RFC2828]

A cryptographic method, provided by public key cryptography, used by a message's recipient and any third party to verify the identity of the message's sender. It can also be used to verify the authenticity of the message. A sender creates a digital signature for a message by transforming the message with his or her private key. A recipient, using the sender's public key, verifies the digital signature by applying a corresponding transformation to the message and the signature. [AJP]

Data appended to, or a cryptographic transformation of, a data unit that allows a recipient of the data unit to prove the origin and integrity of the data unit and protect the sender and the recipient of the data unit against forgery by third parties, and the sender against forgery by the recipient. [ISO/IEC 11770-3: 1999]

Data appended to, or a cryptographic transformation of, a data unit that allows the recipient of the data unit to prove the origin and integrity of the data unit and protect against forgery, e.g. by the recipient. [ISO/IEC FDIS 15946-3 (02/2001)]

A cryptographic transformation of a data unit that allows a recipient of the data unit to prove the origin and integrity of the data unit and protect the sender and the recipient of the data unit against forgery by third parties, and the sender against forgery by the recipient. NOTE - Digital signatures may be used by end entities for the purposes of authentication, of data integrity, and of non-repudiation of creation of data. The usage for non-repudiation of creation of data is the most important one for legally binding digital signatures. [ISO/IEC 15945: 2002] [SC27]

A digital signature is created by a mathematical computer program. It is not a hand-written signature nor a computer-produced picture of one. The signature is like a wax seal that requires a special stamp to produce it, and is attached to an Email message or file. The origin of the message or file may then be verified by the digital signature (using special tools). [RFC2504]

A method for verifying that a message originated from a principal and that it has not changed en route. Digital signatures are typically generated by encrypting a digest of the message with the private key of the signing party. [IATF][misc]

A non-forgeable transformation of data that allows the proof of the source (with non-repudiation) and the verification of the integrity of that data. [FIPS140]

Data appended to, or a cryptographic transformation of, a data unit that allows the recipient of the data unit to prove the origin and integrity of the data unit and protect against forgery, e.g. by the recipient. [ISO/IEC 9798-1: 1997] [SC27]


as an aside in the above with regard to non-repudiation .... see detailed definition involving requirement for non-repudiation service(s).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Does Diffie-Hellman schema belong to Public Key schema family?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Does Diffie-Hellman  schema belong to Public Key schema family?
Newsgroups: sci.crypt
Date: Wed, 10 Jul 2002 19:16:36 GMT
oh, and i forgot ... also from
https://www.garlic.com/~lynn/secure.htm
digital signature algorithm (DSA)
(N) An asymmetric cryptographic algorithm that produces a digital signature in the form of a pair of large numbers. The signature is computed using rules and parameters such that the identity of the signer and the integrity of the signed data can be verified. [RFC2828] This algorithm uses a private key to sign a message and a public key to verify the signature. It is a standard proposed by the U.S. Government. [misc]

Digital Signature Standard (DSS)
(N) The U.S. Government standard that specifies the Digital Signature Algorithm (DSA), which involves asymmetric cryptography. [RFC2828] A U.S. Federal Information Processing Standard proposed by NIST (National Institute of Standards and Technology) to support digital signature


==================================================

from FIPS186-2 ... it doesn't actually make the statement that the specified algorithms are all "cryptographic" algorithms. However, there is fairly common use of the definition of "encrypt" or "encryption" under which the DSS algorithmic transformation of the SHA-1 would be considered a cryptographic transformation (as per its use in the common definitions for DSA & DSS above).

The FIPS186-2 does mention some of the business processes involved for "private" keys ... aka "never shared" and "can be performed only by the possessor of the user's private key". Even with the common definitions of both DSA & DSS mentioning cryptographic transformation ... I still felt that I might make use of the quotations around "encrypt" and "decrypt" in an attempt to avoid any knee-jerk reaction to the particular use.
https://www.garlic.com/~lynn/2002i.html#67

FIPS186-2 can be found at:
http://csrc.nist.gov/publications/fips/

from fips186-2
Explanation: This Standard specifies algorithms appropriate for applications requiring a digital, rather than written, signature. A digital signature is represented in a computer as a string of binary digits. A digital signature is computed using a set of rules and a set of parameters such that the identity of the signatory and integrity of the data can be verified. An algorithm provides the capability to generate and verify signatures. Signature generation makes use of a private key to generate a digital signature. Signature verification makes use of a public key which corresponds to, but is not the same as, the private key. Each user possesses a private and public key pair. Public keys are assumed to be known to the public in general. Private keys are never shared. Anyone can verify the signature of a user by employing that user's public key. Signature generation can be performed only by the possessor of the user's private key. A hash function is used in the signature generation process to obtain a condensed version of data, called a message digest (see Figure 1). The message digest is then input to the digital signature (ds) algorithm to generate the digital signature. The digital signature is sent to the intended verifier along with the signed data (often called the message). The verifier of the message and signature verifies the signature by using the sender's public key. The same hash function must also be used in the verification process. The hash function is specified in a separate standard, the Secure Hash Standard (SHS), FIPS 180-1. FIPS approved ds algorithms must be implemented with the SHS. Similar procedures may be used to generate and verify signatures for stored as well as transmitted data.

=====
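
The signature generation and verification flow quoted from FIPS 186-2 above can be sketched with the DSA equations over deliberately tiny parameters (the domain parameters, key, fixed k, and the checksum standing in for SHA-1 are all toy assumptions; real DSA needs large parameters, the full SHA digest, and a fresh random k per signature):

```python
# Toy DSA-style signature (insecure parameters, illustration only):
# q divides p-1 (22 = 2*11) and g = 4 has order q mod p
p, q, g = 23, 11, 4

x = 7                # private key: never shared
y = pow(g, x, p)     # public key: assumed known to the public

def digest(message: bytes) -> int:
    # stand-in for SHA-1: a simple checksum reduced mod q so the
    # toy parameters stay small (real DSA feeds the full SHA digest)
    return sum(message) % q

def sign(message: bytes, k: int) -> tuple:
    # signature generation can be performed only with the private key x;
    # k is the per-message secret (must be random and fresh in real use)
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (digest(message) + x * r)) % q
    return r, s

def verify(message: bytes, r: int, s: int) -> bool:
    # anyone can verify using only the public key y
    w = pow(s, -1, q)
    u1 = (digest(message) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r

r, s = sign(b"hello", k=3)
print(verify(b"hello", r, s))   # True
print(verify(b"hellp", r, s))   # False: the digest no longer matches
```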

however FIPS186-2 does specifically refer to ECDSA as being a cryptographic transformation:

=====
1. INTRODUCTION

This publication prescribes three algorithms suitable for digital signature (ds) generation and verification. The first algorithm, the Digital Signature Algorithm (DSA), is described in sections 4-6 and appendices 1-5. The second algorithm, the RSA ds algorithm, is discussed in section 7 and the third algorithm, the ECDSA algorithm, is discussed in section 8 and recommended elliptic curves in appendix 6.

7. RSA DIGITAL SIGNATURE ALGORITHM

The RSA ds algorithm is a FIPS approved cryptographic algorithm for digital signature generation and verification. This is described in ANSI X9.31.

8. ELLIPTIC CURVE DIGITAL SIGNATURE ALGORITHM (ECDSA)

The ECDSA ds algorithm is a FIPS approved cryptographic algorithm for digital signature generation and verification. ECDSA is the elliptic curve analogue of the DSA. ECDSA is described in ANSI X9.62. The recommended elliptic curves for Federal Government use are included in Appendix 6.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Fw: HONE was .. Hercules and System/390 - do we need it?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fw: HONE was .. Hercules and System/390 - do we need it?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 10 Jul 2002 20:33:43 GMT
wmklein@IX.NETCOM.COM (William M. Klein) writes:
I don't want to get into an "How long have you been a systems programmer" thread, but it always used to "amuse" me when "we, customers" would get APL "dumps" when using some of the predecessors to IBMLink. I once (again, as a customer) worked with someone in IBM for quite a while because one of the reproducible "failures" would put me into native CMS - so that I could "search around" and find LOTS of interesting stuff that I was never supposed to see.

various people have commented that the service processor on the 3090 would periodically let some CMS'es show thru (aka the 3090 service processors were a pair of 4361s running a customized version of vm/370 release 6 ... with all the service panels written in "IOS3270").

random past posts:
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#60 Living legends
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HONE

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: HONE
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 10 Jul 2002 21:48:34 GMT
Anne & Lynn Wheeler writes:
one of the things that I liked about HONE was that they had an enormous appetite for pushing the technology envelope. They had an extremely demanding service and clientele (all the company's salesmen and field support people in the world). I supplied them with a heavily customized kernel and other functions.

One of the other functions was SMP support. I had released the Resource Manager as a PRPQ ... which quickly migrated to a standard product. I was told that it was going to be the first charged-for licensed SCP code (and so they made me spend all this time with business practices people regarding establishing the ground rules for licensing and charging for SCP code; ... code charged for prior to that had been non-SCP/non-kernel code ... aka applications, etc). Original "blue letter" for availability with VM/370 Release 3
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

It turns out the resource manager was a lot of code that had been dropped in the CP/67 to VM/370 rewrite ... and the resource manager code was SMP sensitized (from cp/67 smp support). I was also using the code as part of a 5-way SMP project called VAMPS (that never shipped as a product, but I got to put a lot of stuff down into the micro-code, more than had been in either VMA or ECPS). In any case, when SMP support was incorporated into the standard product with VM/370 release 4, something like 80 percent of the Resource Manager code migrated into the base (non-charged for SCP code).

Prior to that, because of HONE's APL-affinity and hunger for CPU cycles, I built a VM/370 release 3 system for HONE with SMP support and they upgraded all the processors to 2-cpu units (at least at the US HONE complex in california).

misc. microcode discussions
https://www.garlic.com/~lynn/submain.html#360mcode

misc. smp discussions
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

McKinley Cometh

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: McKinley Cometh...
Newsgroups: comp.os.vms,comp.arch
Date: Thu, 11 Jul 2002 02:29:02 GMT
David Gay writes:
Some order confusion. PowerPC was derived from Power. Power was used in of IBM's workstation products of the time -- how are you defining "mainstream products" ?. I'm sure somebody here can fill in the details on the other Power_n versions.

801 came out of research. Then there was a big project called Fort Knox that would have replaced the numerous m'code engines around the company (including those used in the low-end and mid-range 370s); that got killed. At about this time I believe you started to see some 801 chip engineers showing up working on risc projects at other companies.

Then the 801 ROMP (Research OPD Micro Processor) project was started to be an office products displaywriter follow-on with CPr as the base operating system (and the PL.8 language). I believe there was then some analysis that while a ROMP-based displaywriter was cost effective given enough seats sharing the same machine ... the least expensive ROMP-based displaywriter was still more expensive than the most expensive acceptable displaywriter configuration. That spawned the morphing of ROMP/CPr into the PC/RT workstation, using the company that had been hired to do the PC/IX port .... doing the port to the VRM machine abstraction layer (which retained some amount of the original PL.8 technology & technicians).

Then came RIOS/POWER (and RS/6000, as follow-on to PC/RT) ... and then the somerset project (joint with motorola) for power/pc (aka 601) (also involving apple). Up until somerset/powerpc a basic premise of 801 chip designs had been no cache-coherent shared-memory multiprocessing (actually no multiprocessing at all, except for a special 4-way RSC, aka "RIOS single chip", implementation that didn't support cache coherency). RS/6000 workstations continued on with both RIOS/POWER & POWER/PC chipsets (for some time you could tell power from power/pc based on whether they supported multiprocessor configurations or not).

With the as/400 moving to a power/pc chipset ... some things sort of came full circle back to Fort Knox.

motorola bought out somerset in '98 ... and ibm came out with a chipset that was a rios/powerpc merge.

27 years of IBM risc:
http://www.rootvg.net/column_risc.htm

note the above leaves out the CPr & PL.8 work. It also leaves out Fort Knox, and it leaves out the PC/RT using ROMP, which was targeted as an office products division (aka OPD) displaywriter follow-on. There is also a comment about aix/ps2 in the above. AIX for the pc/rt was an AT&T System V port by the same people that had done the pc/ix implementation. AIX/PS2 (and its companion AIX/370) was a Locus implementation from UCLA. There was also a BSD port to the pc/rt from ibm called AOS (that ran on the bare metal w/o any vrm).

random url with locus refs:
http://plasmid.dyndns.org:81/plasmidweb/joehopfield.htm

random past ROMP, somerset, fort knox postings:
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/98.html#27 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#129 High Performance PowerPC
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#28 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001j.html#37 Proper ISA lifespan?
https://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
https://www.garlic.com/~lynn/2002g.html#12 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#63 Sizing the application

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HONE

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HONE
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 11 Jul 2002 02:57:01 GMT
edgould@AMERITECH.NET (Edward Gould) writes:
Anne & Lynn,

When you were coding the SMP 67 support, what was thought at the time to be the "maximum" number of CPs?


Charlie did most of the original cp/67 smp support (he also invented the compare&swap instruction .... the mnemonic CAS was chosen because they are Charlie's initials); I just made sure that my code was also multiprocessor enabled, from both a coding practice standpoint and some choice of implementation strategies that were conducive to multiprocessor operation.

Any thought of a maximum at that time was more than two ... Charlie had previously worked on the only 360/67 triplex (and I remember seeing a 360/62 system reference that called for 4-way smp).

There were two custom 370 SMP projects (predating VM/370 release 4) ... one was VAMPS, which I worked on, that would support up to five-way (because of a hardware limitation), and something called "logical machines" that I worked on with Charlie and a couple of other people in cambridge (logical machines was a 16-way SMP using 158 processor technology). Of course we had engineers in KGN & POK for the actual hardware stuff. Neither VAMPS nor logical machines shipped as a product.

random ref:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

HONE

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HONE
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 11 Jul 2002 03:11:55 GMT
SEYMOUR.J.METZ@CUSTOMS.TREAS.GOV (Shmuel Metz , Seymour J.) writes:
I don't know about IBM, but industry wide it was around 16, with big clusters at 2 and 4. The idea of 256-way and up would have been considered a wild fantasy.

The problem with 360 & 370 was the extremely strong memory consistency requirement. I've claimed that part of the very strong aversion to SMP & any memory consistency in 801 for so long was due to the problems encountered with the 360/370 strong memory consistency model. See the "27 years of IBM risc" reference in (thread in comp.arch somewhat sidetracked about mvs running on power):
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh

HONE (US HONE in california) in the late '70s was an 8-way cluster of 2-processor smp machines. The closest were possibly some airline res TPF systems ... but TPF didn't have SMP support ... so those were purely single processors. As an aside (topic drift) ... IBM mainframes tended to "suffer" a 10 percent thruput slowdown in 2-way configurations (compared to uniprocessor) to allow for cross-cache chatter. The 3081 was not a full IBM SMP ... being two processors sharing some number of hardware components. The 3083 was something of an afterthought for the TPF market .... disabling the 2nd processor allowed the cycle delay for cross-cache chatter to be eliminated, giving straight single-processor thruput.

Later, when we were doing HA/CMP ... we also participated in SCI and FCS standards activities .... looking at both 256-machine clusters (with FCS) as well as 256-machine shared memory. Both Sequent and DG produced 256-processor intel SMPs using the dolphin SCI chip. Convex produced a 256-processor hp/risc SMP using custom SCI hardware.

random refs:
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001l.html#16 Disappointed
https://www.garlic.com/~lynn/2001n.html#83 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
