List of Archived Posts

2005 Newsgroup Postings (08/31 - 09/18)

Article: The True Value of Mainframe Security
Intel engineer discusses their dual-core design
Innovative password security
What ever happened to Tandem and NonStop OS ?
What ever happened to Tandem and NonStop OS ?
What ever happened to Tandem and NonStop OS ?
Innovative password security
What ever happened to Tandem and NonStop OS ?
EBCDIC to 6-bit and back
EBCDIC to 6-bit and back
What ever happened to Tandem and NonStop OS ?
What ever happened to Tandem and NonStop OS ?
Is there any RFC for telnet proxy?
One more about SYRES Sharing
Multicores
DUMP Datasets and SMS
DUMP Datasets and SMS
DUMP Datasets and SMS
address space
address space
address space
Multicores
Multicores
What ever happened to Tandem and NonStop OS ?
Hi-tech no panacea for ID theft woes
Hi-tech no panacea for ID theft woes
What ever happened to Tandem and NonStop OS ?
What ever happened to Tandem and NonStop OS ?
Canon Cat for Sale
Documentation for the New Instructions for the z9 Processor
What ever happened to Tandem and NonStop OS ?
z/VM performance
PKI Certificate question
Digital Singatures question
What is CRJE
PKI
PKI
CRJE and CRBE
storage key question
What ever happened to Tandem and NonStop OS ?
how password is stored and check the authentication??
how password is stored and check the authentication??
VMFPLC2 to load EREP PTFs
Security of Secret Algorithm encruption
hasp, jes, rasp, aspen, gold
HASP/ASP JES/JES2/JES3

Article: The True Value of Mainframe Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Article: The True Value of Mainframe Security
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 31 Aug 2005 18:34:29 -0600
"Shmuel Metz (Seymour J.)" <shmuel+ibm-main@PATRIOT.NET> writes:
IBM had RSS and General HoneyBULL had Multics.

multics was on the 5th floor of 545 tech sq.
https://www.multicians.org/

cp67 and misc. other stuff
https://www.garlic.com/~lynn/2004o.html#38 SHARE reflections

was done by the science center on the 4th floor of 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

as part of cp67 morphing into vm370 ... the group absorbed the boston programming center on the 3rd floor (545 tech sq). eventually the vm370 group outgrew the 3rd floor and moved out to the old sbc building in burlington mall (this was after sbc had been sold off as part of some settlement).

the science center machine room was on the 2nd floor.

refs:
https://www.garlic.com/~lynn/2005o.html#45 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005o.html#46 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005o.html#47 Article: The True Value of Mainframe Security

and another ref
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

from
https://www.garlic.com/~lynn/2005k.html#30 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2005k.html#35 Determining processor status without IPIs

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Intel engineer discusses their dual-core design

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intel engineer discusses their dual-core design
Newsgroups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch,alt.folklore.computers
Date: Wed, 31 Aug 2005 19:53:56 -0600
keith writes:
Wasn't the 3033 a 3168 on steriods? ...complete with dual I-streams?

303x machines were organized to use the 303x channel director.

the 370/158 had integrated channels ... i.e. the 158 engine was shared between the microcode that executed 370 instructions and the microcode that executed channel (I/O) programs.

for the 303x channel director they took the 370/158 and removed the microcode that executed 370 instructions leaving just the channel execution microcode.

a 3031 was a 370/158 remapped to use a channel director box ... i.e. a 370/158 with the channel program microcode removed, leaving the engine dedicated to executing the 370 instruction microcode.

in some sense a single processor 3031 was a two 158-engine smp system ... but with one of the 158 engines dedicated to running the 370 instruction microcode and the other engine dedicated to running the channel program microcode.

a 3032 was a 370/168 modified to use the 303x channel director.

a 3033 started out being a 370/168 wiring diagram remapped to new chip technology. the 168 chip technology was 4 circuits per chip. the 3033 chip technology was about 20% faster but had about ten times as many circuits per chip. the initial straight wiring remap would have resulted in the 3033 being about 20% faster than the 168-3 (using only 4 circuits/chip). somewhere in the cycle, there was a decision to redesign critical sections of the machine to better utilize the higher circuit density ... which eventually resulted in the 3033 being about 50% faster than the 168-3.

basic 3033 was single processor (modulo having up to three 303x channel directors for 16 channels). you could get two-processor (real) 3033 smp systems (not dual i-stream).

there were some internal issues with the 3031 being in competition with the 4341 ... the 4341 having significantly better price/performance. A cluster of six 4341s was also cheaper than a 3033, with much better price/performance and higher aggregate thruput. Each 4341 could have 16mbytes of real storage and 6 channels, for an aggregate of 96mbytes of real storage and 36 i/o channels. A single processor 3033 was still limited to 16 i/o channels and 16mbytes.

somewhat in recognition of the real storage constraint on 3033 thruput ... a hack was done to support 32mbytes of real storage even tho the machines had only 16mbyte addressing. A standard page table entry had 16bits: a 12bit page number (with 4k pages giving 24bit real storage addressing), 2 defined bits, and two undefined bits. The undefined bits were remapped on the 3033 to be used in specifying real page numbers. That allowed up to a 14bit page number ... with 4k pages giving up to 26bit real storage addressing (64mbytes). channel program idals had been introduced with 370 ... allowing for up to 31bit real storage addressing (even tho only 24bits were used). This allowed the operating system to do page I/O into and out of storage above the 16mbyte line.
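
as a minimal sketch of the bit arithmetic involved (python; the exact bit positions within the page table entry are assumed for illustration, not taken from 3033 documentation):

def real_addr_3033(pte: int, byte_offset: int) -> int:
    # assumed layout: 12bit page number in the high bits, 2 defined bits,
    # and the two formerly-undefined low bits reclaimed as high-order
    # page-number bits
    page = (pte >> 4) & 0x0FFF                    # the original 12bit page number
    page |= (pte & 0x0003) << 12                  # fold in the two reclaimed bits -> 14bits
    return (page << 12) | (byte_offset & 0x0FFF)  # 4k pages -> 26bit real address

print(hex(real_addr_3033(0xFFFF, 0xFFF)))         # 0x3ffffff ... 64mbytes - 1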

around the same time that the product test lab (bldg. 15) got their 3033, they also got a brand new engineering 4341 (for the same purpose, doing channel and disk i/o testing). we could co-opt the 4341 in much the same way that the 3033 was co-opted. In fact, for a period of time, I had better access to the 4341 for running tests than the 4341 product people in endicott did; as a result I got asked to run some number of benchmarks on the bldg. 15 4341 for the endicott 4341 product people. minor past refs:
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#37 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2003.html#10 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof

some number of past posts on 303x channel director:
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/2000c.html#69 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#21 S/360 development burnout?
https://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002i.html#23 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002p.html#59 AMP vs SMP
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
https://www.garlic.com/~lynn/2003.html#39 Flex Question
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004.html#9 Dyadic
https://www.garlic.com/~lynn/2004.html#10 Dyadic
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#26 CAS and LL/SC
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005h.html#40 Software for IBM 360/30
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof

in the late 70s and early 80s, 4341 competed with and sold into the same market segment as vax machines
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000c.html#83 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#10 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#1 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2002k.html#3 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003c.html#19 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#0 big buys was: Tubes in IBM 1620?
https://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003d.html#61 Another light on the map going out
https://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004m.html#63 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#10 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#12 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Innovative password security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Innovative password security
Newsgroups: alt.computer.security
Date: Thu, 01 Sep 2005 15:50:37 -0600
nunodonato writes:
sorry to disagree but i prefer the master password. remembering ONE password is not hard, even if it has to be a bit more complicated than the usual.

biometric scans are very secure, thats for sure, but its no so pratical.. and you need another device for that. what happens when you go to a place where it is not available?


the issues are what are the threats and the countermeasures.

biometric information can be left around all over the place ... and once compromised it can be a lot more difficult to re-issue a thumb than it is to replace a compromised password (although there have been a couple recent news items attempting to address compromised biometrics).

frequently, access passwords tend to be shared-secrets .... they tend to be exposed in a lot more different places ... it is one of the reasons for security recommendations that there has to be a unique shared-secret for every unique security environment. This in turn leads to people having several scores of different (shared-secret) passwords, which results in a difficult (human) memory problem and, in turn, in the (shared-secret) password management problems.
https://www.garlic.com/~lynn/subintegrity.html#secret

The master password scenario tends to be simply a secret ... as opposed to a shared-secret ... which tends to imply that there are a much fewer places where they are exposed and may be subject to compromise.

The basic model tends to be that there is some sort of container for the authentication material ... either a software/file container ... or a separate hardware token container.

The (master) password tends to be a countermeasure for a lost/stolen "container" (whether it is a real physical container or purely software container).

At a 100k foot level ... it is two-factor authentication:
• container (hardware or software), something you have
• (secret only, not shared-secret) password, something you know
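
a minimal sketch of that container-plus-master-password model (python, using the pyca cryptography package; the salt handling, iteration count, and container contents are illustrative choices):

import os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def container_key(master_password: bytes, salt: bytes) -> bytes:
    # derive the container key from the (secret, not shared-secret) master password
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))

salt = os.urandom(16)                               # stored alongside the container
f = Fernet(container_key(b"one master password", salt))
container = f.encrypt(b"site-a: pw1\nsite-b: pw2")  # the something you have
print(f.decrypt(container))                         # opens only with the something you know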

... lots of 3-factor related authentication posts
https://www.garlic.com/~lynn/subintegrity.html#3factor

multi-factor authentication carries with it the implication that the different authentication factors are subject to different kinds of vulnerability and threats (for instance a something you are biometric value and a something you know password value transmitted in the same communication may be subject to a common eavesdropping vulnerability and replay attack ... negating the benefit of having multi-factor authentication).

the overall integrity can be related to how easy it is to steal the container, whether the owner realizes the container has been stolen (physical or software copy), and how hard it is to crack the (master) pin/password.

a separate hardware container may be harder to steal than a software file resident on an extremely vulnerable internet connected PC. A vulnerable, internet connected PC may also be subject to keyloggers (capturing the master password) and sniffing (capturing the actual shared-secret passwords as they are being used).

So compare the various threat models to a hardware token with a private key and infrastructures that replace shared-secret password registration with registration of public keys ... and digital signature verification in lieu of password checking.

The requirement for unique shared-secret registration for every unique security domain exists because the shared-secret is used for origination as well as authentication (i.e. knowing the shared-secret can be sufficient for impersonation). A public key can only be used for authentication, but not for impersonation ... so the same public key can be registered in a large number of different places w/o increasing the threat of impersonation (which can happen if the same shared-secret is repeatedly used).

Correctly implemented digital signature protocols result in a unique value for every authentication, eliminating the threat of eavesdropping and replay attacks for impersonation.
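
a minimal sketch of such a challenge-response exchange (python, pyca cryptography, ed25519; the nonce-signing flow is a generic illustration, not any particular standard):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# enrollment: the public key (not a shared-secret) is registered
token_key = Ed25519PrivateKey.generate()   # stays inside the hardware token
registered_pub = token_key.public_key()    # stored by the relying party

# each authentication: verifier sends a fresh nonce, token signs it
nonce = os.urandom(32)                     # unique value per authentication defeats replay
signature = token_key.sign(nonce)

try:
    registered_pub.verify(signature, nonce)  # good for authentication only,
    print("authenticated")                   # not for impersonation
except InvalidSignature:
    print("rejected")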

A real hardware token tends to eliminate electronic, software theft (which can happen with emulated software containers).

So a hardware token tends to require physically stealing the object.

For this situation, a pin/password (required for token operation) is a countermeasure for a physically lost/stolen token ... as long as the pin/password hasn't been written on the token.

A hardware token with a built-in fingerprint sensor ... might leave around a latent print on the sensor ... so if the token is stolen ... the thief may be able to lift the latent print and spoof the sensor. Some organizations are featuring "line sensors" (where you have to drag your finger across the sensor) as a security enhancement (compared to full finger sensors where a latent print may be left around).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Thu, 01 Sep 2005 19:55:40 -0600
scott@slp53.sl.home (Scott Lurndal) writes:
Actually, capacity planning courses of the day (do they still have them?) would usually [t][pr]each that there was a knee in the throughput/utilization curve somewhere around 65-70%. Higher utilizations would led to less optimal throughput. IIRC.

depending on the system structure. much of the performance modeling and workload profiling that was pioneered at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

is what eventually evolved into capacity planning
https://www.garlic.com/~lynn/submain.html#bench

we got cp67 (and then vm370) tuned to the point ... a lot of it was the rewrites i had done for i/o subsystem, paging, replacement algorithms, dispatching, and scheduling ..
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#adcon
https://www.garlic.com/~lynn/submain.html#mmap

where it was common to have systems with mixed-mode workloads operating at 100 percent cpu utilization and essentially immediate interactive response.

systems that have difficulty doing dynamic resource allocation well ... have problems handling instantaneous service requests. A lot of real-world environments can have very bursty types of activity. if you have difficulty doing dynamic resource allocation well ... the only method of handling bursty activity is operating with a significant amount of spare headroom.
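
a toy illustration of that knee, using the simple m/m/1 mean response time formula R = S/(1-utilization) (the 10ms service time is an arbitrary number):

service = 10.0                                # ms per request, arbitrary
for util in (0.50, 0.65, 0.70, 0.90, 0.99):
    resp = service / (1.0 - util)             # m/m/1 mean response time
    print(f"{util:4.0%} busy -> {resp:7.1f} ms mean response")
# response grows slowly into the mid-60s, then climbs steeply ... unless the
# system can reallocate resources fast enough to absorb the bursts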

it helps being able to do a well balanced system configuration ... not only system structures and operation ... but also a decent job of balanced hardware configuration.

note however this aspect can be constantly changing based on the combination of workload and advances in hardware. dynamic adaptive resource management can be interesting across a wide range of workloads and hardware configurations. part of this may sometimes be covered as graceful degradation under stress conditions.

for instance the following theme is about typical hardware configurations where cpu and real storage resources increased by a factor of 50 ... but disk thruput only increased by a factor of 5 (relative system disk thruput decreased by an order of magnitude in the period). as more and more systems became primarily disk bottlenecked rather than processor bottlenecked (over this period), optimizing disk I/O (including caching) became more and more important.
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph circuits

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Fri, 02 Sep 2005 11:11:25 -0600
scott@slp53.sl.home (Scott Lurndal) writes:
Actually, capacity planning courses of the day (do they still have them?) would usually [t][pr]each that there was a knee in the throughput/utilization curve somewhere around 65-70%. Higher utilizations would led to less optimal throughput. IIRC.

re:
https://www.garlic.com/~lynn/2005p.html#3 What ever happened to Tandem and NonStop OS ?

however there are various kinds of other complex systems that experience non-graceful degradation operating near saturation.

air traffic and chicago airport
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?

railroad operation in texas
https://www.garlic.com/~lynn/2005o.html#29 Penn Central RR computer system failure?

countermeasures needed for traffic gridlock in downtown areas.

I've also claimed that as little as one percent or less feral drivers makes the difference between free flowing heavy traffic and stop and go. these are drivers that make frequent rapid lane changes, and also attempt to cross all lanes in close vicinity to exits and entrances. some interstate highway sections are being designed with feral driver countermeasures; I270 north of the DC beltway has a physical barrier between thru-lanes and local access lanes (with off/on ramps), i5 north of seattle, etc.

that still leaves feral drivers' frequent, rapid lane changes on the thru-lanes ... a single incident can result in cascading braking involving hundreds of vehicles ... and precipitate transition from heavy free flowing traffic to stop&go. one might be tempted to wonder, if driving privileges on interstates were revoked for feral drivers, whether you might see 5-10 percent gasoline conservation savings.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Fri, 02 Sep 2005 14:20:47 -0600
echomko_at_@polaris.umuc.edu (Eric Chomko) writes:
Do you have marketing numbers to back that up? Exactly what did DEC make microprocessor-wise that turned them any sort of profit from 1980 to 1992?

1988 idc report on vax 1978-1987, sliced and diced by us/non-us, model, year, etc.
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Innovative password security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Innovative password security
Newsgroups: alt.computer.security
Date: Fri, 02 Sep 2005 19:30:05 -0600
"Hairy One Kenobi" <abuse@[127.0.0.1]> writes:
Nice post.

The big argument between SSO (as the subject has called for at least seven years) and non-SSO has always been that loss of a single credential exposes everything, vs. username couplets stuck on Post-Its all over the place (been there, etc.)

Shame that we no longer have the option for *two* independent passwords (possibly one of HP/Compaq/DEC's patents). That was a useful compromise (as well as allowing the requirement for *two* people to authorise a privileged login)

But. The only way it ever works with any degree of safety is to not have the store on the (vulnerable) local machine.

And that brings the issue of a juicy target that you - as the user - has to trust absolutely. Excellent for corporations, not so hot for individuals, IMHO.


re:
https://www.garlic.com/~lynn/2005p.html#2 Innovative password security

with a person-centric token ... say with digital signature verification as the mechanism for implying something you have authentication (i.e. a hardware token that calculates the key pair and never exposes the private key) ... the person can determine how many tokens to have and/or how many environments to use each token with.

an institution might be concerned about the integrity of the token ... but using a single token with multiple institutions doesn't impact any specific institution. using a single token for multiple institutions or a unique token per institution ... is a person-centric consideration (modulo the integrity level of the token).

however if a person tends to carry all tokens on the same ring ... then whether they are carrying a single token or multiple tokens on that ring has little impact on the lost/stolen threat scenario ... they will all tend to be lost/stolen at the same time.

the objective of multiple tokens is met only if they are subject to independent threats ... if they are subject to a common threat then the advantage of multiple tokens is lost.

there is a similar argument about multiple credit cards as countermeasure for lost/stolen threat ... which is negated if they are all carried in the same wallet ... since the lost/stolen scenario tends to be the whole wallet ... not individual contents.

so if you really want to get fancy ... some topic drift to security proportional to risk:
https://www.garlic.com/~lynn/2001h.html#61

one of the other countermeasures to lost/stolen in an online environment is the person recognizing that there has been a lost/stolen compromise and reporting it (limiting the scope/duration of the compromise). many of the PC/software and pure password based infrastructures can suffer a lost/stolen compromise w/o the person recognizing it has happened.

in any case, in a person-centric scenario ... a person wishing to have multiple tokens ... should recognize that they would be using multiple tokens instead of single token as a countermeasure to common lost/stolen threat scenario ... which means that the person needs to be prepared to keep the different tokens physically separate.

some past posts on person-centric models
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
https://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
https://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb?>?
https://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
https://www.garlic.com/~lynn/2005m.html#37 public key authentication

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Sat, 03 Sep 2005 09:43:11 -0600
Larry Elmore writes:
Personally, I'd love to have some kind of directional EMP "gun" to zap the electrical systems of some of these feral drivers' cars. BTW, I love that term -- much politer than what I usually call them, and more descriptive to boot.

note that feral drivers' rapid lane changes and the resulting cascading braking can not only force transition from heavy flow to stop&go ... but also tend to create this accordion effect. The cascading braking accordion effect is one of the scenarios that is a heavy contributor to rear-end collisions.

there is this interesting intermediate heavy traffic flow ... where the traffic isn't quite heavy enuf to make a permanent transition to stop&go ... but the accordion effect brings some area of the traffic to a stop ... and then it opens up again ... until the next rapid lane change results in cascading braking and a repeat of the accordion effect. It is in the heaviest part of the accordion effect, where traffic (momentarily) comes to a stop, that there is the highest probability of rear-end collision.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

EBCDIC to 6-bit and back

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EBCDIC to 6-bit and back
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 03 Sep 2005 10:37:21 -0600
Alan Ackerman writes:
Maybe this will help?

From: <http://everything2.org/index.pl?node=ALC>

ALC (thing) by Mit Wahsmircs, Thu Mar 22 2001 at 12:44:24

Airlines Link Control - a primitive data communications protocol, devised by SITA, and therefore peculiar to the airline industry. It works by having a central hub poll outlying "interchanges" (cluster controllers). ALC is a 5 bit code, thus limiting the character set to the basic alphanumerics (no lower case here!), and a handful of punctuation marks. There is no error correction, merely a rudimentary error detection. If a transmission error is detected, REENTER (or in more minimalist implementations RENT) is displayed on the terminal, and the user is expected to re-input the failed command. If the link fails altogether UNAVBL is displayed - the user then waits until some long-suffering techie restores service (AVBL). In spite of its age and deficiencies, ALC is still quite widely used, as it is simple and it works. There is also a large amount of cheap, pre-owned terminal equipment available. Nowadays, one can expect to find ALC encapsulated in a marginally less archaic protocol (X.25, for example).


... snip ...

there are a couple other issues here. in the 70s, in conjunction with Moldow, my wife produced AWP39, which was a peer-to-peer networking architecture ... which caused a lot of heartburn in the SNA crowd. This was before she was con'ed into going to POK to be in charge of loosely-coupled architecture. A couple past posts mentioning AWP39:
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5

during her stint in pok, she produced Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

Part of the issue is that SNA is a misnomer ... not even having a networking layer ... it is primarily a centralized telecommunication structure. The first appearance of a network layer was in APPN ... which the SNA group non-concurred with even announcing. After 6-8 weeks of escalation, the APPN announcement letter was rewritten to remove any implication that APPN and SNA might be related. We actually had some number of friendly arguments with the chief APPN architect about why he didn't come work on a modern networking infrastructure.

My wife later did a stint as chief architect of amadeus (the other res system). One of the areas of contention was whether it would be x.25 based or sna based. My wife came down on the side of x.25 ... which led to a lot of lobbying to have her replaced ... which happened shortly thereafter. It turned out that it didn't do any good, since amadeus went x.25 anyway.

recent, random amadeus reference:
http://www.btnmag.com/businesstravelnews/headlines/article_display.jsp?vnu_content_id=1001015510 Star Alliance Cites 'Firm Commitments' For Amadeus IT

my wife also co-authored and presented the corporate response to a large fed gov. bid ... where she laid out the foundation for 3-tier architecture. We took that work and expanded on it during the SAA period ... which somewhat can be construed as the SNA crowd attempting to put the client/server genie back into the bottle ... i.e. return to the days of terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

going around making customer executive presentations on 3-tier architecture and the middle layer during this period tended to cause a lot of pushback from the SNA/SAA people
https://www.garlic.com/~lynn/subnetwork.html#3tier

for some other topic drift and folklore: as an undergraduate ... I had added ascii/tty terminal support to cp67. I had tried to do some tricks with the 2702 telecommunication controller ... which almost, but didn't actually, work. this led to a univ. project that reverse engineered the ibm channel interface, built our own channel board for a minicomputer, and programmed the minicomputer to emulate an ibm controller. somewhere there is this article that blames four of us for precipitating the oem, plug-compatible controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

the plug-compatible controller business has been blamed for contributing to FS
https://www.garlic.com/~lynn/submain.html#futuresys

specific reference
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

and quote from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

there have been numerous claims that many SNA characteristics were a result of the above strategy.

there is a FS & res. system tie-in. The folklore is that one of the final nails in the FS coffin was a study by the Houston Science Center. The FS hardware architecture was extremely complex ... and Houston did some modeling showing that if you moved the Eastern 370/195-based res system (one of the precursors to amadeus) to FS ... and the FS machine used the same level of hardware technology as the 195, it would have the thruput of ACP running on a 370/145 (the complexity of the hardware could result in something like a 30:1 thruput degradation).

... for even more drift ... some years ago, we got the opportunity to rewrite pieces of major res. systems. one that we completely rewrote from scratch was routes. a couple past posts:
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2005k.html#37 The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)
https://www.garlic.com/~lynn/2005o.html#24 is a computer like an airport?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

EBCDIC to 6-bit and back

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EBCDIC to 6-bit and back
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 03 Sep 2005 10:56:11 -0600
Anne & Lynn Wheeler writes:
Part of issue is that SNA is a misnomer ... not even having a networking layer ... it primarily is a centralized telecommunication structure. The first appearance of network layer was in APPC ... which SNA group non-concurred with even announcing. After 6-8 weeks escalation, the APPC announcement letter was rewritten to remove any implication that APPC and SNA might be related. We actually had some number of friendly arguments with the chief APPC architecture on why didn't he come work on a modern networking infrastructure.

oops, sorry, slight brain/finger check ... that is APPN not APPC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Sun, 04 Sep 2005 09:24:46 -0600
Peter Flass writes:
People follow too close; the more the driver in front of me rides his/her brakes, the farther I drop back. I seldome have to use my brakes at all. Of course when traffic is heavy, the more room you leave the more people cut in. To bring this back to computing, it would be interesting to run traffic simulations making different assumptions about driver behavior.

there was an article on such work within the past six months ... but I can't seem to find it now. quick search engine use doesn't find it ... although it turns up lots of articles on modeling driver and traffic behavior.

this is recent article that slightly touches on the subject
http://www.theoaklandpress.com/stories/090305/opi_200509030016.shtml

it sort of uses aggressive driving in lieu of my feral driver label ... although disregard for others can be somewhat related to undomesticated.

my observation wasn't on the behavior of the vast majority of the drivers ... but that it takes only a couple of drivers engaging in rapid lane changing to precipitate transition from heavy flow to stop&go.

another recent article (that i wasn't looking for) that slightly touches on following too closely
http://communitydispatch.com/artman/publish/article_1772.shtml

part of it is having spent a large amount of time on dynamic resource management, congestion control, and graceful degradation ... complex systems that fail in this respect hold some interest.
https://www.garlic.com/~lynn/subtopic.html#fairshare

when we were running the high-speed backbone ... we were not allowed to bid on the nsfnet backbone (which could be considered the operational transition to the modern internet ... as opposed to the technology). However, we did talk nsf into an audit of what we had running ... which resulted in the observation that what we already had running was at least five years ahead of all nsfnet bid submissions to build something new. however, I had implemented some congestion control stuff ... and something similar will appear in internet2 (so there is some claim that it is going to be closer to 20 years than 5 years).

similar past comments on the subject
https://www.garlic.com/~lynn/internet.htm#0
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003g.html#36 netscape firebird contraversy
https://www.garlic.com/~lynn/2004l.html#5 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Sun, 04 Sep 2005 12:02:46 -0600
ref:
https://www.garlic.com/~lynn/2005p.html#3 What ever happened to Tandem and NonStop OS?
https://www.garlic.com/~lynn/2005p.html#4 What ever happened to Tandem and NonStop OS?
https://www.garlic.com/~lynn/2005p.html#7 What ever happened to Tandem and NonStop OS?
https://www.garlic.com/~lynn/2005p.html#10 What ever happened to Tandem and NonStop OS?

at a more philosophical level ... there is a relationship between graceful degradation, failure resistance, systemic failures of complex systems, etc.

graceful degradation, failure resistance, and any resiliency in complex systems can come from

1) lots of redundant resources. an issue is that in a culture focused on the next quarterly report, there has been a tendency to optimize away all redundant resources (cutting costs, improving profit) ... with little planning for the 50year and 100year events.

2) extremely agile operation. this can be viewed from the perspective of Boyd's OODA-loops
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

where resources can be quickly reallocated to the biggest problem or bottleneck. this of course implies that people have a fundamental understanding of the reasons why things work the way they do and have accurate observations. in boyd's OODA-loop ... aka observe, orient, decide, and act ... the faster that you can iterate the OODA-loop, the more effective the actions. orient implies that you can place observations in context and understand them. some number of people may believe that they have a highly efficient OODA-loop ... mistakenly believing that they understand what is going on ... but actually it may be a very static environment and they are just performing their operations by rote. It is when things are constantly changing ... that the real test occurs of whether people really can observe, orient (understand), decide and act.

when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

some number of the systems were fast restart/recover/take-over. we also coined the terms disaster survivability and geographic survivability to differentiate from disaster recovery.
https://www.garlic.com/~lynn/submain.html#available

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is there any RFC for telnet proxy?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is there any RFC for telnet proxy?
Newsgroups: comp.security.firewalls
Date: Mon, 05 Sep 2005 10:23:13 -0600
xiebopublic writes:
Thank you very much for your advice. I've tried the search engine of www.rfc-editor.org, and it has no any result for "telnet proxy" or "telnet gateway". I also have tried google, but not result too. That's why I post it here.

one is pointed to by
http://www.rfc-editor.org/repositories.html

not specifically a search
https://www.garlic.com/~lynn/rfcietff.htm

but categorization (organized by keywords found in titles and abstracts) ... click on Term (term>RFC#) in RFCs listed by section ... and find/move down to "proxy"
proxy
see also firewall
3820 3666 3665 3620 3603 3527 3487 3413 3361 3319 3313 3261 3238 3143 3135 3040 2844 2843 2607 2577 2573 2543 2322 2273 2263 2186 1919


clicking on the RFC number brings up the RFC summary in the lower frame. clicking on the ".txt=nnn" field retrieves the actual RFC.

proxies started out being stub applications that did application-level sanity checking of incoming requests (aka you actually had an application that listened on the socket ... accepted the tcp connection ... did minimal processing and then created a new tcp connection to the "real" application, forwarding the information). much of the early checking was trying to catch things like buffer overflow exploits
https://www.garlic.com/~lynn/subintegrity.html#overflow

for instance

1919 I Classical versus Transparent IP Proxies, Chatel M., 1996/03/28 (35pp) (.txt=87374) (Refs 959, 1383, 1597)
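
a minimal sketch of such an application-level stub (python stdlib; the addresses and the crude length check are hypothetical stand-ins for real request checking):

import socket, threading

LISTEN = ("0.0.0.0", 2323)          # hypothetical front-end address
BACKEND = ("10.0.0.5", 23)          # hypothetical "real" telnet server
MAX_CHUNK = 512                     # crude stand-in for application-level sanity checks

def pump(src, dst, check=False):
    try:
        while (data := src.recv(4096)):
            if check and len(data) > MAX_CHUNK:
                break               # drop suspicious over-long input rather than forward it
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close(); dst.close()

with socket.create_server(LISTEN) as srv:
    while True:
        client, _ = srv.accept()                     # accept the incoming tcp connection
        backend = socket.create_connection(BACKEND)  # new tcp connection to the real application
        threading.Thread(target=pump, args=(client, backend, True), daemon=True).start()
        threading.Thread(target=pump, args=(backend, client), daemon=True).start()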

in contrast, early firewalls started out doing various kind of checking & filtering below the application level.

early on, you also had (port) wrappers ... possibly running on the same machine (rather than a boundary machine). the wrappers might provide things like different authentication checking (aka rather than have straight telnet userid/password ... a front-end providing more sophisticated authentication processes ... before directly contacting telnet). i don't have a keyword entry for wrappers.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

One more about SYRES Sharing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: One more about SYRES Sharing
Newsgroups: bit.listserv.ibm-main
Date: Mon, 05 Sep 2005 10:34:22 -0600
Terry Sambrooks writes:
Hi,

I have been following the dialogue about sharing SYSRES both within and across SYSPLEX, and I have an old fashioned concern.

Whilst accepting the technical feasibility, the issue I have is with the concept of introducing a potential single point of failure into an environment which may have reliability as one of its principal aims. The concept of disk mirroring and duplexing within modern DASD systems is fine, but if human error introduces a fault the consequences can be just a tad embarrassing.


starting in the early 80s, some study found that software & people errors were starting to dominate failures (rather than hardware failures).

a single sysres in combination with change-control staging between test, current, and previous volumes ... would be used to address the primary (non-hardware) failure modes. various dasd hardware redundancy techniques can be used to address the residual hardware failure issues.

the issue is whether any human resources are spent managing the correctness of independent but nearly identical sysres volumes ... or automating much of that process and devoting the human resources to managing a single, common change process.
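
a minimal sketch of that staged rotation (volume serials are hypothetical):

vols = {"test": "SYSRS3", "current": "SYSRS2", "previous": "SYSRS1"}

def promote(v: dict) -> dict:
    # staged change control: test becomes current, current becomes previous,
    # and the old previous volume is recycled as the next test-build target
    return {"test": v["previous"], "current": v["test"], "previous": v["current"]}

print(promote(vols))  # {'test': 'SYSRS1', 'current': 'SYSRS3', 'previous': 'SYSRS2'}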

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multicores

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multicores
Newsgroups: comp.arch
Date: Tue, 06 Sep 2005 10:18:46 -0600
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
To avoid context switching, you don't need to have another whole core. Just use register renaming, and have a multi-threaded CPU.

First, make the fastest possible single core you can.

Then, make it multithreaded, so that the only context switches you have are the unavoidable ones due to procedure calls.

Then, if you still have die area, add cores.


that was sort of the 370/195 dual i-stream proposal from the early 70s. the issue was that most codes kept the pipeline only about half-full. having dual instruction counters and dual sets of registers ... with the pipeline tagging registers and instructions by i-stream ... had a chance of maintaining close to the aggregate, peak thruput of the pipeline.

Amdahl in the early 80s ... had another variation on that.

running a straight virtual machine hypervisor ... resulted in a context switch on privileged instructions and i/o interrupts (between the virtual machine and the virtual machine hypervisor), including saving registers, other state, etc (and then restoring).

starting with virtual machine assist on the 370/158 and continuing with ECPS on the 138/148 to the LPAR support on modern machines ... more and more of the virtual machine hypervisor was being implemented in the microcode of the real machine ... aka the real machine instruction implementation (for some instructions) would recognize whether it was in real machine state or virtual machine state and modify the instruction decode and execution appropriately. one of the issues was that microcode tended to be a totally different beast, with little in the way of software support tools.
https://www.garlic.com/~lynn/submain.html#mcode

Amdahl 370 clones implemented an intermediate layer called macrocode that effectively looked and tasted almost exactly like 370 ... but had its own independent machine state. this basically allowed moving some virtual machine hypervisor code almost unchanged to the macrocode level ... w/o the sometimes difficult translation issues ... while eliminating the standard context switching overhead (register and other state save/restore).

it was also an opportunity to do some quick speed-up. standard 370 architecture allows for self-modifying instructions ... before stuff like speculative execution (with rollback) support, the checking to catch whether the previous instruction had modified the current (following) instruction being decoded & scheduled for execution ... frequently doubled 370 instruction elapsed processing time. macrocode architecture was specified as not supporting self-modifying 370 instruction streams.
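
a toy illustration of that hazard check (a two-stage fetch/execute loop in python, with an invented instruction encoding):

def run(mem):
    pc, flushes = 0, 0
    insn = mem.get(0)                  # instruction now executing
    nxt = mem.get(1)                   # following instruction, already decoded
    while insn and insn[0] != "halt":
        if insn[0] == "store":         # every store must be compared against the
            target, newop = insn[1:]   # already-decoded instruction ... the check
            mem[target] = newop        # 370 paid for on each store
            if target == pc + 1:
                nxt = newop            # hazard hit: flush and re-decode
                flushes += 1
        pc += 1
        insn, nxt = nxt, mem.get(pc + 1)
    return flushes

prog = {0: ("store", 1, ("halt",)), 1: ("spin",)}
print(run(prog))  # 1 ... without the check, the stale ("spin",) would have executed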

i've frequently claimed that the 801/risc formulation in the mid-70s,
https://www.garlic.com/~lynn/subtopic.html#801

was an opportunity to do the exact opposite of other stuff in the period: a combination of the failure of the future system project (extremely complex hardware architecture)
https://www.garlic.com/~lynn/submain.html#futuresys

and the heavy performance overhead paid by 370 architecture in supporting self-modifying instruction streams and very strong memory coherency (and its overhead) in smp implementations.

separating instruction and data caches and providing for no coherency support between stores to the data cache and what was in the instruction cache ... precluded even worrying about self-modifying instruction operation. no support for any kind of cache coherency ... down to the very lowest of levels ... also precluded such support for multiprocessing operations.

i've sometimes observed that the ibm/motorola/apple/etc somerset effort was sort of taking the rios risc core and marrying it with 88k cache coherency. recent, slightly related posting
https://www.garlic.com/~lynn/2005o.html#37 What ever happened to Tandem and NonStop OS?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DUMP Datasets and SMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DUMP Datasets and SMS
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 Sep 2005 14:37:44 -0600
"Edward E. Jaffe" writes:
SNA is indeed a lower layer than NJE. VTAM is a subsystem that implements SNA (e.g., LU2, LU6.2) and non-SNA (e.g., LU0) under z/OS, z/VM, and z/VSE -- not a protocol.

Why has TCP/IP so surpassed SNA?


and sna is not a network; basically SNA/vtam is a terminal control system ... it didn't even have a network layer. my wife and moldow did a peer-to-peer networking architecture (AWP39) in the early SNA days ... and got lots of grief from the SNA people ... slightly related posting
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back

this was before she was con'ed into going to POK to be in charge of loosely coupled architecture. In POK, she was responsible for Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

which didn't see much play until parallel sysplex ... except for possibly the ims hot-standby work.

APPN was possibly the first SNA-related architecture with network layer support. However, the SNA organization non-concurred with announcing APPN and it took several weeks of escalation until the APPN announcement letter was rewritten to not claim any relationship at all between SNA and APPN.

While arpanet and OSI had a true networking layer ... they shared a common characteristic with SNA ... effectively a homogeneous architecture. A big step forward for arpanet was the big cut-over to internetworking protocol on 1/1/83 (gateways and the internetworking of possibly heterogeneous networks).

I've often claimed that one of the reasons that the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

.. technology developed at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

... was larger than the arpanet from just about the beginning until sometime in the summer of 1985 ...
https://www.garlic.com/~lynn/internet.htm#22

was that the primary internal network technology effectively had a form of gateway support built into every node.

For instance, NJE implemented the traditional arpanet, osi, sna homogeneous architecture. NJE also made a serious mistake of intermixing networking protocol fields and job control fields. NJE was also scaffolded off the HASP networking control (early on some of the code still carried the "TUCC" customer identification). The HASP heritage basically mapped networking nodes into unused pseudo device table entries ... which (originally) had a limitation of 255 entries. A large HASP/JES2 system might have 60-80 pseudo devices, leaving only 170-190 entries free for defining network nodes.
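
a back-of-the-envelope sketch (names invented) of why that one-byte pseudo device table index was such a hard ceiling:

TABLE_SIZE = 255                 # one-byte index into the table

def free_node_slots(pseudo_devices):
    # local spool/unit-record pseudo devices and network node
    # definitions competed for the same 255 entries
    return TABLE_SIZE - pseudo_devices

print(free_node_slots(70))       # -> 185, in the 170-190 range above
# no configuration could describe a network that was already >255 nodes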

By the time NJE was announced and made available to customers, the internal network already had over 255 nodes (and the arpanet had approx. 250 nodes at the 1/1/83 switch-over ... a point in time when the internal network was nearing 1000 nodes).

NJE would trash traffic (even traffic purely passing thru) where it didn't have valid definitions for both the origin and destination nodes. Once the internal network exceeded the NJE table size ... you had to relegate NJE nodes to purely boundary end-points (they couldn't be trusted with generalized network traffic).

The arbitrary NJE intermixing of networking fields and job control fields frequently had the effect that if some network traffic originated from an NJE node at a different release level than the destination node ... slight field changes between releases could bring down the destination node's MVS system. In a world-wide network spanning numerous datacenters and organizations ... it was just about guaranteed that you wouldn't have all NJE nodes in the world operating at the same release. There is a somewhat infamous folklore story of some traffic originating at a san jose NJE node causing MVS systems in Hursley to crash.

As a result, a large library of specialized gateway code grew up for the core internal networking technology. All NJE nodes were placed behind real networking nodes ... where specialized gateway code was defined for each connected NJE release. It was the responsibility of the gateway code to recognize incoming NJE traffic (destined for a local NJE node), convert the incoming NJE header to canonical format, and then reformat it to exactly correspond to the destination NJE node (to prevent constant MVS system crashes all over the world).
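
a sketch of that canonicalize-then-reformat gateway pattern (field names and layouts here are invented for illustration ... the real NJE headers were far messier):

RELEASE_LAYOUTS = {
    # release -> ordered (field, length) header layout; illustrative only
    "nje_r1": [("origin", 8), ("dest", 8), ("class", 1)],
    "nje_r2": [("origin", 8), ("dest", 8), ("priority", 1), ("class", 1)],
}

def to_canonical(raw, release):
    fields, off = {}, 0
    for name, length in RELEASE_LAYOUTS[release]:
        fields[name] = raw[off:off + length]
        off += length
    return fields

def from_canonical(fields, release):
    out = b""
    for name, length in RELEASE_LAYOUTS[release]:
        out += fields.get(name, b"").ljust(length, b"\x00")[:length]
    return out

def gateway(raw, origin_release, dest_release):
    # incoming header parsed per the origin's release, re-emitted in
    # exactly the destination release's layout ... release-to-release
    # field drift never reaches (and crashes) the destination MVS system
    return from_canonical(to_canonical(raw, origin_release), dest_release)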

For some slight topic drift, there was this project in the late 80s called HSP (high-speed protocol) that was attempting to address some of the performance issues with TCP. It identified that a typical TCP implementation pathlength was around 5000 instructions and required five buffer copies (and it was starting to be a concern that for large message sizes, cache effects and processor cycle overhead related to buffer copies might exceed actual instruction execution). In any case, HSP was moving to zero buffer copies and a significant cut in pathlength. However, a somewhat corresponding function done thru VTAM was measured at around 150,000 instructions and 15 buffer copies.
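
a toy rendition of the buffer-copy issue (python memoryview standing in for the zero-copy techniques HSP was pursuing ... the instruction counts above are the real measurements, this is just the shape of the problem):

packet = bytearray(64 * 1024)

def copy_per_layer(pkt, layers=5):
    # each protocol layer takes its own private copy of the payload
    for _ in range(layers):
        pkt = bytes(pkt)            # full copy ... cache and cpu cost
    return pkt

def zero_copy(pkt, layers=5):
    view = memoryview(pkt)
    for _ in range(layers):
        view = view[:]              # new view over the same storage
    return view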

We tried to get HSP introduced into the ANSI x3s3.3 body for standards work (the ISO-chartered US standards body responsible for standards corresponding to OSI levels 3&4). However, ISO (& ANSI) had a policy that standards work couldn't be done on protocols that violated OSI.

HSP protocol work was a problem because

1) it went directly from the transport (4/5) interface to the lan/mac interface. this bypassed the network (3/4) interface and so violated osi

2) it supported internetworking ... OSI doesn't have an internetworking layer ... and so violated OSI.

3) the LAN/MAC interface sits somewhere in the middle of OSI layer 3/networking and therefore violates OSI. therefore any protocol that supports the LAN/MAC interface also violates OSI.

In any case, arpanet, OSI, and SNA all suffered a common shortcoming: dictating a homogeneous environment. The internal network technology developed at the science center avoided this shortcoming from just about the original implementation. arpanet overcame this shortcoming in the great conversion to internetworking protocol on 1/1/83.

There is some folklore that the specifics of SNA terminal control specification is somewhat the outgrowth of future system ... which was canceled before even being announced
https://www.garlic.com/~lynn/submain.html#futuresys

a specific reference
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

And future system motivation is somewhat blamed on plug compatible controllers.

when i was an undergraduate, I had added the tty/ascii terminal support to cp67. I had tried to do some fancy 2702 programming to dynamically identify terminals and allow any terminal to dial-in on any port. It turns out that I could do dynamic terminal recognition, but the 2702 had a hardware restriction that prevented allowing any terminal to dial-in on any port.

this somewhat spawned a university project that reverse engineered the channel interface and built a channel interface board for a minicomputer. The minicomputer was programmed to emulate the terminal controller ... and do terminal type and baud rate determination. somewhere there is a write-up that blames four of us for spawning the plug compatible controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

In the mid-80s I got involved in doing a somewhat similar repeat ... which I had the opportunity to present at an SNA architecture review board meeting in raleigh (after my talk, the guy that ran the ARB wanted to know the responsible person that authorized me to present).
https://www.garlic.com/~lynn/99.html#66 System/1
https://www.garlic.com/~lynn/99.html#67 System/1
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP

there were other minor SNA issues. internal product houses that would develop their own box and create an exact state regression test according to the official SNA specification ... would find that there could be significant differences between the SNA specification and how VTAM actually operated (it didn't matter what the SNA specification said; to ship, it had to interoperate with VTAM).

slightly drifting the topic in another direction, one of the early mainframe tcp/ip product issues was the level of programming required to support the available interface controller box. The early implementation would consume a 3090 processor getting 44kbytes/sec thruput. I had modified the standard product and added rfc 1044 support ... and with some tuning at cray research between a 4341-clone and a cray ... the 4341 got 1mbyte (channel) speed thruput using only a modest amount of 4341 processor.
https://www.garlic.com/~lynn/subnetwork.html#1044

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DUMP Datasets and SMS

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DUMP Datasets and SMS
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 Sep 2005 15:02:23 -0600
ref:
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS

... slight additional drift on the internet side.

about the time of the 1/1/83 great switchover to tcp/ip ... bitnet ... similar technology to the internal network ... but a completely separate network ... was comparable in size to the arpanet.
https://www.garlic.com/~lynn/subnetwork.html#bitnet

and this was expanded internationally as earn ... reference from 1984
https://www.garlic.com/~lynn/2001h.html#65

the internet's success had two parts ... the internetworking technology ... which arpanet converted to in the great 1/1/83 cut-over ... and real live production backbones that internetworked different networks ... a major advance was the NSFNET backbone.

we were operating an internal high-speed backbone at the time, but weren't allowed to bid on NSFNET. however, a technical audit by NSF came up with a statement that what we were operating was at least five years ahead of all NSFNET bid submissions. minor reference
https://www.garlic.com/~lynn/internet.htm#0

our internet standards index
https://www.garlic.com/~lynn/rfcietff.htm

misc. historical references:
https://www.garlic.com/~lynn/rfcietf.htm#history

NSFNET bid and award announcement reference
https://www.garlic.com/~lynn/internet.htm#nsfnet

the NSFNET program announcement text:
https://www.garlic.com/~lynn/2002k.html#12

and posting reference about the NSFNET bid award:
https://www.garlic.com/~lynn/2000e.html#10

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DUMP Datasets and SMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DUMP Datasets and SMS
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 Sep 2005 16:49:28 -0600
Anne & Lynn Wheeler writes:
and sna is not a network, basically SNA/vtam is a terminal control system ... it didn't even have a network layer. my wife and moldow did peer-to-peer networking architecture (AWP39) in the early SNA days ... and got lots of grief from the SNA people ... slightly related posting
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back


one of the other issues with respect to the internal network vis-a-vis the internet (even tho the majority of the internal network had little to do with SNA/vtam) ... was the transition between 83 & 85 with the number of nodes in the internet exceeding the number of nodes in the internal network. obviously the internet got gateway technology allowing the interconnection of multiple disjoint infrastructures ... allowing it to catch up with the internal network technology (at least in that respect).

however, the SNA/vtam faction heavily enforced its position regarding interconnection of PCs and workstations via terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

vis-a-vis tcp/ip implementation turning PCs and workstations into peer-to-peer nodes.

this continued thru the 80s ... and somewhat culminated in SAA ... which can be construed as an attempt to return the client/server genie to its bottle, helping reinforce the terminal emulation paradigm.

besides the previous references to real peer-to-peer being at odds with SNA's underlying centralized terminal control (and homogeneous) paradigm ... we ran into disagreements when we started pushing 3-tier architecture, middle layer, and middleware in the SAA heyday. my wife had co-authored and presented a response to a federal networking request for a large campus-like environment. In it she had started the formulation of 3-tier architecture. We expanded on that and started doing middle layer (aka 3-tier architecture) customer executive presentations. It involved tcp/ip, ethernet, client/server, and 3-tier architecture. This caused all sorts of pushback from the T/R, SNA, and SAA organizations.
https://www.garlic.com/~lynn/subnetwork.html#3tier

some specific
https://www.garlic.com/~lynn/99.html#201 Middleware - where did that come from?
https://www.garlic.com/~lynn/99.html#202 Middleware - where did that come from?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

address space

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: address space
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 Sep 2005 17:59:51 -0600
Jay Maynard writes:
ISTR this was because the answer wasn't specified in the POO when the machines that implemented the feature were being developed, and so the POO was changed to make it model-dependent because two systems were implemented differently and couldn't be changed to be consistent. Am I correct, or is there a parity check in that block?

basically the xa (frequently called 811, aka nov. 1978) architecture had access registers, program call, etc. ... basically moving all sorts of library and system code out of the same address space as the application ... while retaining the efficiency of pointer passing and branch&link.

the initial transition from mvt real memory to vs2/svs ... basically laid mvt out in a 16mbyte virtual address space. the prototype for vs2 was done on a 360/67 by borrowing CCWTRANS from cp67 and hacking it into an MVT kernel along with some stub virtual memory and paging support.

The transition from VS2/svs to VS2/mvs gave each application its own 16mbyte virtual address space ... however, since the pointer passing paradigm was so ingrained (from mvt days), the kernel occupied 8mbytes of each application address space (theoretically leaving 8mbytes for application code).

However, there was this issue with subsystem code ... which is sort of half-way between kernel stuff and application stuff. With subsystem code in its own address space, continuing to support the pointer passing paradigm became more difficult. Infrastructures that grew up with virtual address spaces tended to do things like message passing rather than pointer passing. The subsystem hack in MVS was to define the "common segment" ... basically some stub subsystem code in the application address space could copy the parameters into an area of the common segment and make a supervisor call ... passing a pointer to the common-segment-resident parameters. Execution would eventually pop up in the subsystem, which would have the common segment at the same virtual address ... so the passed address pointer would work.

The problem was that there was just a single common segment for the whole environment. By the 3033 time-frame, it was common to find installations with four and five megabyte common segments ... a 16mbyte address space, minus the 8mbyte kernel, minus a 5mbyte common segment ... leaving only 3mbytes (and sometimes less) for applications.
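
the arithmetic spelled out (a trivial sketch, sizes in mbytes):

ADDRESS_SPACE = 16    # 24-bit addressing
KERNEL        = 8     # kernel image mapped into every address space
COMMON_SEG    = 5     # four- and five-mbyte common segments were common

print(ADDRESS_SPACE - KERNEL - COMMON_SEG, "mbytes left for the application")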

Worley was in large part responsible for the 3033 dual-address space and cross-memory support. Basically, applications could do a kernel call for some subsystem service passing an address pointer. The subsystem eventually received control ... and used the address pointer to pick up the data out of the secondary address space (rather than requiring its movement to the common segment area).
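
a schematic contrast of the two parameter-passing schemes (all names hypothetical ... python dicts standing in for address spaces):

common_segment = {}    # one area, same virtual address in every space

def call_via_common_segment(params):
    slot = len(common_segment)           # stand-in for a CSA allocation
    common_segment[slot] = list(params)  # copy parameters into the CSA
    return subsystem(slot, cross_memory=None)

def call_via_cross_memory(app_space, params_addr):
    # dual-address-space: no copy ... the subsystem reads the parameters
    # directly out of the caller's (secondary) address space
    return subsystem(params_addr, cross_memory=app_space)

def subsystem(ref, cross_memory):
    if cross_memory is None:
        return common_segment.pop(ref)
    return cross_memory[ref]

print(call_via_common_segment(["parm"]))
print(call_via_cross_memory({0x1000: ["parm"]}, 0x1000))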

trivia ... worley also worked on risc chips and eventually left to join HP. What current chip architecture was he one of the primary architects for?

some past worley specific references:
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#57 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2003j.html#35 why doesn't processor reordering instructions affect most
https://www.garlic.com/~lynn/2004f.html#28 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#29 [Meta] Marketplace argument

misc. past aos2 prototype postings:
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002p.html#49 Linux paging
https://www.garlic.com/~lynn/2002p.html#51 Linux paging
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line

misc past 3033 dual-address space postings:
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005f.html#7 new Enterprise Architecture online user group
https://www.garlic.com/~lynn/2005f.html#57 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

address space

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: address space
Newsgroups: bit.listserv.ibm-main
Date: Tue, 06 Sep 2005 18:22:58 -0600
another hack done for 3033 ... was 32mbyte support (besides dual-address space and cross memory support)

basically 370 still had only 16mbyte real and 16mbyte virtual support (note that 360/67 had both 24bit and 32bit virtual address modes).

i/o wasn't keeping up, so the best way to improve performance was to avoid as much i/o as possible ... that required keeping a large amount of stuff in real memory. the 16mbyte real-storage limit was starting to be a thruput bottleneck for the 3033. for one thing ... not only were 4341s beating the 3031 and vaxes in performance and price/performance ... but a cluster of six 4341s was beating the 3033 in aggregate performance and price/performance (a 3033 having 16 channels and 16mbytes of real storage, six 4341s having an aggregate of 36 channels and 96mbytes of real storage).

so the gimmick was to define 14-bit real page numbers. a standard page table entry had 16 bits: a 12-bit (real) page number (times 4k pages gives 16mbytes of real addressing), 2 defined flag bits, and 2 undefined bits. They took the two undefined bits and allowed them to be prefixed to the (real) page number ... so instead of being able to specify 4096 4k real pages ... it could specify up to 16384 4k real pages (64mbytes). Instructions could only generate 24-bit virtual addresses ... but the relocate hardware could take a 24-bit virtual address and convert it to a 26-bit real address.

thus was born the 16mbyte line.
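
a bit-level sketch of the hack (masks illustrative ... in a real translation the PTE would be selected by the virtual page number, elided here):

def real_address(pte, vaddr):
    page_no = (pte >> 4) & 0xFFF      # 12-bit real page number (bits 0-11)
    extra   = pte & 0x3               # the two formerly-undefined bits
    frame   = (extra << 12) | page_no # 14-bit page number ... 16384 frames
    return (frame << 12) | (vaddr & 0xFFF)

# a 24-bit virtual address goes in, a 26-bit real address (up to 64mbytes,
# of which the 3033 actually shipped with up to 32) comes out
print(hex(real_address(0b101010101010_00_11, 0x00A123)))   # -> 0x3aaa123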

kernel code running in real address mode ... might be required to access some information in a virtual page above the 16mbyte line. The hack was to use some sleight-of-hand virtual memory system tricks to copy/move the virtual page from above the line to below the line.

this carried over to 32mbyte 3081s running in 370 mode.

misc. past postings on the 3033 32mbyte hack
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2003d.html#26 Antiquity of Byte-Word addressing?
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#17 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design

some more recent postings on the 4341 subject:
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005.html#51 something like a CTC on a PC
https://www.garlic.com/~lynn/2005b.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#63 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#27 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
https://www.garlic.com/~lynn/2005d.html#30 The Mainframe and its future.. or furniture
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#36 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005g.html#24 DOS/360: Forty years
https://www.garlic.com/~lynn/2005h.html#11 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005h.html#43 Systems Programming for 8 Year-olds
https://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#10 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#12 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#29 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2005o.html#16 ISA-independent programming language
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

address space

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: address space
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l
Date: Wed, 07 Sep 2005 11:46:12 -0600
Bruce Black writes:
Gosh, the APL ball. That brings back memories.

For you newbies, he is referring to the IBM Selectric Typewriter, which had a replaceable ball with all the characters on it, depending on what special character set you wanted. Selectrics were also used as interactive terminals. APL uses a lot of arcane characters, so you needed a special Selectric ball and key tops to program in APL, or read the code. Selectrics were fun to watch when typing fast. The ball would spin and tilt just like those gut-spewing rides at traveling carnivals.

Neat language, too. Back at Brown University in the early 70s, I worked with OS and CP/67 and CMS, but I learned APL and as an exercise I wrote the equivalent of the CMS file editor in a half-page of APL code (of course the CMS editor of the time was a simple line-oriented editor)


I have an 2741 apl ball sitting here next to my screen.

cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

ported the apl\360 interpreter (done at the philadelphia science center) to cms. part of this was that the apl\360 monitor supported swapping 16k-32k real memory apl workspaces. they used a storage allocation mechanism that allocated a new block of storage on every assignment ... until it reached the top of the workspace ... and then invoked garbage collection to compact all allocated storage. this worked fine in a 16k real storage workspace ... but looked like page thrashing running in the cms virtual memory environment with possibly hundreds of kbytes or even mbytes of address space. the apl storage allocation mechanism had to be reworked for operation in a large virtual memory environment.
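
a sketch of the allocation pattern at issue (invented code, but the behavior matches the description above ... allocate fresh storage on every assignment, then compact when the top is hit):

class Workspace:
    def __init__(self, size):
        self.size, self.top, self.live = size, 0, {}  # name -> (addr, len)

    def assign(self, name, length):
        if self.top + length > self.size:
            self.compact()                   # touches the entire workspace
        self.live[name] = (self.top, length) # old value becomes garbage
        self.top += length

    def compact(self):
        addr = 0
        for name, (_, length) in self.live.items():
            self.live[name] = (addr, length) # slide live data downward
            addr += length
        self.top = addr

ws = Workspace(16 * 1024)        # fine when this is 16k of real storage
for _ in range(10_000):
    ws.assign("x", 64)   # in a multi-mbyte virtual workspace this sweep
                         # cycles through every page ... i.e. thrashing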

besides making cms\apl available as a product (for both external and internal users) ... it was also made available on cambridge's cp67 service. back then, apl was getting heavy use for lots of business modeling "what if" questions (some of the stuff that you find being implemented these days with spreadsheets). one of the early, heavy users of cms\apl on the cambridge machine were the business planning people from corporate hdqtrs in armonk. They loaded the most secret of corporate information on the cambridge machine and then ran various business models. one of the reasons for using the cambridge system was that it was the first that offered reasonably large APL workspaces that allowed dealing with more real-world environments.

this somewhat created interesting security issues, since the cambridge machine was also being used by numerous university & college students in the boston area (i.e. mit, harvard, bu, etc). misc. comments about cp67 based commercial time-sharing services
https://www.garlic.com/~lynn/submain.html#timeshare

the DP division also started deploying a large number of apl-based applications for sales, marketing and field people on their own cp67 systems ... which were later migrated to vm370. eventually all sales, marketing and field people worldwide were using vm370 for their day-to-day business.
https://www.garlic.com/~lynn/subtopic.html#hone

cms\apl also caused some complaints from iverson, falkoff, et al at the philadelphia science center ... because it introduced system calls that allowed apl to do things like read & write files (also allowing apl applications to address some more real-world operations, including huge amounts of data). however, this violated the purity of the apl language ... which had been restricted to purely doing things internally within the self-contained apl workspace (modulo typing on the 2741). this was later resolved with the introduction of apl shared variables as a paradigm for accessing external facilities.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multicores

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multicores
Newsgroups: comp.arch,bit.listserv.vmesa-l
Date: Wed, 07 Sep 2005 11:58:46 -0600
"robertwessel2@yahoo.com" writes:
Windows, like pretty much any other OS, will dole out CPU time in time-slice increments to threads that run CPU bound, but only rarely does a thread (in most systems) ever run out a time slice before blocking on some event (and thus threads are rarely CPU bound).

i had done dynamic adaptive scheduling back in the 60s as an undergraduate. this was sometimes referred to as the fairshare (or wheeler) scheduler ... because the default scheduling policy was fairshare

basically, light-weight interactive tasks got shorter quanta than more heavy-weight, longer running tasks. an advisory deadline was calculated proportional to the size of the quantum and the recent resource utilization. a quantum could span certain short i/os (disk, etc) ... so heavy utilization wasn't necessarily restricted to absolutely pure cpu-bound work (just a reasonably high rate of cpu consumption).

light-weight interactive tasks would tend to have more frequent quanta with shorter deadlines ... and heavy-weight tasks would have less frequent, larger quanta. interactive tasks would appear more responsive if they could complete within a few of the shorter quanta and/or if they hadn't been using a lot of resources recently. basically the objective was to uniformly control the overall rate of resource consumption (regardless of quantum size).

the other issue was that this was done as modifications to the existing cp67 system, which had an extremely complex scheduler that wasn't directly controlling resource utilization ... just moving priorities up & down ... and could itself consume 10-15 percent of total cpu. so a major objective of the scheduler change was not only to implement effective resource consumption control supporting a variety of scheduling policies ... but to do it with as close as possible to zero pathlength.
https://www.garlic.com/~lynn/subtopic.html#fairshare
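
a rough sketch of the advisory-deadline idea (not the actual cp67 code ... weights and numbers invented):

import heapq

def deadline(now, quantum, recent_use, share):
    # consuming more than your share pushes the deadline out; light
    # interactive users land near the front of the dispatch queue
    return now + quantum * max(recent_use / share, 1.0)

queue = []
heapq.heappush(queue, (deadline(0.0, 0.05, 0.01, 0.10), "interactive"))
heapq.heappush(queue, (deadline(0.0, 0.50, 0.30, 0.10), "cpu-hog"))
print(heapq.heappop(queue)[1])   # -> interactive (deadline 0.05 vs 1.5)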

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multicores

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multicores
Newsgroups: comp.arch,bit.listserv.vmesa-l
Date: Thu, 08 Sep 2005 12:56:39 -0600
"robertwessel2@yahoo.com" writes:
Of course that's not anything like the way it actually works. If two threads, one high and one low priority, are ready to run, the high priority thread will be dispatched, and time slices have nothing to do with it. In the basic case a runnable high priority thread will prevent all low priority threads from running, period. If two threads of equal priority are both runnable, then, and only then, will time slices come into play to switch between those equal priority threads. That being said, many OSs will apply a dynamic temporary priority boost to lower priority threads if they've not been allowed to run for an extended period of time so that they're not starved of CPU time completely. And there are, of course, more complex schedulers that work not just on thread priority, but take into account system policies that say things like "guarantee at least 15 % of the CPU to job#3." Those often work by watching the run history and adjusting a more traditional priority scheme to meet the goals.

the dynamic adaptive resource management that i was doing in the 60s as an undergraduate ... used advisory deadlines for dispatching/scheduling (effectively based on the size of the quantum and the amount of resources the task was getting relative to its target resources).

priorities were an aspect of scheduling policy ... the default policy was fair share, and the default priority mechanism adjusted a task's relative portion of fair share (translated into more or less than overall infrastructure fair share).

evolutions of that infrastructure were things like partitioning resources among departments and then making an individual's fair share a portion of the related department's allocation ... as opposed to overall total system resources.
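
sketched out (departments, members, and numbers all invented):

departments = {"dev": 0.60, "sales": 0.40}   # share of the whole machine
members     = {"dev": 6, "sales": 2}

def effective_share(dept):
    # an individual's fair share is a slice of the department's
    # allocation rather than of total system resources
    return departments[dept] / members[dept]

print(effective_share("dev"), effective_share("sales"))   # 0.1 0.2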

one of the reasons for doing the change ... was to drastically reduce the pathlength of the then-available implementation, which was starting to take 10-15 percent of elapsed time ... and used a priority scheme more like that outlined above. previous post
https://www.garlic.com/~lynn/2005p.html#21 Multicores

the code that i had done as an undergraduate was eventually picked up and shipped in the cp67 product. Much of the code was then dropped in the morphing from cp67 to vm370. however, it was re-introduced when i did the resource manager for vm370.
https://www.garlic.com/~lynn/subtopic.html#fairshare

one of the other distinctions of the resource manager was that it was chosen to be the guinea pig for licensed/priced kernel software.

with the 6/23/69 unbundling announcement (in large part motivated by various litigation activities), there was a switch to start charging for/licensing application software ... although kernel software was still free.
https://www.garlic.com/~lynn/submain.html#unbundle

the resource manager got tapped to be priced/licensed kernel software. I got the reward of spending time over a period of several months with various business groups working out the changes for pricing kernel software. the guidelines established with the resource manager were that kernel software directly involved with hardware support (like device drivers) would still be free ... but other kernel software could be priced separately.

this caused a problem for the next base kernel release, where they were going to ship SMP support (something else that didn't make the transition from cp67 to vm370 ... even tho all the invention leading up to the compare and swap instruction had been done on cp67). I had included a bunch of stuff in the resource manager that SMP support needed. However, the pricing policy was that SMP support had to be free (aka hardware support) ... so it would have been a violation of the policy if you had to buy the resource manager in order for SMP support to work. Eventually they moved the stuff required by SMP support out of the licensed/priced resource manager into the base (free) kernel.
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Thu, 08 Sep 2005 20:28:35 -0600
Peter Flass writes:
My understanding is that a "workstation" is a single-user system. If so, workstations certainly didn't obsolete minis. The PC obsoleted both, except for specialized applications, either as a single-user workstation replacement or in its "server" guise. Only a few workstation manufacturers survive. Sun and IBM are major players in the server market as well as WS. Maybe only SGI doesn't sell servers?

small historical mention of (very) early sun days
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party

for total topic drift ... more mention of DataHub ... also referenced in the above post
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hi-tech no panacea for ID theft woes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hi-tech no panacea for ID theft woes
Newsgroups: alt.computer.security
Date: Fri, 09 Sep 2005 11:36:20 -0600
Unruh <unruh-spam@physics.ubc.ca> writes:
Actually no. Common sense is our intuitive solution to problems based on past experience. For most of these electronic things past experience is a very poor guide, and thus so is common sense. Especially when allied with an almost complete ignorance with how it all works. There is nothing in past experience which would say that opening a letter was dangerous in and of itself. Opening an email is. There is nothing in past experience that says that the actions of someone 5000 miles away could be of danger to you. On the net there is.

some related comments regarding some of the threats and countermeasure issues:
https://www.garlic.com/~lynn/aadsm20.htm#23 Online ID Theives Exploit Lax ATM Security
https://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#0 ID theft ring proves difficult to stop

there is always the issue that crooks may be going after the low-hanging fruit ... and in a target-rich environment ... closing one vulnerability may just find the crooks moving on to a different vulnerability. that is typically where a detailed threat model can come in handy.

some mention that there is a difference between identity fraud and account fraud, even tho lots of identity theft stories tend to lump them together (i.e. account fraud just needs counterfeit authentication w/o necessarily requiring any identification):
https://www.garlic.com/~lynn/2003m.html#51 public key vs passwd authentication?
https://www.garlic.com/~lynn/aadsm20.htm#17 the limits of crypto and authentication
https://www.garlic.com/~lynn/2005j.html#52 Banks
https://www.garlic.com/~lynn/2005j.html#53 Banks
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#42 public key authentication

and lots of posts on account harvesting for fraud purposes
https://www.garlic.com/~lynn/subintegrity.html#harvest

and for a little drift ... post on data breach vulnerability and security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61 Security Proportional To Risk

note part of the issue is that sometimes there is confusion between identification and authentication ... recent post touching on some of the confusion issues:
https://www.garlic.com/~lynn/aadsm20.htm#42 Another entry in the internet security hall of shame

it is possible to come up with countermeasures that make account fraud much more difficult (by strengthening various authentication weaknesses) ... independent of addressing identity fraud issues. a simple example of the difference: say it was possible for somebody to open an offshore anonymous bank account ... and be provided with authentication technology for performing transactions. by definition, there has been absolutely no identification involved (and the authentication technology could still prevent fraudulent account transactions).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hi-tech no panacea for ID theft woes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hi-tech no panacea for ID theft woes
Newsgroups: alt.computer.security
Date: Fri, 09 Sep 2005 13:31:17 -0600
"Brett Michaels From Poison" writes:
I'm talking along the lines of end users, which I believe are the number one weakness in any security structure. Most end users don't know a hammer from a nail when it comes to computer security. I'm not speaking of common sense in a specific user, but rather a general base of common sense.

If these end users were more educated and used more common sense measures, e.g. not opening unknown attachments, not writing your pin on your mac card, this would allow IT Admins to concentrate their efforts on more difficult security measures. Some end users actually do "dumb things" more than anyone realizes. As a security auditor, the place we find the largest pool of weaknesses is end user behavior/lack of policy adherence.


ref:
https://www.garlic.com/~lynn/2005p.html#24 Hi-tech no panacea for ID theft woes

nominally, multi-factor authentication requires that the different factors be subject to different vulnerabilities ... i.e. from the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


... a something you know PIN is nominally a countermeasure to a lost/stolen something you have physical card.

an institutional-centric view has been that shared-secret pin/password based something you know implementations require that the person have a unique pin/password for every unique security environment (as a countermeasure to somebody in one environment attacking another environment ... say, a part-time employee at a garage ISP accessing people's online web financial services ... assuming a common password for both environments).
https://www.garlic.com/~lynn/subintegrity.html#secrets

from a person-centric view, as the number of electronic environments proliferated, people may now be faced with memorizing scores of unique & different pin/passwords. one of the consequences is that you find people making lists and storing them in their wallet. also, some study claimed that something like 30 percent of people write their PINs on their debit cards.

so a common lost/stolen scenario is the wallet is lost ... which includes any lists of pin/passwords and all cards (including cards that have pins written on them). as a result, there is a common vulnerability (failure mode) for a lost/stolen wallet that affects all cards and some number of recorded pins/passwords ... defeating the objective of having multi-factor authentication.

another threat/exploit for account fraud is getting people to divulge the information on their cards and related information (phishing attacks).

so there is a requirement for two countermeasures

1) making valid account transactions based on a something you have physical object ... which uses some paradigm where the owner of the physical object isn't able to verbally disclose the information

2) eliminating the enormous proliferation of the shared-secret paradigm ... resulting in the impossible requirement for people to memorize scores of different pieces of information.

so one implementation uses asymmetric cryptography, where keys are generated inside a chip/token and the private key is never divulged. proof of possessing the chip/token (something you have authentication) is done with digital signatures ... which don't expose the private key. It is possible for the person possessing the token to prove that they have the token ... but they aren't able to divulge the information required for the proof (i.e. the private key contained in the token). The digital signature methodology generates a new value on every use ... so the operation is resistant to replay attacks (somebody having recorded a previous use).
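
a minimal sketch of that sign-inside-the-token pattern ... using the third-party python 'cryptography' package purely for illustration (the actual chips do this in hardware, and the private key never leaves the token):

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
import os

private_key = ec.generate_private_key(ec.SECP256R1())  # "in the token"
public_key = private_key.public_key()     # registered with the account

challenge = os.urandom(32)       # fresh value per use ... defeats replay
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# the verifier proves "something you have" without learning any secret
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))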

That still leaves the shared-secret vulnerabilities associated with human memorization (and the need for a countermeasure against a lost/stolen token). Using a chip/token would allow a PIN to be used for correct operation of the chip/token ... w/o requiring the PIN to be recorded anywhere. That makes the PIN a secret (as opposed to a shared-secret) and eliminates the shared-secret based security requirement for having a unique PIN for every environment (if a person has a single PIN for everything they do ... it is less of a problem to memorize ... and it also opens the possibility of making it more complex than four numeric digits).

Such an approach makes phishing attacks for account fraud much more difficult ... since the person can't divulge information in the token that they don't know (crooks can't simply ask tens of thousands of people to type in their account numbers and PINs and then go off and extract money; they now actually require the exact physical token).

it also makes crooks work harder: physically stealing tokens and also obtaining the associated PIN (a much higher effort in order to perform a fraudulent transaction).

note also that a countermeasure associated with the online transaction environment and lost/stolen (physical) tokens ... is that the owner is likely to notice that it is missing and report it, resulting in the associated account access being deactivated. In the phishing (also record/replay, key logger, etc) scenarios, the victim might not realize that there is money leaking out of their account until weeks later.

so much of the current electronic account fraud could be eliminated ... forcing it purely to stealing physical objects (where a crook actually has to physically take them one or two at a time; you can't program a computer to lift millions) ... which also will nominally have a much shorter window of (crime) opportunity (until it is reported lost/stolen).

The other way of looking at it is that the fraud ROI (return on investment) is significantly reduced (enormous increase in physical effort, limited window of opportunity).

You still have some number of social engineering attacks (other than the phishing kind) ... where the crook convinces the victim to perform the actual transaction (as opposed to the crook obtaining sufficient information to perform the transactions themselves). Some of these are currently getting widespread coverage under the heading of some sort of scam.

misc. past person-centric related postings:
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
https://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
https://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005p.html#6 Innovative password security

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Fri, 09 Sep 2005 14:36:30 -0600
iddw@hotmail.com (Dave Hansen) writes:
FWIW, I believe the term originally came from retail markets where, for example, Whirlpool would sell their washing machines to Sears, who would slap a Kenmore label on them, and sell them to the public. In that scenario, the OEM is Whirlpool, because they were ones who actually built the equipment. The term makes sense here.

I think it transmogrified from that sensible meaning to "the manufacturer of the final product." Note that, in the above example, Whirlpool manufactured the final product, and they are the OEM. And now, when Cambridge Instruments designs an e-beam lithography system that incorporates a PDP-11 as the control computer, Cambridge becomes the OEM, because they built the system that is sold to the, er, consumer.


OEM ... original equipment manufacturer

... however it was also sometimes used as in

OEM ... other equipment manufacturer

with a somewhat similar sense as

PCM ... plug compatible manufacturer

in the tales about mainframe plug compatible (clone) controllers
https://www.garlic.com/~lynn/submain.html#360pcm

the story starts back with the 2702 telecommunications controller. the university had type-I and type-III linescanners installed in the 2702 (for 2741s and 1052s). the university was getting some number of tty/ascii terminals and needed to upgrade the 2702 with a type-II linescanner that supported tty terminals.

the field-installable 2702 type-II linescanner kit came in a couple of big boxes labeled "heathkit" (as in original equipment manufacturer?).

the basic cp67 terminal support did dynamic terminal identification between 2741 and 1052 (the 2702 had a "SAD" command where you could dynamically associate a specific linescanner with a particular port).

I had to add tty/ascii terminal support to cp67 ... and so i thought it would be neat if i could dynamically recognize tty, 2741, and 1052 ... including allowing any dial-up terminal to connect to any dial-up number (in practice this nominally meant that there was a rotary with a single dial-in number ... and the box would find the first unused number/port).

it turned out that the 2702 had a hardware restriction ... while you could use the SAD command to dynamically associate linescanners and ports ... the line speed oscillator had to be hard-wired to specific ports. This wasn't a problem for 2741s & 1052s since they operated at the same baud rate ... but it was a problem to include TTY terminals since they operated at a different baud rate.
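
a schematic (entirely hypothetical code ... the port object and line-code checks are invented) of the dynamic recognition that did work, and the piece the 2702 couldn't do:

TERMINAL_TYPES = ["2741", "1052", "tty"]

def identify_terminal(port):
    for ttype in TERMINAL_TYPES:
        port.sad(ttype)          # associate that linescanner with the port
        reply = port.poke()      # prod the terminal, read the response
        if decodes_cleanly(reply, ttype):
            return ttype
    return None                  # what the 2702 could NOT also vary was
                                 # line speed ... the oscillator was wired

def decodes_cleanly(reply, ttype):
    # stand-in for checking the echo decodes in that terminal's line code
    # (tilt/rotate correspondence code for 2741, ascii for tty, ...)
    return bool(reply)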

that sort of kicked off a university project to reverse engineer the mainframe channel interface and build a channel interface card for a minicomputer (started with an interdata/3) programmed to emulate the 2702 (but handling both dynamic baud rate and dynamic terminal type).

somewhere there is a write-up blaming four of us for the PCM clone controller business
https://www.garlic.com/~lynn/submain.html#360pcm

the single interdata/3 morphed into a cluster with an interdata/4 as the main processor and multiple interdata/3s as dedicated linescanners. interdata was later bought by perkin/elmer and the box sold under the PE brand. I ran into somebody in the 90s that claimed that they had made quite a good living in the 80s selling the boxes into gov. installations. They made some comment that the mainframe channel interface board appeared to not have changed at all from the original one that had been done at the univ. for the interdata/3 in the 60s.

so the PCM clone controller business has been blamed as a motivating factor in the future system project
https://www.garlic.com/~lynn/submain.html#futuresys

recent post related to this subject:
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back

and folklore has it that the combination of the clone controller market and the failure of FS gave rise to a lot of the characteristics of SNA ... a collection of postings that sometimes touches on the subject:
https://www.garlic.com/~lynn/subnetwork.html#3tier

for some total topic drift ... a slight minor tandem related tale. jim had been one of the main people behind SQL and the original relational database implementation
https://www.garlic.com/~lynn/submain.html#systemr

his departure to tandem caused something of a stir; some past postings mentioning the subject
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004l.html#28 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#31 Shipwrecks
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 10 Sep 2005 09:47:22 -0600
Bernd Felsche writes:
Not much of a choice if the unsupported instruction is frequently trapped. It's a work-around at best; until the application can be recompiled to what's supported by native hardware. There are hundreds to thousands of excess hardware cycles expended to trap an illegal instruction. On Unix-like operating systems, the context switches are a huge overhead. Inline floating point emulation is quite a bit faster.

cp67 had to do this with 360 privilege instructions for virtual machine support (so that the privilege operation followed virtual machine rules ... rather than real machine rules).

for most environments, end-users aren't directly focused on kernel overhead. on cp/67, it was constantly out front ... with processor utilization constantly tracked and broken out by time in-the-kernel and not-in-the-kernel. this was further highlighted in that end-users could see the difference between running on the "real" hardware and running in a virtual machine (i've periodically commented about other environments where end-users hardly give a second thought to why they can't run the same application w/o the kernel). in any case, very early on as an undergraduate ... i rewrote significant portions of the cp67 kernel to reduce such overhead. this is an old posting giving part of a report I gave at the fall '68 SHARE meeting in boston
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

where i had reduced the total time spent in the kernel from 534 seconds to 113 seconds

early in the development of 370 ... there was a joint project between cambridge
https://www.garlic.com/~lynn/subtopic.html#545tech

and endicott to create 370 virtual machines under cp67 (on a real 360/67). the architectures were similar ... but 370 had some new instructions (both privileged and non-privileged) that had to be simulated ... and some number of things operated differently (like the virtual memory table format).

complicating the implementation was that the cambridge system also provided time-sharing services to some number of students in the boston area ... and 370 hadn't been announced (and all information about it had to be kept strictly confidential).

the implementation of 370 virtual machines had to be done on the cambridge system ... w/o allowing any hint of 370 to leak out.

as a result ... the modified cp67 kernel supporting 370 virtual machines ... didn't actually run on the real hardware (which might have exposed it to non-employees) ... but ran in its own 360/67 virtual machine. this modified cp67 kernel provided 360 w/o dat (dynamic address translation ... aka virtual memory), 360 w/dat (i.e. 360/67), 370 w/o dat, and 370 w/dat ... depending on the options selected.

once that was running ... then a cp67 kernel was modified to run on 370 w/dat machine (using new privilege instructions and the 370 flavor of virtual memory tables rather than 360/67 version).

the resulting (recursive virtual machine) environment was then
• 360/67 real hardware
• cp67-l kernel providing virtual machines and time-sharing service
• cp67-h kernel running in a 360/67 virtual machine and providing both 360 and 370 virtual machines
• cp67-i kernel running in a 370 virtual machine and providing 370 virtual machines
• cms running in a 370 virtual machine

the above environment was operational a year before the first engineering 370 with hardware dat support was operational (a 370/145 in endicott). in fact, the cp67-i kernel was brought in as sort of software validation of the machine.

when cp67-i was first booted on this engineering 370/145, it failed. after some diagnostics, it turned out that the engineers had implemented two of the new 370 DAT instructions with their op-codes reversed (the b2xx opcodes for the RRB and PTLB instructions). The cp67-i kernel was patched to correspond to the (incorrect) hardware implementation and rebooted (and everything ran fine after that).

later in the vm370 time-frame, starting with the 370/158 ... the hardware for some of the 370 privilege instructions was modified so that they could operate in two modes ... 1) real machine mode and 2) virtual machine mode. the vm370 kernel ran all virtual machines in problem mode ... which resulted in exception interrupts into the kernel whenever a privilege/supervisor state instruction was encountered. The kernel would then have to emulate the instruction and return control to the virtual machine. The 370/158 introduced a new machine state flag ... which indicated that things were running in virtual machine supervisor mode. The hardware for some number of privilege instructions was then modified so that the instruction would be directly executed (according to virtual machine architecture, rather than real machine architecture) rather than exception interrupt into the kernel.
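
a minimal sketch in python of the trap-and-emulate flow just described (LPSW 0x82 and SSK 0x08 are real 360 opcodes; the state fields and handler details are made-up simplifications ... this is not actual cp67/vm370 code):

class VMState:
    def __init__(self):
        self.supervisor_mode = True   # what the *virtual* machine believes
        self.psw = 0                  # virtual program status word

def emulate_lpsw(vm, operand):
    # load PSW per virtual machine rules: change the virtual PSW,
    # never the real one
    vm.psw = operand

def emulate_ssk(vm, operand):
    # set storage key in the virtual key table (elided)
    pass

HANDLERS = {0x82: emulate_lpsw, 0x08: emulate_ssk}   # opcode -> simulation

def privileged_op_exception(vm, opcode, operand):
    # entered on program check: the guest executed a privileged
    # instruction while the real machine was in problem state
    if not vm.supervisor_mode:
        reflect_interrupt_to_guest(vm)   # guest itself was in problem state
        return
    handler = HANDLERS.get(opcode)
    if handler is None:
        raise NotImplementedError(f"no simulation for opcode {opcode:#x}")
    handler(vm, operand)                 # simulate, then resume the guest

def reflect_interrupt_to_guest(vm):
    # the virtual machine's own kernel receives the interrupt (elided)
    pass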

The ECPS work for 370 138/148 expanded on that work in two ways 1) added virtual machine mode support for additional privilege instructions (not done in the 370/158 effort) and 2) new privilege instructions to speed up vm370 kernel operation.

The low and mid-range 360s/370s were vertical microcoded machines ... instructions looked very much like simplified machine language. The technology averaged about ten microcode instructions executed for every 370 instruction. A kernel microcode performance assist was to take high-use code paths in the vm370 kernel and recode them directly in microcode (getting approx. a 10:1 performance boost). Each code sequence in the vm370 kernel would then be replaced with a single new instruction (that effectively invoked the new microcode implementation). Analysis for this first involved a frequency analysis of kernel paths ... sorted by aggregate time ... picking out the most highly used 6k bytes of kernel paths.

A summary of the initial analysis is given here
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

where the highest used 6k bytes of kernel path ... accounted for 79.33 percent of total kernel execution time (giving about a 10:1 elapsed time reduction when moved directly into microcode). misc. past posts on this topic:
https://www.garlic.com/~lynn/submain.html#mcode
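
the selection amounts to a greedy pick of profiled kernel paths by kernel-time payback per byte until the microcode-store budget is used up; a sketch in python (path names, sizes and time fractions are made up for illustration):

paths = [
    # (name, bytes of 370 code, fraction of total kernel time)
    ("dispatch",        900, 0.21),
    ("untranslate-ccw", 1400, 0.18),
    ("free-storage",    600, 0.15),
    ("page-fault",      1100, 0.12),
    ("vtime-charge",    400, 0.08),
    ("misc-exit",       2000, 0.05),
]

BUDGET = 6 * 1024   # microcode store available for the assist

paths.sort(key=lambda p: p[2] / p[1], reverse=True)   # payback per byte

chosen, used, covered = [], 0, 0.0
for name, size, frac in paths:
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += frac

print(f"{used} bytes selected, covering {covered:.0%} of kernel time")
# with a ~10:1 microcode speedup, total kernel time drops by roughly
# covered * (1 - 1/10), i.e. about 67% here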

... an anecdote about 370 dat information leaking out.

the first 370s were announced and shipped w/o virtual memory even being mentioned. however, a news article appeared giving some details about 370 dat. there was also speculation about the 370/145. the 145 had several rows of lights giving various processor state indications. the label for one of these lights was "xlate" ... even before dat was announced (giving rise to speculation that xlate stood for dynamic address translation).

investigation into the basis for the news article turned up that they had obtained a copy of a confidential document discussing various details of the 370 dynamic address translation facility. as an outgrowth of the investigation, all copying machines in the corporation were retrofitted with a small serial number glued underneath the glass ... so that it showed up on all copied pages (you might not be able to tell who made the copies ... but you could identify which machine the copies were made at).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Canon Cat for Sale

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Canon Cat for Sale
Newsgroups: alt.folklore.computers
Date: Sun, 11 Sep 2005 20:53:12 -0600
et472@FreeNet.Carleton.CA (Michael Black) writes:
I'm not sure when Jef Raskin started to create the Macintosh that he foresaw what would eventually come. It's hard to say since the history books aren't really clear about the software as originally envisioned, they just talk about the hardware, but given that he later did create such a thing it seems odd the history books don't talk of him creating it as the Mac.

The second bit is more definite. The Mac didn't morph because he left Apple. The original project was morphed because of those in charge from above, and Jef Raskin left Apple because what he originally envisioned was hijacked into what became the Mac we know.


my brother was a regional apple rep ... and sometimes when he came into town there would be dinners with some of the apple people. i remember some dinners before mac was announced, having arguments about uses for mac ... i somewhat characterized the mac advocates' position as being that macs would never be allowed for any purpose other than the kitchen table and never for any commercial purposes. i claimed that (at the time) such a position would mean that it would be a constant money-losing proposition. my brother had a lot of business selling apple-IIs into all sorts of markets.

the ibm/pc argument was that it could be used for pc-based commercial applications ... as well as terminal emulation, allowing a single desk footprint for both mainframe connection and local (in large part business) computing. i was able to upgrade from a home 3101 ascii terminal ... to an ibm/pc with terminal emulation. misc. past posts about terminal emulation helping with uptake of the ibm/pc ... and then later ... the terminal emulation install base somewhat inhibiting the transition of PCs to full peer-to-peer networking nodes
https://www.garlic.com/~lynn/subnetwork.html#emulation

before the ibm/pc came along ... i tried to get my brother to help me with some form of terminal emulation using apple-II.

part of the objective was trying to get timely downloads of vmshare (off of tymshare) that i could upload and redistribute internally (there was nearly world-wide access to tymshare via various dial-up as well as tymnet)
http://vm.marist.edu/~vmshare/

however, eventually i was able to work a deal where i got a monthly vmshare dump tape from tymshare ... i then computed the changes and distributed the updates to numerous internal locations over the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... of course one of the locations was the HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

which provided access to all sales, marketing and field people world-wide (although at the time, the consolidated US HONE datacenter was maybe 20 miles closer to the tymshare datacenter than i was).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Documentation for the New Instructions for the z9 Processor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Documentation for the New Instructions for the z9 Processor
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l,alt.folklore.computers
Date: Mon, 12 Sep 2005 09:48:11 -0600
Bill Fairchild writes:
And there are some machine instructions for which op codes will apparently never be provided. E.g., the technical pub titled S/370 Extensions (or something like that) in the early 1980s described some instructions in which certain frequently executed MVS functions, like adding/deleting an FRR, were implemented as single instructions in order to enhance the MVS supervisor performance. These were described in that book with hex constants rather than op codes, and I don't believe IBM has ever granted these instructions the legitimacy of an op code. And then there are some proprietary instructions, documentation for which is available at ca. $10K per photocopied page. These may or may not have op codes, but they will never be documented in the Principles of Operations and they may or may not have Assembler-supported op codes.

recent discussion of one of the original efforts along this line
https://www.garlic.com/~lynn/2005p.html#14 Multicores

description of selection process
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

there were two types of assists ...

1) those that changed the rules for privileged instruction execution ... so it recognized virtual machine mode ... and didn't have to interrupt into the kernel (with the associated state-switching overhead) for simulation

2) the new type in ECPS ... which took sections of kernel code and dropped them into the 138/148 microcode ... replacing them with new op-code.

the big issue in the low and mid range machines was that the vertical microcode implementation of 370 was running an avg. of about ten microcode instructions per 370 instruction. the ecps effort got close to 1:1 move of 370 instructions to microcode for a performance speed up of 10:1.

things were a little different in the high-end machines, which tended to have horizontal microcode and later direct hardware implementation. rather than avg. microcode instructions per 370 instruction ... things tended to be measured in avg. machine cycles per 370 instruction. the 370/165 averaged 2.1 machine cycles per 370 instruction. hardware & microcode enhancements for the 370/168 reduced that to avg. 1.6 machine cycles per 370 instruction. further optimization on the 3033 got it down to around one machine cycle per 370 instruction (i.e. 370 instructions were effectively running at almost hardware speed, closing the performance difference between direct microcode instructions and 370 instructions).

things got more complicated on the 3081. aside from hardware directly executing privilege instructions using virtual machine rules (and eliminating all the state save/restore stuff involved with interrupting into the kernel) ... a one-for-one movement of 370 kernel instructions to microcode could be really embarrassing ... to expand 3081 capability even with limited microcode store, the 3081 would page some microcode ... using an FBA piccolo disk drive managed by the service processor. if the new microcode happened to be of the paged variety, it would be significantly slower than the kernel 370 instruction implementation.

in this time-frame, Amdahl introduced a new mechanism for addressing the opportunity. high-end (actually most) microcode tended to be difficult to code and debug (besides having less & less performance differential vis-a-vis 370 instructions). one of the few remaining performance differentials in high-end pipelined machines between 370 and microcode was the 370 architecture allowing self-modifying instructions (some amount of pipeline degradation from checking whether the previous instruction had modified the current instruction). Amdahl introduced "macrocode" which was essentially the 370 instruction set with 1) a restriction eliminating self-modifying instruction support and 2) its own registers and execution environment. Amdahl used macrocode to implement its hypervisor execution environment (effectively an optimized subset of what might be found in a hypervisor kernel) ... which IBM eventually responded to with PR/SM ... precursor to the current LPAR support.

lots of collected m'code posts
https://www.garlic.com/~lynn/submain.html#mcode

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Mon, 12 Sep 2005 21:19:48 -0600
Giles Todd writes:
Sequent were doing machines like that then (Symmetry? There were two models with different names, the other being based on a NatSemi CPU). Not cheap, but affordable enough that CIX (a bulletin board with ideas above its station, based in Surrey, England) bought one of the multiple-8086 ones in 1991. 4.2BSD based OS with a System V emulator nailed on top for those who preferred that environment.

some old news ...

International Business Week, 9/26/88

• In a few months Intel will introduce its 80486 microprocessor
   - mainframe on a chip
   - 1 million transistors
   - equal to low-end IBM 3090
• New age of personal mainframes
• Microprocessors will replace central processing units in mainframes
• Today Intel directly sells 20% of world's microprocessors
   - 70% of all PCs use Intel technology
   - 386 chip could be in up to half "IBM-type" PCs next year
• Intel's business is booming
   - first half 1988 earnings up 211% to $224 million
   - on revenue up 64% to $1.4 billion
• The 486 chip should sell initially in 1989 for about $1,500
   - expect 486 based PC in 1990 for about $20,000
   - such a PC should be able to run dozens of simultaneous large apps
   - will greatly concentrate power into the executive suite
• Competing RISC designs promise ten-fold speed-ups
   - Sun's Sparc chip is licensed by AT&T, Unisys and Britain's ICL
   - MIPS Computer Systems RISC chip is licensed to Tandem Computers
   - Sun and MIPS RISC chips will be produced and marketed by
     TI, LSI Logic, Fujitsu, Cypress Semiconductors and others
   - Motorola and Intel have introduced their own RISC chips
• Superfast chips of any design will open new opportunities in
  application areas
   - Intel safe in office applications thanks to $12 billion worth of
     IBM-type PC programs but other areas are more open
• Mainframes could be built using these microprocessors
   - standardized chips encourage mainframe cloning
   - startups spared the up front costs of developing CPU and S/W
   - Zenith Data Systems and Compaq Computer are planning to move
     into minicomputers and mainframes using Intel's chips
   - Intel already packages its chips into systems ranging from
     printed-circuit boards to finished PCs
   - Intel is in a 50-50 joint venture with Siemens to build a
     fault-tolerant mainframe called Biin (buy-in?)
   - Biin is due out this October
   - Intel's systems business is 27% of total revenue this year and
     is expected to be 33% in 1990
• Intel is improving its manufacturing capability
   - closed 8 outdated plants
   - spending $450 million this year on leading-edge equipment and
     new plants
   - let 25% of its workforce go, 6,000 workers and managers
   - revenue per employee doubled last year to $100,000
• Parallel systems promise even better price/performance
   - Intel has sold nearly 150 hypercubes containing 8 or more MPUs
   - Hypercube can outperform a Cray at certain tasks
   - Sequent Computer Systems of Beaverton, Oregon builds parallel
     systems of mainframe performance that sell for one tenth the price
   - Sequent's 120 MIP Symmetry system supports hundreds of simultaneous
     users, putting it in the range of IBM's biggest 3090
   - But Sequent's Symmetry costs $5,000 per MIP and IBM costs $120,000
   - Intel's biggest parallel system delivers 512 MIPs with 128 MPUs,
     more than any commercial mainframe
   - With 32 MPUs, Intel's IPSC machine outperforms a Cray X-MP/12
     by 40% (100 megaflops vs 70 megaflops)
   - Cost per megaflop: Intel=$10,000; Cray=$100,000
   - Others are selling parallel systems built from Motorola, Intel,
     Inmos or National Semiconductor MPUs
   - Boston's Stratus Computer, based on Motorola MPUs, has installed
     1,444 systems at places such as Gillette, Visa International, and
     the National Association of Securities Dealers
   - Stratus' sales were $184 million in 1987 and have been compounding
     at roughly 50% a year

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

z/VM performance

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z/VM performance
Newsgroups: bit.listserv.vmesa-l
Date: Tue, 13 Sep 2005 11:28:42 -0600
Steve_Domarski/Marion_County_Property_Appraiser writes:
Consider QUERY SRM. The DISPATCHING MINOR TIMESLICE is the time a particular machine gets to get work done. I have found that reducing this number helps, but increasing from the default (which is set by VM at IPL) can hurt real bad depending on the kind of work load. Try reducing by 1 and work down until you can see a change, then increase by 1.

one of the things/commands that i put into the original resource manager (just a little short of 30 years ago).

the previous code had a table of known processors with a corresponding slice/quantum value for each known processor model (and couldn't handle an unknown processor).

at the time, there was some experience with customers that migrated the same kernel across a wide-range of processor models ... some of the processors might not have existed at the time the kernel originally shipped.

there is an anecdote about a specially built kernel (with a lot of features that weren't in the standard product release) that had been provided to AT&T longlines. the national account executive ... tracked me down nearly ten years later ... asking me what could be done about longlines ... they were still running the same kernel (which had been migrated across a number of different processor models).

there was a related joke when the resource manager was originally being evaluated for product release (not about this specifically). an enormous amount of work had been done on dynamic adaptive technology. somebody in some corporate position propagated a statement that the most modern performance technology had enormous numbers of tuning parameters allowing customers to tailor the system for their specific operation ... and that the resource manager couldn't ship until it was also provided with enormous numbers of tuning parameters. anyway, code was put in to provide various kinds of tuning parameters (the joke is in the way some of those parameters actually got used).

misc. past fairshare and working set postings
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI Certificate question

Refed: **, - **, - **, - **
From: lynn@garlic.com
Subject: PKI Certificate question
Newsgroups: microsoft.public,outlook.contacts
Date: 13 Sep 2005 17:08:54 -0700
No Spam wrote:
The Company recently implemented PKI certs to digitally sign and/or encrypt email. Questions I have:

I thought a user should be able to publish her/his Public Key to the GAL so others could import and use it? In other words, if I wanted to send an encrypted email to a user, I should be able to retrieve her/his Public key from the GAL entry and encrypt the message and attachments. The user retrieving that email would employ her/his Private key to decrypt.

But the way ours is set up (I think), the user has to send me a digitally signed email first, and then I have to store the key in my Address Book.

IMHO, keeping an Address Book kind of defeats the purpose of having the GAL on the Exchange Server. Now I have multiple entries for each person - just to be able to use the PKI cert.

(I do understand I probably need the Address Book entry as a container for correspondents external to the Company, not on our Exchange server or its affiliated trusted domains.)

What am I missing as far as The GAL and PKI certs?


the basic technology is called asymmetric (key) cryptography; what one key encodes, the other key decodes (differentiated from symmetric key cryptography, where the same key is used for both encryption and decryption).

a business process is defined called public key; one of the key-pair is identified as "public" and freely made available, the other of the key-pair is identified as "private", kept confidential and never divulged.

a business process is defined called digital signatures ... which is a form of something you have authentication ... from 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


where a hash of a message/document is calculated and encoded with the private key. the message/document and the associated digital signature are transmitted. the recipient recalculates the hash of the message/document, decodes the digital signature with the appropriate public key and compares the two hashes. if the two hashes are the same, then the recipient can assume that

1) the message/document hasn't been modified since the digital signature
2) something you have authentication ... the entity transmitting the message/document has access to and use of the corresponding private key.
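
a minimal sketch of that flow using the python "cryptography" package (key size, padding and hash choices here are arbitrary illustrations):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"the message/document"

# sender: hash the message and encode the hash with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# recipient: recompute the hash, decode the signature with the public
# key, and compare; verify() performs both steps and raises on mismatch
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("unmodified, and the signer has the corresponding private key")
except InvalidSignature:
    print("hash mismatch: modified message or wrong key")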

nominally a recipient loads a trusted public key repository with public keys and the characteristics associated with each public key.

in the early 80s, there was a situation somewhat similar to the "letters of credit" model from sailing ship days ... involving the offline email of the period. The recipient dialed their local (electronic) post office, exchanged email, and hung up. At that point they might have some first time email from a total stranger ... having neither local information about the total stranger nor access to some other information source about the total stranger.

certification authorities were defined (somewhat similar to financial institutions issuing letters of credit) that validated some piece of information and created a document that attested to them having validated that particular piece of information. They digitally signed these documents, called digital certificates.

Recipients or relying parties were expected to have preloaded the public keys of the better known certification authorities into their trusted public key repositories. Now when a recipient received some first time communication from a total stranger ... there would be the 1) message, 2) appropriate digital signature, and 3) corresponding digital certificate issued by some well-known certification authority.

The digital signature verification process previously described ... now morphs into a two-part operation when first time communication with total strangers is involved. Now the recipient first computes the hash on the digital certificate and then decodes the digital signature (on the digital certificate) with the appropriate public key (for the corresponding certification authority, previously stored in the recipient's trusted public key repository). The two hashes are then compared; if they are equal then the recipient can assume

1) the digital certificate hasn't been modified since the digital signature
2) something you have authentication, that the digital certificate originated from the corresponding certification authority

now the recipient can recompute the hash on the actual message/document and decode the (message's) digital signature (using the public key taken from the attached & validated digital certificate). If the two hashes are equal, then the recipient can assume

1) the message hasn't been modified since the (message) digital signature
2) something you have authentication as indicated by the information supplied by the certification authority in the digital certificate.
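
a minimal sketch of the two-step validation just described, again with the python "cryptography" package; the dict standing in for a digital certificate is a made-up simplification (real certificates are X.509 structures):

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

def verify(pub, sig, data):
    try:
        pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# the certification authority signs the sender's public key (a real
# certificate body would also carry the validated information)
body = sender_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)
certificate = {"body": body,
               "sig": ca_key.sign(body, padding.PKCS1v15(), hashes.SHA256())}

# first-time message from a total stranger: message + signature + certificate
message = b"first-time communication"
msg_sig = sender_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# relying party: step 1 uses the CA public key from its own trusted
# repository; step 2 uses the public key taken out of the certificate
step1 = verify(ca_key.public_key(), certificate["sig"], certificate["body"])
sender_pub = serialization.load_der_public_key(certificate["body"])
step2 = verify(sender_pub, msg_sig, message)
print("accepted" if step1 and step2 else "rejected")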

In all cases ... both PKI operations and certificate-less operation
https://www.garlic.com/~lynn/subpubkey.html#certless

the recipient (relying party) has some trusted public key repository that has been loaded with public keys trusted by that recipient. In the original case, the recipient is dealing directly with parties that the recipient knows and trusts.

In the PKI case, the recipient is dealing with total strangers, but "relies" on their trust in the certification authority.

In the common SSL type scenario ....
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the client browser contacts a server ... and the server returns a digitally signed message along with their digital certificate. The client validates the digital certificate (by validating the certification authority's digital signature using the appropriate public key from the client's trusted public key repository) and then compares the domain name in the SSL certificate with the domain name typed into the browser as part of the URL. If they are the same, the client is possibly really dealing with the server they think they are dealing with. To confirm, the client has to validate the server's digital signature on the returned message (using the server's public key taken from the digital certificate).

the client now generates a random (symmetric) session key and encodes it with the server's public key and sends back the encrypted session key and message that has been encrypted with the session key. the server uses their private key to decode the random (symmetric) session key ... and then uses that key to decrypt the actual message.
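
a minimal sketch of that hybrid step (RSA-OAEP to wrap the session key, AES-GCM for the message; the algorithm choices are illustrative, and this is not the actual SSL/TLS protocol):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# client side: random symmetric session key, encoded with the server's
# public key; the message itself encrypted under the session key
session_key = AESGCM.generate_key(bit_length=128)
wrapped = server_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"actual message", None)

# server side: the private key recovers the session key, which in turn
# decrypts the message
unwrapped = server_key.decrypt(
    wrapped,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
plaintext = AESGCM(unwrapped).decrypt(nonce, ciphertext, None)
assert plaintext == b"actual message"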

the issue with these kind of SSL domain name certificates is to provide the client with an extra level of confidence that the server they think they are communicating with ... is actually the server they are communicating with.

The process at the PKI certification authority is that somebody applies for a domain name SSL certificate and provides a lot of identification information. The certification authority then contacts the domain name infrastructure and gets the identification on file as to the owner of that domain name. then the certification authority has the expensive, error-prone, and time-consuming task of attempting to match the identification information provided by the certificate applicant with the identification information on file with the domain name infrastructure.

If the PKI certification authority feels that the two batches of identification information appear to refer to the same entity ... they then will generate a digital certificate (certifying that they have checked the two batches of identification information).

However, there has been an issue where communication with the domain name infrastructure has been compromised and incorrect identification information substituted at the domain name infrastructure. A countermeasure to such a vulnerability (somewhat backed by the PKI certification authority ... since the whole trust root for their certification is accurate domain name identification information on file with the domain name infrastructure) is to have entities register a public key when they register a domain name. Then all future correspondence from the domain name owner can be digitally signed, and the domain name infrastructure can validate the digital signature using the onfile public key.

Now, the PKI certification authority could also require that applicants for SSL certificates digitally sign their application. Now the PKI certification authority can replace an expensive, error-prone, and time-consuming identification process with a much simpler, less expensive, and more reliable authentication process ... by retrieving the onfile public key from the domain name infrastructure and validating the digital signature on the SSL certificate application.

this, however, creates something of an issue for the PKI certification authority industry. If the certification authority can make use of onfile (certificate-less) public keys, then the rest of the world might also (doing away with the requirement for digital certificates and therefore also the certification authority).

one could imagine a highly optimized SSL protocol ... that when the client requests the server's ip-address from the domain name infrastructure (which is the first step that happens in all internet operations today) ... that the domain name infrastructure piggy-backs in the response any related onfile public key and SSL options.

Now the client, in their initial message to a server, generates a secret (symmetrical) session key and encodes it with the public key for that server (returned by the domain name infrastructure as part of returning the ip-address) ... and then encrypts the rest of the message (with the secret session key). The server gets the message, decodes the secret session key with the server's private key ... and then decrypts the rest of the message with the secret session key.

there are similar scenarios for almost all other PKI relying-party operations when it turns out that the relying-party actually has their own repository about the communicating party and/or have online access to an authoritative agency with real-time, online information about the communicating party. sample recent posting on the subject:
https://www.garlic.com/~lynn/aadsm21.htm#4

Digital Singatures question

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Digital Singatures question
Newsgroups: comp.security.misc
Date: Wed, 14 Sep 2005 21:40:57 -0600
"Matt" writes:
I'm currently studying for the Security+ exam and the Sybex book I'm reading really doesn't fill in the gaps with cryptography. I have read articles on the web and other chapters from different books.

Can someone please explain Digital signatures to me? I understand that they are used to validate who a person is. I don't understand how they are created and what key is used to encrypt them, etc. Really couldn't find too much more info on them.

Also is there a difference between a cipher and a hash? Is ciphertext the same thing as a message digest? When you use a cipher or hash, does the other side need to know what algorithm you used? Is this sent with the message??

Can someone please help clear up these topics or guide me towards some reading material that will.


nist has secure hash standard .... basically computes a short-hand representation of a document.
http://csrc.nist.gov/cryptval/shs.htm

and the digital signature standard
http://csrc.nist.gov/cryptval/dss.htm

the basic technology is asymmetric (key) cryptography; what one key (of a key-pair) encodes, the other key (of the key-pair) decodes (differentiated from symmetric key cryptography, where the same key is used for both encryption and decryption).

a business process is defined called public key; where one key is designated as "public" and freely distributed and the other key is designated as "private" and kept confidential and never divulged.

there is a business process called digital signature. somebody computes the hash of a message/document and encodes the hash with their private key to create a digital signature ... and then transmits the message/document along with its digital signature.

the recipient recomputes the hash of the document, decodes the digital signature with the appropriate public key and compares the two hashes. if the two hashes are the same, then the recipient can conclude

1) the document hasn't been modified since the digital signature was computed
2) something you have authentication ... aka that the originator has access to and use of the corresponding private key.

from 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


given that the key designated "private" is appropriately guarded, kept confidential and never divulged ... then a digital signature validated with the corresponding "public" key could only have originated from the designated "private" key owner.

to further increase the integrity of digital signature operations, hardware tokens can be used, where a public/private key pair is generated inside the token, the public key is exported, and the private key is never revealed. the hardware token is required to perform digital signature operations (further strengthening the integrity of the something you have authentication operation).

a straight-forward deployment is to take something like RADIUS ...
https://www.garlic.com/~lynn/subpubkey.html#radius

which is used by the majority of the world-wide ISPs for dial-up customer authentication ... typically using password ... and replace the registration of a shared-secret password
https://www.garlic.com/~lynn/subintegrity.html#secret

with registration of the client's public key. Then instead of using password authentication, where the client transmits the password ... the client computes a digital signature (using a defined message and their private key). The server then validates the digital signature with the registered public key (for something you have authentication, in place of the password, shared-secret, something you know authentication).
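
a minimal sketch in python of that substitution: the server registers a public key where it would have registered a shared-secret password, and login becomes signature verification over a server-issued challenge (the challenge step is my assumption about how such a deployment might be structured, not a RADIUS specification):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

registered = {}   # userid -> public key, where a password used to go

def authenticate(userid, challenge, signature):
    try:
        registered[userid].verify(signature, challenge,
                                  padding.PKCS1v15(), hashes.SHA256())
        return True    # something-you-have: caller holds the private key
    except (KeyError, InvalidSignature):
        return False

# enrollment: register the public key in lieu of a shared-secret password
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
registered["someuser"] = client_key.public_key()

# login: server issues a fresh challenge, client signs it
challenge = os.urandom(32)
signature = client_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())
assert authenticate("someuser", challenge, signature)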

there was a business process created called PKI, involving certification authorities and digital certificates, to address first time communication between strangers in the offline email environment of the 80s (somewhat analogous to the "letters of credit" from the sailing ship days). The scenario involves somebody dialing up their local (electronic) post office, exchanging email, and hanging up. They then may be faced with handling first-time email from a stranger ... having no local information about the person originating the email and no online access to an authoritative source for obtaining information about the originator.

A more detailed description of that scenario
https://www.garlic.com/~lynn/2005p.html#32 PKI Certificate question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What is CRJE

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is CRJE
Newsgroups: bit.listserv.ibm-main
Date: Thu, 15 Sep 2005 15:02:00 -0600
Raymond.Noal@ibm-main.lst (Raymond Noal) writes:
CRJE - Conversational Remote Job Entry - RIGHT!!

Frank Krueger wins the Cigar.


as undergraduate, in the fall of '68 ... i started on a modification of HASP ... that removed the 2780 support (eliminating unnecessary code) and replaced it with tty & 2741 terminal support ... and editor syntax borrowed from the CMS editor ... for a CRJE implementation.

some past posts about earlier having done the modifications to the cp67 kernel to add tty support
https://www.garlic.com/~lynn/submain.html#360pcm

this had involved trying to use the 2702 communication controller for automatic terminal recognition. the original cp67 code had used the SAD command (specifying which line-scanner was used for which ports) as part of automatic terminal recognition for 2741 and 1052 terminals. it turned out that the 2702 had a short-cut that allowed the line-scanner to be dynamically re-assigned ... but the hardware oscillator controlling baud rate was hard wired. 2741 and 1052 operated at the same baud rate. as long as you were dealing with hardwired lines ... it wasn't a problem ... however if you wanted to allow 2741, 1052, AND tty dial-in to a common port pool ... then it was a problem (while the 2702 linescanner could be switched to whatever port a tty dialed in on ... if that port happened to be hard wired for the 2741/1052 baud rate ... it wouldn't work correctly)

in any case, that somewhat launched a university effort to build our own telecommunication controller that could do dynamic terminal and line-speed recognition; reverse engineer the ibm channel interface, build our own channel adapter board for a minicomputer, and program the minicomputer to emulate the 2702 (with added capability).

somewhere there is an article blaming four of us for helping spawn the plug compatible controller business.

in turn, the plug compatible controller business is then blamed for helping justify FS project ... which was eventually canceled w/o even being announced
https://www.garlic.com/~lynn/submain.html#futuresys

recent posts covering some of the same topics
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI
Newsgroups: comp.security.misc
Date: Thu, 15 Sep 2005 21:58:43 -0600
"Matt" writes:
I posted a question yesterday and had an overwhelming response that was very helpful.

As for my next question, can you please help me understand the topic of PKI. I understand that it is based on asymmetric key exchange, with Public and private keys. The part that throws me off is why and how the CA is involved.

Does the public key stay with the user and then get verified at a CA or RA somewhere? And what are the differences between the CA and RA? When I go to a website that needs a certificate, am I getting a certificate from that site and having it verified somewhere?? Or is this already done somehow. How do you get a public key from a CA?


RA is registration authority ... which is involved in registering the public key ... this is sort of like registering a pin or a password ... like when you open an account with an ISP and register a password for authentication purposes in association with using that account. a public key is similarly registered and used for performing authentication operations in conjunction with validating digital signatures. recent mention of registration authority
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame

CA is certification authority ... which is involved in certifying information that is to be associated with a public key.

a certification authority typically issues a digital certificate which is a representation of the certification process done by the certification authority.

PKI considered harmful
http://iang.org/ssl/pki_considered_harmful.html#patterns_id_myth

more PKI considered harmful
http://iang.org/ssl/pki_considered_harmful.html

straight forward use of public key and digital signatures for authentication w/o requiring PKI, certification authorities and digital certificates.
https://www.garlic.com/~lynn/subpubkey.html#certless

the pervasive authentication infrastructure in the world tends to be radius. various comments about using public key registration for radius, with digital signatures for the authentication operation (w/o requiring PKI, certification authorities, and/or digital certificates)
https://www.garlic.com/~lynn/subpubkey.html#radius

another widely used and pervasive authentication infrastructure is kerberos ... some number of collected posts on using digital signatures in kerberos environments (again w/o needing PKI, certification authorities, and/or digital certificates)
https://www.garlic.com/~lynn/subpubkey.html#kerberos

related series of recent posts
https://www.garlic.com/~lynn/aadsm21.htm#2 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#5 Is there any future for smartcards?
https://www.garlic.com/~lynn/aadsm21.htm#7 simple (& secure??) PW-based web login
https://www.garlic.com/~lynn/aadsm21.htm#8 simple (& secure??) PW-based web login

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PKI

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PKI
Newsgroups: comp.security.misc
Date: Fri, 16 Sep 2005 08:05:59 -0600
for another popular public-key, non-PKI, certificate-less based infrastructure,
https://www.garlic.com/~lynn/subpubkey.html#certless

where the public key is registered in lieu of pin/password ... besides radius
https://www.garlic.com/~lynn/subpubkey.html#radius

and kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

see also ssh ... there is the comp.security.ssh newsgroup and numerous references that can be found using search engine
http://www.openssh.com/
http://www.ssh.com/

for ietf, rfc references ... see
https://www.garlic.com/~lynn/rfcietff.htm

in the RFCs listed by section, click on Term (term->RFC#) and scroll down to "kerberos"
kerberos
see also authentication , security
4121 4120 3962 3961 3244 3129 2942 2712 2623 1964 1510 1411


... and
remote authentication dial in user service (RADIUS )
see also authentication , network access server , network services
4014 3580 3579 3576 3575 3162 2882 2869 2868 2867 2866 2865 2809 2621 2620 2619 2618 2548 2139 2138 2059 2058


... clicking on the RFC numbers, brings the RFC summary up in the lower frame. clicking on the ".txt=" field in the summary, retrieves the actual RFC.

for IETF standards related to digital certificates and PKI
ITU public key certificate (X.509)
see also International Telecommunications Union , public key infrastructure
4059 4055 4043 3820 3779 3739 3709 3647 3280 3279 3161 3039 3029 2587 2585 2560 2559 2528 2527 2511 2510 2459 1424 1422 1114

public key infrastructure (PKI)
see also authentication , encryption , public key
4108 4059 4056 4055 4051 4050 4043 4034 3874 3851 3820 3779 3778 3770 3741 3739 3709 3653 3647 3562 3447 3379 3354 3335 3281 3280 3279 3278 3275 3174 3163 3161 3156 3126 3125 3110 3076 3075 3039 3029 2986 2985 2943 2931 2898 2847 2807 2803 2802 2797 2726 2693 2692 2587 2585 2560 2559 2537 2536 2535 2528 2527 2511 2510 2459 2440 2437 2404 2403 2385 2315 2314 2313 2311 2202 2154 2137 2085 2082 2065 2025 2015 1991 1864 1852 1828 1810 1751 1544 1424 1423 1422 1421 1321 1320 1319 1186 1115 1114 1113 1040 989


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CRJE and CRBE

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CRJE and CRBE
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 16 Sep 2005 08:13:16 -0600
Terry Sambrooks writes:
Having worked with HASP (no JES2) and pseudo 360/20-25 Work Stations my guess is that CRJE was applicable to the use of Monitors such as the old 2260 and on into 3270 kit, whereas CRBE may well have been a hybrid use of RJE workstations but with some interaction. I seem to recall that DATA 100 equipment like the key-to-disk systems had RJE built in and could be controlled from a monitor rather than having to insert control statements into a physical card reader.

I had added 2741 and tty terminal support to HASP, with edit syntax borrowed from the (cp67) cms editor ... for an early CRJE. recent post
https://www.garlic.com/~lynn/2005p.html#34 What is CRJE

misc. other posts mentioning hasp
https://www.garlic.com/~lynn/submain.html#hasp

and various posts related to working on 360 clone telecommunication controller
https://www.garlic.com/~lynn/submain.html#360pcm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

storage key question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: storage key question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 16 Sep 2005 08:30:53 -0600
"Craddock, Chris" writes:
This one cracks me up. "Virtual Storage for Personal Computing". It still shows up in documentation all over the place but the "product" probably hasn't seen a PSW since before the Reagan administration. Did it even run on XA? I never laid eyes on it when you could actually buy it and that had to be at least 25 years ago.

VSPC was originally going to be called PCO (personal computing option ... term slightly patterned after tso) ... until somebody pointed out PCO was a term already in use in France.

PCO was going to be something of an entry level TSO ... and somewhat setup as competition for CMS in that market segment (alternative was running vm370 on the bare hardware with CMS for the conversational support ... with guest batch virtual machine).

there were various kinds of internal competition going on ... and the PCO development organization had a couple people that had written a PCO performance simulator. The PCO performance simulator would simulate various kinds of interactive workload ... and then got the corporation to insist that the CMS development group run similar types of (real) workload and compare the performance numbers. This ploy sidetracked a significant portion of the CMS development group resources (running real workload benchmarks to be compared against the PCO simulated benchmarks) for several months. For the simulated benchmarks ... the PCO simulator was showing that PCO was two to ten times faster than CMS (real live benchmarks) ... that lasted until they finally got PCO running ... and it turned out PCO was more like 1/10th the performance of CMS (so much for the correctness of the PCO simulator and the people responsible for it).

a big issue with the vm370 strategy was the fixed vm370 kernel size on the small 370s (a real 256kbyte machine with a fixed vm370 kernel of around 110kbytes). this was significantly alleviated moving into the 138/148 timeframe ... where 1mbyte real storage got to be more common. As part of the early ECPS effort for 138/148 ... some early ecps posts
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

there was even an effort to have vm370 preloaded on every shipped machine (somewhat like lpar is part of shipped machines today). The "vm370 preloaded and part of every (at least endicott mid-range) machine" strategy was eventually overruled by corporate people that were constantly claiming vm370 was non-strategic and shouldn't even exist.

misc. past posts mentioning pco and/or vspc:
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2002h.html#51 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2004.html#4 TSS/370 source archive now available
https://www.garlic.com/~lynn/2004g.html#47 PL/? History
https://www.garlic.com/~lynn/2004m.html#54 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#17 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005d.html#74 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What ever happened to Tandem and NonStop OS ?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What ever happened to Tandem and NonStop OS ?
Newsgroups: alt.folklore.computers
Date: Fri, 16 Sep 2005 09:58:26 -0600
echomko_at_@polaris.umuc.edu (Eric Chomko) writes:
I have looked at a few and they appear mundane compared to CISC. RISC sort of makes you want to write primitives right away to get away from the limited set rather than relying on it exclusively as you can do with CISC. I understand that the speeds we have today could not be achieved with CISC-only technology.

i've frequently claimed that the 801/risc project in the 70s
https://www.garlic.com/~lynn/subtopic.html#801

was an adverse reaction to the future system project ... and was looking at doing the exact opposite.

future system was going to be a complete replacement for 360 with an extremely complex hardware architecture. it was killed before ever being announced ... but while it was in progress, consumed an enormous amount of corporate resources ... in addition to diverting/eliminating most work on possible alternatives during the period.
https://www.garlic.com/~lynn/submain.html#futuresys

at the time, there was a long playing cult film down in central sq (it had been playing every night for something like 13 years at the time). i made some unflattering comparisons between the film and the FS project regarding the inmates being in control of the asylum.

801 was supposedly going to make an enormous number of software/hardware trade-offs, drastically simplifying hardware features and implementations. A drastically simplified hardware would also let an extremely complex compiler and software infrastructure do things like maximize performance optimization (i.e. enormous strides in software state-of-the-art and tools would more than compensate for the simplified hardware implementation).

there was an advance technology conference in POK in the 70s, where a group of us presented a design for a 16-way smp 370 ... and the 801 group presented 801, cp.r, pl.8, etc. one of the 801 people objected to the 16-way smp presentation ... claiming to have looked at the standard vm370 product kernel code and found that there were no control blocks with fields that provided support for smp operation. the counter argument was that it was a smop (simple matter of programming) ... you write the code for it ... various past posts about smp support
https://www.garlic.com/~lynn/subtopic.html#smp

and some specific posts about a 5-way smp project involving heavy microcode changes
https://www.garlic.com/~lynn/submain.html#bounce

so when the 801 people presented ... I observed that most of the cp.r and pl.8 code hadn't even been written yet (aka the amount of proposed code to be written for 801 was possibly a couple orders of magnitude larger than the additions to the 16-way smp code that needed to be added to the 370 kernel)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

how password is stored and check the authentication??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how password is stored and check the authentication??
Newsgroups: sci.crypt,comp.lang.c,comp.lang.c++
Date: Fri, 16 Sep 2005 11:34:25 -0600
"FX" writes:
Well, it goes like this. U sign up for email, ur password is taken, operated b one way hash! the hash used may be MD5 (good one). The hashed result is stored into database. Whenever u access ur mail with that passoword, it is operated by the same hash (md5) to get the same result. If ur password is different, the resulted hash would not match, hence access would b denied. Hence password is quiet secured and cannot be retrieved, it can however be reset by some special defined means...

majority of ISPs have an intermediary process involving radius ... that provides authentication, authorization, and accounting management and administration.

small confession ... for a small startup isp in a previous lifetime, i was actually involved in configuring radius for a real livingston box

since then it has become an ietf standard

from
https://www.garlic.com/~lynn/rfcietff.htm

in the RFCs listed by section, click on Term (term->RFC#)

and scroll down to

remote authentication dial in user service (RADIUS )
see also authentication , network access server , network services
4014 3580 3579 3576 3575 3162 2882 2869 2868 2867 2866 2865 2809 2621 2620 2619 2618 2548 2139 2138 2059 2058

clicking on the rfc number brings up the rfc summary in the lower frame (if you are using frames).

clicking on the ".txt=nnn" field in a rfc summeary, retrieves the actual RFC.

radius tends to support a number of different authentication methods; for instance, if you configure PPP on your personal machine for use with an ISP ... you may be presented with 3-4 different options ... which include clear-text transfer of a password ... but also stuff like CHAP (challenge response).
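
a sketch of the CHAP computation (per the rfc 1994 formula: md5 over identifier, shared secret, and challenge ... the password itself never crosses the wire, only a hash of it combined with a fresh challenge):

import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # rfc 1994: MD5 over identifier + shared secret + challenge
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"       # known to both client and server
challenge = os.urandom(16)      # fresh per attempt; prevents replay
ident = 1

response = chap_response(ident, secret, challenge)          # client sends this
assert response == chap_response(ident, secret, challenge)  # server recomputes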

there have even been some number of radius versions done where a public key is registered in lieu of a password and the client performs a digital signature operation ... with the server performing digital signature validation using the on-file public key.

besides ISPs using radius for login, email authentication, newsgroup authentication, etc. ... there are also major applications (like some of the database systems and web servers) providing radius interfaces for performing authentication operations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

how password is stored and check the authentication??

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: how password is stored and check the authentication??
Newsgroups: sci.crypt
Date: Fri, 16 Sep 2005 12:59:23 -0600
Mike Amling writes:
Email passwords are not secure. Standard POP3 sends the password itself, without encryption. Any sniffer at any hop can retrieve it.

some number of ISPs support ssl/tls connections for authenticated connections to 1) pop3, 2) smtp, and 3) nntp (newsgroups).

almost all email/news clients support the following options/defaults:

encrypted ssl pop3 port defaults to 995

encrypted ssl smtp port defaults to 465

encrypted tls smtp port default is to also use port 25 (standard non-encrypted smtp port)

encrypted ssl nntp port defaults to 563

all of the above can use user/password authentication transmission thru an encrypted session.
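
a minimal sketch against those defaults using python's standard library (host names and credentials are placeholders; module availability varies by python version):

import poplib, smtplib, nntplib

pop = poplib.POP3_SSL("pop.example.net", 995)    # ssl pop3
pop.user("userid"); pop.pass_("password")        # password travels inside
pop.quit()                                       # the encrypted session

smtp = smtplib.SMTP_SSL("smtp.example.net", 465) # ssl smtp
smtp.login("userid", "password")
smtp.quit()

smtp25 = smtplib.SMTP("smtp.example.net", 25)    # tls on the standard port:
smtp25.starttls()                                # upgrade, then authenticate
smtp25.login("userid", "password")
smtp25.quit()

news = nntplib.NNTP_SSL("news.example.net", 563) # ssl nntp
news.login("userid", "password")
news.quit()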

I do have a niggling problem with gnuemacs gnus news client use of openssl on port 563 ... it appears that either openssl or gnus is dropping an interrupt someplace and hanging gnuemacs ... but only when posting ... not when reading. the post actually gets done ... but I have to kill the openssl process twice ... to get gnus back ... which is just about to happen in a second right after i hit ctl-c-c to post this.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

VMFPLC2 to load EREP PTFs

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VMFPLC2 to load EREP PTFs
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 17 Sep 2005 00:50:30 -0600
Ranga Nathan writes:
I downloaded the EREP PTF ( UV60895 ) from IBM and am trying to apply the PTF. According to the EREP instructions, I need to load the ERPTFLIB TLBXXXXX files from the detersed file to disk A using the VMFPLC2 command. However this command seems to work only with tape (181) but we do not have a tape drive and the file is on a minidisk. Can someone suggest a solution?

some topic drift, old post on vmfplc2 format (originally from very long ago and far away, but after the edf filesystem shipped):
https://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format
https://www.garlic.com/~lynn/2003b.html#43 VMFPLC2 tape format

it was somewhat a modified tape dump format ... that could do 4k physical block writes.

i was doing a backup/archive system for internal deployment and initially made some modifications (to vmfplc2). part of this was to get better utilization out of 6250bpi tapes (fewer inter-record gaps). one change was to allow a larger physical tape block size (for large files). the other was to combine the FST physical record and the first block of the file into a single physical record. for small files that meant going from a minimum of two physical records and two inter-record gaps to a single physical record and a single inter-record gap.
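
back-of-envelope arithmetic for the small-file change (the 0.3-inch inter-record gap and the record sizes are assumed figures for illustration only):

density = 6250.0          # bytes per inch (9-track, 6250bpi)
gap = 0.3                 # assumed inter-record gap, inches
fst, block = 64, 4096     # assumed FST record and data block sizes

before = fst/density + gap + block/density + gap   # two records, two gaps
after = (fst + block)/density + gap                # combined record, one gap
print(f"{before:.2f}in -> {after:.2f}in per small file "
      f"({1 - after/before:.0%} less tape)")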

I also made some minor changes to make sure that buffers and transfers were page aligned ... which significantly improved performance when it was dealing with the cms page mapped filesystem stuff that I had done
https://www.garlic.com/~lynn/submain.html#mmap

there was a bunch of stuff keeping track of what had been archived, being able to do incremental stuff, keeping track of tapes. i even wrote emulated standard label processing to help manage the tapes.

this went thru a couple of internal releases and then morphed into workstation datasave facility product, which morphed into adsm, which then morphed into the current tsm. misc. collected past postings
https://www.garlic.com/~lynn/submain.html#backup

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Security of Secret Algorithm encruption

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security of Secret Algorithm encruption
Newsgroups: sci.crypt
Date: Sat, 17 Sep 2005 14:24:23 -0600
"William L. Bahn" writes:
What I'm curious about is, under those conditions, how complicated the algorithm would need to be in order to have a good chance at defeating a concerted effort to break the cipher. In other words, how difficult is it to attack an arbitrary and unknown algorithm? What kind of attacks would be used?

secret algorithms can be considered from the standpoint of defense-in-depth ... where effort for attackers is additive based on the depth of defense and assuming straight-forward frontal attack.

the counter argument is that multiple security layers tend to add complexity ... making it more difficult for the defenders to correctly maintain (or even understand) them at all times (KISS) and even creating cracks that attackers can use for compromise (again KISS).

there was a story earlier about an encryption device using a secret algorithm that was supposed to be widely deployed ... however the units were classified and the local security officers had to sign away their futures; so the security officers kept the units locked in vaults with extremely limited access. as a result, the units weren't being used for their intended purpose (in part because the loss of a unit was perceived as an enormous threat).

a corollary is that there tend to be scale-up and long-term deployment issues with secret-based operations. if nothing else, for infrastructures dependent on secrets, it is convenient when the secrets are easily changed in the face of possible compromise ... and secret algorithms tend to be much harder to replace/update than keys after a compromise.

so the threat model is not only how difficult frontal attacks are ... but also, given some compromise by any method ... how difficult the remediation is.
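
a toy rendition of that trade-off (all the numbers are invented, purely for illustration) ... in python:

# frontal-attack effort adds across defense layers ... but each
# secret also carries a remediation cost when compromised
layers = [
    {"name": "secret algorithm", "attack_effort": 50, "replace_cost": 100},
    {"name": "secret key",       "attack_effort": 40, "replace_cost": 1},
]

frontal = sum(l["attack_effort"] for l in layers)
print(f"total frontal-attack effort: {frontal}")

for l in layers:
    print(f"{l['name']}: remediation cost {l['replace_cost']}")

# keys are cheap to rotate after a compromise; a widely deployed
# secret algorithm is not ... the other half of the threat model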

sometimes strict technology orientation is myopic and focused on single point events. long term institutional issues frequently have to also consider long term (and continued) operational characteristics spanning a large number of possible different events.

and at least for human element and operational characteristics ... KISS.

long ago and far away ... there once was a corporate ruling that passwords had to be of a certain complexity and changed at frequent intervals. a parody was done ... minor reference.
https://www.garlic.com/~lynn/2001d.html#51 A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#52 A beautiful morning in AFM.

as a separate issue, i asserted that key-based physical locks were more vulnerable to brute-force attacks than passwords ... and that if frequency of change was a valid countermeasure ... then all physical locks should be rekeyed every couple hrs ... and new physical keys issued to everybody.
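
the back-of-envelope arithmetic behind the assertion (parameters are assumptions ... a common 5-pin lock with roughly 8 usable depths per pin, vs. an 8-character password over a 62-character alphabet):

# keyspace of a typical pin-tumbler lock vs. a password
lock_keyspace = 8 ** 5          # 5 pins, ~8 depths each: 32,768 keys
password_keyspace = 62 ** 8     # 8 chars, upper/lower/digits

print(f"lock:     {lock_keyspace:,}")
print(f"password: {password_keyspace:,}")
print(f"ratio:    {password_keyspace / lock_keyspace:.1e}")

# by the change-frequency logic, the far smaller lock keyspace would
# call for rekeying locks far more often than changing passwords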

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

hasp, jes, rasp, aspen, gold

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hasp, jes, rasp, aspen, gold
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 17 Sep 2005 23:29:52 -0600
n4sx@ibm-main.lst (Allan Scherr) writes:
VS1's spooler was JES. HASP was added to VS2 as JES2 and ASP was added as JES3. I recall that Tom Simpson, the creator of HASP, was pissed that his product wasn't called JES1.

crabtree and some of the others went to the "jes" gburg group and did jes2. my wife was in the group for a while ... before being con'ed into going to pok to be responsible for loosely-coupled architecture. while in the jes group, she was one of the catchers for ASP ... for the transition to JES3. misc. collected hasp postings
https://www.garlic.com/~lynn/submain.html#hasp

in pok, she did Peer-Coupled Shared Data architecture ... which didn't see a lot of uptake until parallel sysplex ... except possibly for some of the ims hot-standby work.
https://www.garlic.com/~lynn/submain.html#shareddata

simpson went to harrison and did RASP ... which was somewhat of a page-mapped operating system (say, a cross between mts and tss/360) ... as opposed to VS2, which was somewhat MVT running in a virtual address space. RASP never got announced ... and simpson left and joined Amdahl, setting up operation in dallas ... where he did a reborn RASP. there was some litigation over this, which resulted in some experts doing code examination, looking for contamination from the original RASP code.

when gold was going on in sunnyvale ... i tried to suggest that gold might be layered on top of the effort in dallas (somewhat akin to the 370 unix implementation done for AT&T on top of a stripped-down tss/370 kernel). there seemed to be some amount of contention going on between the sunnyvale gold people and the dallas group.

a minor aspen reference found with search engine about being Amdahl's future operating system
http://www.cics-mq.com/CICSIntro.htm

misc. past posts mentioning either gold &/or RASP
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005m.html#4 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005m.html#7 [newbie] Ancient version of Unix under vm/370

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

HASP/ASP JES/JES2/JES3

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HASP/ASP JES/JES2/JES3
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 18 Sep 2005 10:47:42 -0600
Allan Scherr writes:
JES was introduced with release 1 of VS1 and was its only spooler. JES2 and JES3 were introduced in Release 2 of VS2 (MVS). Release 1 of VS2 (SVS- Single Virtual Storage) still used the old OS/360 Reader and Writer as the "Type I" support. HASP and ASP were available, as they had been, as Type II support. The two first releases of VS1 and VS2 were announced when S/370 hardware was announced and were available almost at the same time. For the MVS release, which occurred about 15 months later, the HASP and ASP groups were drafted into the product development division. They were generally upset about being subjected to the discipline of the more formal development process we used. My recollection was that HASP/JES2 had the highest bug count of any MVS component during System Test (and after).

Does anyone remember where the term spooler comes from? SPOOL = simultaneous peripheral operations on line. The OS/360 Reader-Writer was the first multiprogramming capability that was released.


previous posts
https://www.garlic.com/~lynn/2005p.html#34 What is CRJE
https://www.garlic.com/~lynn/2005p.html#37 CRJE and CRBE
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold

past posts mentioning hasp
https://www.garlic.com/~lynn/submain.html#hasp

370 was initially announced w/o virtual memory and address relocate. basically, the 360 genre of operating systems continued to run. the low-end and mid-range machines tended to get virtual memory via microcode upgrades ... however, real hardware field upgrades had to be purchased and installed on customer 370/155s and 370/165s (and weren't cheap).

the full 370 architecture was in the 370 architecture "redbook" ... distributed out of the architecture group in pok in a dark red 3-ring binder. early on this was in cms script ... and the principles of operation was a subset ... depending on the option given at command invocation, you printed either the full architecture book or the principles of operation subset.
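
conceptually something like the following (the tags and structure are purely my invention for illustration ... the real cms script source used its own conditional controls):

# one master source file, printed two different ways
master = [
    ("pub",  "principles of operation text ..."),
    ("arch", "architecture-only notes, justifications, timings ..."),
    ("pub",  "more principles of operation text ..."),
]

def render(full):
    # full=True prints everything (the architecture redbook);
    # full=False prints only the public subset (principles of operation)
    return "\n".join(text for tag, text in master if full or tag == "pub")

print(render(full=True))     # the full architecture redbook
print(render(full=False))    # the principles of operation subset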

the hardware upgrades to the 155/165 were fairly expensive & extensive. i remember a virtual memory hardware resolution meeting in pok ... the 165 engineers wanted to drop some of the features in the 370 virtual memory architecture (specifically, the selective invalidate instructions). their position was that doing the selective invalidate instructions would add six months elapsed time to the engineering work, delaying the corporate announcement of virtual memory for 370 by that much. the batch virtual memory operating systems (primarily vs2) said they didn't care ... vs2 would never, ever do more than five page i/os per second ... and the page replacement algorithm would batch those page replacement selections approx. once per second. the difference between a single full-TLB purge once per second and five selective invalidates once per second was negligible.
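
a toy version of the negligible-difference argument (all the cycle counts are made up for illustration, not 370/165 measurements):

TLB_ENTRIES = 128            # assumed TLB size
REFILL_COST = 15             # assumed cycles to reload one entry
SELECTIVE_COST = 50          # assumed cycles per selective invalidate
REPLACEMENTS_PER_SEC = 5     # vs2's stated page replacement ceiling

# full TLB purge once per second: every entry eventually reloaded
full_purge = TLB_ENTRIES * REFILL_COST
# selective: one invalidate plus one refill per replaced page
selective = REPLACEMENTS_PER_SEC * (SELECTIVE_COST + REFILL_COST)

print(f"full purge: ~{full_purge} cycles/sec")
print(f"selective:  ~{selective} cycles/sec")
# either way it is noise on a processor executing on the order of a
# million-plus instructions per second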

i also remember getting 3rd shift in the pok machine room for some work ... ludlow(?) and some others were working on the prototype for SVS ... basically they borrowed ccwtrans from cp67 and cribbed it together with some low-level stub code to lay mvt out in a 16mbyte virtual address space (ccwtrans is what handled translating ccws from virtual addresses to real addresses, pinning the required virtual pages in real memory for the duration of the i/o, etc). they were doing the initial work on a 360/67.
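
a minimal sketch of the kind of thing ccwtrans had to do (the data structures here are hypothetical stand-ins ... the real code was assembler working on real channel programs):

PAGE = 4096

def translate_ccw_program(ccws, page_table, pin):
    # ccws: list of (opcode, virtual_addr, count) tuples
    # page_table: maps virtual page number -> real page number
    # pin: callback that keeps a real page resident for the i/o
    real_ccws = []
    for op, vaddr, count in ccws:
        # a transfer can cross page boundaries; split it so each
        # piece falls entirely within one (pinned) real page
        while count > 0:
            offset = vaddr % PAGE
            chunk = min(count, PAGE - offset)
            rpage = page_table[vaddr // PAGE]   # fault it in if absent
            pin(rpage)
            real_ccws.append((op, rpage * PAGE + offset, chunk))
            vaddr += chunk
            count -= chunk
    return real_ccws

(the real thing also had to handle data chaining, command chaining, self-modifying channel programs, unpinning at i/o completion, etc.)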

reference to old time-line
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033

from that time-line (columns are announce date, first-ship date, and months from announce to ship):


                  ANN.  SHIP  MO.
IBM S/370 ARCH.   70-06 71-02 08   EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-145     70-09 71-08 11   MEDIUM S/370 - BIPOLAR MEMORY - VS READY
IBM S/370-155     70-06 71-01 08   LARGE S/370
IBM S/370-165     70-06 71-04 10   VERY LARGE S/370
IBM S/370.VS      72-08 73-08 12   VIRTUAL STORAGE ARCHITECTURE FOR S/370
IBM VM/370        72-08 72+??      MULTIPLE VIRTUAL MACHINES (LIKE CP/67)
IBM OS/VS2(SVS)   72-08 72+??      VIRTUAL STORAGE VERSION OF OS/MVT
IBM OS/VS2(MVS)   72-08 74-08      MULTIPLE VIRTUAL ADDRESS SPACES
IBM MVS-JES3      72+?? 75-10      LOOSE-COUPLED MP (ASP-LIKE)
IBM MVS-JES2      72-?? 72-08      JOB-ENTRY SUBSYSTEM 2 (HASP-LIKE)

...

some issue with the above ship dates ... the virtual memory operating systems (vm-72??, svs-72??) wouldn't have shipped before virtual storage hardware was available (73-08??) ... note however, on the 370/145 virtual memory was a simple re-impl via a different (microcode) floppy ... the 155/165 required extensive hardware field upgrades to add virtual memory support.

another reference giving some vs1, vs2, jes timeline
http://www.os390-mvs.freesurf.fr/mvshist2.htm

a hasp/jes2 time-line from the jes2 discussion group archives
http://listserv.biu.ac.il/cgi-bin/wa?A2=ind9204&L=jes2-l&T=0&P=352

note in the above, it lists 8/69 as the date of HASP II V.2 with BSC RJE (for early crje, i had also gotten tty & 2741 terminal support running in HASP, along with an editor whose syntax was borrowed from the cms editor).

another history of hasp & jes2 (share presentation)
http://www.redbug.org/dba/sharerpt/share79/o441.html

note the above mentions the first SHARE HASP project meeting was at Houston in March 1968. Three people from Cambridge had come out and installed cp/67 at the university in Jan. of 1968. I then got to attend the Mar68 SHARE meeting in Houston where CP/67 was announced.

the above also has following mention of FS:
You may recall a not-so-secret project that IBM was working in the early '70s. The "Future System" (FS) project was to be a successor to the S/370. When AOS1 and AOS2 came online, the FS project was dragging on and on. IBM management made the decision to develop MVS as a sort of stopgap measure to keep OS/VS alive until FS came up.

(FS eventually was given up too, but many of its ideas and concepts formed the nucleus for the System 38.)

The HASP team was reassembled for MVS in a "burned out A&P" storefront. There they put together the first cut of JES2 (its new name). The machine they tested it on was an S/370 model 162: yet another model that never officially existed outside of IBM. The 162 was a model 165 that had been refitted for FS.

JES2 multiaccess SPOOL was preceded by similar mods written at NIH and Mellon. Likewise, NJE was preceded by user written modifications at the University of Iowa and Triangle Universities Computation Center.

When FS was buried, one of the designers was assigned to the JES2 team, where he authored HASPSNA -- the SNA support for JES2. (This was way back when SNA was first being promulgated, and was known as "Single Network Architecture" instead of "Systems Network Architecture".) This same guy (one of the speakers at this session) told a horrifying war story: once upon a time an errant 3330 selector channel at a customer location rewrote the entire JES2 checkpoint dataset, shifting its contents left by one bit! He remembers spending one very long night repunching the checkpoint dataset from a dump.


... snip ...

other collected FS postings
https://www.garlic.com/~lynn/submain.html#futuresys

Before the announcement of 370 virtual memory ... there was an incident where a copy of a corporate document discussing virtual memory leaked. an investigation somewhat narrowed the leaked copy down to someplace in the northeast. after that, all corporate copying machines were retrofitted with unique small serial numbers under the glass ... which would show up on all copies made from the machine. also, all the 370/145s shipped had an "xlate" label for one of the psw bits on the front light rollers.

as mentioned in prevous post
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
... my wife did a stint in the gburg jes group and was one of the "catchers" for ASP ... for turning it into JES3 ... this was before she was con'ed into going to pok to be in charge of loosely-coupled architecture.
https://www.garlic.com/~lynn/submain.html#shareddata

another project that went on before 370 virtual memory announcement was joint project between endicott and cambridge
https://www.garlic.com/~lynn/subtopic.html#545tech

to modify a cp/67 kernel to provide 370 virtual machines (i.e. support the 370 virtual memory architecture). cp/67 already provided 360 virtual machines, including 360/67-architecture virtual memory. however, the 370 definition had different control register and table formats and some new instructions. the cp/67 kernel modifications had to simulate the new instructions and, when translating virtual tables to shadow tables, convert from 370 format to 360 format.
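
conceptually, the shadow-table maintenance looked something like the following (invented data structures, purely for illustration):

def build_shadow(guest_tables, host_tables):
    # guest_tables: guest-virtual page -> guest-real page
    #               (the 370-format tables built inside the guest)
    # host_tables:  guest-real page -> host-real page
    #               (the real kernel's own mapping of the guest)
    # result: guest-virtual page -> host-real page ... the shadow
    #         tables, in the 360 format the real hardware used
    shadow = {}
    for gv, gr in guest_tables.items():
        if gr in host_tables:          # guest-real page is resident
            shadow[gv] = host_tables[gr]
        # non-resident entries stay invalid; the resulting fault is
        # fielded by the real kernel, which then refreshes the shadow
    return shadow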

there was also a security issue. most of the work went on in cambridge on the cambridge cp/67 service. however, this service also had some number of non-employee users, including students from colleges & univ. around the cambridge area. as a result, rather than modifying the production kernel that ran on the real machine ... the kernel modifications were all done on a cp/67 kernel that was kept isolated in a virtual machine (provided by the "real" kernel running on the "real" machine). for some topic drift ... this security isolation characteristic was also used by a number of commercial time-sharing service bureaus (and, rumored, also some number of gov. agencies)
https://www.garlic.com/~lynn/submain.html#timeshare

once that was operational, another set of changes was made to the cp/67 kernel so that it used the 370 table definitions and instructions. the resulting environment was somewhat:


cambridge real 360/67
   cp/67-l kernel running on the real machine
      cp/67-h kernel running in a 360 virtual machine, providing 370 virtual machines
         cp/67-i kernel running in a 370 virtual machine
            cms running in a 370 virtual machine

this was operational a year before the first 370/145 engineering model in endicott was working. for early 370/145 engineering validation, a copy of the cp/67-i kernel was taken to endicott and booted on the real machine ... and immediately failed. after some investigation, it turned out the engineers had (incorrectly) reversed two of the new B2xx opcodes. the cp/67-i kernel was patched to correspond with the (incorrect) hardware implementation and then things appeared to run fine.

the "cp/67-i", "cp/67-h", and "cp/67-i" effort was the first major use of the cms multi-level source update mechanism.

the original cms update command took a single "UPDATE" file and applied it to a single source file, using sequence numbers in cols. 73-80. the UPDATE file's sequence numbers had to be manually typed. the resulting (temporary) updated file was then compiled/assembled. periodically, updates were applied permanently by replacing the original source file with the updated temporary file (which then may or may not be resequenced).

because I was generating such a large amount of source code changes as an undergraduate ... I created a pre-processor for the update command which would generate the sequence numbers for new/replaced update statements (before passing the result to the UPDATE command). my process used slightly modified update control statements, flagged with a "$", that the pre-processor recognized for automatically generating sequence numbers in a temporary update file before it was passed to the update command.

somewhat as part of the joint endicott/cambridge effort for cp/67-h and cp/67-i kernels ... there was the multi-level update effort.

basically, the update command processing was enhanced to iteratively apply a series of different updates to the source file before assembling.
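
a toy rendition of the single-level/multi-level distinction (the control statements are drastically simplified from the real update syntax):

def apply_update(source, update):
    # source: list of (seqno, text) in ascending seqno order
    # update: simplified controls:
    #   ('D', lo, hi)        delete lines with lo <= seqno <= hi
    #   ('I', after, lines)  insert lines after seqno 'after'
    # (a replace is just a delete followed by an insert)
    out = list(source)
    for ctl in update:
        if ctl[0] == 'D':
            _, lo, hi = ctl
            out = [(n, t) for n, t in out if not (lo <= n <= hi)]
        else:
            _, after, lines = ctl
            idx = 0 if after == 0 else 1 + next(
                i for i, (n, _) in enumerate(out) if n == after)
            # auto-number the inserted lines, much as the "$"
            # pre-processor generated the cols. 73-80 sequence numbers
            out[idx:idx] = [(after + (i + 1) / 1000, t)
                            for i, t in enumerate(lines)]
    return out

def multi_level_update(source, updates):
    # the multi-level enhancement: iterate the single-level update
    # over an ordered list of update files before assembling
    for u in updates:
        source = apply_update(source, u)
    return source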

this further evolved in vm/370 and was shipped with the vm/370 product ... as part of the customer source-level maintenance infrastructure ... aka monthly "PLC" (program level change) tapes were shipped with source-level maintenance (along with binary images, for customers not wanting to do a source-level rebuild of their system).

there is folklore that this gave the jes2 group some amount of trouble. supposedly the jes2 group did all of their development and source code maintenance under cms, using the cms multi-level update process. however, for mvs product release ... the cms source updates had to be entered into the mvs-based source control system before generating the official product for customer ship (which was not a source-level maint process ... customers got the resulting assembled and/or compiled executables). i heard rumors that getting the straight-forward cms multi-level updates into the mvs-based source control system gave the group fits.

part of this came out of the history of hasp also being shipped with full source ... generating a hasp system required assembling the full source.

a few past posts discussing cms multi-level update procedure, and/or source maint/control systems:
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/99.html#180 The Watsons vs Bill Gates? (PC hardware design)
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#43 Timesharing TOPS-10 vs. VAX/VMS "task based timesharing"
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#77 unix
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005c.html#67 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005o.html#46 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
