List of Archived Posts

2005 Newsgroup Postings (04/05 - 04/22)

IBM Password fun
System/360; Hardwired vs. Microcoded
Mozilla v Firefox
Mozilla v Firefox
System/360; Hardwired vs. Microcoded
System/360; Hardwired vs. Microcoded
Where should the type information be: in tags and descriptors
new Enterprise Architecture online user group
Mozilla v Firefox
TLS-certificates and interoperability-issues sendmail/Exchange/postfix
Where should the type information be: in tags and descriptors
Mozilla v Firefox
Mozilla v Firefox
ISA vs. patent/trademark
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Mozilla v Firefox
Is Supercomputing Possible?
Where should the type information be: in tags and descriptors
Some questions on smart cards (Software licensing using smart cards)
Where should the type information be: in tags and descriptors
System/360; Hardwired vs. Microcoded
the relational model of data objects *and* program objects
System/360; Hardwired vs. Microcoded
Where should the type information be: in tags and descriptors
the relational model of data objects *and* program objects
the relational model of data objects *and* program objects
Where should the type information be: in tags and descriptors
Where should the type information be?
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
the relational model of data objects *and* program objects
Where should the type information be: in tags and descriptors
[Lit.] Buffer overruns
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
[Lit.] Buffer overruns
OTP (One-Time Pad Generator Program) and MD5 signature
Moving assembler programs above the line
Moving assembler programs above the line
Moving assembler programs above the line
DNS Name Caching
Moving assembler programs above the line
Moving assembler programs above the line
Moving assembler programs above the line
1403 printers
Moving assembler programs above the line
Where should the type information be?
1403 printers
1403 printers
Where should the type information be: in tags and descriptors
1403 printers
What is the "name" of a system?
1401-S, 1470 "last gasp" computers?
Moving assembler programs above the line
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
Where should the type information be: in tags and descriptors
single-signon with X.509 certificates
Moving assembler programs above the line

IBM Password fun

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Password fun
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 05 Apr 2005 10:09:28 -0600
edgould@ibm-main.lst (Ed Gould) writes:
Hi,

Two days ago I complained on here about IBM's insistence on "strong" password silliness for a subscription to their HOT TOPICS bulletin.

I sent in a *NICE* email and asked them to revisit the password issue. I got back the following reply: ------------------------------------------------------

The site you have mentioned is being closed down at the end of May. It will then be converted to another site
https://www-1.ibm.com/support/mysupport/us/en/
This site runs on what is called an IBM ID. This ID is used on most of IBM's sites. If you already have one then you can use that here as well. It works the same way as the site you have linked to below.


ref. to password rules from long ago and far away
https://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#53 April Fools Day

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 05 Apr 2005 11:37:46 -0600
"Lee Courtney" writes:
At HP we went through what I believe was a similar exercise with the HP 3000 Series 64. When VISION was delayed in lieu of SPECTRUM, the 3000 product line was in a bad place with no imminent performance upgrade in the pipeline. As a stop-gap measure we shipped disc-caching as the Series 68 and Series 70. When the customer upgraded they got a new face-plate (64-68) and a microcode tape. I believe the 70 included additional memory (important for disc caching). But many customers were underwhelmed with the amount of change they saw for the dollars they paid. However, it was a great product with lots of customer benefit that really saved HP's bacon at the time.

Sounds like the 168 was a field upgrade from the 165. Was the original 165 designed to accommodate virtual memory, or was VM an after-thought?

On a related note I am assuming that the 158 did not share any lineage with the 155? That the 155 was more a backwards leaning 360'ish machine than forward leaning 370 architecture machine. The timing of the 155/165 relative to the 158/168 announcements might lead one to believe there was some linkage between the systems.

Thanks!

Lee Courtney


158/168 were new technology ... compared to 155/165 ... especially the faster real memory technology. however, i believe the architecture of the native engines was the same, so the same microcode could work.

155/165 weren't designed for virtual memory and required extensive hardware retrofit to install it in existing boxes.

in the early 70s ... most of my POK visits involved 370 architecture meetings ... and didn't run into many cpu engineers ... other than who might show up in such meetings.

in the mid/late 70s ... i got involved with the cpu engineers working on the 3033 ... i was even invited to their weekly after work events, if i happened to be in town (honorary cpu engineer?).

part of the issue was that 370 (non-virtual memory) was just an interim/stop-gap. the real follow-on was going to be FS (future system). huge resources were poured into FS ... and it was finally canceled w/o ever being announced (few were even aware of it).
https://www.garlic.com/~lynn/submain.html#futuresys

as noted in previous posts on FS ... i didn't make myself very popular with the FS crowd ... somewhat panning their effort (there was this long-playing cult film down in central sq ... and i made the analogy between FS and the inmates being in charge of the asylum).

as in other references to FS, FS was supposedly spawned by the emergence of the 360 controller clone market ... example:
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

... aka the above reference mentions that 2500 people were initially assigned to the project designing FS.

I worked on a project as an undergraduate that created a clone telecommunication controller (reverse engineering the ibm channel, adapting an interdata/3, etc)
https://www.garlic.com/~lynn/submain.html#360pcm

there later was a write up blaming the four of us for spawning the controller clone business.

FS may have then contributed to creating the 370 clone mainframe market. Amdahl may have left to form his own 370 company in disagreement over the FS direction/strategy (as opposed to building bigger, faster 370s).
https://www.garlic.com/~lynn/2005e.html#29 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005e.html#35 Thou shalt have no other gods before the ANSI C standard

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Tue, 05 Apr 2005 13:03:29 -0600
"Phillip M. Jones, C.E.T" writes:
Ahh! but that is only a weak excuse. As you can pickup a brand new windoze keyboard at OfficeMax for 30 bucks or less.

Now on a Macintosh that's another story. Mac keyboards can only be found at Apple stores, Apple-sanctioned outlets (CompUSA being an example), or by catalog only. Places like OfficeMax, Staples, and Office Depot just don't carry them.

I use a MacAlley iKey USB extended keyboard I bought when I purchased my G4-500.

I use compressed air to blow it out every so often. But key feel, and ability to type all the characters are as good now as then. My problem is my Hunt and Peck typing system.

I was brought up in an era when if men/boys were caught in typing class they were considered "gay", because typing was a women's-only profession.

Boys were expected to do vocational training, (Car repair, metalworking, Woodworking, Electronics).

Also, if girls were caught taking VoTech training they also were considered "gay" as well. (Note I am using the modern term, then they used the 50/60ites term which was not as endearing).

In any event I never took typing. And by the time computers rolled around I was too old and had too much arthritis to take the classes. So it's amazing that I spell as well as I do.


i was greasing trucks, tractors, and plows and driving a flatbed truck the summer i turned 9; the starter motor was a pedal you pushed on the floor; most of the vehicles didn't have synchro-mesh ... you had to double-clutch. one of the trucks and some of the tractors didn't even have starter motors, just the old-fashioned hand crank at the front. one of the things i still don't think i've figured out is how you get fence posts in a perfectly straight line ... even when you have stretched the top wire as a guide (mine were always off a couple inches one way or another).

when i was about 11-12, i found a 30s-era typewriter and an instructional typing manual also published in the 30s ... and taught myself how to type. i got up to about 35-40wpm.

it's a laptop ... nearly 3-month-old vaio a290 with a 17in screen (hard disk completely wiped with a fresh FC3 install). i called sony ... but they said take it into the store i bought it from.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Tue, 05 Apr 2005 17:41:30 -0600
Anne & Lynn Wheeler writes:
still don't think i've figured out is how you get fence posts in a perfectly straight line ... even when you have stretched the top wire as a guide (mine were always off a couple inches one way or another).

something i learned how to use when i was a kid:
http://website.lineone.net/~dave.cushman/fencepliers.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 06 Apr 2005 09:27:32 -0600
mschaef@fnord.io.com (MSCHAEF.COM) writes:
Can anybody talk a little more about what made the 195 special?

an unrelated 195 story ... the 195 had 64 instructions in the pipeline and no branch prediction ... a branch (that wasn't to an instruction already in the pipeline ... aka a loop) would cause the pipeline to drain. unless it was very specialized looping code ... a normal instruction stream ran the 195 at about half peak ... because of the frequency of branches.

i got somewhat involved when the 195 product engineers were looking at adding two-processor 195 support ... actually the original hyperthreading (something intel processors have recently gone thru a phase of); basically a second set of registers and psw, and the pipeline would have a red/black flag ... indicating which instruction stream stuff was associated with. it looked like SMP to software ... but to the hardware it was hyperthreading two instruction streams (dual i-stream). the idea was that if avg. software only kept the pipeline half full because of branches ... two instruction streams had a chance of keeping the pipeline full ... and achieving peak instruction processing rates (these machines had no caches ... and so were much more sensitive to memory latencies).

however, it was never announced or shipped.
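a toy issue-slot model of the dual i-stream idea ... the numbers (a branch every 8 instructions, an 8-cycle refill) are made up to match "about half peak", not actual 195 timings:

```python
# toy model: with no branch prediction, every branch stalls issue
# while the pipeline refills; a second instruction stream can issue
# during the other stream's refill, hiding the stall cycles.
# branch_interval and refill are illustrative numbers, not 195 specs.

def utilization(streams, branch_interval=8, refill=8):
    """Fraction of issue slots kept busy."""
    cycles_per_branch = branch_interval + refill
    single = branch_interval / cycles_per_branch
    # n interleaved streams each demand `single`; the pipeline can
    # still only issue one instruction per cycle, so cap at 1.0
    return min(1.0, streams * single)

print(utilization(1))  # one stream: about half peak
print(utilization(2))  # two streams: a chance at full peak
```

with these made-up numbers, a single stream issues 8 instructions then waits 8 cycles (50% busy), while two streams overlap each other's refills and keep the pipeline full.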

sjr/bld28 still had a 195 in the late 70s ... running heavy batch workload. palo alto science center had an application that they submitted ... but because of the processing queue ... it only got run about once every three months. pasc finally did some work on the application for checkpointing and running under cms batch in the background on their 370/145 vm/cms system. it would soak up spare cycles on the machine offshift and on weekends. the elapsed time was slightly better than the 3-month turnaround on the sjr 195 (because of the long work queue).

gpd was also running an application on the 195 that was getting excessively long turnarounds ... air bearing simulation ... for the design of the new disk floating heads.

we had done this work over in the disk engineering lab (bldg 14) and product test lab (bldg 15) so that they could run under an operating system environment. prior to that, they were running dedicated stand-alone ... they had tried MVS at one time and were getting about 15-minute MTBF when running a single test cell. bullet-proofing the operating system I/O subsystem allowed them to operate a half-dozen or so test cells concurrently w/o failing.
https://www.garlic.com/~lynn/subtopic.html#disk

in any case, the disk product test lab in bldg. 15 tended to get something like the 3rd machine of a model ... after the first two that the cpu engineers were testing with. because of the operating system work for the disk guys ... i would have access to these new machines. when endicott was building the 4341 ... the endicott performance people asked me to do benchmarks on the bldg. 15 4341 (because i had better access to a 4341 than they did in endicott).

anyway, bldg. 15 also got an early 3033. the 3033 ran at about half the speed of 195 peak ... but about the same as the 195 for most normal workloads. while the disk regression tests in bldg. 15 were i/o intensive ... they barely blipped the cpu meter. so one of the things we thought would be useful was to get the air bearing simulation application up and running on the bldg. 15 3033 ... where it could soak up almost the complete cpu with little competition ... much better than a couple of turnarounds a month on the 195 across the street in sjr/28.

minor past posts mentioning the air bearing simulation application:
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns

some past posts mentioning 195
https://www.garlic.com/~lynn/2000c.html#38 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000f.html#13 Airspeed Semantics, was: not quite an sr-71, was: Re: jet in IBM ad?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001c.html#27 Massive windows waisting time (was Re: StarOffice for free)
https://www.garlic.com/~lynn/2001h.html#49 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#38 Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#41 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001n.html#86 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#23 System/360 shortcuts
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#59 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#44 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#37 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003f.html#33 PDP10 and RISC
https://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box??
https://www.garlic.com/~lynn/2003h.html#47 Segments, capabilities, buffer overrun attacks
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003j.html#41 Of what use 64-bit "General Purpose" registers?
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#13 Yakaota
https://www.garlic.com/~lynn/2004c.html#29 separate MMU chips
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
https://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#22 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004l.html#2 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004l.html#59 Lock-free algorithms
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#12 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#2 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/94.html#39 IBM 370/195
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 06 Apr 2005 12:18:51 -0600
ref:
https://www.garlic.com/~lynn/2005f.html#4 System/360, Hardwired vs. Microcoded

the 195 ran about 10mips peak ... but most normal code (not specifically designed for looping in the pipeline) ran more like 5mips.

related to the previous post about the 168 ... the 168-3 was about a 3mips machine
https://www.garlic.com/~lynn/2005f.html#1 System/360, Hardwired vs. Microcoded

the 3033 started out as a mapping of the 168 design to faster chip technology, resulting in about 20% higher mips. the chips in the 3033 were much higher density than those used in the 168 (by about a factor of ten) ... but the design started out using only the same number of circuits/chip as in the original 168 implementation. part-way thru the design ... there was an effort to do a partial redesign and leverage the much higher circuit/chip density ... which brought the 3033 up to about 4.5mips (50% more than the 168).
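the arithmetic above, spelled out with the round numbers from the post (a sketch ... the figures are approximate):

```python
# 168-3 baseline ~3 MIPS; a straight remap of the 168 design to the
# faster chips gave ~20% more; the partial redesign that exploited
# the denser chips gave ~50% more than the 168 (the shipped 3033).
mips_168_3 = 3.0
remap_only = mips_168_3 * 1.20   # ~3.6 MIPS
redesigned = mips_168_3 * 1.50   # ~4.5 MIPS
print(remap_only, redesigned)
```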

moving the air bearing simulation application from the sjr/28 195 to the 3033 in bldg.15 ran about the same speed ... but the 3033 was effectively cpu idle with no backlog (of cpu related work) ... significantly improving the design cycle for the new 3380 floating heads
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 07 Apr 2005 19:36:50 -0600
"Stephen Fuld" writes:
I don't think we would necessarily be better off. I just don't see using MVS as a "desktop" OS, and even VM/CMS is far from what most programmers today would like to use. Besides, there have been "desktop 370s" and they didn't sell very well. They were expensive and somewhat limited.

there was this joke in the early MVS time-frame that CMS had a 64kbyte MVT simulator (built into the CMS kernel) and MVS had an 8mbyte MVT simulator (and CMS did a very respectable job with its 64kbyte MVT simulator ... compared to MVS's 8mbyte MVT simulator).

the first desktop 370 was the xt/370 ... a co-processor board for an xt/pc. the CMS applications and orientation were significantly more resource (disk I/O and real storage) hungry than the dos applications of the period (somewhat analogous to the difficulty they had cramming mac/os into the early mac machines).

the initial project was called washington and it was going to ship with 384kbytes of memory for the 370 side. the cp kernel was specially modified to do all i/o via interprocessor communication to an application called cp/88 running on the 8088 side (which would then do the various disk, screen, keyboard, etc. i/o).

the combination of the cp kernel's fixed memory requirements and the cms applications of the period (which typically might be used for interactive computing and/or program development) resulted in severe page thrashing. the page thrashing was aggravated by the fact that the page i/o had to be passed over to cp/88, which would then perform the operation on a 100ms/operation hard disk (when mainframe users were used to i/o operations, frequently with multiple pages collected per operation, completing in 16ms or less).

i showed a number of page thrashing benchmarks and they then blamed me for the product ship being delayed while they upgraded the memory from 384k to 512k. the extra memory slightly mitigated some of the worst page thrashing scenarios ... but it still couldn't make up for the significant memory and disk i/o requirements of typical cms operations. as a side note, the earlier cp67/cms had been able to support multiple users on a 256kbyte 360/67.
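a back-of-envelope comparison of the latencies above ... the fault count is made up for illustration; only the ~100ms and ~16ms service times come from the post:

```python
# why the same paging behavior that was tolerable on a mainframe
# crawled on the xt/370: each fault cost ~100ms on the pc hard disk,
# vs a mainframe i/o of 16ms or less that could move several pages.
faults = 50                  # hypothetical page faults during one command
xt_service = 0.100           # ~100ms per single-page operation on the xt disk
mf_service = 0.016 / 4       # ~16ms mainframe i/o moving, say, 4 pages
print("xt/370 paging wait:    %.1fs" % (faults * xt_service))
print("mainframe paging wait: %.1fs" % (faults * mf_service))
```

with these assumptions the same burst of faults costs 5 seconds on the xt/370 and a fifth of a second on the mainframe ... a 25x gap, before even counting the interprocessor hand-off to cp/88.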

even when the hardware (real memory sizes and disk thruput) started to catch up with the typical CMS application operational target ... it was still stuck with a lot of the mainframe-oriented application pricing structures.

there was a project in the early to mid-80s that looked at redoing a cms-type implementation that would be agile ... somewhat the boyd influence
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

and portable across a number of different platforms ... but it somewhat collapsed when they started trying to throw everything into the project ... including the kitchen sink.

misc. past washington, xt/at/370 postings
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003h.html#40 IBM system 370
https://www.garlic.com/~lynn/2003m.html#27 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
https://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#27 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005f.html#1 System/360; Hardwired vs. Microcoded

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

new Enterprise Architecture online user group

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: new Enterprise Architecture online user group
Newsgroups: comp.arch
Date: Fri, 08 Apr 2005 09:45:01 -0600
"Michel Hack" writes:
The net of all of this is that IBM mainframes have had mechanisms to manage large amounts of memory for nearly 20 years. The 31-bit addressing limit still applied to the maximum size of a single object (e.g. an array), which may have limited scientific applications, or (more recently) image processing -- but commercial applications (the primary market for this architecture) were served well enough that the pressure to move to 64-bit was not as great as one might have thought. (The multiple-space features still exist in 64-bit mode, btw, but the motivation to exploit this would most likely be to structure space, not to expand it.)

recent post on some of the evolution of dual-address space, program call, and access registers (originating from 24-bit addressing days):
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey

note that this is sort of like using segment registers to "window" across a larger address space, except kernel calls weren't needed. early ROMP documentation sort of alluded to something similar by claiming 40-bit addressing (but only 32 bits at a time), with RIOS later claiming 52-bit addressing (again only 32 bits at a time).
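the "window" arithmetic can be sketched like this ... field widths follow the usual romp description (top 4 bits of a 32-bit effective address pick one of 16 segment registers, each holding a 12-bit segment id over a 28-bit offset, so 12+28 = 40 bits of virtual address reachable, 32 bits at a time); the segment-id values in the example are hypothetical:

```python
SEG_SELECT_BITS = 4   # top of the 32-bit effective address: 16 segment regs
OFFSET_BITS = 28      # low bits carried through unchanged
SEG_ID_BITS = 12      # romp-sized segment id (rios widened it: 24+28 = 52)

def virtual_address(seg_regs, eff_addr):
    """Map a 32-bit effective address into the 40-bit virtual space."""
    idx = (eff_addr >> OFFSET_BITS) & 0xF          # which segment register
    offset = eff_addr & ((1 << OFFSET_BITS) - 1)   # 28-bit offset
    return (seg_regs[idx] << OFFSET_BITS) | offset

segs = list(range(16))                   # hypothetical segment-id values
addr = virtual_address(segs, 0x20001234) # register 2, offset 0x0001234
print(hex(addr))  # → 0x20001234 (segment id 2 over the 28-bit offset)
```

reloading a segment register (no kernel call needed in the dual-address-space scheme described above) moves the 256mbyte window to a different part of the larger virtual space.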

misc. past program call references:
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2001d.html#30 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#52 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005c.html#67 intel's Vanderpool and virtualization in general

lots of past references to dual-address space, etc
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Fri, 08 Apr 2005 10:38:55 -0600
stanb45@dial.pipex.com (Stan Barr) writes:
...and I use a laptop-size keyboard on one of my machines, only the basic keys and doesn't take over all the workspace. I can't understand this fascination for having *hundreds* of keys so you have to keep moving your hands away from the main keys, it's bad enough having to dive over to the mouse all the time!

in the late 70s, early 80s, the san jose human factors group had built a chord keyboard ... somewhat in the shape of a mouse, sort of a rounded half-sphere with finger tip depressions ... which had multi-position rocker switches; touch typists who spent a little time with the device ... claimed they quickly exceeded qwerty typing speed (80 wpm not unusual).

marry such a design with an optical mouse ... one for each hand ... and you never have to leave position.

the original prototype for the eraser head "mouse" (between g/h/b keys) was two thumb sliders in front of the space bar (one controlling x-motion and the other controlling y-motion).

quick search engine use ... a couple descriptions/discussions of chord keyboards:
http://deafandblind.com/computer-keyboard.html

augment had a much simpler chord keyboard for one hand, leaving the other hand available for the mouse
http://www.idealliance.org/papers/dx_xml03/papers/05-00-00/05-00-00.html
http://www.comphist.org/computing_history/new_page_11.htm

somewhere i thought i had a URL for a picture of the augment chord keyboard, but can't seem to find it at the moment

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

TLS-certificates and interoperability-issues sendmail/Exchange/postfix

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TLS-certificates and interoperability-issues sendmail/Exchange/postfix ..
Newsgroups: comp.security.unix,comp.mail.misc,comp.mail.sendmail
Date: Fri, 08 Apr 2005 12:02:00 -0600
another kind of attack on the SSL domain name certificates ... is the limited use of SSL.

the original SSL was supposed to be for the whole e-commerce experience ... starting at the time you selected the web site. the browser then checked that the URL that was typed in matched the domain name in the certificate that the webserver provided (after validating the certificate with a public key in the browser's trusted public key repository and validating that the server could do some valid operation with a private key that corresponded to the public key in the SSL domain name certificate).
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

the problem was that most web sites found SSL too expensive for most operations ... as a result you went to a website with the URL that you had provided, but it wasn't using SSL and so there was no cross-check that the website you were visiting was the website you thought you were visiting (corresponding to the URL that you had typed in).

later in the shopping experience, a button was provided that would take you to an SSL payment website. However, since you hadn't typed in the URL for the SSL payment website ... but the URL had been provided by the button ... it is likely that any fraudulent e-commerce site would provide a button with a URL that corresponded to some SSL domain name certificate that was under the crooks' control (i.e. the crooks would make sure that the URL invoked by a button from a fraudulent website ... would correspond to a URL in a valid SSL domain name certificate that they had obtained).
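the check described above can be sketched in a few lines (illustrative only, not any actual browser's code; the function name and wildcard handling are assumptions for the sketch): does the host from the URL the user typed match the domain name in the server's certificate?

```python
# Illustrative sketch of the basic SSL-era check: does the host from the
# URL the user typed match the domain name in the server's certificate?
# A single leading wildcard label ("*.example.com") matches exactly one
# additional label -- the common convention, assumed here for the sketch.
from urllib.parse import urlparse

def host_matches_cert(typed_url, cert_domain):
    host = (urlparse(typed_url).hostname or "").lower()
    cert = cert_domain.lower()
    if cert.startswith("*."):
        suffix = cert[1:]                               # ".example.com"
        # wildcard matches exactly one additional label (no embedded dot)
        return host.endswith(suffix) and "." not in host[: -len(suffix)]
    return host == cert

# the point of the post: if the shopping pages never use SSL, this check
# never happens until the "pay" button -- whose URL the site (possibly
# the crooks) chose, not the user.
```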

some additional topic drift ... recent domain name infrastructure integrity news items:
http://it.slashdot.org/it/05/04/08/1528213.shtml?tid=172&tid=95&tid=218
http://isc.sans.org/diary.php?date=2005-04-07

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Fri, 08 Apr 2005 13:50:01 -0600
"gerard46" writes:
| If you are talking VM/370 then I think it was about 800k.

For a single user? 256k, maybe even 128k if you don't mind a little paging. I supported VM/370 in 512k, and that was with a 256k VS/1 V=R region, and about 20 CMS users. ________________________________Gerard S.


it depends when you are talking about ... late 70s and continuing into the 80s, there was increasing bloat ... with more and more code migrating to the fixed kernel. also you have to consider the "working set" sizes of the applications that were also bloating (less & less space for paging because of fixed kernel requirements, and larger and larger working set sizes as applications bloated).

original cp/40 with cms ran on 256k 360/40 (custom modified with virtual memory hardware) supporting multiple cms users. cp/40 then morphed into cp/67 with the availability of 360/67

here are some paging and storage size benchmarks from '68 ... where i artificially would reduce the amount of pageable real memory
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

cp/67 fixed kernel size continued to grow over time. part of the issue was people from traditional operating system backgrounds viewed the kernel as the place to implement function ... as opposed to the basic cp/67 premise of being a hypervisor/microkernel.

the summer of '69, i did a stint at recently formed BCS and started feeling that the kernel size was getting out of hand. As part of that i created the pageable kernel paradigm (which was not shipped as part of cp/67 ... but was picked up later for vm/370). The implementation didn't use virtual memory ... it just moved 4k hunks of kernel code to&from memory & disk.

In order to do it, I had to fracture some number of low-usage routines that were much larger than 4k. One was console functions. Previously console functions had been one large module (with internal address constant pointers). With the fracture, the command lookup was in the fixed kernel ... but most of the actual console command handling code was split out into individual routines (each less than 4k and packaged on 4k boundaries).

CP67 and CMS shipped effectively everything as source ... except for a modified BPS loader that was used for kernel initialization. The BPS loader still contained a limit of 256 entry symbols/points. With the fracture of routines for 4k page packaging ... a lot of new entry symbols were introduced, which drove the total number of cp67 ESD entries over 256 ... causing the modified BPS loader to fail ... and you could no longer initialize the kernel. This caused me quite a fit.
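the failure mode is easy to illustrate with a toy model (hypothetical names and numbers, not the real BPS loader): a loader with a fixed 256-entry symbol table works fine until fracturing big modules into many small 4k-aligned routines pushes the ESD entry count past the limit.

```python
# Toy illustration of a fixed-size loader symbol table overflowing
# when modules are fractured into many separately-named routines.
MAX_ESD = 256          # the BPS loader's entry-symbol limit, per the post

def load_kernel(esd_entries):
    table = {}
    for name, addr in esd_entries:
        if len(table) >= MAX_ESD:
            # the real loader simply failed; model that as an exception
            raise OverflowError("loader symbol table full at %d entries" % MAX_ESD)
        table[name] = addr
    return table

# before fracturing: a couple hundred entry symbols -- loads fine
before = [("MOD%03d" % i, i * 4096) for i in range(200)]
# after fracturing into 4k routines: entry count exceeds the table
after = [("MOD%03d" % i, i * 4096) for i in range(300)]
```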

In any case, by the end of the cp67 cycle ... I don't remember any 256k 360/67 still being used. Then there was the morph to vm/370. I don't remember the release 1 customers ... but I do remember being called down to NYC to a norwegian shipping company that was trying to get vm/370 running on a 256kbyte 370/125 ... and having huge difficulties. I did several things to the (late release 2?) vm/370 fixed kernel to get it back comparable to the cp67 kernel (aka under 100k bytes).

as i mentioned in the recent xt370 post ...
https://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors

the original xt/370 was going to ship with 384k of (370) memory ... which was to hold both the fixed kernel as well as all application paging. Benchmarking that I did on the machine showed almost everything you wanted to do resulted in some kind of page thrashing. The page thrashing was exacerbated by all I/O operations being handed off to the CP/88 monitor ... running on dos, which in turn translated them to dos i/o to a 100ms/operation hard disk.

on a real mainframe ... there were much faster hard disks ... and for big operations ... you could have large multi-block transfers ... which tended to mitigate human factor perception related to latencies.

Almost any interactive thing you might try on the xt370 side ... tended to compare very poorly with similar applications designed specifically for the resources available in a typical dos environment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Fri, 08 Apr 2005 15:45:53 -0600
Brian Inglis writes:
2780/3780 JES2/3 links are notoriously finicky and tend to drop the connection if it gets something it doesn't like; sometimes it even brings JES down, which effectively shuts down MVS.

it used to be when JES crashed it would cause the whole system to crash ... including MVS.

one of the early JES networking problems was that it had jumbled up the design of the JES networking header fields ... and slight changes in JES networking fields could cause the system to crash.

one of the issues on the internal network (larger than the arpanet/internet until about mid-85)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was JES boundary nodes that could crash each other ... different JES releases might have slight variation in networking header fields, sufficient to cause other JES nodes (on the internal network) at slightly different releases to crash.

one of the other early JES networking design problems was that it built its networking node definitions using the old time hasp pseudo device table ... which had 255 entries (one byte index). a typical installation might have 60-80 pseudo devices defined ... leaving something like 160-190 entries for networking node definitions. further exacerbating the problem was that JES would discard all traffic where it didn't recognize either the destination or the origin node.

the internal network had quickly passed 255 nodes in the mid-70s (something that the arpanet/internet didn't do until after the great 1/1/83 switch over to internetworking protocol) ... and so JES was unusable on the internal network for other than edge/boundary nodes. Even as an edge/boundary node, it still had the habit of discarding traffic where it didn't recognize the origin (a MVS system with a large and varied user population found it impossible to juggle the 160 or so network node definitions to keep all the users happy all the time).
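the arithmetic above can be sketched in a few lines (illustrative numbers from the post; the function and node names are assumptions for the sketch): one-byte indices cap the table at 255 slots, pseudo devices eat most of them, and anything arriving from a node not in the table is simply discarded.

```python
# Back-of-envelope model of the JES node-table limit described above.
TABLE_SLOTS = 255          # one-byte index
pseudo_devices = 70        # typical installation (post says 60-80)
node_slots = TABLE_SLOTS - pseudo_devices   # definable network nodes

def route(msg, known_nodes):
    # JES discarded traffic with an unrecognized origin *or* destination
    if msg["origin"] not in known_nodes or msg["dest"] not in known_nodes:
        return "discarded"
    return "delivered"

# the internal network passed 300 nodes; the table can't hold them all
internal_net_nodes = ["NODE%03d" % i for i in range(300)]
known = set(internal_net_nodes[:node_slots])
```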

the tendency for different JES releases to crash each other resulted in a body of canonicalizing gateway code in the internal network mainstay nodes. A gateway node talking directly to a JES node would be set up with JES header rewrite code specific to that JES release (in order to prevent one JES node from crashing other JES nodes).

there was an early-on infamous case where JES systems in San Jose (cal.) were causing MVS systems in Hursley (uk) to crash ... and they blamed it on the gateway code not keeping one JES system from crashing another JES system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Fri, 08 Apr 2005 19:20:14 -0600
Brian Inglis writes:
One question I always meant to ask: did it never occur to the MVS weenies that adding similar code to JES itself might be a good idea? I'd have written a Script APAR DTD and been mailing IBM daily if I'd ever encountered such stupid problems, probably with suggested PTFs.

"customers" never had that large a network ... and always kept their "small" configurations in-sync .... and back then you never had customers "interconnecting" with one another.

it only became a problem in larger networks where organizations weren't keeping all their interconnected networking JES systems in-sync (aka working as designed).

basically the JES2 code was pretty much the HASP "networking" code (in the morphing and renaming from HASP -> JES2) ... some of the JES2 networking code still retained the "TUCC" identifier in cols 68-71 ... from hasp days, when the networking implementation for hasp was done at the Triangle Universities Computation Center.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ISA vs. patent/trademark

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ISA vs. patent/trademark
Newsgroups: comp.arch,comp.arch.embedded,comp.arch.fpga
Date: Sat, 09 Apr 2005 11:44:17 -0600
jsavard writes:

And they can handle unaligned operands, but it takes at least four
instructions:

A     5,ALIGNED

becomes, say

LH    6,UALIGNED
SLL   6,16
IH    6,UALIGNED+2
A     5,6

or even

LC    6,UALIGNED
SLL   6,24
L     7,UALIGNED+1
SRL   7,8
N     7,#X'00FFFFFF'
O     6,7
A     5,6

ICM was introduced with 370 ... insert character under mask

ICM   6,B'1111',UNALIGNED
AR    5,6

problem with LH was that it was arithmetic (not logical) and propagated the sign bit (and it required half-word alignment).
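what the shift-and-OR sequences above are doing can be modeled in a few lines of Python (just to show the arithmetic): building a big-endian 32-bit word from an unaligned byte address, which ICM later did in one instruction.

```python
# Model of an unaligned big-endian 32-bit load -- the byte-at-a-time
# merge that the multi-instruction shift/mask sequences perform, and
# that ICM with mask B'1111' collapses into a single instruction.
def load_word_unaligned(mem, addr):
    word = 0
    for i in range(4):
        word = (word << 8) | mem[addr + i]   # fetch and merge one byte
    return word

mem = bytes([0x00, 0x12, 0x34, 0x56, 0x78, 0x00])
# word at the odd (unaligned) address 1:
value = load_word_unaligned(mem, 1)   # 0x12345678
```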

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Sat, 09 Apr 2005 15:00:54 -0600
"David Wade" writes:
I did try ZED briefly but probably not enough to be a 1st DAN black belt, whereas with REXX/XEDIT I guess I was getting close:-) (former collegues may not share this view)

But if you have never used the full screen directory functions that are RDRLISt and FILELIST and the sname/stype/smode commands then you would not appreciate how powerfull a 3270 was. The things you could do with those and few well defined PF keys and a quick REXX exec. Then when you add in the prefix area and block moves and copies. If you really wanted to fly you might try split screen editing when you want to look at multiple files. Don't tell me you print it out! I will admit you needed a Co-ax attached local terminal, but once you had one you could stick line mode in dark place where no one wandered.

I still miss some of these things on Windows and Solaris, but for some reason Thomas Hessling Editor (THE) or even EMACS which provide much of the equivalent functionality do not seem to cut the mustard in the same way as the above did.... When on Windows I find myself using Explorer and PFE32, which I am sure are not as good. (though again the old right click and open with....)


theo alkema's fulist, browse, ios3270, etc predated RDRLIST & FILELIST. then somebody at SJR did a PC flist clone that was released thru the productivity program (fairly early in the dos cycle ... i think about the time of the internal only distribution referred to as dos1.8)

minor past fulist refs:
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
https://www.garlic.com/~lynn/2004f.html#23 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004q.html#63 creat

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Sun, 10 Apr 2005 08:02:57 -0600
"David Wade" writes:
things like FILELIST etc were actually pretty easy to debug as they were largeley written in REXX...

very early in rexx cycle (when it was still called rex and before it was released) ... i wanted to demonstrate the usefulness of rex as not just another command scripting language (i.e. sort of the norm at the time and being compared to the original cms EXEC and the newer EXEC2 from ykt).

there was ipcs, a dump analysis program ... written in some 40(?) klocs of assembler; some of the objectives were 1) working half-time over a 3-month period ... 2) write in rex something that had ten times more function than the original ipcs, 3) was ten times faster than (the assembler) ipcs, and 4) (since this was at the start of the object-code-only debate/wars) do an implementation where the source had to be shipped.

misc. past posts
https://www.garlic.com/~lynn/submain.html#dumprx

... from long ago and far away
DUMPRX Release 2 - ENHANCED DUMPSCAN PROGRAM WRITTEN IN REX

DUMPRX release two is now available. DUMPRX is a program for processing IPCS abend PRB files. It is similar to the IPCS DUMPSCAN program. DUMPRX is almost completely written in REX and has several additional features not found in DUMPSCAN. The major new enhancement in release 2 of DUMPRX is support for XEDIT-mode. DUMPRX will now operate as a XEDIT macro with all input and output being processed through a work file. Rather than displaying responses directly to the terminal, the information is inserted into the work file. This allows:

1) XEDIT commands to display, scan, and/or search the DUMPRX replies.

2) save and restore the complete log of the terminal session(s) associated with the DUMPRX analysis of a particular PRB file.

3) allow the specification of particular screen tokens for use as arguments in DUMPRX commands.

Optionally available are updates to the CP source which enhance the ability of DUMPRX for use in problem determination.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Sun, 10 Apr 2005 08:30:08 -0600
Anne & Lynn Wheeler writes:
Optionally available are updates to the CP source which enhance the ability of DUMPRX for use in problem determination.

so one of the CP source enhancements indirectly arose from
https://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors

when i was originally doing the cp67 pageable kernel code and trying to figure out the modified BPS loader problem ... I realized that when the BPS loader exited to the loaded program ... it passed the address of its symbol table (and the number of entries) in registers. so I did this hack where, before CP wrote the kernel boot image to disk, it would copy the complete symbol table ... entry point symbols, entry point symbol type (ESDID), and addresses ... to the end of the pageable kernel so it would be included in the boot image written to disk.

this bit of cp67 code was never shipped to customers and in the morph from cp67 to vm370 pageable kernel support ... the part about appending the complete kernel symbol table was dropped.

as part of DUMPRX kernel debugging enhancements, i included the kernel updates to automatically capture and include the complete kernel symbol table as part of the boot image.

if DUMPRX found a complete symbol table ... it would optionally display the appropriate addresses as "absolute" ... or as relative to the closest "type 0 kernel entry point symbol" plus displacement. this was an alternative to the "IPCS" supported option which allowed the appending of a kernel subset symbol table as part of post processing. One of the issues with doing it as part of post processing was that the selected subset table might be out-of-sync with the kernel actually being processed. having it appended as part of the base kernel image made sure you were never dealing with an out-of-sync kernel symbol table.
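the symbol-plus-displacement display choice can be sketched as follows (hypothetical symbol names and addresses, not DUMPRX source): given the entry-point table appended to the boot image, show a dump address either as absolute or as the closest preceding entry point plus displacement.

```python
# Sketch of nearest-preceding-symbol address display, the way a dump
# analyzer presents kernel addresses when it has a symbol table.
import bisect

# (address, name) pairs for "type 0" kernel entry points, sorted by address
symtab = [(0x1000, "DMKCPI"), (0x2400, "DMKDSP"), (0x3A00, "DMKPTR")]
addrs = [a for a, _ in symtab]

def symbolic(addr):
    i = bisect.bisect_right(addrs, addr) - 1
    if i < 0:
        return "%06X" % addr        # before the first symbol: absolute
    base, name = symtab[i]
    return "%s+%X" % (name, addr - base)
```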

couple old posts describing compiler/assembler generated ESD image format (i.e. the input for loader processing ... stuff that then went into the loader symbol table):
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mozilla v Firefox

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mozilla v Firefox
Newsgroups: netscape.public.mozilla.general,alt.folklore.computers
Date: Sun, 10 Apr 2005 09:26:41 -0600
rsteiner@visi.com (Richard Steiner) writes:
My KVM switch uses Scroll Lock as the hot key (ScrLck-ScrLck-1 takes me to system #1, ScrLck-ScrLck-2 to system #2, etc).

i have two different switches ... one does double tapping scrlck and then a number; the other just does double tapping scrlck and toggles between two systems.

I assume the first must be a subset of some switch that allows more than two system connections (and therefore needs to select). the other is much simpler, just toggling between two systems.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is Supercomputing Possible?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is Supercomputing Possible?
Newsgroups: comp.arch
Date: Sun, 10 Apr 2005 09:41:30 -0600
"Rupert Pigott" writes:
Cray did make a stab at the DB/server market, but they used a pile of SPARCs to do it in the form of the CS6400.

Here's some Oracle 7 related PR fluff for it.


we sort of went the opposite direction with ha/cmp; minor reference:
https://www.garlic.com/~lynn/95.html#13

at the time, there was standardization on three interconnects ... hippi, somewhat out of lanl on standardizing cray channel; ... fcs, somewhat out of llnl (they had a copper serial interconnect that started out somewhat being the basis of fiber fcs); ... sci, somewhat out of slac.

(at least) convex, sequent, and dg did sci implementations (convex using 2-way HP boards for 128 processors; sequent and dg using 4-way intel boards for 256 processors).

past ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

random past dlm posts
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#2 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#17 Dealing with complexity
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#55 Foreign key in Oracle Sql

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 11 Apr 2005 21:26:38 -0600
"David Wade" writes:
I think most businesses face this type of dilema at some point or other in Business. When its time for a major change in paradigm how do you move forward without loosing too many existing users. IBM faced it at the time came to replace S/360. They chickened out and went for S/370 rather than Future Systems. Later when they tried to move folks from DOS to OS/2 they failed because at the same tim M$ offered an option (Windows/95) with a lower pain threshold...

it wasn't so much chickening out ... but rather a question of whether it could be done at all; during the period, I had made references to the effort being akin to a cult film that had been long running down in central sq ... or to the inmates being in charge of the institution.

one of the nails in the coffin was an analysis by the houston science center that if a FS machine were built from the fastest components then available ... basically 195 hardware ... then the thruput of applications that currently ran on a 195 would drop to about the thruput of a 370/145 (20 to 30 times slower); this was of particular concern to the high-end transaction customers ... like the "res" systems and the financial transaction systems.

misc past FS posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Some questions on smart cards (Software licensing using smart cards)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Some questions on smart cards (Software licensing using smart cards)
Newsgroups: alt.technology.smartcards
Date: Tue, 12 Apr 2005 09:00:59 -0600
"Denis Lavrukhin" writes:
I am working on a subject, so I have some questions.

The main requirement is to activate program. User has serial number and a smart card. We could work with any smart card or maybe with some of them.

One of the biggest problems from my point of view is to detect is it smart card or software emulator. Emulator should be detected on server side and if detected, activation stops. If no emulator detected, some data should be securely transmitted (or smart card unique data) and then program should be able to check for this data validity.

1. How could I determine is this is a smart card or emulator in common case (using high level API, such as MS cryptoapi)? 2. If I will require specific card which support cardlets (for example javacard based), could it help me to do that?

Please, help me or give me a links to articles which may help me.


basically asymmetric cryptography technology is mapped to a business process called public key .... where the business process defines that one of a public/private key pair is made public and the other key is kept private and never divulged.

from 3-factor authentication paradigm
something you know
something you have
something you are


lots of past posts on 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor

a public/private key implementation is a something you have authentication; aka somebody (within the public key business process definition) has access to and use of the specific private key. the verification of a digital signature with a public key by the receiver (or relying party) implies that the originator had access to and use of the corresponding private key.
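the something-you-have point can be made concrete with toy RSA numbers (far too small for real use, chosen purely for illustration): only the holder of the private exponent can produce a signature that the public exponent verifies.

```python
# Toy RSA sign/verify to illustrate something-you-have authentication:
# producing a valid signature demonstrates access to the private key.
p, q = 61, 53
n = p * q                      # 3233, public modulus
e = 17                         # public exponent (published)
d = 2753                       # private exponent -- never divulged

def sign(msg_digest, priv=d):
    return pow(msg_digest, priv, n)

def verify(msg_digest, signature, pub=e):
    return pow(signature, pub, n) == msg_digest

sig = sign(65)                 # requires access to d: "something you have"
```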

one of the issues is that there has been lots of discussion about public/private key implementations within the context of PKIs ... where there are certification authorities that certify some amount of information and generate digital certificates as indication of their certification. There typically is little discussion about the integrity of the certification process and lots of discussion of the integrity of the digital certificates themselves.

The focus on the integrity of the digital certificates frequently obfuscates issues regarding integrity of the certification process as well as the integrity of the business implementation related to keeping the private key really private and never divulged. This can create some conflict for PKI oriented industries ... where the fundamental security of the whole infrastructure is based on the integrity of the private key protection and confidentiality ... while their business model is based on convincing everybody of the value of the digital certificates.

The focus on the integrity of the digital certificates tends to take focus away from the actual certification business process as well as any business process that proves the integrity of the protection and confidentiality of the private key.

so one possible mechanism is to provide a certification infrastructure related to the protection supporting the confidentiality of a private key ... an online service that you send a specific public key ... and the online service sends back an assurance level related to the protection of the associated private key. If there is no known assurance level for the protection of the private key that corresponds to a specific public key ... then assume the poorest possible assurance.
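a minimal sketch of such an online assurance service; the registry, fingerprints, and protection levels are all hypothetical illustrations, and an unknown key defaults to the poorest assurance:

```python
# Hypothetical assurance-level lookup: public key -> protection of its
# private key.  All names and levels are illustrative assumptions.
PROTECTION_LEVELS = {
    "certified hardware token": 4,  # key generated in, and never left, a chip
    "software keystore": 1,
}

registry = {  # public-key fingerprint -> certified protection description
    "a3f1c2": "certified hardware token",
}

def assurance_level(fingerprint: str) -> int:
    # no known assurance record => assume the poorest possible assurance
    return PROTECTION_LEVELS.get(registry.get(fingerprint), 0)

assert assurance_level("a3f1c2") == 4   # registered, certified key
assert assurance_level("deadbe") == 0   # unknown key: poorest assurance
```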

we actually looked at this quite a bit in connection with AADS
https://www.garlic.com/~lynn/x959.html#aads

in part, since we weren't distracted by the management of digital certificates, we could concentrate on other aspects of authentication infrastructures requiring assurance and integrity.

recent thread with lots more description of various strengths and vulnerabilities related to SSL-oriented digital certificate infrastructures:
https://www.garlic.com/~lynn/2005e.html#45 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#51 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#62 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005f.html#9 TLS-certificates and interoperability-issues sendmail/Exchange/postfix

lots of past postings about ssl digital certificate oriented infrastructures
https://www.garlic.com/~lynn/subpubkey.html#sslcert

There is a separate issue related to what/whether a digital signature means anything more than simple origin authentication. Sometimes there is semantic confusion since "digital signature" contains the word "signature". This can somehow get confused with the concept of "human signature".

A "human signature" tends to imply that a person has observed, read, approved, authorized, and/or agrees with the contents being digitally signed. None of this is implied in the fundamental digital signature operation, which is simply something you have authentication.

Some of this was attempted to be addressed by the EU FINREAD standard
https://www.garlic.com/~lynn/subintegrity.html#finread

where a certified environment is defined (a finread terminal) where there are some processes (imposed by the finread terminal) that may attempt to assure that some human has actually observed what they believe to be signing ... and that other steps or processes are involved.

However, in the basic finread definition ... for any particular digitally signed message ... there is no more proof that a finread terminal was involved ... than there is proof about the assurance level surrounding the protection of a private key (is it contained within a chip of a particular security level, and never has left that chip).

One of the possible extensions to the EU finread standard ... is not only making them a certified environment .... but also providing them with a private key embedded in their security module ... and the finread terminal digitally signs all transactions ... in addition to a private key signing contained in a smart card (or other hardware token).

There then is an online service that has registered all certified finread terminals and their corresponding public keys. Then there are transactions with two digital signatures: one from the originator (say from some sort of hardware token with a registered public key that includes the assurance level of the associated hardware token) and one from the certified finread terminal (indicating that certain business processes were observed as part of the first digital signature signing).
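a rough sketch of accepting such a dual-signature transaction, with hypothetical registries and stand-in sign/verify functions (real verification would be public-key digital signature checking as above):

```python
# Dual-signature acceptance sketch: both the originator's registered token
# key and a registered (hypothetical) finread terminal key must verify.
def accept_transaction(txn, origin_sig, terminal_sig,
                       token_registry, terminal_registry, verify_fn):
    # both keys must be registered (token assurance + certified terminal)
    if txn["origin_key"] not in token_registry:
        return False
    if txn["terminal_key"] not in terminal_registry:
        return False
    # both digital signatures must verify over the same transaction
    return (verify_fn(txn["origin_key"], txn, origin_sig) and
            verify_fn(txn["terminal_key"], txn, terminal_sig))

def toy_sign(key, txn):
    # stand-in for signing with the private key paired with `key`
    return hash((key, tuple(sorted(txn.items()))))

def toy_verify(key, txn, sig):
    # stand-in for public-key signature verification
    return sig == hash((key, tuple(sorted(txn.items()))))

tokens = {"tokA": "high-assurance hardware token"}
terminals = {"termB": "certified finread terminal"}
txn = {"origin_key": "tokA", "terminal_key": "termB", "amount": "100.00"}
ok = accept_transaction(txn, toy_sign("tokA", txn), toy_sign("termB", txn),
                        tokens, terminals, toy_verify)
assert ok
```

the terminal's signature attests that certain business processes were observed; the originator's signature remains plain something you have authentication.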

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers
Date: Tue, 12 Apr 2005 11:49:35 -0600
Steve O'Hara-Smith writes:
400KB or 800KB on the 5 1/4in floppies (860KB on the Lisa apparently) until IBM decided that the floppy everyone else got 400KB on could only hold 360KB. OTOH 8in floppies could hold 1MB as long ago as 1975 and ISTR a 2MB format just before they vanished from the scene.

the problems i saw sporadically at the higher densities were interoperability issues across different floppy drives.

at one time i had an original 64kbyte ibm/pc that i had upgraded with more memory, a 2nd internal floppy drive and two external teac 80track half-height floppy drives. i no longer have it or any floppy drives ... although i've got possibly 100 5-1/4in floppies someplace.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 12 Apr 2005 13:48:05 -0600
hancock4 writes:
I'm still confused as to the differences, if any, between the S/360-195 and the S/370-195. Could anyone explain the differences and how many built? (I think even the S/360 version had monolithic storage). I gather that both models evolved out of work for the S/360-91.

the 370/195 had the "new" (non-virtual memory) 370 instructions (like insert character under mask, etc).

i was also told the 370/195 had better fault-tolerance characteristics and some retry of soft errors (compared to the 360/195) ... some statistic that, given the number of components in the 195, the probability of some kind of soft failure was on the order of daily.

sjr had a 195 up thru the late 70s ... and besides the supercomputer venue, some of the large financial houses and airlines had them for high-end transaction processing (financial transaction switching and airline res systems). the eastern airlines res system was one that i believe ran on a 195 (south florida) and was part of the input into amadeus (my wife served a stint as amadeus chief architect for a time).

also related references to the nail in FS (future system) coffin
https://www.garlic.com/~lynn/2005f.html#19 Where should the type information be: in tags and descriptors

i got involved with the 195 group when they were looking at adding dual i-stream support to the 370/195 ... from the software standpoint it would look like dual-processor operation ... but it was very akin to recent processor hardware threading ... aka the scenario was that most codes weren't able to keep the pipeline full because most branches caused pipeline stall/drain (mip thruput nominally was about half of peak). there was some hope that dual i-stream support would keep the pipeline full ... and achieve aggregate peak thruput.

recent past post in this thread
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

the relational model of data objects *and* program objects

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the relational model of data objects *and* program objects
Newsgroups: comp.databases.theory
Date: Tue, 12 Apr 2005 14:48:53 -0600
"erk" writes:
It's not an assertion, it's true by definition (modulo the original use of relation in mathematics). You can easily say "SQL DBMS" and be both correct and non-irritating. Given that this is a theory group, I think definitions are important. Not that they're not important outside theory...

my impression of the early system/r & sql work at sjr was that it was very much modeled after bank accounts (and Codd was also at sjr) .... a two-dimensional table structure with bank account number as the primary index and all the related information for each account very uniform. this was in the time-frame that one of the major people from IMS had joined a large financial institution ... and was putting together a larger IMS development group than the IMS development group in STL. there was a lot of focus on creating solutions to basic financial industry operations & business processes.

the row&column model was a quite useful abstraction simplification that also shows up in the uptake of spreadsheet technology ... and fits well with the prevalent paper-based orientation of the period.

misc. past system/r postings
https://www.garlic.com/~lynn/submain.html#systemr

system/r site
http://www.mcjones.org/System_R/

SQL reunion
http://www.mcjones.org/System_R/SQL_Reunion_95/index.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360; Hardwired vs. Microcoded

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360; Hardwired vs. Microcoded
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 12 Apr 2005 17:57:29 -0600
the death of FS also saw the (re-)ascension of marketing, accountants, and MBAs.
https://www.garlic.com/~lynn/submain.html#futuresys

801/risc could be viewed as swinging to the opposite extreme ... demonstrating with cp.r and pl.8 that software technology could be used to compensate for extremely simple hardware (lack of features)
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 13 Apr 2005 08:38:23 -0600
jmfbahciv writes:
There were lots of titles because a few people keep changing the header to reflect the contents. This irritates me because it makes backtracing extremely difficult to do.

A better suggestion to Edward would be to read Hank's and mrr's posts which would keep him from fainting from the shock of a high post count.


I have a long collection of posts on assurance:
https://www.garlic.com/~lynn/subintegrity.html#assurance

buffer overflows
https://www.garlic.com/~lynn/subintegrity.html#overflow

and fraud, exploits, vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

the relational model of data objects *and* program objects

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the relational model of data objects *and* program objects
Newsgroups: comp.databases.theory
Date: Wed, 13 Apr 2005 08:45:50 -0600
"erk" writes:
Spreadsheets and even SQL DBMSs are very different beasts, as has been described over and over again in these pages, and relational is different yet. If you're placing spreadsheets, relational, and SQL all in the same category, then there's not much left to discuss, I suppose.

while they may be different beasts ... they originally were targeted at very similar financial account oriented environments ... possibly in part because financial operations as early adopters had lots of money to spend on new stuff ... and there also possibly being some prospect of ROI ... for the money spent.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

the relational model of data objects *and* program objects

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the relational model of data objects *and* program objects
Newsgroups: comp.databases.theory
Date: Wed, 13 Apr 2005 08:57:49 -0600
and small quote from previous reference
http://www.mcjones.org/System_R/
System R is a database system built as a research project at IBM San Jose Research (now IBM Almaden Research Center) in the 1970's. System R introduced the SQL language and also demonstrated that a relational system could provide good transaction processing performance.

.... which might be interpreted as demonstrating a reasonable paradigm match-up between system/r's transaction processing and financial-infrastructure transaction processing.

now it is obvious that system/r transaction processing capability didn't exist in spreadsheet implementations ... however both basic information organizations just happened to conform pretty well to the standard paradigm of some (financial) market segments that had money to spend on new products.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Wed, 13 Apr 2005 19:23:19 -0600
Peter Flass writes:
My feeling is that it isn't either-or, but few vendors have taken this line. I think PCs should have been marketed as super-intelligent terminals that interacted seamlessly with mainframes. Users shouldn't have had to know where applications resided, ran, or got their data from. They should have just run. You could sell timesharing with gee-whiz editing, graphing programs, or whatever.

can you say SAA (aka system application architecture)?

however, it seemed that a lot of SAA effort was spent on porting PC-implemented applications to the backend mainframe ... under the guise of execution location transparency.

a somewhat more capable SAA-related effort was taking UCLA's locus and releasing it as AIX/370 and AIX/PS2.

we also took some heat from the SAA factions when we were coming up with 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

since some might accuse the SAA forces of attempting to put the client/server genie back in the bottle ... and 3-tier architecture was just aggravating the situation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: alt.folklore.computers
Date: Thu, 14 Apr 2005 08:01:35 -0600
Brian Inglis writes:
Ditto for all three items: slide rule, PDP-11 card, IBM template; also have: chem lab spatula, TECO 8/10/11 booklet, VAX 11/780 white booklet, S/370 yellow booklet, various VM cards/booklets; and I've got one of those IBM double ended crochet hook/slot tools, as a deinstall souvenir: anyone remember what they were used for?

early on in HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

there was a lot of physical install stuff going on and I needed a tool set ... so I put in an order for a standard FE tool briefcase. I got back a rejection because I wasn't an FE ... I was able to escalate and finally the order went thru. It has various kinds of stuff ... including various and sundry items for electric typewriter (selectric, computer terminals, etc) repair (the mechanical part of the golf-ball movement).

it used to sit in my office and over the years ... some number of pieces have disappeared ... but none of the typewriter stuff. It is sitting in my current office.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 14 Apr 2005 08:36:40 -0600
Andrew Swallow writes:
When writing the cheques DEC $20k mini computers had a few additional costs.

Boasting is probably why minicomputer company DEC preferred selling £30k minis over $1k micros.

Even if you sell a thousand $1k micros they are still $1,000 machines and therefore lacking in prestige.

Externally IBM considered PCs toys until the micro division had bigger sales than the mainframe division.

As to why DEC dropped its "mainframes" that was probably just returning to its roots when under financial pressure.


the entry-level market tends to be much more competitive and the profit margin significantly slimmer. The profit off a thousand $1k micros is likely to be 1/10th the profit off a million-dollar machine ... aka they might have to sell 10k (or even 50k) $1k micros to take home the money that they could get from one $1m machine.

it is one thing to have sales and possibly something completely different to have profit. I once heard a phrase that went something like ... they were losing $5 on every sale ... but they were planning on making it up in volume.

for a little more topic drift ... some vax volume numbers:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?

one of my assertions was that technology had advanced to the point where 4341s and vax machines hit a price point for departmental computers ... and there was a huge proliferation in that market segment. however, moving further into the 80s ... you started to see that market segment erode and be taken over by larger workstations and PCs (it wasn't so much customers buying lots of workstations ... but that they could replace their 4341/vax with a large workstation).
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#4 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#7 HONE, ****, misc
https://www.garlic.com/~lynn/2002j.html#34 ...killer PC's
https://www.garlic.com/~lynn/2002j.html#66 vm marketing (cross post)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers
Date: Thu, 14 Apr 2005 10:18:29 -0600
David Gay writes:
The context was PCs (presumably IBM compatible ones). 160k on the original single-sided, 8 sectors/track 5 1/4in floppies. Increased to 360k when they went to double sided and 9 sectors/track (I think there was also a double sided, 8 sectors/track stage, hence 320k). 1.44MB only happened when 3 1/2in floppies showed up.

(internal distribution) dos 1.8 had support for trying to cram 10 sectors per track ... 400k ... and before the AT came out, 80track drives were starting to appear, getting 800k (or 720k with 9 sectors/track). I had an original IBM/PC ... to which I eventually attached two external teac 80track drives. all used the same 5.25in floppies. I have a hundred or so floppies in a box someplace with no drives to read them.

1.2mb high-density (5.25in) floppies (and drives) were introduced with the PC/AT. In theory the high-density drives could also be used to read/write regular 40track & 80track floppies (however, i remember having frequent problems using a normal 40track drive to read normal 40track floppies that had been written on a high-density AT drive).
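the various capacities mentioned in this thread all fall out of tracks x sides x sectors/track x 512-byte sectors; a quick sketch of the arithmetic:

```python
# Floppy capacity arithmetic for the PC-era 512-byte-sector formats.
def capacity_kb(tracks: int, sides: int, sectors_per_track: int) -> int:
    return tracks * sides * sectors_per_track * 512 // 1024

assert capacity_kb(40, 1, 8) == 160    # original single-sided PC format
assert capacity_kb(40, 2, 8) == 320    # double-sided, 8 sectors/track
assert capacity_kb(40, 2, 9) == 360    # the standard PC DOS format
assert capacity_kb(40, 2, 10) == 400   # 10 sectors/track crammed on
assert capacity_kb(80, 2, 9) == 720    # 80track, 9 sectors/track
assert capacity_kb(80, 2, 10) == 800   # 80track, 10 sectors/track
assert capacity_kb(80, 2, 15) == 1200  # PC/AT 1.2mb high density
```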

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

the relational model of data objects *and* program objects

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the relational model of data objects *and* program objects
Newsgroups: comp.databases.theory
Date: Thu, 14 Apr 2005 13:41:27 -0600
Anne & Lynn Wheeler writes:
and small quote from previous reference
http://www.mcjones.org/System_R/

System R is a database system built as a research project at IBM San Jose Research (now IBM Almaden Research Center) in the 1970's. System R introduced the SQL language and also demonstrated that a relational system could provide good transaction processing performance.


there have been a number things done for practical, implementation reasons, as opposed to theory and abstraction; there have been more than a few references to codd believing that SQL compromised his (codd's) relational model.

a big issue during the evolution of system/r, in the battle with the earlier generation of physical databases, was the excessive overhead of the index (it minimized some of the manual administrative overhead but typically doubled the physical disk storage and increased real memory and processing overhead). having a single "primary" index, with all the rest of the fields (columns) related to the primary index ... normally in the same physical record ... allowed things like bank account transactions to update an account balance field w/o impacting the index. a carefully constructed model could select a fairly stable characteristic as the primary index and make all the other fields (in the same physical record) related to the primary index (so that related updates wouldn't impact the index structure).

having a single index with a large body of information physically collected under that index also minimized the index overhead on retrieval. being able to update nearly all of the fields ... resulting in a single physical record write w/o impacting the index ... also minimized the index overhead.

Mapping financial transactions to database "transactions" (with acid and commit characteristics), with most stuff in a single physical record, resulted in a large class of transaction updates only involving a single physical record needing to be logged as part of acid/commit (and with no impact on things like the index, where an index change might involve a large number of physical records being changed).
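a minimal sketch of the single-primary-index layout described above (hypothetical in-memory stand-ins for physical records): a balance update rewrites one record and never touches the index.

```python
# Single primary index: account number -> physical record slot.  All other
# fields live inside the record, so updating them leaves the index alone.
records = []   # stand-in for physical record slots on disk
index = {}     # primary index: account number -> slot

def insert(acct, fields):
    records.append({"acct": acct, **fields})
    index[acct] = len(records) - 1     # index touched only on insert

def update_balance(acct, delta):
    slot = index[acct]                 # one index probe
    records[slot]["balance"] += delta  # single record write; index unchanged

insert(1001, {"balance": 500, "owner": "alice"})
update_balance(1001, -120)
assert records[index[1001]]["balance"] == 380
assert index == {1001: 0}              # index structure never changed
```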

when i did the original prototype DLM for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

minor related posting
https://www.garlic.com/~lynn/95.html#13

one of the things i worked out was how to do fast commit across a distributed set of machines (sharing the same disk pool).

a normal fast commit would involve writing the "after image" record(s) and the commit record to the log. Logs are written sequentially and for high performance may have either a high-speed sequential I/O device and/or a dedicated disk arm. database transactions, however, tend towards (possibly randomly) scattered records all over a set of disks. ACID/commit requires fairly time-ordered activity. Disk arms can have much higher performance if i/o is ordered by disk position rather than time sequence. fast commit leaves the updated record in main memory cache, sequentially writes the log record ... and does lazy writes of the ("dirty") cache record to the database "home position". The lazy writes of dirty cache records can be ordered by disk arm location (rather than transaction time sequence), potentially significantly improving disk thruput.
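a toy sketch of fast commit as just described (in-memory stand-ins for the log, cache, and disk home positions): commit appends after-images plus a commit record to the sequential log, and the lazy flusher writes dirty records home ordered by disk position rather than commit order.

```python
# Fast-commit sketch: sequential log + dirty cache + lazy position-ordered
# flush.  Data structures are in-memory stand-ins for disk structures.
log = []      # sequential log (fast: append-only, dedicated arm)
cache = {}    # disk_position -> dirty after-image, left in memory
home = {}     # the database "home positions" on disk

def commit(txn_id, updates):
    """updates: {disk_position: after_image}"""
    for pos, rec in updates.items():
        log.append(("after", txn_id, pos, rec))  # after-image to the log
        cache[pos] = rec                         # record stays dirty in cache
    log.append(("commit", txn_id))               # commit record seals the txn

def lazy_flush():
    # write dirty records home ordered by disk position (arm location),
    # not by transaction time sequence
    for pos in sorted(cache):
        home[pos] = cache[pos]
    cache.clear()

commit("t1", {97: "rec97'", 5: "rec5'"})
commit("t2", {42: "rec42'"})
lazy_flush()
assert home == {5: "rec5'", 42: "rec42'", 97: "rec97'"}
assert log[-1] == ("commit", "t2")
```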

At the time, fast commit was crippled for distributed machines. If a lock (and record) migrated to a different machine ... there was a forced write of the (dirty cache) record to its database home position, and the "migrated to" machine read the record from disk. This preserved the simplified transaction record logging and recovery ... somewhat implicit in the original RDBMS implementations.

The DLM prototype work allowed a cache-to-cache copy of the database record (even dirty records), potentially piggybacked on the same I/O operation that migrated the specific lock. This avoided having to force a disk transit (out and back in again). The problem was that this severely complicated log recovery in case of a failure. In the single-processor case, there could be a whole sequence of (committed) transactions to the same (dirty) record (none of which have shown up in the database home location for that record) ... all recorded sequentially in the same physical log. Recovery after a failure just requires sequentially reading the log and updating the home record location for each corresponding record.

In distributed fast-commit recovery ... multiple different transactions to the same record may exist in several different physical logs (none of which have shown up yet in the home database location). The recovery process is attempting to recreate an integrated global sequence from multiple different physical logs ... when the typical transaction rate may be much higher than any global clock resolution (i.e. simple time-stamping of each record in the log may not be sufficient, since it is very expensive to maintain fine-grain clock synchronization across multiple real machines).

So the approaches written up in the literature tend to involve things like log versioning or log virtual time (some globally increasing value that allows recreating the operation sequence from multiple different physical logs).
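a sketch of the log virtual time idea using a Lamport-style counter (a hypothetical simplification): each log record carries a per-machine virtual time that is advanced past any value carried in with a migrated record, so recovery can merge the physical logs into one global update sequence without fine-grain clock synchronization.

```python
# Log virtual time sketch: Lamport-style counters let recovery merge
# several physical logs into a single global update sequence.
import heapq

class MachineLog:
    def __init__(self):
        self.vtime = 0       # this machine's log virtual time
        self.records = []    # (vtime, disk_position, after_image)

    def append(self, pos, rec, seen_vtime=0):
        # advance past any virtual time carried in with a migrated record
        self.vtime = max(self.vtime, seen_vtime) + 1
        self.records.append((self.vtime, pos, rec))
        return self.vtime    # travels with the record if it migrates

def recover(machine_logs):
    # each log is already sorted by vtime; merge them into a global sequence
    merged = heapq.merge(*(m.records for m in machine_logs))
    home = {}
    for _, pos, rec in merged:
        home[pos] = rec      # later virtual time wins
    return home

a, b = MachineLog(), MachineLog()
t = a.append(7, "v1")                # record 7 first updated on machine a
b.append(7, "v2", seen_vtime=t)      # migrates to b; b's update sorts later
assert recover([a, b])[7] == "v2"    # recovery applies updates in order
```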

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Fri, 15 Apr 2005 08:04:19 -0600
David Dyer-Bennet <dd-b@dd-b.net> writes:
Yes, there's a niche for mainframe computing, definitely. Generally, for single central systems for database and transaction-processing networks. For small to medium companies that's a super-mini in the old terms, probably a Sun server these days. For bigger companies, it's a full-blown mainframe.

DEC might have been able to keep the mainframe lines going, but they were looking fairly old at the time. And that's not the market where going head-to-head with IBM has ever been a big success.


there was a slightly separate issue regarding the "glass house" and that was managed storage ... backups, disaster/recovery, etc. ... regardless of what kind of processor was accessing the data.

the disk division was seeing lots of data migrating out of the glass house ... in part because the spigot/access to data in the glass house was severely constrained. the issue was frequently this data migration involved some of the highest valued corporate assets ... and if it got lost while out in the distributed wilds ... it could really be lost.

some past refs:
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2003f.html#27 Ibm's disasters in the 70's
https://www.garlic.com/~lynn/2003j.html#44 Hand cranking telephones
https://www.garlic.com/~lynn/2003m.html#12 Seven of Nine

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: alt.folklore.computers
Date: Fri, 15 Apr 2005 09:00:45 -0600
Jeff Teunissen writes:
[snip]

Mike Cowlishaw


[snip]

slightly related:
https://www.garlic.com/~lynn/2004d.html#17 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#26 REXX still going strong after 25 years

I'm trying to remember ... this may have been the VMITE that happened shortly after the first Chuck E. Cheese opened .... out behind the Blossom Hill shopping center (in an old, converted supermarket bldg).

For more topic drift, i had done a backup/archive system that was deployed at several installations in the san jose area. Mark had helped me with "release 2" (and gave a presentation on it at this vmite). A group was assigned to the effort and it eventually morphed into Workstation Datasave Facility ... and then later into ADSM; it is now called TSM (tivoli storage manager) ... misc. postings:
https://www.garlic.com/~lynn/submain.html#backup

In any case, Mike is on the agenda .. from somewhere long ago and far away ...

Date: 01/21/80 18:17:21
From: STLVM2 xxxxxx
To: Distribution

Hi Everyone, This is the semi-final agenda. Please contact me if you have any questions or concerns. This agenda is only being sent to those of you who appear in the distribution list. If I over looked anyone, I hope you find out in time to do something about it!

GPD VM/370 Internal Technical Exchange

VM/370 Support
R93/D12
Santa Teresa Laboratory
San Jose, California 95150

VM/370 Internal Technical Exchange

On February 26, 27 and 28, GPD-IS will host a VM/370 Internal Users Meeting at the San Jose Research facility, building 028, 5600 Cottle Rd., San Jose, California 95193. The object of the meeting will be a technical exchange of information among IBM locations using VM/370 around the world. Please direct any questions about this meeting to me at xxxxxxxx. If you wish to use the internal network, my node is STLVM2 and my userid is xxxxxxxx.

Please bring your badge or ID card as it will be required for everyone.

The total meeting attendance will be limited to 260 so make your reservations early! If you are coming from the USA you will need to make your own room reservation. If you are coming from outside the USA you may send an ITPS to xxxxxx, R14/D184, Santa Teresa, AAST and she will make the requested reservations and send you a confirmation ITPS. Please include your ITPS address as part of the original ITPS so Marge can send your confirmation.

I will have all the presentations reproduced and distributed after the meeting to all attendees unless they request otherwise. I will have them ready before the meeting IF I receive them in time.

xxxxxxxx will be our secretary for the meeting this year. The message phone number is 8/xxxxxxxx. It will only be available from 1:30 to 3:30 PST. Any return travel confirmations can be made through Marge at these times. There will be 2 phones available just outside the conference room for your use.

The emergency (personal emergency only) phone number is (408) xxxxxxxx. DO NOT USE THE EMERGENCY NUMBER FOR BUSINESS MESSAGES!

GPD VM/370 Internal Technical Exchange

The following is the agenda for the meeting.

Tuesday, February 26, 1980:


 9:00 -  W.W.            - VM/370 ITE Kickoff. Mr. W.W. is the
                           President of GPD.
 9:30 -  Ray             - ITE Overview.
 9:45 -  Forrest         - Dynamic Writable Shared Segment Overview.
10:00 -  Jim             - System R, An Overview.
10:30 -  Break
11:00 -  Gene            - Extended Networking.
11:30 -  Roy             - Network management and load balancing
                           tools.
12:00 -  Lunch
 1:00 -  Peter           - Network Response monitoring, Remote
                           Dial Support, and VM/370 HyperChannel
                           attachment.
 1:20 -  Jeff            - CADCOM - Series/1 to VM/370
                           inter-program communications.
 1:35 -  Noah            - PVM - Pass Through Virtual Machine
                           Facility.
 2:00 -  Noah            - EDX on Series/1 as a VNET workstation.
 2:15 -  Tom             - Clock - Timer-Driven CMS Virtual
                           Machine.
 2:30 -  Break
 3:00 -  Vern            - 3540 Diskette Read/Write Support in CMS.
 3:30 -  Bobby           - VM/SP - System Product offering
                           overview and discussion.
 4:30 -  Closing         - From this point on there can be small
                           informal sessions on points of interest.

Wednesday, February 27, 1980

 9:00 -  Billie          - Common System Plans, modifications and
                           results.
 9:30 -  Claude          - XEDIT Update.
         Nagib
10:00 -  Graham          - SNATAM - Controlling SNA devices from
                           CMS.
10:30 -  Break
11:00 -  Mike Cowlishaw  - REX Executor.
11:45 -  Mark            - San Jose File Back-up System.
12:00 -  Lunch
 1:00 -  Albert          - VMBARS - VM/370 Backup and Retrieval
                           System.
 1:45 -  Chris           - 6670 Office System Printer and
         Tom               VM/370.
 2:15 -  Break
 2:45 -  Rodger          - VM/370 Based Publication System.
 3:15 -  Dieter          - Photo-composition Support in DCF.
 3:30 -  John            - VM Emulator Extensions.
         Dave
 3:40 -  Tom             - DPPG Interactive Strategy and VM/CMS.
 4:30 -  Closing         - From this point on there can be small
                           informal sessions on points of interest.
 4:30 -  Editor Authors  - An informal exchange of information on
                           the changes coming and any input from
                           users on edit concepts. All those wishing
                           to express their opinions should attend.

Thursday, February 28, 1980:

 9:00 -  Ed              - VM/370 Multi-Drop Support.
 9:30 -  Ann             - Individual Password System for VM/370.
10:00 -  Walt            - Individual Computing based on EM/YMS.
10:30 -  Break
11:00 -  Chris           - EM/370 - Extended Machine 370 and
                           EM/YMS Extended Machine Yorktown
                           Monitor System.
12:00 -  Lunch
 1:00 -  Simon           - Extended CMS Performance Monitoring
                           Facility.
 1:30 -  George          - Distributed Processing Machine
                           Controls.
 2:00 -  Mike            - Planned Security Extensions to VM/370
                           at the Cambridge Scientific Center.
 2:30 -  Break
 3:00 -  Steve           - Intra Virtual Machine Synchronization
                           Enqueue/Dequeue Mechanisms.
 3:30 -  Jeff            - Boulder F.E. Source Library Control
                           System.
 4:00 -  Ray             - VMITE Closing.

... snip ... top of post, old email index

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Sun, 17 Apr 2005 14:34:16 -0600
rpl writes:
np, "White Plains" I got, "POK" was left field (and Dylan struck some sort of paisley flashback we won't go into).

I'm not a DECite as such, I have more experience on Big Blue big/medium iron than VAXen (though I think you're referring to /BAH).

But I'm *not* saying that Big Blue was a contender with their model-line at that time (mid 80s) for that market; DEC was and could've (IMHO) grabbed a good portion of any installation 8 < x < 20 terminals[1].


somewhat related
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 18 Apr 2005 00:10:39 -0600
rpl writes:
I'm surprised at the bang/buck ratio of 4341s vs 303xs (as well as that *any* 43xxs had MVS), though I have to admit I never fully understood the target market differences between OS & DOS.

a couple benchmark refs:
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Mon, 18 Apr 2005 09:58:20 -0600
"Tom Linden" writes:
I acquired a 751 in early 82 and I think the 730 came out later that year

unfortunately, doesn't give breakdown for years 78-84

from:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

posting of 1988 IDC report .... repeat of portion:


VAX SHIPMENTS - WORLD-WIDE
--------------------------
             1978-
SYSTEM       1984      1985      1986      1987     TOTAL
--------   --------  --------  --------  --------  --------
11/725       1,100       400         0         0     1,500
11/730       5,550     1,500         0         0     7,050
11/750      16,340     3,900       990       370    21,600
11/780      19,200     3,700       670       370    23,940
11/782         310         0         0         0       310
11/785         300     2,700       850       200     4,050
MVI          2,200       600         0         0     2,800
MVII             0    10,900    25,000    29,000    64,900
82XX             0         0     1,875     2,795     4,670
83XX             0         0       500     1,000     1,500
85XX             0         0       725     1,380     2,105
86XX             0     1,200     1,200     1,200     3,600
8700             0         0       140       530       640
8800             0         0        80       420       500
--------   --------  --------  --------  --------  --------
TOTAL       45,000    24,900    32,030    37,265   139,195

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 15 Apr 2005 11:53:50 -0600
Janne Blomqvist writes:
AFAIK Ericsson mainly used a proprietary language called "PLEX" for the switch code (AXE-10). In the late 80's they embarked upon a next generation switch project using C++, the infamous AXE-N project. About 8 years and 6 billion SEK later the AXE-N project was canceled. While projects obviously fail for a lot of different reasons, in this case a lot of blame has been attributed to the use of C++ (immaturity of C++ at the time, memory management issues, concurrency issues, fault tolerance etc. due to the lack of language support for these features). After AXE-N tanked they essentially restarted the project using another language, Erlang, which eventually resulted in the AXD switch which is what they peddle these days, if I'm not mistaken.

long ago we did a one-week JAD with taligent about what it would take to upgrade it for business critical dataprocessing (i.e. add the facilities needed to make it useful for business critical dataprocessing). first cut estimate was about a 30 percent hit to their current base and about a 1/3rd increase in new frameworks (specifically for business critical dataprocessing).

as it happened, there were some "nextstep" people around during that period ... claiming that you could get just about everything needed for lots of business critical dataprocessing from nextstep.

random past taligent posts
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#60 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#76 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002m.html#60 The next big things that weren't
https://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003e.html#28 A Speculative question
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#15 A Dark Day
https://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
https://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 15 Apr 2005 17:18:27 -0600
at least one new buffer overflow article or book a week
http://www.linuxsecurity.com/content/view/118881/49/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

OTP (One-Time Pad Generator Program) and MD5 signature

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OTP (One-Time Pad Generator Program) and MD5 signature
Newsgroups: comp.security.firewalls,comp.security.misc,comp.security.unix
Date: Mon, 18 Apr 2005 10:06:13 -0600
Harald Hanche-Olsen writes:
Almost: You send 'choginzx', the server computes the MD5 signature, and checks it against the next MD5 signature in its file. If they match, it lets you in. It also marks that signature as having been used, so it cannot be used again to gain access.

(I'm confused by your use of the word recipient, though: This is for access control, not for sending and receiving encrypted messages.)
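the check-and-mark-used logic described above can be sketched in a few lines (hypothetical class and names, purely illustrative; the server stores only MD5 digests of the one-time passwords, in order):

```python
import hashlib

class OtpServer:
    """Illustrative sketch of the access check described above."""

    def __init__(self, passwords):
        # Store only the MD5 digests, in the order they must be used.
        self._digests = [hashlib.md5(p.encode()).hexdigest()
                         for p in passwords]
        self._next = 0  # index of the next digest expected

    def login(self, password):
        if self._next >= len(self._digests):
            return False  # password list exhausted
        digest = hashlib.md5(password.encode()).hexdigest()
        if digest == self._digests[self._next]:
            self._next += 1  # mark as used; a replay will not match
            return True
        return False
```

so sending 'choginzx' works once; sending it a second time fails because the server has already advanced past that digest.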


misc. discussions of one-time-password & associated internet standard
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#1 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#2 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#3 public key vs passwd authentication?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 18 Apr 2005 17:06:04 -0600
edgould@ibm-main.lst (Ed Gould) writes:
Bruce,

That was a good story, thanks for sharing.

My favorite was a "timing" issue a field was being updated between two instructions. Our SE suggested a CS (compare and swap) and it worked like a champ. The problem didn't happen on SVS (or MVT for that matter).

Nice having to debug a vendor's code.


Test&Set atomic instruction was available on 360 ... basically for multiprocessor locking/synchronization ... 360/65 MPs and 360/67 MPs.
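the way test&set gets used for locking can be sketched like this (illustrative python, names made up; the instruction's atomicity is simulated with a host lock, since python has no such primitive):

```python
import threading

class Cell:
    """A lock word manipulated only via a simulated test-and-set."""

    def __init__(self):
        self.value = 0
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        # Atomically read the old value and set the cell to 1.
        with self._atomic:
            old = self.value
            self.value = 1
            return old

def acquire(cell):
    # Spin until test-and-set observes the cell was 0 (lock was free).
    while cell.test_and_set() == 1:
        pass

def release(cell):
    cell.value = 0
```

this is the classic spin lock: whoever sees the old value 0 owns the lock; everyone else keeps retrying until the owner stores 0 back.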

360/67 was the only 360 with virtual memory support ... very similar to what was later introduced in 370. 360/67 also had both 24-bit and 32-bit virtual addressing modes (32-bit was dropped in 370; however, 31-bit addressing was later introduced with 370/xa on 3081s).

360/67 multiprocessing was announced with up to 4-way and a box called a channel director ... which allowed all processors to address all channels. I don't know of any 4-way machines that were built and I'm only aware of a single 3-way machine being built ... the rest were 2-way SMPs.

The 360/67 "channel director" had a bunch of switch settings that could partition the configuration, associate various components in subset configurations, etc. The values of the switch settings were available in half-dozen or so control registers (used these days for access registers). The one 3-way SMP channel director also had a feature that it was possible to reset the channel director switch settings by loading values into the control registers.

at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

charlie was doing a lot of work on fine-grain locking and serialization (with cp67 and 360/67 smp) and invented a new instruction ... which was given a mnemonic of his initials, CAS. Then we had to come up with a name for the new instruction that matched his initials ... and compare&swap was born ... various smp and compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

trying to get compare&swap into the 370 architecture ... the owners of the 370 architecture in POK said that there wasn't a big demand for any (new) SMP-oriented instruction (at least in the POK "batch" operating system ... they were satisfied using the test-and-set instruction for the few locks they had implemented for SMP support) ... and they suggested coming up with a non-SMP-specific justification. Thus was born the use of compare&swap for "interrupt-enabled", multi-threaded code (whether running on uniprocessor or multiprocessor hardware) ... and the description that was in the compare&swap programming notes (which have since been moved to the principles of operation appendix).
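the programming-notes pattern for multi-threaded code boils down to a retry loop: read the old value, compute the new one, and store it only if the word is still unchanged. a sketch (illustrative python with the atomicity simulated by a host lock; not the actual principles-of-operation text):

```python
import threading

class Word:
    """A shared word updated only via simulated compare-and-swap."""

    def __init__(self, value=0):
        self.value = value
        self._atomic = threading.Lock()  # stands in for the CS instruction

    def compare_and_swap(self, old, new):
        # Store new only if the word still holds old; report success.
        with self._atomic:
            if self.value == old:
                self.value = new
                return True   # swapped (condition code 0)
            return False      # someone else got there first

def atomic_add(word, n):
    # Usable from interrupt-enabled, multi-threaded code on either
    # uniprocessor or multiprocessor hardware: no lock is ever held
    # across the read-modify-write; a lost race just retries.
    while True:
        old = word.value
        if word.compare_and_swap(old, old + n):
            return
```

the point of the non-SMP justification: even on a uniprocessor, a task can be interrupted between the read and the store, and the retry loop makes the update safe without disabling interrupts.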

the 360/67 blue card had the layout for the 360/67 control registers.

my trusty "blue card" i borrowed from the "M" in GML ... and has his name "stamped" on the front .... aka GML (precursor to SGML, HTML, XML, etc) was also invented at the science center ... and "GML" are the three last name initials of the people involved at the science center:
https://www.garlic.com/~lynn/submain.html#sgml

misc. past 360/67 "blue card" references:
https://www.garlic.com/~lynn/99.html#11 Old Computers
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001.html#69 what is interrupt mask register?
https://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2003l.html#25 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004l.html#65 computer industry scenairo before the invention of the PC?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 18 Apr 2005 18:01:26 -0600
Eric Smith writes:
That's something that's puzzled me for a long time. Why did they only go to 31-bit addressing in 370/XA rather than 32-bit?

the story i always heard was that bxle/bxh loops could be used with negative values (addresses would be treated as positive ... but the increment could be either positive or negative and they wanted signed arithmetic to work on addresses).
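the signed-arithmetic point can be illustrated with a toy model of 32-bit words (not 370 code; just the two's-complement arithmetic involved):

```python
def as_signed32(x):
    """Interpret a 32-bit word as a signed two's-complement integer."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

# A 31-bit address always has the top bit clear, so it compares as
# positive in signed arithmetic:
assert as_signed32(0x7FFFFFFF) > 0

# A full 32-bit address with the high bit set would compare as
# negative, breaking signed compares on addresses:
assert as_signed32(0x80000000) < 0

# Negative increments (as in bxle/bxh loops) still work correctly on
# 31-bit addresses: adding the word 0xFFFFFFFC is a signed -4.
addr = 0x00001000
assert as_signed32(addr + as_signed32(0xFFFFFFFC)) == 0x00000FFC
```

keeping addresses to 31 bits reserves the sign bit, so the same signed compare and signed increment machinery works on addresses unchanged.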

i used to have the registered confidential 811 (for 11/1978) documents in a specially secured cabinet ... which had all those discussions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 19 Apr 2005 09:02:51 -0600
DASDBill2 writes:
User code was not all that would break. One of many reasons why IBM began moving some of their system services (catalog, e.g.) into private address spaces was to limit the damage done by IBM's own bugs, thus making it easier for them to catch and fix their own problems. Virtual Storage Constraint Relief was one of the chief benefits of this move, but system stability was also improved.

the virtual storage constraint (way back in the 70s) was that 16mbyte addressing was being totally eaten up by system stuff. separate address spaces can help with fault isolation and recovery strategies.

the basic (virtual storage constraint) issue then was the convention of passing pointers and the convention (inherited from os/360) of everything being in the same address space.

initially, SVS was essentially MVT laid out in a single 24-bit virtual address space. MVS enhanced that so there was a virtual address space per application ... but the kernel (and subsystems) still occupied/shared (each) address space ... essentially as it had in os/360 days.

with the proliferation of kernel and subsystem code ... there was starting to not be enuf space left (in 16megs) for application code ... and there was starting to be a migration of subsystems into their own virtual address space (to free up space for application code). this created a problem with preserving the pointer passing paradigm ... and so you got the common "segment" (place to stash data that was available to all virtual address spaces). Even with the migration of subsystems into their private spaces ... a typical late '70s shop had each 16mbyte virtual address space taken up with 8mbyte MVS kernel and 4mbyte common "segment" (and some were being forced to 5mbyte common "segment") ... leaving only maximum of 4mbytes (or 3mbytes) in each virtual address space for application.

the hardware started to be designed specifically for the MVS implementation. the 168 & 3033 table lookaside buffer used the virtual "8 mbyte" bit as one of the indexes; the result being that half of the TLB entries were used for the MVS kernel and the other half of the TLB entries were used for "application" space. If you had a different virtual memory paradigm ... say starting at virtual zero and only increasing as needed ... (at least) half of the TLB entries would likely be unused.
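a toy model of that split-TLB effect (hypothetical set count and hash, not the actual 168/3033 TLB design; it just shows what using the "8 mbyte" bit as an index bit does):

```python
# Hypothetical, simplified TLB index: one index bit is the virtual
# "8 mbyte" bit (bit 23 of the address); the rest come from low-order
# page-number bits.

PAGE_SHIFT = 12   # 4K pages (illustrative)
TLB_SETS = 64     # hypothetical number of sets

def tlb_set(vaddr):
    page = vaddr >> PAGE_SHIFT
    eight_mb_bit = (vaddr >> 23) & 1
    return (eight_mb_bit << 5) | (page & 0x1F)  # 6-bit set index

# With the MVS layout (kernel below 8MB, application above), the two
# halves of the TLB split between kernel and application references:
assert tlb_set(0x00400000) < 32    # kernel address -> low half
assert tlb_set(0x00C00000) >= 32   # application address -> high half

# With a layout that grows from virtual zero and stays under 8MB,
# every reference lands in the low half; the other half sits unused:
assert all(tlb_set(a) < 32 for a in range(0, 0x00700000, 0x1000))
```

the design only "uses" the whole TLB because MVS guarantees heavy traffic on both sides of the 8mbyte line.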

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DNS Name Caching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DNS Name Caching
Newsgroups: bit.listserv.ibm-main
Date: Tue, 19 Apr 2005 10:17:38 -0600
PTLyon@ibm-main.lst (Lyon, Patrick T) writes:
We recently went through a problem where Network Services changed a DNS entry, but the old IP address kept coming up. I am assuming that it was cached in the mainframe stack. It eventually came up with the correct address, but it took somewhere around 30 minutes.

Is there a setting in the IP config that controls this? Also, is there a command that can be issued to update cache, like on a windows machine IPCONFIG /FLUSH DNS?


slightly related ... providers ignoring DNS TTL?
http://ask.slashdot.org/askslashdot/05/04/18/198259.shtml?tid=95&tid=128&tid=4

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 19 Apr 2005 11:26:11 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
MVS wasn't the first operating system on the S/370, and didn't run on the S/360. Previous[1] operating systems on S/360 ran in real mode because that was all there was[1]. They did *not* give the application complete control of the machine.

And as I know that the Devil is in the details, and that it pays to know what the actual problem is before changing things to correct it. In this case the detection of the existing breakage was due to adding an additional protection mechanism to a system that already had one.

[1] Other than those limited to the 360/67.

[2] Really a different release of the same, but rebranded, operating system.


... some of those details ...

TSS/360 was the "official" system for 360/67 ... while cp67 was done at cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

at one point i believe there were something like 1200 people working on tss/360 at a time when there were 12 people working on cp67.

initially, 370 was announced w/o virtual memory (you got "real storage" dos, mft, mvt continuing to run on 370s).

in preparation for virtual memory announcement for 370, cp67 morphed into vm/370, dos to dos/vs, mft to vs1, and mft to SVS.

the initial prototype for SVS was mvt with a little glue to setup a single virtual address space ... and "CCWTRANS" borrowed from CP67 to do the CCW chain scanning, pin virtual pages in real memory, create the "shadow" CCWs, translate virtual CCW addresses to real addresses, etc. basically you had MVT running in a single 16mbyte virtual address space. Morphing from SVS to MVS gave each application their own 16mbyte virtual address space (although the kernel and some subsystems continued to reside in every virtual address space).

previous post
https://www.garlic.com/~lynn/2005f.html#43 Moving assembler programs above the line

part of the issue was that non-virtual memory 370 was just an interim step on the way to "FS" (future system)
https://www.garlic.com/~lynn/submain.html#futuresys

FS was going to be as big a departure from 360 as 360 had been from earlier computers. supposedly one of the major justifications for FS was the competition from clone controllers
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

when i was an undergraduate, i got to work on a project that reverse engineered the ibm channel interface and built a channel interface board that went into a minicomputer ... which was programmed to emulate an ibm controller. four of us got written up as being responsible for spawning clone controllers
https://www.garlic.com/~lynn/submain.html#360pcm

some folklore is that Amdahl left to build clone processors in a disagreement over the FS direction. in the early 70s, Amdahl gave a talk in an mit auditorium where he was asked what arguments he used in the business case to justify funding. he answered that there was already at least $100b invested in 360 customer application software, and even if ibm were to immediately walk away from 360s (a veiled reference to FS), there would still be customer demand for "360" processors at least thru the end of the century.

more folklore is that some of the FS refugees went to rochester and implemented FS as s/38.

Also that 801/RISC project
https://www.garlic.com/~lynn/subtopic.html#801

was at least partially motivated to do the exact opposite hardware implementation from FS (extremely simple rather than extremely complex).

virtual memory could be activated on 135 & 145 with just changing the microcode ... however significant hardware had to be added to 155s and 165s to get virtual memory capability.

the 370 "red book" (the architecture manual, done in cms script as a superset of the principles of operation; depending on which cms script options were set, you printed either the full architecture manual or just the principles of operation subset sections) had several more features in 370 virtual memory than what was announced.

one of the additional features was (shared) segment protection. The morphing of CMS from cp67 to vm370 was going to make use of the 370 shared segment protection. however, coming down to the wire, the 165 engineers at one point said that if they only had to do a subset of the 370 virtual memory architecture ... they could get it out six months earlier (or to do the full 370 virtual memory implementation would delay 165 virtual memory support by six months ... which would delay the 370 virtual memory announcement for all processors by six months).

POK operating system people (SVS) said they had no need for any of the additional 370 features and to go ahead with the subset ... in order to get virtual memory out six months earlier. this then hit CMS ability to do (shared) segment protection ... and resulted in doing a real hack that played with the storage protect keys.

this is similar ... but different to the POK operating system people saying that they didn't need any atomic synchronization primitive other than test&set for SMP support ... which drove the science center to come up with the application description use for compare&swap in multi-threaded environments ... recent reference
https://www.garlic.com/~lynn/2005f.html#41 Moving assembler programs above the line

vm microcode assist was added to 158 & 168 processors ... basically control register six was loaded and various privileged instructions would be executed (in the hardware) ... in problem state mode ... but following virtual machine rules (saving the interrupt into the cp kernel and software simulation). over the years this expanded into pr/sm and lpars.

this worked with various guest virtual machines ... but didn't work with CMS ... because the hack playing with the storage protection keys wasn't understood by the microcode assists. A new scheme was invented for vm370 release 3 which eliminated the game playing with the storage protection keys for CMS shared segment protection ... allowing use of the vm microcode assist on 158 & 168 processors for cms environments. it used a gimmick that didn't protect shared segments ... but at task switch time would scan all pages in shared segments for modification (and if any were found, invalidate the page and force the unmodified page to be refreshed from disk). The issue was that the overhead of scanning (at most) 16 virtual pages (at every task switch) was less than the performance improvement from using the hardware microcode assists (in cms intensive environments).

the problem was that for release 3 ... they also picked up a small subset of my virtual memory management changes (called DCSS):
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

which, at a minimum, doubled the number of shared pages that had to be scanned at every task switch. It turned out that the overhead of scanning (at least) twice as many pages in shared segments ... was more than the performance savings from using the vm microcode assist ... i.e. by the time the change shipped to allow cms intensive environments to use the microcode assist ... the performance trade-off for the change was no longer valid.
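the shape of that trade-off in a toy cost model (all numbers hypothetical, purely to show why doubling the pages scanned flips the comparison):

```python
# Toy cost model, hypothetical units. The microcode assist saves a
# fixed amount of simulation overhead per task switch; protecting
# shared segments by scanning costs an amount proportional to the
# number of shared pages scanned at each task switch.

ASSIST_SAVINGS = 20.0      # per task switch (made-up number)
SCAN_COST_PER_PAGE = 1.0   # per shared page scanned (made-up number)

def net_benefit(shared_pages):
    return ASSIST_SAVINGS - SCAN_COST_PER_PAGE * shared_pages

# At 16 shared pages the assist still comes out ahead ...
assert net_benefit(16) > 0
# ... but doubling the shared pages (as DCSS did) flips the trade-off:
assert net_benefit(32) < 0
```

the numbers are invented; the point is only that a fixed per-switch saving minus a per-page scan cost changes sign once the page count grows enough.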

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 19 Apr 2005 13:22:49 -0600
Anne & Lynn Wheeler writes:
which, at a minimum, doubled the number of shared pages that had to be scanned at every task switch. It turned out that the overhead of scanning (at least) twice as many pages in shared segments ... was more than the performance savings from using the vm microcode assist ... i.e. by the time the change shipped to allow cms intensive environments to use the microcode assist ... the performance trade-off for the change was no longer valid.

i.e.
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line

note that this was further aggravated by smp support. the gimmick with allowing cms with shared segments to be run with the 158/168 microcode assist ... effectively involved the ruse that the "protected" pages became private for the duration of the running task ... and then the scan would revert any modified pages to their original, unmodified version (paged in from disk).

smp allowed multiple tasks to run concurrently with the same shared segments ... so it was no longer possible to maintain the ruse that the pages were private to the task running (a 2-way processor might have two virtual machines concurrently referencing the same virtual pages).

the smp stuff that i had done maintained the pre-release3 gimmick of playing games with storage protection keys
https://www.garlic.com/~lynn/subtopic.html#smp

besides the VAMPS implementation
https://www.garlic.com/~lynn/submain.html#bounce

where it was possible to modify the microcode to support the original shared segment protection ... i also deployed the smp support (with key protect hack) on the hone systems:
https://www.garlic.com/~lynn/subtopic.html#hone

hone was the internal vm-based (originally started with cp67) time-sharing service that eventually was used to support all world-wide marketing, sales, and field activities.

so there was two hacks for shipping smp to customers ... one was that much of the smp implementation was based on code in the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and the other was creating replicated shared segment pages, one for each processor in an smp configuration (that continued the "protection" ruse of scanning shared pages for changes at task switch times).

as previously mentioned ... the smp support issue with the resource manager had to do with priced software. Up until the resource manager, application software was priced, but kernel software was free. The resource manager was chosen as the guinea pig for pricing kernel software. I got to work with the business people on and off over a six month period working on business practices for pricing kernel software. The guideline that emerged was that kernel software not directly involved in supporting hardware could be priced (i.e. nominally the resource manager had improved scheduling algorithms and so was not considered directly required for hardware support).

the problem with shipping smp support ... was that it was obviously directly needed for hardware support ... and therefore had to be free; however, if it had a prereq of the resource manager ... that violated the business practices. the solution (for shipping the smp code) was to remove 80-90 percent of the code from the original resource manager (that was required for smp support) and incorporate it into the "free base".

ok, so why couldn't the scanning of shared pages (at task switch) just be dropped? ... and the system revert to the protection hack based on storage protection keys.

well, the vm microcode assist for the 168 cost something like $200k. customers that were guest operating system intensive bought the assist because it improved the performance of their guest operating systems (svs, vs1, mvt, etc). leading up to release 3 ... a number of cms intensive shops were told that now the vm microcode assist would enhance their performance also ... and that it would be worthwhile spending the $200k on the assist (even tho by the time the support had shipped, the number of shared pages to be scanned had at least doubled and the performance trade-off comparison numbers were no longer valid).

In any case, it was considered too much of a gaffe to go around and tell all the CMS-intensive shops who had bought the 168 microcode assist that it wasn't worth it ... and that VM was dropping support (reverting to the key protection hack).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 19 Apr 2005 14:36:03 -0600
rlw@ibm-main.lst (Bob Wright) writes:
Anne & Lynn Wheeler wrote: > in preparation for virtual memory announcement for 370, cp67 morphed > into vm/370, dos to dos/vs, mft to vs1, and mft to SVS.

Actually, it was mvt to SVS and later to MVS.


re:
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line

sorry, fingerslip ... mft to vs1 ... MVT to svs (not mft to svs).

note however the very next paragraph did get it right
the initial prototype for SVS was mvt with a little glue to setup a single virtual address space ... and "CCWTRANS" borrowed from CP67 to do the CCW chain scanning, pin virtual pages in real memory, create the "shadow" CCWs, translate virtual CCW addresses to real addresses, etc. basically you had MVT running in a single 16mbyte virtual address space. Morphing from SVS to MVS gave each application its own 16mbyte virtual address space (although the kernel and some subsystems continued to reside in every virtual address space).

I remember some pok visit where we were in the 706(?) machine room late 3rd shift ... and Ludlow(?) was putting together the AOS2 prototype (what was to become SVS) ... basically MVT with a little bit of glue code to create a single 16mbyte virtual address space (thus the eventual name SVS ... or single virtual storage ... as opposed to the later MVS ... or multiple virtual storage); in effect, most of MVT just thought it was running in a 16mbyte real machine. The other piece that had to be done for AOS2 was that CCWTRANS was borrowed from CP67 to scan the virtual CCW programs, creating shadow/real channel programs, fixing virtual pages, translating virtual to real addresses, etc.

the actual reason for the pok visit was probably some 370 architecture review board meeting ... but there was also various kinds of technology transfer activities ... educating people in POK about virtual memory technology.

i did have some problems with the group doing the page replacement algorithm for SVS (and used well into the MVS lifetime). i had done a global LRU approximation algorithm as an undergraduate (aka clock-like). The POK performance modeling group had come up with the idea that if the replacement algorithm replaced non-changed pages before selecting changed pages for replacement ... it was more efficient (i.e. you didn't have to write the changed page out before taking over the real memory slot). no matter how much i argued against it ... they steadfastly went ahead and did it anyway.

It was well into the MVS life cycle ... that somebody noted that MVS was selecting high-used, shared program pages (aka linkpack, etc) for replacement before lower-used, private data pages.
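the difference between the two policies can be sketched in a toy model (entirely my own illustration ... the names and structure are invented, not the actual SVS/MVS code):

```python
class Frame:
    """One real-memory page frame with hardware-maintained bits."""
    def __init__(self, page, referenced=False, changed=False):
        self.page = page
        self.referenced = referenced   # set by hardware on any access
        self.changed = changed         # set by hardware on any store

def clock_select(frames, hand):
    """Plain clock (global LRU approximation): sweep the frames,
    clearing reference bits, and evict the first unreferenced page."""
    while True:
        f = frames[hand]
        if f.referenced:
            f.referenced = False       # give it a second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand                # the victim frame

def clean_first_select(frames, hand):
    """The SVS variant: prefer unreferenced *and* unchanged pages, to
    avoid the page-out write. The flaw: shared program pages are never
    modified, so heavily used code looks as cheap to evict as idle data."""
    n = len(frames)
    for i in range(n):
        f = frames[(hand + i) % n]
        if not f.referenced and not f.changed:
            return (hand + i) % n
    return clock_select(frames, hand)  # no clean candidate: plain clock
```

with an idle, modified data page in one frame and a clean shared program page in another (both unreferenced at sweep time), plain clock takes the first frame it sweeps to, while the clean-first variant always skips the dirty data page and evicts the program page ... exactly the behavior somebody noticed well into the MVS life cycle.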

the clock thing had another aspect
https://www.garlic.com/~lynn/subtopic.html#wsclock

circa 1980 there was a person that wrote their stanford PhD thesis on clock replacement. I had done the original work in the late 60s while an undergraduate ... about the time there was an ACM paper on working set, local LRU replacement, etc. The "clock" stuff was global LRU replacement and there was significant opposition from the "local" LRU forces against granting the stanford PhD (involving global LRU).

however, in the early 70s, there had been an ACM article published by people at the grenoble scientific center on their modifications to CP/67 exactly implementing what was described in the late '60s ACM paper on working set. Grenoble was running cp/67 on a 1mbyte 360/67 (154 "pageable" pages after fixed kernel requirements) with a 30-35 user CMS-intensive workload. At the same time, the cambridge scientific center was running CP/67 on a 768kbyte 360/67 (104 "pageable" pages after fixed kernel requirements) with a 75-80 user CMS-intensive workload (avg. CMS use was nearly the same at the two science centers).

The cambridge system, with 75-80 users, 104 pageable pages, and a global LRU implementation, had about the same response and thruput as the grenoble system with its local LRU implementation, 30-35 users, and 154 pageable pages (with the two CMS communities doing nearly the same mix of activities). Having the two sets of performance comparison numbers helped break the long logjam in getting the stanford PhD thesis approved.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1403 printers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1403 printers
Newsgroups: alt.folklore.computers
Date: Tue, 19 Apr 2005 15:18:18 -0600
Rich Alderson writes:
I remember a SHARE session (probably San Francisco, Feb '84) at which the experimental SJERPA (pronounced "sherpa") printer was described; this is what became the 6670, from what I can tell. The clarity of the print is no surprise when you consider that the mechanism is a big IBM copier.

the 6670/etc were ibm copier3s with a computer interface to drive them. they could be used out in departmental areas, local stock rooms, etc. among other things ... they inherited the copier3 capability of printing on both sides of the paper (duplex printing).

san jose research had also modified the driver to print the separator page (selected from alternate paper drawer and usually loaded with different color paper) with some random quotation. there were two files that random quotations could be selected from ... the 6670 quotation file or the ibm jargon definition file.

because they were out in the open, "unsecure" areas (a typical supply room where you would find the copier, paper, etc) ... they were subject to security audit. during one such audit by security people from corporate hdqtrs ... they found a pile of output ... and the top file had a separator page with the (randomly selected) definition of auditor ... something about the people that go around the battlefield, after the war, stabbing the wounded. they complained loudly.

another situation was the april 1st password requirements memo. somebody had sent it to me ... and i had resent it to a couple other people. over the weekend somebody printed it on official corporate letterhead paper and placed it on all the official bulletin boards. monday morning, several people read it and thought it was real (even tho it was dated april 1st, which was the day before, a sunday, and no corporate memos are dated on sundays). ref to april 1st password memo:
https://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#62 OT Re: A beautiful morning in AFM.

after an investigation into who was responsible ... an edict went out that all corporate letterhead paper had to be kept in locked cabinets.

other past postings mentioning 6670
https://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000d.html#81 Coloured IBM DASD
https://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002h.html#7 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
https://www.garlic.com/~lynn/2002o.html#24 IBM Selectric as printer
https://www.garlic.com/~lynn/2002o.html#29 6670
https://www.garlic.com/~lynn/2003c.html#43 Early attempts at console humor?
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
https://www.garlic.com/~lynn/2004k.html#48 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#61 Shipwrecks
https://www.garlic.com/~lynn/2005f.html#34 [Lit.] Buffer overruns

note that this was a much more "personal" laser printer. from the mid-70s, there had been the 3800 datacenter laser printer ... which had paper feed rates measured in feet per second. some datacenters bypassed the boxed paper feed ... and had huge paper rolls (4-5 ft in diameter) feeding directly into the 3800. this shouldn't be confused with the later 3820 laser printer ... which was a desktop unit.

minor ref:
http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV3103.html

from above:
The IBM 3800 laser-electrophotographic printer of 1975 had a speed of 20,000 lines a minute in preparing bank statements, premium notices and other high-volume documents. Laser beam paths were altered millions of times a second and were reflected from an 18-sided mirror that spun at 12,000 revolutions per minute. (VV3103)

...

some pictures:
http://ukcc.uky.edu/~ukccinfo/ibm3800.html
http://www-03.ibm.com/ibm/history/history/year_1976.html
http://pw1.netcom.com/~zmoq/pages/ibm370.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 19 Apr 2005 15:33:29 -0600
patrick.okeefe@ibm-main.lst (Patrick O'Keefe) writes:
Certainly a program designed to run as a stand-alone program would not run under an operating system. The difference between stand-alone and operating system environments (even back in the pre-S/360 days) is far greater than upgrades to operating systems or even architecture levels. There is a huge amount of downward compatability with those upgrades. Standalone and operating environments are for the most part incompatable.

in the late '70s, there was the scenario of some of the dasd testcell regression programs being modified to run under mvs ... however they found that the MVS MTBF with a single operational testcell (i.e. a controller under development, which would do all sorts of prohibited i/o things) was on the order of 15 minutes. as a result, testcell testing (bldgs. 14 & 15) was limited to scheduled, dedicated stand-alone test time running the testing/regression programs.

I redid the i/o subsystem to be bulletproof, so that a half-dozen testcells could be operated concurrently ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#disk

unfortunately, i happened to document the information in a corporate confidential document ... including mentioning the 15min MTBF for MVS with a single testcell. even tho it was a corporate confidential document, the 2nd line MVS RAS manager in POK still caused quite a bit of uproar. It sort of reminded me of the CERN/SHARE study from the early 70s comparing CMS and TSO ... where the copies available inside the corporation were (re-)classified corporate confidential-restricted (available on a need-to-know basis only).

for a little more topic drift ... GML had been invented at the science center by "G", "M", and "L" ... misc. past posts
https://www.garlic.com/~lynn/submain.html#sgml

The early days: From typewriter to Waterloo Script:
http://ref.web.cern.ch/ref/CERN/CNL/2001/001/tp_history/

Waterloo Script was the Univ. of Waterloo version of CMS SCRIPT command. The above somewhat gives the history thread from GML and its evolution into HTML at CERN.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be?
Newsgroups: comp.arch.arithmetic,comp.arch,alt.folklore.computers
Date: Tue, 19 Apr 2005 16:34:28 -0600
"Stephen Fuld" writes:
My memory may be way faulty here, but wasn't /* */ for comments originally a S/360 thing? Perhaps in JCL or in S/360 assembler?

slash-asterisk punched into cols 1-2 of a card ... indicated end-of-input ... a sort of JCL thing.

example:


//name job parameters
//step exec forthclg
//sysin dd  *
...
fortran h program
/*
//go.sysin dd *
     ...
program data
/*

where forthclg is a proclib-stored jcl procedure (sort of like a shell command script). this saved having to punch out all the individual cards in the proclib.

the first sysin-dd* card stream is easy to associate with the compile (1st step). the 2nd sysin-dd* card ... is qualified to associate it with the 3rd executable step ("go step" as opposed to the 2nd link-edit step).
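the reader-side bookkeeping can be sketched as a toy parser (my own hypothetical illustration, not any real os/360 reader program ... it only handles the simple cases in the example above):

```python
# "//" in cols 1-2 is a JCL statement, "/*" in cols 1-2 is the
# end-of-input delimiter, and anything else is data belonging to the
# most recent in-stream "DD *" statement.

def split_deck(cards):
    """Return (jcl_statements, {ddname: data_cards}) for a simple deck."""
    jcl, data, current_dd = [], {}, None
    for card in cards:
        if card.startswith('//'):
            jcl.append(card)
            # crude check for an in-stream data set: "//name DD *"
            fields = card[2:].split()
            if len(fields) >= 2 and fields[1].upper() == 'DD' and '*' in fields[2:]:
                current_dd = fields[0]
                data[current_dd] = []
        elif card.startswith('/*'):
            current_dd = None          # delimiter: end of in-stream data
        elif current_dd is not None:
            data[current_dd].append(card)
    return jcl, data
```

running the deck from the example above through it would associate the fortran source cards with sysin and the program data cards with go.sysin, with each /* closing off the preceding in-stream data set.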

forthclg jcl proclib example
http://www.jaymoseley.com/hercules/compiling/cprocs4.htm#FORTHCLG

above from the following web site:
http://www.jaymoseley.com/hercules/compiling/compile.htm

another example of jcl ....
http://66.102.7.104/search?q=cache:qcHRueOaEPgJ:www.slac.stanford.edu/cgi-wrap/getdoc/slac-r-151.pdf+%2Bforthclg+%2Bibm+%2Bjcl+%2B%22go.sysin%22&hl=en

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1403 printers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1403 printers
Newsgroups: alt.folklore.computers
Date: Tue, 19 Apr 2005 16:47:08 -0600
Anne & Lynn Wheeler writes:
because they were out in the open, "unsecure" areas (a typical supply room where you would find the copier, paper, etc) ... they were subject to security audit. during one such audit by security people from corporate hdqtrs ... they found a pile of output ... and the top file had a separator page with the (randomly selected) definition of auditor ... something about the people that go around the battlefield, after the war, stabbing the wounded. they complained loudly.

a little digging ... and the entry from the 6670 jargon file:
[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:
1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


.....

and i tripped across an online version of the ibm jargon file at:
http://www.212.net/business/jargon.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1403 printers

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1403 printers
Newsgroups: alt.folklore.computers
Date: Tue, 19 Apr 2005 17:38:39 -0600
and some totally unrelated references ... when i was looking for 3800 laser printer references:

Personal Recollections of the Xerox 9700 Electronic Printing System
http://www.digibarn.com/collections/printers/xerox-9700/stories.html

birth of the laser printer
http://www.computerhistory.org/events/lectures/starkweather_03251997/starkweather_xscript.shtml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 19 Apr 2005 21:42:16 -0600
Charles Richmond writes:
Hold on there!!! That's why the original ARPANET had the IMP's (Interface Message Processors), because different machines and OS's were being networked together. All the IMP's ran the same software and could communicate easily.

the imps significantly increased the entry level costs for connecting to the arpanet. furthermore ... with the imps they had a homogeneous network with no requirement for the concept of internetworking of heterogeneous networks and gateways.

i've frequently asserted that was big reason that the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

had more nodes than the arpanet/internet from just about the start until around mid-85 (i.e. most of the internal network nodes had the equivalent of gateway and interoperability functionality ... which the arpanet/internet didn't get until the conversion off the imps to internetworking protocol on 1/1/83).

misc. past threads:
https://www.garlic.com/~lynn/internet.htm

the internal network was nearly 1000 nodes when the arpanet was around 250 nodes. reference to the 1000th node announcement
https://www.garlic.com/~lynn/internet.htm#22

there were other internal corporate networking protocols ... primarily those run on the primary batch system ... which had evolved out of HASP networking ... some of which seemed to have originated at TUCC (at least some of the code still had a "TUCC" label on some of the statements). this implementation (which continued well thru the JES incarnations) mapped network nodes into the HASP 255-entry pseudo device table. A typical HASP/JES node might have 60-80 local pseudo devices ... leaving possibly 170-190 entries for network node definitions (actually fewer than the arpanet had nodes). The HASP/JES nodes had to be restricted to boundary/end nodes since they had an implementation that trashed traffic where either the origin node or the destination node was not recognized (with the internal network approaching 1000 nodes at the time when the arpanet was only 250 nodes ... HASP/JES nodes would have trashed most of the traffic ... if the internal network had been dependent on their implementation).
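the drop behavior can be sketched (names and structure are my own invention ... the real HASP code shared its 255-entry pseudo device table between local devices and network node definitions):

```python
MAX_ENTRIES = 255   # the single HASP pseudo-device table

def hasp_forward(node_table, local_device_count, message):
    """node_table: the set of node names this system knows about.
    It shares the 255-slot table with local pseudo devices, so network
    capacity is only 255 - local_device_count entries. Traffic naming
    any node outside the local table is simply discarded."""
    assert len(node_table) + local_device_count <= MAX_ENTRIES
    if message['origin'] not in node_table or message['dest'] not in node_table:
        return None          # unrecognized node: traffic is trashed
    return message           # otherwise pass it along
```

with 60-80 slots eaten by local devices and most of a ~1000-node network absent from the table, nearly every transit message would hit the `return None` path ... which is why such nodes had to stay at the network edge.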

another failure of the internal batch-platform implementation was that it had mixed up the architecture of the header field definitions. It was common that traffic between batch-platform nodes at different release or version levels would crash one and/or both systems (the networking code as well as the whole operating system). There is the infamous case of traffic originating from a San Jose batch node causing operating systems in Hursley (england) to crash.

the gateway capability in the primary internal network nodes was given the responsibility of taking traffic from the batch operating system network nodes and converting it to canonical format ... and then, on the links that directly talked to batch networking nodes ... translating the header format specifically for the version/release of the node it was interfacing to.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1403 printers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1403 printers
Newsgroups: alt.folklore.computers
Date: Wed, 20 Apr 2005 09:31:02 -0600
Joe Morris writes:
A couple of our IT managers went to the plant to talk about our problems with the 6670 performance and reliability late in its life; as I recall it the response they got was singularly unhelpful (essentially, "that's the way it is, take it or leave it.") I don't think we've bought an IBM printer since then.

there was a joke about tv advertisements and what not to advertise. at some point, the copier III was more prone to paper jams than some other leading copiers on the market. they did a tv advertisement about how much easier it was to clear a paper jam in the copier III (than in other copiers). apparently the advertisement backfired ... reminding people of how frequently the copier had paper jams (don't highlight or remind people of weaknesses/problems).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What is the "name" of a system?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is the "name" of a system?
Newsgroups: bit.listserv.ibm-main
Date: Wed, 20 Apr 2005 10:00:19 -0600
gilmap writes:
Certainly: TCP/IP domain name. Unlike some of the others, it has a world-wide registry which guarantees uniqueness.

early on in DNS ... it supported multiple A-records ... i.e. mapping same domain name to multiple different ip-addresses ... sort of used for both availability and load-balancing.

for availability ... it allowed for same host to have multiple different internet connections ... frequently to different places in the internet backbone. It was dependent on clients having support for running thru the list of A-records on connect sequence (i.e. if the first address tried didn't connect, move on to the next one).

early in the life-cycle of one of the first big search engines ... they started to have severe overload problems. the first attempt was to replicate the servers at different ip-addresses ... and use the multiple A-record support to advertise the whole list of ip-addresses to clients under the same domain name. The problem is that standard multiple A-record support always starts at the first address. As a hack ... they added rotating address support to their domain name server ... which constantly rotated the list of addresses on every query. This helped distribute the first ip-address tried across all the possible clients ... modulo the issue with ISP record caching, i.e. the client gets the list from the local ISP cache rather than going all the way back to the authoritative DNS server for that domain ... slightly related recent post:
https://www.garlic.com/~lynn/2005f.html#44 DNS Name Caching

so the next hack they did was on the boundary routers for their service ... in the routers they implemented a list of backend "server" ip-addresses and a list of front-end "external" ip-addresses ... then there was a little bit of hack code that constantly changed the mapping of external ip-addresses to back-end, internal addresses ... to spread the incoming load across the backend servers.
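the client fall-through and the rotating-answer hack can be sketched together (a minimal toy model ... RotatingARecords and connect are invented names, and the real DNS/TCP machinery is omitted):

```python
from collections import deque

class RotatingARecords:
    """Name-server side: rotate the A-record list on every query so the
    'first' address is spread across clients (round-robin DNS)."""
    def __init__(self, addresses):
        self._addrs = deque(addresses)
    def query(self):
        answer = list(self._addrs)
        self._addrs.rotate(-1)     # next query starts one address later
        return answer

def connect(addresses, reachable):
    """Client side: run thru the A-record list on the connect sequence,
    using the first address that answers (availability fallback)."""
    for addr in addresses:
        if addr in reachable:
            return addr
    raise ConnectionError("no address reachable")
```

the router hack in the next paragraph does the same spreading one layer down ... remapping a fixed set of external addresses onto the backend servers instead of rotating the advertised list.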

the issue of a host with multiple ip-addresses to different parts of the network is referred to as multihoming. somewhere i have a copy of an ietf draft on multihoming from 1988 ... that never made it to RFC status. there is some discussion of multihoming in RFC1122 (also STD-3):
https://www.garlic.com/~lynn/rfcidx3.htm#1122
1122 S
Requirements for Internet hosts - communication layers, Braden R., 1989/10/01 (116pp) (.txt=289148) (STD-3) (See Also 1123) (Refs 768, 791, 792, 793, 813, 814, 815, 816, 817, 826, 879, 893, 894, 896, 922, 950, 963, 964, 980, 994, 995, 1009, 1010, 1011, 1016, 1042, 1071, 1072, 1108, 1112) (Ref'ed By 1127, 1180, 1190, 1191, 1207, 1219, 1254, 1256, 1329, 1349, 1370, 1379, 1385, 1403, 1433, 1455, 1517, 1531, 1533, 1541, 1561, 1577, 1620, 1626, 1644, 1716, 1745, 1755, 1770, 1812, 1819, 1885, 1933, 1937, 1970, 2001, 2131, 2132, 2176, 2225, 2309, 2331, 2353, 2360, 2391, 2414, 2461, 2463, 2474, 2481, 2488, 2498, 2521, 2525, 2581, 2663, 2678, 2694, 2757, 2760, 2784, 2822, 2834, 2835, 2893, 2914, 2923, 2988, 3021, 3022, 3081, 3135, 3154, 3155, 3168, 3175, 3259, 3260, 3360, 3366, 3390, 3435, 3449, 3465, 3481, 3490, 3522, 3684, 3720, 3821, 3884, 3948, 4035)


as always in my RFC index summary entries ... clicking on the ".txt=" field retrieves the actual RFC. also see
https://www.garlic.com/~lynn/rfcietff.htm

.... from rfc1122 contents:


3.3.4  Local Multihoming
3.3.4.1  Introduction
    3.3.4.2  Multihoming Requirements
3.3.4.3  Choosing a Source Address

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1401-S, 1470 "last gasp" computers?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1401-S, 1470 "last gasp" computers?
Newsgroups: alt.folklore.computers
Date: Wed, 20 Apr 2005 11:57:54 -0600
haynes@alumni.uark.edu (Jim Haynes) writes:
My understanding is that the 1130 was a replacement for the 1620, which was also an odd little machine, low-end scientific computer with of all things variable word length. And then it also appeared as the 1800 for process control work. Maybe that makes it a forerunner of the Series/1

the person at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

that did the internal network software architecture, design, and implementation
https://www.garlic.com/~lynn/subnetwork.html#internalnet

had done an early implementation supporting communication between the center's 1130 (with attached 2250, aka 2250-4) and 360/67.

there was a later project that involved using a system/7 as a communication interface for the 360/67 ... a couple system/7 references from search engine:
http://www.ibm1130.net/functional/System7.html
http://domino.research.ibm.com/tchjr/journalindex.nsf/0/4ef2fdb289abe52785256bfa006840d8?OpenDocument
http://www.cod.edu/people/faculty/zlotow/IBM.htm

then "peachtree" came into the picture (the pre-announce code name for series/1) and there were several people pushing peachtree as the basis of a new mainframe communication controller ... for what was to be the 3705 (as having a much better design and architecture than the leading contender for the 3705 at the time).

somewhat totally unrelated ... as an undergraduate i had been part of a project that created a mainframe clone controller, starting out with an interdata/3.
https://www.garlic.com/~lynn/submain.html#360pcm

there was a project in the 80s that did implement a 3705 clone using series/1 ... some past postings:
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: alt.folklore.computers
Date: Wed, 20 Apr 2005 12:15:39 -0600
mwojcik@newsguy.com (Michael Wojcik) writes:
Probably true, for most people who are used to stack-based systems, though I always felt activation records were a perfectly reasonable approach. They're more complicated than a function-call stack, and so have more overhead, but there are advantages, such as unifying memory (separate "heap" and "stack" areas aren't required) and making some errors easier to detect.

And, of course, they're pretty much a requirement for supporting continuations.


another aspect is that the 360 genre was pretty much pointer passing ... as opposed to value passing. in the transition from SVS to MVS ... they still found that they were crowding the 16mbyte address spaces ... so they had to push some of the services into separate address spaces. this created a paradigm problem for pointer passing. it was initially addressed with the "common segment" ... which started out as a one mbyte area that appeared in all address spaces ... applications could push stuff into it and pass a pointer when calling a service. as the number of services and applications grew, the contention for "common segment" space grew, and you were starting to see installations with 4 and 5 mbyte "common segments". Now this is in each virtual 16mbyte address space ... where the kernel already occupied 8mbyte of every virtual address space; with a 4-5 mbyte "common segment" also occupying every address space ... that left only 3-4mbytes for an application.
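a toy model of the pointer-passing problem (entirely my own illustration ... the addresses and names are invented):

```python
# Each address space is its own mapping, so a pointer created in the
# caller's space means nothing in the service's space ... unless it
# points into a "common segment" region mapped identically into every
# address space.

COMMON = {}   # region present, at the same addresses, in all spaces

class AddressSpace:
    def __init__(self):
        self.private = {}
    def read(self, addr):
        if addr in COMMON:
            return COMMON[addr]          # visible from every space
        return self.private.get(addr)    # private: invisible elsewhere

app, service = AddressSpace(), AddressSpace()
app.private[0x1000] = "parm list (private)"
COMMON[0x900000] = "parm list (common segment)"
```

the service dereferencing an application pointer only works for the common-segment address ... which is why the common segment kept growing, and why dual-address space support (below) was needed to let a service reach directly into the caller's space instead.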

the 3033 started to address this with dual-address space support ... where a service (running in a different virtual address space) could reach into the virtual address space of the calling application. Note however, the call from the application to the service still had to make a kernel call (as compared to previously, where some service calls were a simple branch&link).

with 370-xa (711 architecture) on the 3081 ... besides introducing 31-bit virtual addressing ... PC (program call) and access registers were also introduced. PC referenced a supervisor mapping table (in some sense analogous to supervisor virtual memory tables) that supported direct transfer between a calling application (in one address space) and a service (in another address space) w/o having to pass thru the kernel. access registers generalized the dual-address space support to multiple virtual address spaces.

some discussion of access registers
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/2.3.5?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.2.1.5?SHELF=EZ2HW125&DT=19970613131822
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.2.1.6?SHELF=EZ2HW125&DT=19970613131822

discussion of program call
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

various past postings mentioning dual address space support:
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005f.html#7 new Enterprise Architecture online user group
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers
Date: Thu, 21 Apr 2005 09:54:50 -0600
"FredK" writes:
The TOPS-10 people rail against the VMS people. The VMS people rail against the UNIX people. The UNIX people rail against the Windows people. We're all doomed because we ignore history. PC people are ignorant fools. Linux people are worse. Nobody is building systems that are universal and take into account all possible interoperabilities. Gordon Bell was evil. Nobody since the TOPS-10 days knows how to design systems. Each reply should be a worm that continues to grow to maintain context.

All appear to be among the things in the thread and more.


vm370/cms contribution to vms.

in the early 70s, customers were buying up vm370/cms and the development group was growing like mad. the group had first absorbed the boston programming center and taken over the 3rd floor ... and when they outgrew that ... they moved out to the old sbc bldg in burlington mall (sbc having been sold to cdc as part of some gov. litigation).

the official corporate word to the vm370/cms group was constantly that the next release would be the last and there would be no more need for vm370. finally in 76, the group was told that it was official, there would be no more vm370/cms and burlington was being shutdown. the whole group would be transferred to POK to work on something called (internally only available) vmtool (which was purely targeted at supporting internal mvs/xa development ... and wouldn't be available to customers).

quite a few of the people at burlington mall weren't interested in that prospect and got jobs at DEC (a lot of them showing up in vms group).

one of the principal executives responsible for that decision then shows up a number of years later ... claiming that something like 11,000 vax sales should have been 4341/vm. minor reference:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

.. there was a minor joke that he possibly contributed as much to vax/vms success as anybody at DEC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers
Date: Thu, 21 Apr 2005 10:35:08 -0600
Anne & Lynn Wheeler writes:
the official corporate word to the vm370/cms group was constantly that the next release would be the last and there would be no more need for vm370. finally in 76, the group was told that it was official, there would be no more vm370/cms and burlington was being shutdown. the whole group would be transferred to POK to work on something called the (internally only available) vmtool (which was purely targeted at supporting internal mvs/xa development ... and wouldn't be available to customers).

of course, endicott complained long & loud ... and finally it was decided that endicott could pick up responsibility for vm370 ... however all the burlington people still had to move to POK to work on the vmtool (some claim that mvs/xa couldn't meet its schedule w/o having fully populated vmtool effort supporting them).

endicott had done ecps (vm370 microcode performance assist) as a 145->148 enhancement
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

there was actually a swipe taken at shipping every 370/148 from the factory with vm370 prebuilt on the machine. The objective was to have it integrated into normal system operation (akin to all the LPAR stuff you see on the big mainframe iron ... and its proliferation into many other hardware product lines). The holy grail was to have it so integrated that there wouldn't be any requirement for customer system support people. In the 148 time-frame that was still not quite there ... and besides, the official corporate position had constantly been that the next release of vm370 was the last (and so corporate vetoed any escalation of the penetration of vm370 into the market).

Endicott took another run at vm370 as part of the basic operation of every machine with the 4341 (akin to wanting to do something along the lines of current-day lpars and virtualization). By that time, they had started to build up some of their own inhouse expertise ... and the holy grail was still being able to have vm370 on every 4341 and not require any customer support people for it. By forcing a lot of defaults ... they were coming within reach of the goal. However, they still were facing the constant corporate veto of any such escalation of vm370 use (although customers ordered huge numbers of 4341s and installed vm370 on them ... endicott wasn't allowed to take the next step and make it a transparent part of every machine operation).

this is one of the things that customer feedback (and SHARE reports) constantly hit endicott on ... that typical 4341 installation required a lot more customer system support resources than the typical vax/vms installation (it wasn't just a matter of turn-key, out-of-the-box operation).

recent related posts
https://www.garlic.com/~lynn/2005e.html#57 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#7 new Enterprise Architecture online user group
https://www.garlic.com/~lynn/2005f.html#29 Where should the type information be?
https://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#35 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#36 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#57 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 21 Apr 2005 13:13:35 -0600
Andrew Swallow writes:
Medium security room.

It is when they build a cage inside the above computer room and put pad locks on the inner door that you know the work is classified.


that is for gov. work.

there was a situation involving industrial espionage and theft of product secrets. there was some claim (in the civil court action aspect of the situation) that several billion dollars in revenue was at stake. as i understand it, the court made some statement about 1) attractive target (akin to the swimming pool argument, people aren't capable of resisting the temptation of several billion dollars) and 2) having to demonstrate security proportional to what is at risk (if you are going to claim that several billion dollars is at risk ... you have to demonstrate security procedures proportional to what is at risk).

slightly different post on security proportional to risk:
https://www.garlic.com/~lynn/2001h.html#61

for more drift ... engineering testcells in bldg. 14
https://www.garlic.com/~lynn/subtopic.html#disk

bldg. was inside perimeter fence ... with door badge readers on the bldg. and additional door badge readers on interior machine room (which included motion/intrusion detection stuff).

inside the machine room were the "testcells" ... in addition to being organized for testing ... a testcell was inside a mini-room (maybe 6ft sq) composed of heavy-gauge steel mesh (sides and top, bottom was solid metal sheet). door to each "testcell" had a high-security, four-number combination lock.

there was some separate commercial security for offices on exterior walls with windows. the windows had special "vibrators" (countermeasure for a reflected beam off the window being able to decode interior sound ... aka reproduce vocal conversation). The sheetrock on the walls was replaced with a special version that had thin lead lining.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Where should the type information be: in tags and descriptors

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Where should the type information be: in tags and descriptors
Newsgroups: alt.folklore.computers,comp.arch
Date: Thu, 21 Apr 2005 13:16:26 -0600
another aspect that i frequently saw in and around silicon valley were the vans with lots of antennas parked outside of various development facilities ... doing regular electromagnetic emission audits.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

single-signon with X.509 certificates

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: single-signon with X.509 certificates
Newsgroups: comp.security.unix,comp.security,comp.security.misc,comp.security.gss-api
Date: Thu, 21 Apr 2005 17:33:12 -0600
thomasv@mac.com (Thomas Vincent) writes:
PKI is generally used for authentication and verifying the integrity of the data. The authorization is stored in the directory (LDAP) and/or the application. It is hard to give you a complete answer when we don't know what operating systems you're using. The fact that the digital certificate is on a USB token is irrelevant. The computer will simply look at that as just another device, aka a hard drive.

PKI is a messy business right now with a bunch of vendors (ENTRUST) trying to create stovepipe solutions. Basically because they know that PKI is largely becoming a commodity, not something unique.

A quick search of google turns up a ton of information on the subject.


original PKINIT for kerberos was simple digital signature authentication ... along the lines of DSA or ECDSA (fips186) ... where the public key was registered in lieu of a pin/password (certificate-less and PKI-free operation)

the client asserts a userid or account, the server possibly sends some sort of (random) challenge (countermeasure for replay attacks), the client digitally signs the challenge (with their private key) and returns the digital signature. the server verifies the digital signature with the public key on file.

this preserves the existing business processes ... but eliminates the threats associated with shared-secrets and static data authentication. In addition to kerberos certificate-less authentication scenarios (no PKI) there are also RADIUS certificate-less authentication scenarios ... again where public key registration has been substituted for pin/password (shared-secret) registration ... and digital signature authentication is used in lieu of exchanging shared-secret.
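the challenge/response flow above can be sketched as follows ... a toy illustration only, using textbook RSA with trivially small parameters (real deployments would use DSA/ECDSA per fips186 with proper key sizes); the point is simply that the server verifies against a public key already on file, with no certificate or PKI involved:

```python
import hashlib
import secrets

# toy textbook RSA keypair -- trivially breakable, illustration only
p, q = 61, 53
n = p * q                            # modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (python 3.8+)

def sign(challenge: bytes) -> int:
    """client: digitally sign the challenge with the private key"""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """server: verify with the public key already on file (no certificate)"""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# protocol flow: client asserts an account, server sends a random
# challenge (replay countermeasure), client returns the signature
challenge = secrets.token_bytes(16)
sig = sign(challenge)
assert verify(challenge, sig)                # genuine signature: accepted
assert not verify(challenge, (sig + 1) % n)  # tampered signature: rejected
```

the server never needs anything beyond the registered public key ... the same slot in the account record that previously held a pin/password.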

the original PKI/certificate based model was to address the offline email scenario between parties that had not previously communicated (scenario from the early 80s). Somebody dials up their local (electronic) post-office, exchanges email, and hangs up. They possibly now have an email from some party they never previously communicated with. The issue was how to perform any sort of authentication when they had no other recourse as to information about the originating party (and no previous knowledge of the originating party). This is basically the "letter of credit" business model from the sailing ship days (when there was no online, electronic, and/or timely communication mechanism to obtain information about total strangers).

the early 90s saw the evolution of x.509 "identity" certificates. The issue for "trusted third party" certification authorities issuing such x.509 "identity" certificates ... was what information would possibly be of use to future relying parties performing authorizations based on the information content of an x.509 "identity" certificate. There was a trend to steadily increase the amount of identity information content ... perceived as increasing the value of such x.509 "identity" certificates ... and therefore increasing the possible perceived value of PKI and "trusted third party" certification authorities.

The issue in the mid-90s was the growing realization that these x.509 identity certificates, overloaded with significant identity information, represented serious liability and privacy issues. As a result, there was some retrenchment to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

... basically certificates containing only something like an account number and a corresponding public key. However, it was trivial to show that such relying-party-only certificates were redundant and superfluous ... since the issuer, certification authority, and relying party ... already had the registered public key on file.

One of the places that such relying-party-only certificates appeared was in the financial industry for use with financial transactions. Here not only was it possible to show that such certificates were redundant and superfluous (i.e. the customer's financial institution already had the customer's public key on file, so sending back a copy of the public key contained in the digital certificate, appended to every financial transaction, was unnecessary) ... but the other aspect was that it represented enormous payload bloat. The nominal payment transaction is on the order of 60-80 bytes ... the nominal relying-party-only digital certificate was on the order of 4k-12k bytes. Appending such a relying-party-only digital certificate to every payment transaction represented a factor of one hundred times increase in the size of the transaction. Furthermore, the only purpose for appending a digital certificate to the transaction sent to the customer's financial institution ... was so that the customer's financial institution would have a copy of the customer's public key (which they already had from when the public key was originally registered).
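the payload-bloat arithmetic above is easy to check (the figures below are just the midpoints of the 60-80 byte and 4k-12k byte ranges quoted):

```python
# midpoints of the ranges quoted above: a 60-80 byte payment
# transaction vs a 4k-12k byte relying-party-only certificate
txn_bytes = 70
cert_bytes = (4 * 1024 + 12 * 1024) // 2     # 8192 bytes
bloat = (txn_bytes + cert_bytes) / txn_bytes
print(f"appending the certificate inflates each transaction ~{bloat:.0f}x")
```

i.e. roughly a hundred-fold increase on every transaction, all to carry a public key the recipient already has on file.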

given that there is an existing relationship between two parties ... and/or the two parties have online, electronic, timely communication access to information about the other party .... PKI and digital certificates tend to be redundant and superfluous. It is possible to use existing business processes ... simply upgrading the technology to directly register a public key (in lieu of a pin, password, or other shared-secret) for authentication purposes.

The issue isn't so much one about new technology ... but the trusted third party certification model drastically changes the business process model ... and was originally intended for an offline world that hardly exists anymore.

Part of the problem with the trusted third party PKI certification authority model is the business and contractual relationships. Normally a relying party has a direct contract and/or an implied contract (based on exchange of value) with any certification authority. In the trusted third party PKI certification authority model ... the exchange of value is typically between the "key-owner" and the certification authority (i.e. the key owner buys a certificate). However, the use of the certificate is by the relying-party that is performing the authentication ... and the relying-party may have no direct or indirect contractual relationship with the certification authority.

In the federal gov. PKI scenario ... GSA attempted to get around this mismatch in the trusted third party certification authority business model ... by creating the legal fiction that the certification authorities were agents of the federal gov (i.e. GSA has a contract with the certification authorities saying that they are operating on behalf of the federal gov.). Then when some federal gov. agency does something based on relying on the information in such a digital certificate ... they have some possible legal recourse if the proper processes weren't followed. However, there have been quite a number of deployed and proposed PKI infrastructures .... where there were no provisions for creating any sort of business obligation by the certification authorities to the relying operations that were dependent on the information in certificates.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Moving assembler programs above the line

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving assembler programs above the line
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 22 Apr 2005 01:03:52 -0600
edgould@ibm-main.lst (Ed Gould) writes:
About 14(?) years ago we had an issue with APL. The only time we could find time to IPL the system with APL was on a Sunday morning. When I came in they told me to call into level 1 and raise the severity of the problem to a 1. They came in and over the phone they stepped me through the problem. They figured out what it was (I had zero knowledge of APL).

My hat was off to them they really helped me out in a bind. The only reason we had APL to begin with was for PSF.

The APL people are one of the few groups at IBM that really helped out and went out of their way to really get the problem resolved. At one time the PSF people were that way but I think they got inundated with problems and couldn't take the OT.


Original 360 APL was apl\360 out of the philadelphia science center.

some people at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

took the apl\360 interpreter (leaving out all the monitor stuff), adapted it to CMS, and added the capability to make system calls. the other item was that the storage management had to be completely redone. under apl\360 there were typically 15k to 32k byte workspaces that were completely swapped. Storage management for assignments involved always assigning a brand new available storage location ... until it ran out of unallocated storage in the workspace ... and then it did garbage collection to reclaim unused storage. That wasn't too bad in a 15kbyte workspace ... but try that in a 4mbyte or even 16mbyte cms virtual address space ... and you would constantly page thrash the machine.
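the allocate-a-fresh-slot-until-exhausted behavior described above can be modeled with a toy allocator (this is just a sketch of the pattern, not actual apl\360 internals):

```python
# toy model of the apl\360-style workspace allocator described above:
# every assignment takes a brand-new slot; only when the workspace
# is exhausted does a garbage collection compact the live values
class Workspace:
    def __init__(self, size):
        self.size = size       # workspace size in slots
        self.top = 0           # next free slot
        self.live = {}         # variable name -> slot of current value
        self.gc_count = 0

    def assign(self, name, _value):
        if self.top == self.size:
            self.garbage_collect()
        self.live[name] = self.top   # always a fresh slot
        self.top += 1

    def garbage_collect(self):
        # compact: keep only each variable's current value
        for slot, name in enumerate(sorted(self.live, key=self.live.get)):
            self.live[name] = slot
        self.top = len(self.live)
        self.gc_count += 1

ws = Workspace(size=16)
for i in range(100):
    ws.assign("x", i)          # repeatedly reassign a single variable
# reassignment never reuses storage in place, so the allocator keeps
# sweeping the whole workspace -- harmless at 15k, but in a 4mbyte or
# 16mbyte virtual address space each sweep touches every page
print(ws.gc_count)
```

one hundred reassignments of a single variable force multiple full-workspace collections ... which is exactly why the cms\apl work had to redo storage management before large virtual-memory workspaces were practical.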

besides being sold to customers, the science center also offered cp67/cms timesharing
https://www.garlic.com/~lynn/submain.html#timeshare

to various students in the cambridge area (MIT, BU, Harvard, etc) as well as internal corporate business accounts. There were some interesting security issues when business planners from corporate hdqtrs started using CMS\APL on the cambridge machine to analyse the most sensitive and confidential corporate business data (on the same machine that had some number of area college students). Back then, APL was frequently the vehicle for doing business what-if scenarios, the type of thing that is frequently done with spreadsheets today.

a clone of the cambridge cp67/cms system was eventually created for the data processing marketing division which grew into the HONE system
https://www.garlic.com/~lynn/subtopic.html#hone

supporting world wide sales and marketing people. For instance, in the early 70s when EMEA hdqtrs moved from white plains to paris ... I got to do the cloning of HONE system at the new EMEA hdqtrs outside paris.

Along the way, the palo alto science center did some enhancements to cms\apl and also did the 370/145 APL microcode performance assist (apl on a 145 with the assist ran about as fast as apl on a 168 w/o the assist). This was released to customers as apl\cms for vm370/cms (as opposed to the original cms\apl done at the cambridge science center, originally on cp67/cms and then made available on vm370/cms).

A couple of issues were the lingering ambivalence about the system call interface done in cms\apl ... as well as not having an MVS version. Eventually a group was formed in STL to do vsapl ... which would support both vm370/cms and mvs, and then picked up a lot of the stuff from apl\cms from the group at the palo alto science center. vsapl also got the shared-variable paradigm for doing things like system calls and other features (replacing the system call implementation that had been done for cms\apl). the shared-variable paradigm was consistent with the rest of the apl paradigm ... the cms\apl system call was quite out of place (although it had been extremely useful for a number of years).

There was a separate issue at HONE. Since HONE was part of the sales and marketing division ... periodically they got a new executive who had been promoted up from branch manager at some large branch. The sales & marketing people had constantly been indoctrinated that the strategic data processing platform was MVS ... and they were quite dismayed to find the whole internal corporate operation was mostly run on VM370 ... the internal corporate network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was almost all vm370 as well as most of the internal mainframe machines.

The US HONE system was possibly the largest vm370 timesharing service in existence ... in addition to being cloned at numerous locations around the world. In the late 70s it went thru a consolidation of all the US HONE centers to a single datacenter in california. Then because of various disaster scenarios ... it was replicated first in Dallas and then in Boulder ... with load-balancing and fall-over across the distributed centers.

Most of the HONE services and applications were implemented in APL. Several times a new HONE executive (after general availability of APL also on MVS) would declare that HONE needed to migrate off of vm370 on to APL running on MVS. This would put a stop to all new HONE application development for something like 3-6 months until it became evident it wasn't possible ... and then the issue would be quietly dropped ... until the next new executive from the field showed up.

Major HONE applications were the order configurators ... and starting with the 370 115/125, a salesman couldn't place an order w/o it having been processed by a HONE configurator.

For nearly 15 years i would custom craft HONE operating system kernels as well as provide various kinds of system support services ... sort of as a hobby in my spare time when i wasn't doing something else. At some point, the latest new executive was doing the latest round of why HONE couldn't be migrated to MVS ... and the issue came up that HONE wasn't really running a standard release operating system. He then got upset when it dawned on him that it was supplied by somebody he had never heard of before ... and who showed up on no organizational charts anywhere.

slightly related recent posts:
https://www.garlic.com/~lynn/2005f.html#11 Mozilla v Firefox
https://www.garlic.com/~lynn/2005f.html#12 Mozilla v Firefox
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

previous, next, index - home