List of Archived Posts

2009 Newsgroup Postings (10/01 - 10/22)

Best IEFACTRT (off topic)
Status of Arpanet/Internet in 1976?
IMS
Sophisticated cybercrooks cracking bank security efforts
Broken Brancher
Status of Arpanet/Internet in 1976?
Hexadecimal Kid - articles from Computerworld wanted
Evolution of Floating Point
Help: restoring textured paint - DEC
Evolution of Floating Point
Microprocessors with Definable MIcrocode
Microprocessors with Definable MIcrocode
Calling ::routines in oorexx 4.0
Microprocessors with Definable MIcrocode
Microprocessors with Definable MIcrocode
Calling ::routines in oorexx 4.0
Microprocessors with Definable MIcrocode
Broken hardware was Re: Broken Brancher
Microprocessors with Definable MIcrocode
What happened to computer architecture (and comp.arch?)
mainframe e-mail with attachments
Opinions on the 'Unix Haters' Handbook'
Rogue PayPal SSL Certificate Available in the Wild - IE, Safari and Chrome users beware
Opinions on the 'Unix Haters' Handbook'
Opinions on the 'Unix Haters' Handbook'
Opinions on the 'Unix Haters' Handbook'
Some Recollections
U.S. students behind in math, science, analysis says
U.S. students behind in math, science, analysis says
Justice Department probing allegations of abuse by IBM in mainframe computer market
Page Faults and Interrupts
Justice Department probing allegations of abuse by IBM in mainframe computer market
Justice Department probing allegations of abuse by IBM in mainframe computer market
U.S. house decommissions its last mainframe, saves $730,000
Google Begins Fixing Usenet Archive
Operation Virtualization
U.S. students behind in math, science, analysis says
Young Developers Get Old Mainframers' Jobs
U.S. house decommissions its last mainframe, saves $730,000
Disaster recovery is dead; long live continuous business operations
The Web browser turns 15: A look back;
U.S. house decommissions its last mainframe, saves $730,000
Outsourcing your Computer Center to IBM ?
Outsourcing your Computer Center to IBM ?
Outsourcing your Computer Center to IBM ?
The Web browser turns 15: A look back;
U.S. begins inquiry of IBM in mainframe market
U.S. begins inquiry of IBM in mainframe market
Opinions on the 'Unix Haters' Handbook'
Opinions on the 'Unix Haters' Handbook'
WSJ.com The Fallacy of Identity Theft
8 ways the American information worker remains a Luddite
Revisiting CHARACTER and BUSINESS ETHICS
E-Banking on a Locked Down (Non-Microsoft) PC
Should SSL be enabled on every website?
TV Big Bang 10/12/09
Opinions on the 'Unix Haters' Handbook'
U.S. begins inquiry of IBM in mainframe market
Rudd bucks boost IBM mainframe business
TV Big Bang 10/12/09
TV Big Bang 10/12/09
TV Big Bang 10/12/09
TV Big Bang 10/12/09
U.S. students behind in math, science, analysis says
The new coin of the NSA is also the new coin of the economy
The new coin of the NSA is also the new coin of the economy
Need for speedy cryptography
I would like to understand the professional job market in US. Is it shrinking?
The Rise and Fall of Commodore
DHL virus
cpu upgrade
"Rat Your Boss" or "Rats to Riches," the New SEC
I would like to understand the professional job market in US. Is it shrinking?
IBM Hardware Boss Charged With Insider Trading
Back to the 1970s: IBM in mainframe antitrust suit again
Status of Arpanet/Internet in 1976?
I would like to understand the professional job market in US. Is it shrinking?
Is it time to stop research in Computer Architecture ?
DNSSEC + Certs As a Replacement For SSL's Transport Security
Is it time to stop research in Computer Architecture ?
OpenSolaris goes "tic'less"???
big iron mainframe vs. x86 servers
OpenSolaris goes "tic'less"???
Excerpt from Digital Equipment co-founder's autobiography "Learn, Earn and Return"
Opinions on the 'Unix Haters' Handbook'

Best IEFACTRT (off topic)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Best IEFACTRT (off topic)
Newsgroups: bit.listserv.ibm-main
Date: Thu, 01 Oct 2009 19:38:41 -0400
rfochtman@YNC.NET (Rick Fochtman) writes:
ISTR a similar problem with the 360/67, when a BAL or its target were split across a page boundary. The dielectric material in the ROS needed replacement and the CE replaced it with three layers of SARAN Wrap and the machine worked perfectly after that. Of course the capacitive ROS had to be tightened to a different torque spec, but Burlington (or wherever) came through with the right specs for that as well.

sounds like a hardware bug in a specific machine ... since it would have been very evident in lots of applications.

re:
https://www.garlic.com/~lynn/2009n.html#74 Best IEFACTRT (off topic)

however, charlie (who invented the compare&swap instruction ... "CAS" was chosen because they are charlie's initials) found an implementation flaw in 360/67s ... that I don't believe ever got fixed. he was trying to squeeze a couple more cycles out of the cp67 kernel ... and the interrupt handlers had an LCTL CR0,CR0 ... loading the segment table pointer control register (this was moved to CR1 in 370 architecture) ... so he no-op'ed the LCTL (since it was just reloading the value that was already there). The system started failing. After lots of diagnosis ... it turned up a hardware design flaw. A page fault interrupt resulted in the look-aside buffer (DAT, associative array) having all the real page numbers set to zero ... but the valid/not-valid indicators for the entries weren't reset. This resulted in any virtual page numbers already in the associative array being mapped to real page zero.

The "LCTL CRO" (even if reloading the same segment table pointer) would reset the associative array (setting all entries to invalid ... which had masked the associative array hardware problem on page fault. The "LCTL CRO" went back in (I think cheaper than trying to correct the problem.

misc. past posts mentioning multiprocessor and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

for other drift ...

trying to get compare&swap into 370 was initially rebuffed ... we were told that the favorite son operating system didn't feel that it was necessary ... that TEST&SET (from 360 multiprocessor support) was more than adequate ... if compare&swap was ever going to be justified for 370, a non-multiprocessor-specific use for the instruction would be needed. thus were born the programming notes in the principles of operation ... on how to use compare&swap in multitasking (multithreaded) applications (whether or not they were running on a multiprocessor machine).
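
as a rough illustration of what those programming notes describe (a hedged sketch using C11 atomics as a stand-in for the 370 CS instruction ... not the actual principles-of-operation example), the non-multiprocessor-specific use is the retry loop: read the old value, compute the new value, and use compare&swap to store it only if nobody else changed it in the meantime.

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long counter;   /* shared between tasks/threads */

void add_to_counter(unsigned long n)
{
    unsigned long old = atomic_load(&counter);
    unsigned long new;
    do {
        new = old + n;                  /* compute the update from the old value */
    } while (!atomic_compare_exchange_weak(&counter, &old, new));
    /* on failure, 'old' is refreshed with the current value and the loop
       retries ... works whether or not it runs on a multiprocessor */
}

int main(void)
{
    add_to_counter(5);
    printf("%lu\n", (unsigned long)atomic_load(&counter));
    return 0;
}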

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Status of Arpanet/Internet in 1976?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Status of Arpanet/Internet in 1976?
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 09:29:47 -0400
jmfbahciv <jmfbahciv@aol> writes:
OK. I understand about the leased, dedicated lines. What if there was a host that wasn't on one. Did those old IM daemons have the ability to call a system that didn't have a dedicated line?

re:
https://www.garlic.com/~lynn/2009n.html#67 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2009n.html#69 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2009n.html#70 Status of Arpanet/Internet in 1976?

if there wasn't an active network connection ... the instant messaging wouldn't work. instant message daemons were individual end-user virtual machines that had no privileges. there were some privileges in VNET (service virtual machine ... or virtual appliance using current nomenclature), but VNET did little to differentiate IM traffic from other traffic. A major reason for not having a leased line was cost savings. A lot of the view was that end-user "on-demand" IM (dial-up) connections would have severely compromised any cost savings.

some number of nodes had dialup lines ... 4800 or 9600 baud. it was akin to the later csnet dialup ... a decade-old post/reference to the initial connection of SJR to csnet ... dialing UofDel (& PhoneNet relay):
https://www.garlic.com/~lynn/internet.htm#email821022

a csnet reference ... including a 16Jan82 email (and mentioning the motivation that few universities could get an arpanet connection)
http://www.livinginternet.com/i/ii_csnet.htm

above also mentions bitnet ... which used similar technology to that used by the internal network (and a lot of the links were also funded by the corporation).

in some sense, the csnet, bitnet, and internal net "dial-up" connections were similar to usenet "dial-up" connections.

this is an old reference to Mike releasing REX over VNET in early '79, at a time when VNET was over three hundred nodes.
https://www.garlic.com/~lynn/2007s.html#50 Running REXX program in a batch job

I don't have the exact number in '76 ... but there was a joint announce of JES2 NJI and the new VNET product in '76 ... at a time when the internal network was already larger than what could be addressed by JES2 NJI. Part of the reason for the joint announce was that VNET had a layered implementation that could implement a wide variety of drivers/interfaces ... including being able to talk to JES2 NJI ... which was a completely different kind of network design/implementation. For whatever reason, by the time of BITNET ... about the only drivers being shipped with VNET were the JES2 NJI compatible drivers (which had lower thruput and performance than the native VNET drivers). misc. past posts mentioning bitnet
https://www.garlic.com/~lynn/subnetwork.html#bitnet

BITNET growth was possibly the motivation for JES2 NJI to redo its design to eventually support a max. of 999 nodes (as well as the VNET compatible drivers) ... but by the time that was done, the internal network was well over 1000 nodes. misc. past posts mentioning the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

The limited number of nodes in JES2 NJI support was further aggravated by the fact that it would trash any traffic where its local table didn't have a definition for either the origin or the destination. This essentially precluded using JES2 NJI as any major intermediate node on the internal network.
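
a minimal sketch of the design difference (plain C, with made-up node names ... not actual JES2 or VNET code): a store-and-forward node that discards traffic whose origin or destination isn't in its local table can't sit in the middle of a network bigger than its table, while a node that forwards unknown destinations toward a neighbor can.

#include <stdio.h>
#include <string.h>

static const char *local_table[] = { "SJRLVM1", "HURSLEY", "YKTVMV" };  /* hypothetical */
#define NODES (sizeof local_table / sizeof local_table[0])

static int known(const char *node)
{
    for (size_t i = 0; i < NODES; i++)
        if (strcmp(local_table[i], node) == 0) return 1;
    return 0;
}

static void route_strict(const char *origin, const char *dest)     /* JES2-NJI-style: trash unknowns */
{
    if (!known(origin) || !known(dest))
        printf("discard %s->%s (not in local table)\n", origin, dest);
    else
        printf("deliver %s->%s\n", origin, dest);
}

static void route_forwarding(const char *origin, const char *dest) /* what an intermediate node needs */
{
    if (!known(dest))
        printf("forward %s->%s to next hop\n", origin, dest);
    else
        printf("deliver %s->%s\n", origin, dest);
}

int main(void)
{
    route_strict("NEWNODE", "HURSLEY");       /* trashed: origin unknown here */
    route_forwarding("NEWNODE", "HURSLEY");   /* delivered: destination known */
    return 0;
}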

Another problem with JES2 NJI was that it had jumbled network control information together with other JES2 operational information. Network traffic between incompatible JES2 releases could result in a JES2 failure taking down the associated MVS system. As a result, a lot of VNET driver technology grew up ... that would attempt to create canonical JES2 header information, and then a "local" VNET driver talking directly to a real JES2 system would rewrite the header to make it compatible with that specific JES2 system. There was an infamous incident regarding JES2 systems in San Jose causing MVS systems in Hursley (England) to crash (and attempts to blame VNET because the VNET MVS crash prevention hadn't been appropriately updated). misc. past posts mentioning HASP/JES2
https://www.garlic.com/~lynn/submain.html#hasp

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

IMS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IMS
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 10:48:12 -0400
Al Kossow <aek@bitsavers.org> writes:
It was a significant development, along with CICS. I'll be adding more soon. The document I put up was the original design document from North American Aviation.

there used to be a joke that a large percentage of such stuff had been developed at customer (&/or internal) installations (IMS, CICS, HASP, CP67, etc) ... and then was turned over to official "development" organizations to maintain.

i've mentioned before that at the univ. I was at, the library had gotten an ONR grant to do a computer catalog ... part of the money went to buy a 2321 datacell. the project also got selected to be a beta-test site for the original CICS product (which had been developed at a customer location for their in-house use) ... and I got tasked with supporting & debugging CICS (first time out of its original single environment, it ran into some glitches because it was being used differently than at the original customer site). misc. past posts mentioning cics (&/or bdam)
https://www.garlic.com/~lynn/submain.html#cics

for other drift ... when Jim left for tandem ... he tried to palm off consulting with the IMS group onto me:
https://www.garlic.com/~lynn/2007.html#email801016
in this post
https://www.garlic.com/~lynn/2007.html#1

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Sophisticated cybercrooks cracking bank security efforts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Sophisticated cybercrooks cracking bank security efforts
Date: 2 Oct, 2009
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2009n.html#71 Sophisticated cybercrooks cracking bank security efforts

some old email discussing a public-key PGP-like certificate-less implementation
https://www.garlic.com/~lynn/2007d.html#email810506
https://www.garlic.com/~lynn/2006w.html#email810515

then more than a decade later ... we were asked to consult with a small client/server startup that wanted to do payment transactions on their server; the startup had also invented this technology called "SSL" they wanted to use; the result is now frequently referred to as "electronic commerce". as part of that effort, we also had to do some end-to-end walkthru of the various new businesses called Certification Authorities that were issuing "digital certificates".

Now part of that effort was also something called the "payment gateway" ... some past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

that would act as intermediary between webservers on the internet and the payment infrastructure. as part of that we mandated something called "mutual authentication" (between the webservers and the payment gateway) ... which hadn't yet been implemented. However, by the time it was all done and deployed, it was obvious that the digital certificates were redundant and superfluous. Digital certificates are for conveying trusted information, analogous to letters of credit/introduction from the sailing ship days ... for first-time interaction between strangers ... where the relying party has no other recourse to information about the other party.

in the payment gateway scenario ... the payment gateway had to be preregistered at the webservers and the webservers had to be preregistered at the payment gateway ... invalidating the basic justification for the digital certificates. the digital certificates then basically became a side-effect of the software library being used (as opposed to serving a useful business purpose).
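
to make the preregistration point concrete, here is a minimal sketch (illustrative C with made-up names and fingerprint strings ... not the actual payment gateway code): when both parties already have each other's public keys on file, authentication reduces to checking the presented key (or its fingerprint) against that local registry ... and a digital certificate arriving along with the key adds nothing.

#include <stdio.h>
#include <string.h>

struct registered_peer { const char *name; const char *key_fingerprint; };

/* the preregistration described above: set up before any connection is made */
static const struct registered_peer registry[] = {
    { "merchant-server-17", "a1:b2:c3:d4" },     /* hypothetical entries */
    { "payment-gateway",    "e5:f6:07:18" },
};

static int authenticated(const char *name, const char *presented_fingerprint)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0)
            return strcmp(registry[i].key_fingerprint, presented_fingerprint) == 0;
    return 0;    /* unknown peer: a certificate wouldn't make it trusted here */
}

int main(void)
{
    printf("%s\n", authenticated("merchant-server-17", "a1:b2:c3:d4")
                       ? "mutual-auth ok" : "rejected");
    return 0;
}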

As mentioned, as part of this thing we worked on now called "electronic commerce" ... we had to do an end-to-end walkthru of various parts of SSL, including these things called Certification Authorities. There is lots of stuff about the integrity of the "digital certificates" that they issue ... but there is a lot less about the integrity of the processes using the certificates (some of which is behind many of the current SSL-related exploits). There is also little about the integrity of the information that goes into the digital certificates.

There is some work going on to improve the integrity of the information that goes into SSL certificates ... however it represents something of a catch-22 for the Certification Authority industry ... since it may also sow the seeds of being able to have trusted public keys w/o requiring digital certificates
https://www.garlic.com/~lynn/subpubkey.html#catch22

recent posts in an SSL thread in a tcp-ip protocol discussion group
https://www.garlic.com/~lynn/2009n.html#41
https://www.garlic.com/~lynn/2009n.html#44
https://www.garlic.com/~lynn/2009n.html#46
https://www.garlic.com/~lynn/2009n.html#51

early on, working on "electronic commerce", I coined the terms "certificate manufacturing" and "comfort certificates" in an attempt to differentiate most SSL domain name certificates from PKI ... some past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcert

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Broken Brancher

Refed: **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Broken Brancher
Newsgroups: bit.listserv.ibm-main
Date: 2 Oct 2009 09:11:54 -0700
rfochtman@YNC.NET (Rick Fochtman) writes:
We had a similar problem on a 158 w/AP at Trailer Train, many moons ago.

remember that 158 & 3031 were the same engine.

after future system was canceled (it was going to replace 360/370 and was as different from 360/370 as 360/370 had been from earlier generations) ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

there was then a mad rush to get stuff back into the 370 product pipelines (since they had been allowed to go dry because future system was going to completely replace 370). it was going to take 7-8 yrs to do 370-xa & 3081 ... so there was a Q&D effort with 303x.

370/158 had integrated channel microcode (both 370 microcode and channel microcode shared the same engine).

for 303x ... there was channel director ... which was 158 engine with just the integrated channel microcode and no 370 microcode

3031 was 158 engine with just the 370 microcode (and no channel microcode) coupled with a 2nd 158 engine with the channel microcode (and no 370 microcode). (a 3031 AP would have been three 158 engines, two running 370 microcode and one running channel microcode)

3032 was 168 repackaged to use channel director

3033 started out as the 168 wiring diagram using 20% faster chip technology. the chips also had something like ten times the circuits ... but the extra circuits would go unused. Sometime before 3033 ship, some of the logic was redone to better utilize the higher on-chip circuit density ... boosting the 3033 to 50% faster.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Status of Arpanet/Internet in 1976?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Status of Arpanet/Internet in 1976?
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 13:49:03 -0400
Michael Wojcik <mwojcik@newsguy.com> writes:
That was pretty common. When I was working at IBM around 1990, I had a PC RT with IBM's "Megapixel" display, which was a Sony Trinitron tube driven by a card that provided 1024x1024 (hence the name) and 8-bit mapped color. It used a cable that bundled separate R, G, and B coax cables with BNC connectors. Sync was carried on G.

I had a pc/rt with megapixel display in a booth at interop '88 ... a booth at right angles to the booth where case was (not the ibm booth) ... and got case to come over and install snmp on the machine as part of the show.

misc. past posts mentioning interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Hexadecimal Kid - articles from Computerworld wanted

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hexadecimal Kid - articles from Computerworld wanted
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 14:30:08 -0400
Walter Bushell <proto@panix.com> writes:
Yes, I saw that after one of the CA. quakes, the first two stories of the apartment building were crushed. Probably never got around to be updated or was perhaps built after it was know the building wasn't safe.

there was just a short blurb on TV about the bay bridge being the most fragile structure in the state of cal (from the standpoint of quake vulnerability) ... there was also a comment that 1/3rd of the structures in San Fran. were damaged (something I hadn't heard before).

for some other computer related stuff ... five minutes before the quake ... we were wheels up out of SFO (after having been delayed 20 mins) for a flight to minneapolis (on the way to cray computers to do some rfc1044 tcp/ip thruput testing). part way thru the flt ... i noticed a lot of whispering in the galley and went back to find out what was going on ... they were discussing news of the quake.

misc. past posts mentioning 1044
https://www.garlic.com/~lynn/subnetwork.html#1044

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Evolution of Floating Point

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Evolution of Floating Point
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 14:51:16 -0400
Walter Bushell <proto@panix.com> writes:
For a long time that was true. Who could stand to expense of a model 370/155 which could not even reformat a movie and would eat you alive in electrical cost.

to say nothing about structural and floor space.

even later 3081 ... which was a whole lot faster but not a lot smaller.

in hsdt ... with the corporate requirement to encrypt all links that left corporate physical premises ... it wasn't too bad for the 9600 and 56kbit links ... even getting link encryptors ... but at T1 & higher speeds it was starting to represent a problem.

i had done some DES software benchmarking on 3081 and it would get about 150kbytes/sec ... so a dedicated two-processor 3081 could just about handle DES encrypt/decrypt for a full-duplex T1 link.
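
rough back-of-envelope numbers behind that (a sketch assuming a nominal T1 of 1.544 mbits/sec and ignoring framing overhead; the 150 kbytes/sec figure is the benchmark mentioned above):

#include <stdio.h>

int main(void)
{
    double t1_bits_per_sec = 1.544e6;               /* nominal T1 rate (assumed)            */
    double one_direction   = t1_bits_per_sec / 8.0; /* ~193 kbytes/sec each way             */
    double full_duplex     = 2.0 * one_direction;   /* ~386 kbytes/sec to encrypt + decrypt */
    double des_per_engine  = 150.0e3;               /* measured DES rate on one 3081 engine */

    printf("full-duplex T1 DES load: ~%.0f kbytes/sec\n", full_duplex / 1e3);
    printf("one 3081 engine:         ~%.0f kbytes/sec\n", des_per_engine / 1e3);
    /* i.e. a couple of dedicated engines' worth of nothing-but-DES for a single
       T1 ... which is why links faster than 56kbit pushed toward link-encryptor
       hardware rather than software encryption */
    return 0;
}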

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Help: restoring textured paint - DEC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Help: restoring textured paint - DEC
Newsgroups: alt.folklore.computers
Date: Fri, 02 Oct 2009 21:50:34 -0400
Jim Haynes <jhaynes@alumni.uark.edu> writes:
That reminds me of a funny story I heard from an Amdahl engineer. Amdahl's machines were painted orange. One of their first customers was Texas A&M University. Their school color is maroon, while orange is the school color of arch-rival U. of Texas. So before Amdahl could install they had to send for a bucket of maroon paint.

the science center eventually had five 8-drive 2314s (actually 9 drives each, but only eight addressable at a time) and a short 5-drive 2314 connected to the 360/67 ... spread over much of the 2nd flr of 545 tech sq.

Eventually the CE (Fritz) repainted each 2314 controller cover a different color ... to help identify which 2314 was which (dark box on the right in this picture)
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2314.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Evolution of Floating Point

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Evolution of Floating Point
Newsgroups: alt.folklore.computers
Date: Sat, 03 Oct 2009 09:44:56 -0400
jmfbahciv <jmfbahciv@aol> writes:
Now that I've thought about it more, the correct analogy is to replace all the footings while expecting the bridge to stay up and functioning while replacements are occurring.

we talked to the 1-800 service (i.e. the infrastructure that maps a 1-800 number to the "real" number) about doing that. they had been using a hardware fault tolerant platform with a 5-nines availability requirement ... however anytime there was a software upgrade ... the outage would blow a century's worth of (five-nines) downtime budget.

in ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we used replicated boxes and fall-over ... which would handle software upgrades w/o experiencing an outage. of course, the hardware fault tolerant platform could also have replicated boxes and fall-over ... but then it could meet the five-nines requirement w/o requiring the fault tolerant hardware.

part of this was that the ss7 side already had fault tolerant hardware and replicated T1 links to the 1-800 lookup (it would time-out waiting for a reply and repeat the request on a different T1 link).

in any case, it requires some amount of over-engineering/redundancy to provide availability and mask outages during maintenance and recovery.
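
the arithmetic behind "a century of downtime" (a sketch; the 8-hour upgrade outage is an assumed figure, not from the original): five nines allows only a handful of minutes of outage per year, so a single multi-hour outage consumes on the order of a hundred years of that budget.

#include <stdio.h>

int main(void)
{
    double minutes_per_year = 365.25 * 24 * 60;
    double five_nines_down  = minutes_per_year * (1.0 - 0.99999);   /* ~5.3 min/yr budget          */
    double upgrade_outage   = 8.0 * 60;                             /* assume an 8-hour upgrade outage */

    printf("five-nines budget: %.1f minutes/year\n", five_nines_down);
    printf("8-hour outage    : %.0f years of that budget\n", upgrade_outage / five_nines_down);
    return 0;
}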

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sat, 03 Oct 2009 17:24:35 -0400
Michael Black <et472@ncf.ca> writes:
RISC didn't exist until there was deliberate work on it in the early eighties and the name was created. That was a reaction to bloated instruction sets in the microprocessors, adding instructions that did everything but the kitchen sink, but were so specific they were rarely used, and they complicated the design badly. That bloat seemed to start with the Z80, building on the 8080 but adding lots of complicated instructions that were neat but were slow.

Anything before that were simply early microprocessors. They reflected what was possible, they reflected what had come before. They couldn't be reduced because they were an improvement on what came before.


I've claimed some of john's motivation for 801/risc in the 70s was a reaction to the failing future system project ... some past future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

we were at an advanced technology symposium in '76 pitching 16-way 370 ... and the 801 group pitched 801. one of the 801 guys made some quip about how we would ever get the kernel software to handle 16-way smp ... and i made some flip reference that it would be a trivial SMOP. Then when 801 was being pitched ... I made some comments about the limited number of "shared" objects ... since instead of a segment table ... they had 16 "segment registers". their response was that 801 was so simplified that it didn't have any protection domain ... the compiler would only generate correct code ... and the loader would only load correctly compiled programs. as a result, in-line application code would be able to switch segment register values as easily as programs change addresses in general purpose registers.

misc past 801, iliad, risc, romp related email
https://www.garlic.com/~lynn/lhwemail.html#801

misc. past 801, iliad, risc, romp, rios, power, power/pc, etc posts
https://www.garlic.com/~lynn/subtopic.html#801

reference to john ("father of RISC architecture")
http://domino.research.ibm.com/comm/pr.nsf/pages/news.20020717_cocke.html

in the late 70s there was some effort to make common 801 (iliad) chips the replacement for the vast variety of internal microprocessors used in low-end & mid-range 370s (the microprocessor for the 4341 follow-on, the 4381, started out going to be risc), embedded controllers ... and even the system/38 follow-on ... the as/400. the iliad effort eventually floundered and there was retrenching to doing custom cisc microprocessors for many of these efforts (although a later generation of as/400 eventually did move to power/pc).

as i referenced in other posts ... with the demise of the iliad activity ... some number of the engineers left and showed up at various other chip vendors working on risc efforts (amd 29k, hp snake, etc).
https://www.garlic.com/~lynn/2009n.html#48 Microprocessors with Definable MIcrocode

the romp chip started out being a standard 801 "closed" system with pl.8 and cp.r, in a joint project with the office products division for a follow-on to the displaywriter. when that product was killed, there was some effort to find an alternative product for the hardware ... and the decision was made to market it as a unix workstation. they got the company that had done the "pc/ix" port for the pc ... to do one for romp ... the unix paradigm did require protection domains tho. this was eventually released as the pc/rt (and the unix port as AIX).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sat, 03 Oct 2009 19:15:00 -0400
Al Kossow <aek@bitsavers.org> writes:
I assume you've seen this before?

http://www.jfsowa.com/computer/memo125.htm


re:
https://www.garlic.com/~lynn/2009o.html#10

actually, no

there were specific sections of FS ... and somebody associated with resource management gave a presentation at the cambridge science center. I made some off-hand comment that what i already had running (with regard to resource management) was better than what they were proposing for FS. I later drew a comparison between the FS effort and a cult film that had been playing down in central sq. for more than a decade.

my wife did a stint reporting to the person responsible for FS interconnect ... and she observed that there were whole sections related to I/O that were non-existent.

the folklore is that an analysis by the Houston Science Center put the nails into the FS coffin ... that an FS machine built from 370/195 components would have thruput comparable to a 370/145 (about a factor of 30 times degradation).

recent post in ibm-main regarding 3033 (ongoing thread about various kinds of hardware bugs that had been encountered in the past)
https://www.garlic.com/~lynn/2009o.html#4 Broken Brancher

as mentioned in the above ... after FS was killed, there was a mad rush to get a lot of stuff back into the 370 hardware & software product pipeline (which had gone dry anticipating the switch to FS & the killing of 360/370).

... 3033 started out being the 168 wiring diagram mapped to chips that were about 20% faster ... and had ten times as many circuits per chip (than the 168). initially the 3033 was going to be just 20% faster than the 168 ... with 90% of the circuits in the chips going unused. there was then some optimization effort to better make use of "on-chip" operations that got the 3033 up to about 50% faster than the 168.

other recent posts in the broken brancher thread (started out as topic drift in a different thread):
https://www.garlic.com/~lynn/2009n.html#74 Best IEFACTRT (off topic)
https://www.garlic.com/~lynn/2009o.html#0 Best IEFACTRT (off topic)

I've mentioned in the past that I continued to work on 370 all during the FS period ... having been repeatedly told that the only way to promotions and raises would be to transfer to the FS effort ... however I continued to draw the comparison between the FS project and the cult film in central sq ... which seemed to linger on for the rest of my time there as a non-career-enhancing activity ... somewhat analogous to Sowa's comment
http://www.jfsowa.com/computer/index.htm

various past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

I have periodically referenced FS comments here
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

which mentions major motivation for FS was clone controllers.

minor topic drift (sowa had some comments vis-a-vis pli & pls) ... pli, pls, pl.8 and pascal
https://www.garlic.com/~lynn/2006t.html#email810808
in this post
https://www.garlic.com/~lynn/2006t.html#9

At the same time codd was doing relational at ibm, and STL was doing "eagle" as the FS "database", Sowa was doing "semantic networks".
http://www.jfsowa.com/

LSG VLSI lab. had done pascal implementation for 370 ... as part of VLSI tools activity. And then some people in STL and some people at LSG VLSI lab ... did a semantic network SNDBMS implementation (in pascal). LSG had done some amount of language and dbms work in support of VLSI tools ... including using MetaWare TWS ... old reference
https://www.garlic.com/~lynn/2004d.html#71

I got to do some work on relational ... original RDBMS/SQL implementation ... some past posts
https://www.garlic.com/~lynn/submain.html#systemr

but i was also doing some stuff over in the VLSI lab ... and also did some of the SNDBMS implementation ... in fact the stuff that I use for rfc index
https://www.garlic.com/~lynn/rfcietff.htm

and various merged taxonomies & glossaries
https://www.garlic.com/~lynn/index.html#glosnote

might be considered a many times removed descendent of that implementation

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Calling ::routines in oorexx 4.0

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Calling ::routines in oorexx 4.0
Newsgroups: comp.lang.rexx
Date: Sat, 03 Oct 2009 23:18:15 -0400
LesK <5mre20@tampabay.rr.com> writes:
No it wouldn't! Rexx is simply following the CMS rules for finding an executable (with just a small modification to first search the defined libraries and to ignore IMPEX). It's been that way since I used the first CMS system installed in Product Test at RTP (1970's?) and has never changed (ignoring SFS, which I know nothing about). There is even a special bit in the disk control block that tells CMS if the disk contains an executable. Without any special tailoring by the SysProg, the basic search order is EXEC for all accessed disks in alphabetical order and then MODULE for all disks in alphabetical order. I don't remember for sure where EXECLOADed, NUCXLOADed and Shared Segments (if that's the current term) come in, but I *think* they're first.

<resend ... the ebcdic binary zeros to ascii don't work well in post>

there are also the synonym and abbrev tables as part of lookup. the svc202 kernel call would do exec lookup, then module lookup, then try and hit the kernel routines (which then got modified with nucxload ... which might or might not be added kernel extensions in shared segments).

old post
https://www.garlic.com/~lynn/2004b.html#56 Oldest running code

which references a cms assembler routine for waiting for something to show up in the (cp/67) reader. however, as an undergraduate in the 60s, i discovered that you could directly make a (cms) kernel routine call from a (cms) exec (w/o requiring an assembler program and svc202) ... the fiddling was that the line had to end with 8 bytes of binary zeros (which would terminate the plist when it all got set up by the exec processor).

&CONTROL OFF
CP SP C CL Y
-READ DISK LOAD
&IF &RETCODE EQ 0 &SKIP 1
WAIT RDR1RDR1 --------
-DSKLOAD DISK LOAD
CP SP C CL A

... snip ...

or: WAIT RDR1RDR1

for vm370 cms they added svc203 calls ... which are somewhat more like OS/360 supervisor calls ... going directly to a specific kernel routine ... significantly speeding up some applications (example: RDBUF ... reading each record in a file originally was an svc202, which had to go thru all the lookup gorp for every record read).

in the original cp67 cms ... everything (command line input, exec processor and executable supervisor calls) was treated the same way. the minor difference was that for svc202, the parameter list had to already be broken into 8-byte tokens (which was otherwise handled by the command line and exec processor input).
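
a minimal sketch of that tokenized form (illustrative C, not CMS code): each word of the command line becomes an 8-byte, blank-padded token, with the plist terminated by a fence doubleword ... in the old EXEC trick above, 8 bytes of binary zeros served as that terminator.

#include <stdio.h>
#include <string.h>

#define TOKLEN 8
#define MAXTOK 16

/* build an array of 8-byte, blank-padded tokens from a command line */
static int tokenize(const char *line, char plist[][TOKLEN])
{
    int n = 0;
    while (*line && n < MAXTOK - 1) {
        while (*line == ' ') line++;
        if (!*line) break;
        memset(plist[n], ' ', TOKLEN);
        for (int i = 0; *line && *line != ' '; line++)
            if (i < TOKLEN) plist[n][i++] = *line;   /* truncate to 8 chars */
        n++;
    }
    memset(plist[n], 0, TOKLEN);   /* fence terminating the plist (binary
                                      zeros, as in the EXEC trick above)  */
    return n;
}

int main(void)
{
    char plist[MAXTOK][TOKLEN];
    int n = tokenize("ERASE MYFILE SCRIPT A1", plist);
    for (int i = 0; i < n; i++)
        printf("'%.8s'\n", plist[i]);
    return 0;
}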

old discussion of above
https://www.garlic.com/~lynn/2004p.html#22
https://www.garlic.com/~lynn/2004q.html#49

all of this was transparent/ambiguous originally ... but some additional structure has since been added
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/topic/com.ibm.zvm.v54.dmsa5/svc202.htm
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/topic/com.ibm.zvm.v54.dmsa6/hcsd3b0015.htm

i had done a lot of the sharing stuff in cp67 at the science center and then moved it to vm370 ... when the science center replaced the 360/67 with 370/155. some old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

then when future system project was killed ... they were in a rush to put stuff back into 370 (hardware and software) product pipelines ... some recent threads in ibm-main & alt.folklore.computers n.g.
https://www.garlic.com/~lynn/2009o.html#4
https://www.garlic.com/~lynn/2009o.html#10
https://www.garlic.com/~lynn/2009o.html#11

and a lot of stuff I'd been doing all along on 370 (and shipping on internal distribution tapes) was picked up for product release. Lots of the CMS shared segment stuff was picked up ... but not the paged-mapped filesystem. I had done all the shared segment stuff thru the paged-mapped filesystem ... so they did a kludge and used the vm370 "IPL" saved-system mechanism on the cp side to do discontiguous shared segments (as a result ... a whole lot of new function that came with the paged-mapped filesystem didn't get out). misc. posts mentioning the paged-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

old posts also mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sun, 04 Oct 2009 09:00:53 -0400
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
That's a fairly idiosyncratic view. Patterson's original RISC papers mention machines such as the VAX and System/3, not just microprocessors (they also mention the iAPX-432, which was a micro). Hennessy and Patterson have described the IBM 801 as the first RISC; obviously it created the acronym.

re:
https://www.garlic.com/~lynn/2009n.html#48 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode

432 was a chip, more like future system ... or system/38. there was a presentation at acm sigops '79(?) by some 432 people. they mentioned that so much operating system stuff had been moved into silicon (example ... multiprocessing dispatching & the number of cores/processors was masked by having been moved into silicon as a hardware function) that they were having a hard time dealing with bugs in those functions ... and having difficulty patching the code (faced with having to ship a new chip).

I had done a somewhat similar design (multiprocessing dispatching & masking the number of cores/processors) in '75 for a 5-way 370 smp product (that was never announced or shipped) ... but had dropped it into microcode (not silicon) ... and patches were just a matter of shipping a new microcode floppy disk. misc. past posts
https://www.garlic.com/~lynn/submain.html#bounce

other smp and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

misc. past posts mentioning 432
https://www.garlic.com/~lynn/2000d.html#57 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000d.html#62 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001k.html#2 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2002d.html#46 IBM Mainframe at home
https://www.garlic.com/~lynn/2002l.html#19 Computer Architectures
https://www.garlic.com/~lynn/2002o.html#5 Anyone here ever use the iAPX432 ?
https://www.garlic.com/~lynn/2002q.html#11 computers and alcohol
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#55 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003m.html#23 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#47 Intel 860 and 960, was iAPX 432
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004e.html#52 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#64 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005k.html#46 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005q.html#31 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2008d.html#54 Throwaway cores
https://www.garlic.com/~lynn/2008e.html#32 CPU time differences for the same job
https://www.garlic.com/~lynn/2008h.html#35 Two views of Microkernels (Re: Kernels

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sun, 04 Oct 2009 09:27:51 -0400
Al Kossow <aek@bitsavers.org> writes:
I assume you've seen this before?

http://www.jfsowa.com/computer/memo125.htm


in the above, the question was asked ... why not just build 16 168s.

re:
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode

there was a 16-way 370 smp effort that was going to use the 158 engine ... which was at the knee of the cost/performance curve ... about the most optimal for the period. this was after the 5-way 370 smp project got killed, mentioned here
https://www.garlic.com/~lynn/2009o.html#13 Microprocessors with Definable MIcrocode

the 16-way 370 smp was also presented at the same internal symposium as 801 ... referenced here
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode

the project was going great guns and lots of people in the corporation loved it ... until somebody leaked to the head of the high-end computer division that it would be decades before the favorite son operating system would have 16-way smp support. at that point the effort was terminated ... and some people were invited to never show up in POK again.

part of the problem may have been that we had gotten some of the processor engineers working on 3033 interested in spending some of their spare time on the 16-way effort (which also came to the attention of the head of the division).

misc. past posts mentioning 16-way effort:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?)
https://www.garlic.com/~lynn/95.html#6 801
https://www.garlic.com/~lynn/95.html#11 801 & power/pc
https://www.garlic.com/~lynn/95.html#13 SSA
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2000.html#31 Computer of the century
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#12 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2002i.html#82 HONE
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#7 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005k.html#45 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#48 Code density and performance?
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005p.html#39 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#46 Numa-Q Information
https://www.garlic.com/~lynn/2006.html#32 UMA vs SMP? Clarification of terminology
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks?
https://www.garlic.com/~lynn/2006r.html#22 Was FORTRAN buggy?
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
https://www.garlic.com/~lynn/2007g.html#17 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
https://www.garlic.com/~lynn/2007g.html#57 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
https://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007t.html#76 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007t.html#77 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007u.html#1 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008m.html#24 Some confusion about virtual cache
https://www.garlic.com/~lynn/2008m.html#61 CHROME and WEB apps on Mainframe?
https://www.garlic.com/~lynn/2009d.html#32 Architectural Diversity
https://www.garlic.com/~lynn/2009d.html#33 Architectural Diversity
https://www.garlic.com/~lynn/2009h.html#7 The coming death of all RISC chips
https://www.garlic.com/~lynn/2009i.html#32 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009i.html#36 SEs & History Lessons
https://www.garlic.com/~lynn/2009i.html#37 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Calling ::routines in oorexx 4.0

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Calling ::routines in oorexx 4.0
Newsgroups: comp.lang.rexx
Date: Sun, 04 Oct 2009 10:07:27 -0400
Glenn Knickerbocker <NotR@bestweb.net> writes:
The problem here is what "meaningful" means. The search for a program and the search for a file are just not the same thing in CMS. What's meaningful in one isn't necessarily meaningful in the other. I'd say using a second word as a filetype in the search for a program would make about as much sense as searching the file system for a file with a blank filetype when only one word is specified. (You can create such a file by overwriting the FST directly, by the way, but then CMS's normal interfaces can never read or write it.)

re:
https://www.garlic.com/~lynn/2009o.html#12 Calling ::routines in oorexx 4.0

cms would search the file table for the specific (file) name ... (modulo the synonym and abbrev tables) with the search done first for filetype EXEC and, if not found, repeated for filetype MODULE.

the search was really expensive in cp67/cms ... since everything used svc202 for kernel calls ... which would also perform such a search. This was alleviated somewhat in vm370/cms when svc203 was introduced, which bypassed doing the (program) file search.

in the late 70s (30 yrs ago) ... one of the customers (perkin-elmer?) did a cms filesystem enhancement that sorted the file table (and left a bit in the table indicating whether the file table was sorted). for a sorted file table, the (program) filename lookup could then do a binary search (rather than a sequential search). for large filesystems ... it really sped things up.
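
a minimal sketch of the speedup (illustrative C; entries here are just names, where the real CMS file table entries carry much more): the same lookup done as a sequential scan versus a binary search once the table is kept sorted.

#include <stdio.h>
#include <string.h>

#define NAMELEN 9

static const char sorted_fst[][NAMELEN] = {   /* kept in sorted order */
    "ALPHA", "DELTA", "GAMMA", "OMEGA", "PROFILE", "ZULU"
};
#define NFILES (int)(sizeof sorted_fst / sizeof sorted_fst[0])

static int seq_lookup(const char *name)       /* original behavior: O(n) scan */
{
    for (int i = 0; i < NFILES; i++)
        if (strcmp(sorted_fst[i], name) == 0) return i;
    return -1;
}

static int bin_lookup(const char *name)       /* sorted table: O(log n) */
{
    int lo = 0, hi = NFILES - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        int cmp = strcmp(sorted_fst[mid], name);
        if (cmp == 0) return mid;
        if (cmp < 0) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

int main(void)
{
    printf("seq: %d  bin: %d\n", seq_lookup("PROFILE"), bin_lookup("PROFILE"));
    return 0;
}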

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sun, 04 Oct 2009 10:14:58 -0400
Walter Bushell <proto@panix.com> writes:
But I used paper tape as late as 1976. It was used to load up a Honeywell 316 which NASA used and for all I know still uses to control upload transmitters for unmanned spacecraft. Highest of tech, oldest method. The programs were sent to the sites on cassette tape.

I'm trying to remember when ROLM was acquired ... there was something about the development process for DGs in the switches ... something about taking 24hrs to load new test software ... but I can't remember now for sure whether it was paper tape or not(?).

I got brought in to look at possibly using a T1 link between the development systems and the test switches for transferring new software.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Broken hardware was Re: Broken Brancher

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Broken hardware was Re: Broken Brancher
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 04 Oct 2009 17:25:03 -0400
cfmpublic@NS.SYMPATICO.CA (Clark Morris) writes:
3 incidents come to mind. The first was a 2821 print controller that blew up error recovery by sending back Device End and Busy. Despite MVT being in its last days, we were the site of first discovery. The second was on a mod 65 where the CSW was getting stored x'40' or x'48' from a 256K boundary. We were finally able to force it by using an IEBCOPY unload with IEBCOPY brought back from SVS thanks to the MICHMODS MVT tape. We called in the third party memory CEs who came in and proved it wasn't their problem by some process that I forget even though I was the person watching this for the company. I then called IBM and the CE came in. He checked for the problem after I showed the symptoms thinking it wasn't an IBM problem and turned up a 250 nano-second delay card in the channel that wasn't delaying things for 250 nano-seconds. The last was under MVS when we lost an indexed VTOC on a 3380. After rebuilding it, I checked EREP to see what was happening at the time and found a large number of temporary write errors to the drive at the time. The CE checked it out and found a loose card in the controller. Reseating the card ended the problem.

In the late 70s, I would wander around the san jose plant, including bldg. 14 (disk engineering) and 15 (disk product test) ... they had machine rooms with several processors dedicated to testing. At one point they had tried to install MVS on the processors and do testing in an operating system environment ... but MVS had something like a 15min MTBF (system having to reboot because of failure or hang) ... so they were doing dedicated-time, stand-alone testing ... sometimes scheduled around the clock and on weekends (development devices could have humongous error rates and even violations of the architecture). I decided to do an input/output supervisor that would never fail ... allowing them to do lots of concurrent, on-demand testing. misc. past posts
https://www.garlic.com/~lynn/subtopic.html#disk

one of the results ... was that I would get called if things didn't appear to be working like they expected. One of the situations had to do with the slow processor used for the 3880 controller. The 3830 had a fast, horizontal microcode engine. For the 3880, which involved a whole lot more function, it was decided to use a vertical-microcode microprocessor for the control functions (much easier to program) ... and, since that was much slower than the 3830, separate dedicated hardware for data transfer.

one of the issues was that a new product (3880) had to have performance within +/- 5-10 percent of the earlier product (3830) ... and the elapsed time for identical operations using identical disks (3330s) was taking more than the allowed elapsed time (vis-a-vis the 3830). an early attempt to mask this was to present the operation's ending status to the channel before the controller had actually finished final operation cleanup. a control unit error could then be discovered that should result in unit check. they started out by presenting a stand-alone, asynchronous/unsolicited unit check. I had to handle this ... but I got to telling them it was a violation of the channel architecture.

after some amount of time this escalated to conference calls with the pok channel engineers ... and I was required to participate (I eventually asked why and was told that starting in the late 60s, and thru the 70s, lots of the senior san jose engineers that handled such architecture issues had been lured away to startups in the valley). Eventually the 3880 was redone to present the unit check as csw-stored with the next SIO initiated to the controller (i.e. the controller was sort of in a pending "contingent connection").

bldg 15 (product test lab) got an early 3033 engineering machine for testing. Since even several concurrent tests would typically use only a percent or two of the processor ... a couple strings of spare 3330s (16 drives) and a spare 3830 controller were connected to the machine, and we also ran an online time-sharing service for the engineers on the machine.

one monday morning ... I got a call that I had done something over the weekend to enormously degrade system thruput. after some investigation ... 3330 operations on the 16-drive string had drastically degraded ... and they claimed to not have done anything. It turned out that over the weekend somebody had decided to replace the 3830 controller with a 3880 controller.

Now, the 3880 had passed the thruput product acceptance test (plus/minus 5-10% of the previous product) ... however, it had been done in STL with a single-"pack", single-thread VS1 test. The diagnosis turned out to be the same "clean-up" processing that had earlier resulted in the stand-alone unit checks. In multi-drive operation with lots of concurrent activity, the 3880 would present ending status ... and I would immediately turn around (in the i/o supervisor) and hit the controller with the next queued request. Since the 3880 was still busy cleaning up ... it would respond with CC=1, SM+BUSY csw-stored (i.e. control unit busy). The operation then had to be requeued ... and wait for the controller to get around to presenting the CUE interrupt (the single-threaded, single-pack VS1 test had basically overlapped VS1 processing with the 3880 controller's dangling busy). This 2nd go-around with the 3880's slow processor (and dangling controller processing) was fortunately six months prior to first customer ship ... and a whole lot of additional work was done to try and improve thruput for large numbers of concurrent operations.
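
a hedged sketch of the redrive behavior just described (illustrative C with the queue, condition codes and CUE interrupt reduced to simple flags ... not the actual i/o supervisor): ending status arrives, the supervisor immediately starts the next queued request, and a "control unit busy" response means requeueing and waiting for the CUE interrupt before redriving.

#include <stdio.h>

enum cc { CC_STARTED = 0, CC_CSW_STORED = 1 };   /* condition code after SIO */

struct controller { int still_cleaning_up; };

static enum cc start_io(struct controller *cu)
{
    if (cu->still_cleaning_up)
        return CC_CSW_STORED;        /* SM+BUSY: control unit busy */
    return CC_STARTED;
}

int main(void)
{
    struct controller cu = { .still_cleaning_up = 1 };
    int queued = 3;

    while (queued > 0) {
        if (start_io(&cu) == CC_STARTED) {
            queued--;                         /* request accepted */
            printf("request started, %d left\n", queued);
        } else {
            /* requeue and wait for the CUE interrupt before redriving;
               in the 3880 case this wait is what wrecked thruput */
            printf("control unit busy, requeue and wait for CUE\n");
            cu.still_cleaning_up = 0;         /* pretend the CUE arrives */
        }
    }
    return 0;
}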

somewhere along the way I did an internal document that described much of this ... and happened to include a reference to the MVS 15min MTBF ... which brought down some amount of wrath from the MVS group. This was not long after having (already) been invited to not visit POK anymore.

recent reference to 3033 in this thread:
https://www.garlic.com/~lynn/2009o.html#4 Broken Brancher

more recent reference to doing some work on 16-way 370 SMP (from thread in a.f.c. ng) ... somewhat in the wake of the FS demise
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#14 Microprocessors with Definable MIcrocode

which got quite a bit of acceptance until somebody leaked to the head of POK that it would be decades before the favorite son operating system would have 16-way SMP support. then some of us were invited to not visit POK anymore. Some of this was aggravated by getting some of the 3033 engineers to work on the 16-way effort in their spare time (they got some sort of direction to get their priorities straight).

in any case ... getting close to 3380 product ship ... old email about MVS regression tests (with hardware error injection)
https://www.garlic.com/~lynn/2007.html#email801015
in this post
https://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"

mentions that in a regression bucket of 57 (injected 3380) hardware errors, MVS hangs in 100% of the cases and must be re-IPL'ed ... and in 66% of the cases, there is no indication of what the problem was that forced the re-IPL.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Microprocessors with Definable MIcrocode

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microprocessors with Definable MIcrocode
Newsgroups: alt.folklore.computers
Date: Sun, 04 Oct 2009 18:24:00 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
There's not an awful lot of stuff on the 432 around. I found some material somewhere (Bitsavers maybe?). It sounds like a decent idea, a segmented memory like Multics and capability-based security in hardware. Unfortunately the segments were only (IIRC) 64K, and I believe the system was multiple chips because it didn't fit on one at the time, like the MicroVax I, and the 8087 FPU.

re:
https://www.garlic.com/~lynn/2009o.html#13 Microprocessors with Definable MIcrocode

I may still have manuals in boxes someplace:
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
Introduction to the iAPX 432 Architecture (171821-001) copyright 1981, Intel
iAPX 432 Object Primer (171858-001, Rev. B)
iAPX 432 Interface Processor Architecture Reference Manual (171863-001)

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

What happened to computer architecture (and comp.arch?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What happened to computer architecture (and comp.arch?)
Newsgroups: comp.arch
Date: Mon, 05 Oct 2009 14:42:36 -0400
Morten Reistad <first@last.name> writes:
The interrupt-coalescing code helps bring the interrupt rate down by an order of magnitude, so the interrupt rate is not a showstopper anymore.

I have a strong stomach feeling there is something going on regarding l2 cache hit rate.


"811" (i.e. March 1978) architecture allowed for stacking ending status on queue ... showed up with 370-xa in 3081s in the early 80s ... as well as placing outgoing requests on queue ... aka scenario to immediately take a interrupt ... was so that resource could be redriving with any pending requests could be redriven ... minimizing I/O resource idle time, "811" addressed both I/O interrupts trashing cache hit ratio as well as eliminating requiring processor synchronous participation in i/o "redrive".

part of "811" was hardware interface for tracking busy & idle (for things like capacity planning) ... which had previously been done by software when kernel was involved in interrupts & redrive.

start subchannel ("811" instruction for queuing i/o request)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/14.3.9?SHELF=DZ9ZBK03&DT=20040504121320

set channel monitor ("811" instruction for measurement & statistics)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/14.3.8?SHELF=DZ9ZBK03&DT=20040504121320

test pending interruption ("811" instruction for i/o completion w/o interrupt)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/14.3.8?SHELF=DZ9ZBK03&DT=20040504121320

before 811 ... early 70s, i had modified my resource manager (on plain vanilla 370s) to monitor the interrupt rate and dynamically switch from running enabled for i/o interrupts to running disabled for i/o interrupts and only doing periodic "batch" drains of pending interrupts ... attempting to preserve cache locality.
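
a rough illustration (invented names and thresholds; the original lived in the VM/370 kernel and was assembler) of that dynamic switch: track the recent i/o interrupt rate, and when it is high enough to be trashing cache locality, run disabled for i/o interrupts and just drain whatever is pending in periodic batches:

#include <stdio.h>
#include <stdbool.h>

#define HIGH_RATE 500      /* interrupts/sec: above this, switch to periodic batch drain */
#define LOW_RATE  100      /* below this, go back to running enabled for i/o interrupts  */

static bool enabled = true;

static void adjust(unsigned rate)   /* called periodically, e.g. off the timer */
{
    if (enabled && rate > HIGH_RATE) {
        enabled = false;           /* run disabled; pending interrupts drained in batches */
        printf("rate %4u: disable i/o interrupts, drain in periodic batches\n", rate);
    } else if (!enabled && rate < LOW_RATE) {
        enabled = true;            /* rate dropped; take interrupts as they arrive again */
        printf("rate %4u: re-enable i/o interrupts\n", rate);
    } else {
        printf("rate %4u: stay %s\n", rate, enabled ? "enabled" : "in batch-drain mode");
    }
}

int main(void)
{
    unsigned samples[] = { 50, 200, 800, 900, 300, 80, 40 };
    for (unsigned i = 0; i < sizeof samples / sizeof *samples; i++)
        adjust(samples[i]);
    return 0;
}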

a two processor smp was offered for some models of 370 where only one of the processors had i/o capability. I had done an internal version of SMP support that was deployed in some places ... where I actually got higher aggregate MIP thruput than two single processors. Normally for two-processor 370 SMP ... the clock was slowed by 10% to provide head room for the processor caches to listen for cache invalidates (from the other cache) ... this resulted in a two-processor SMP having nominally 1.8 times the thruput of a single processor (handling of any cache invalidate signals slowed it down further ... and any software SMP overhead slowed the two-processor SMP even further ... compared to two single-processor machines).

In any case, with lots of tuning of SMP pathlengths ... and tweaks of how I/O and I/O interrupts were handled ... I got the two-processor SMP configuration up to better than twice the thruput of a single-processor machine (the rule-of-thumb at the time said it should have only 1.3-1.5 times the thruput) ... basically because of preserving the cache hit ratio.

another part of the "811" architecture was to eliminate the overhead of passing thru the kernel for subsystem (daemon) calls by applications. basically a hardware table was defined with an address space pointer and privileges for each subsystem. application calls to subsystems then became very much like a simple application call (to something in the application's own address space). The api tended to be pointer passing ... so part of the interface was having alternate address space pointers ... and instructions where subsystems could directly access parameter values (indicated by the passed pointer) back in the application address space (a conceptual sketch follows the links below).

part of that 811 architecture description in current 64-bit
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/5.7?SHELF=DZ9ZBK03&DT=20040504121320

"program call" instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/10.34?SHELF=DZ9ZBK03&DT=20040504121320
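
a very loose conceptual sketch (invented C structures, purely for illustration -- the real mechanism is the hardware PROGRAM CALL instruction plus system-defined tables, per the links above) of the idea: subsystem entries live in a table carrying the target address space, entry point and privilege, so an application call to a subsystem is a table-indexed transfer rather than a pass through the kernel:

#include <stdio.h>

struct pc_entry {
    const char *subsystem;        /* stand-in for the target address space */
    int         privileged;       /* authority acquired on entry           */
    void      (*entry_point)(const char *parm);
};

static void dbms_request(const char *parm)  { printf("dbms handles: %s\n", parm); }
static void spool_request(const char *parm) { printf("spool handles: %s\n", parm); }

/* table set up by the control program, indexed by the "program call" number */
static const struct pc_entry pc_table[] = {
    { "DBMS",  1, dbms_request  },
    { "SPOOL", 0, spool_request },
};

/* conceptual "program call": to the application it looks like a simple call,
   but it transfers to another address space with the table-defined privilege,
   without an SVC/kernel round trip; parameters are passed by pointer and the
   subsystem reaches back into the caller's address space to pick them up */
static void program_call(unsigned index, const char *parm)
{
    const struct pc_entry *e = &pc_table[index];
    printf("enter %s (privileged=%d)\n", e->subsystem, e->privileged);
    e->entry_point(parm);
}

int main(void)
{
    program_call(0, "select * from accounts");
    program_call(1, "print report");
    return 0;
}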

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

mainframe e-mail with attachments

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: mainframe e-mail with attachments
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 07 Oct 2009 11:48:51 -0400
HMerritt@JACKHENRY.COM (Hal Merritt) writes:
Something to consider, however, is that we found that email delivery of critical reports to customers to be unacceptable. Our side worked perfectly, but we found full mailboxes, people out of the office, reports too large, accidental deletion, broken PC's, etc, etc, etc. That is, we had no control over the far end and yet we still got beat up when the reports were delayed/lost.

And that was before the requirement to encrypt sensitive data.


old email from long ago and far away discussing PGP-like email operation:
https://www.garlic.com/~lynn/2007d.html#email810506
https://www.garlic.com/~lynn/2006w.html#email810515

I had done a (cms) rexx exec early on to handle smtp/822 email; it was distributed internally and went thru several generations.

the internal network was mostly vm/cms machines ... some old posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... & larger than internet from just about the beginning until possibly late '85 or early '86. there were some MVS machines ... but there was significant difficiencies in the MVS networking implementation ... including not being able to address all the nodes in the network ... and would trash network traffic if they didn't recognize either the origination or the destination (as a result, MVS network nodes were carefully controlled to "edge" nodes ... to minimize the damage they would do to network traffic).

the 80s were still a period where governments viewed encryption with lots of suspicion. corporate had a requirement for at least link encryptors on any links that left corporate grounds (there was some observation that in the mid-80s, the internal network had over half of all the link encryptors in the world). there were periodic battles with various gov. agencies around the world over installing link encryptors on links running between corporate locations in different countries.

bitnet (where this ibm-main mailing list originated) used similar technology to that of the internal network. however, vm/cms network design was layered and could have drivers that talked to other infrastructures (including MVS). for much of the bitnet period, the standard vm/cms network product had stopped shipping native drivers (which had higher thruput and performance ... even over same exact telecommunication hardware), and only shipped MVS network drivers.

One of the problems with the non-layered MVS network design ... was that traffic between different MVS systems at different release levels could result in MVS system failures (forcing reboot). There was an infamous scenario of traffic from some internal San Jose MVS systems resulting in MVS system failures in Hursley. They then tried to blame it on the Hursley vm/cms network machines. The issue was that MVS systems were so fragile and vulnerable ... that lots of software was developed for the vm/cms MVS drivers ... to rewrite control information into a format acceptable to each specific directly connected MVS system (and since the Hursley MVS systems were crashing ... it was "obvious" that the vm/cms network nodes were at fault for not preventing the MVS failures).

the internal network had a high-growth year in '83 ... when it passed 1000 nodes (at a time when the arpanet/internet was passing 255 nodes) ... old post listing locations around the world that added one or more new nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8 Arpa address

past posts mentioning bitnet (/earn ... europe version of bitnet)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Wed, 07 Oct 2009 15:12:01 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
How Wall Street Lied to Its Computers
http://bits.blogs.nytimes.com/2008/09/18/how-wall-streets-quants-lied-to-their-computers/

article from summer 2007:

Subprime = Triple-A ratings? or 'How to Lie with Statistics' (gone 404 but lives on at the wayback machine)
https://web.archive.org/web/20071111031315/http://www.bloggingstocks.com/2007/07/25/subprime-triple-a-ratings-or-how-to-lie-with-statistics/


from above:
https://www.garlic.com/~lynn/2009n.html#47 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#49 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#56 Opinions on the 'Unix Haters' Handbook'

Winters Shows JPMorgan Path to Safety, Dimon Shows Him the Door
http://www.bloomberg.com/apps/news?pid=20601109&sid=afMhQfykyL6Y

from above:
... shunned the structured products and off-balance sheet vehicles that crippled global markets because they didn't make financial sense.

"I remember him explaining that they'd looked at these for years and couldn't understand how the economics worked," said John Fullerton, a JPMorgan executive who was one of six people assigned to untangle derivative trades that led to the demise of Long-Term Capital Management LP in 1998. "Despite the tremendous pressure all around them to do it, they didn't do it because the math didn't work."


... snip ...

there has been some writing about the differences between people working on behalf of their company ... and looking at things like risk ... and people working on behalf of themselves ... doing these large, extremely risky transactions ... because they could get a percent of the size of the transaction (they weren't being paid based on how much they earned for the company, they were being paid on the size of the transactions they executed ... with little regard to the effect on the corporation).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Rogue PayPal SSL Certificate Available in the Wild - IE, Safari and Chrome users beware

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Rogue PayPal SSL Certificate Available in the Wild - IE, Safari and Chrome users beware
Date: 7 Oct, 2009
Blog: Financial Crime Risk, Fraud and Security
Rogue PayPal SSL Certificate Available in the Wild - IE, Safari and Chrome users beware
http://news.softpedia.com/news/Rogue-PayPal-SSL-Certificate-Available-in-the-Wild-123486.shtml

from above:
A forged SSL certificate that could allow an attacker to trick users of IE, Safari or Chrome on Windows into thinking that a fake PayPal page is legitimate, has been publicly released. The cert exploits a yet-to-be-patched null byte poisoning vulnerability ...

... snip ...
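
the null-byte issue is easy to illustrate: the certificate name is really a counted (length + bytes) field, but software that treats it as a C string stops comparing at the first embedded null. a small hypothetical sketch (not any browser's actual code):

#include <stdio.h>
#include <string.h>

/* naive check: treats the CN as a C string, so comparison stops at '\0' */
static int naive_match(const char *cn, const char *host)
{
    return strcmp(cn, host) == 0;
}

/* counted check: uses the CN's declared length, so the trailing bytes count */
static int counted_match(const char *cn, size_t cn_len, const char *host)
{
    size_t host_len = strlen(host);
    return cn_len == host_len && memcmp(cn, host, host_len) == 0;
}

int main(void)
{
    /* CN as issued: "www.paypal.com\0.attacker.example" (32 bytes) */
    const char cn[] = "www.paypal.com\0.attacker.example";
    size_t cn_len = sizeof(cn) - 1;

    printf("naive:   %d\n", naive_match(cn, "www.paypal.com"));            /* 1 -- fooled   */
    printf("counted: %d\n", counted_match(cn, cn_len, "www.paypal.com"));  /* 0 -- rejected */
    return 0;
}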

We had been called in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had also invented this technology called "SSL" that they wanted to use ... it is now frequently called "electronic commerce". As part of the effort, we had to do detailed end-to-end walkthrus of the various pieces ... including these new operations calling themselves "Certification Authorities" that were manufacturing and selling these things called (ssl domain name) "digital certificates". There were also a number of issues regarding how these things were used and deployed in order to achieve security. Almost immediately several things were compromised ... and we started referring to "comfort certificates" (to differentiate the use from "security") ... misc. past posts
https://www.garlic.com/~lynn/subpubkey.html#sslcert

SSL Still Mostly Misunderstood; Even many IT professionals don't understand what Secure Sockets Layer (SSL) does and doesn't do, leaving them vulnerable, new survey shows
http://www.darkreading.com/security/vulnerabilities/showArticle.jhtml?articleID=220301548

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Wed, 07 Oct 2009 20:01:26 -0400
Morten Reistad <first@last.name> writes:
Not even working on behalf of the company. Working on behalf of faceless investors in your customer's customer. The banks were very good at offloading this debt. The problem came when the offloaders couldn't handle it any more.

re:
https://www.garlic.com/~lynn/2009o.html#21 Opinions on the 'Unix Haters' Handbook'

lots of the (mortgage, loan) "debt" was written by unregulated loan originators and unloaded as asset-backed (toxic) CDOs (after paying for triple-A ratings). individuals on this side of the rating agencies were raking in huge amounts ... typically as a percentage of the transaction (size). Since they could unload everything as fast as they could write it ... they didn't care about the borrowers' qualifications and/or the loan quality.

No-documentation, no-down, 1% interest-only payment loans were ideal for speculators ... with real-estate inflation running 10-15 percent in some markets (and inflation increasing even further with all the speculation) ... speculators could make 1000% to 2000% per year. This was the equivalent of the Brokers' Loans and unregulated stock-market speculation leading to the crash of '29.
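
as a rough illustration of the arithmetic (made-up numbers): buy a $500k property with no money down and a 1% interest-only loan, so the year's carrying cost is about $5k; if the market appreciates 12%, the property gains $60k, and the gross return on the $5k actually put in is 60/5, i.e. about 1200% ... which is the flavor of the 1000%-2000% numbers, and why the game only worked as long as prices kept inflating.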

a large amount of the toxic CDOs were bought up by the unregulated investment banking arms of regulated depository institutions (courtesy of GLBA & the repeal of Glass-Steagall) and carried off-balance. Last jan, there was an estimate that the top four regulated depository institutions (in this country) had over $5 trillion of these toxic CDOs being carried off-balance (potentially enough to have all four institutions declared insolvent).

people at the unregulated loan originators were raking in the dough, speculators were raking in the dough, rating agencies were raking in the dough (for the triple-A ratings on the toxic CDOs), and the investment bankers were raking in the dough (buying the toxic CDOs). Also CDS insurance writers were raking in the dough (for the CDS policies written on the toxic CDOs ... effectively declaring the whole premium as 100% profit and then taking much of the declared profit as bonuses).

In effect, regulated depository institutions were providing a lot of the funding ... keeping the whole bubble inflating ... but in a circuitous, round-about way, skirting regulations.

past references to Brokers' Loans (from early 30s Pecora/Glass-Steagall hearings):
https://www.garlic.com/~lynn/2009b.html#73 What can we learn from the meltdown?
https://www.garlic.com/~lynn/2009b.html#79 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#16 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#32 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#39 'WHO IS RESPONSIBLE FOR THE GLOBAL MELTDOWN'
https://www.garlic.com/~lynn/2009c.html#61 Accounting for the "greed factor"
https://www.garlic.com/~lynn/2009c.html#65 is it possible that ALL banks will be nationalized?
https://www.garlic.com/~lynn/2009d.html#0 PNC Financial to pay CEO $3 million stock bonus
https://www.garlic.com/~lynn/2009d.html#28 I need insight on the Stock Market
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009d.html#77 Who first mentioned Credit Crunch?
https://www.garlic.com/~lynn/2009e.html#8 The background reasons of Credit Crunch
https://www.garlic.com/~lynn/2009e.html#23 Should FDIC or the Federal Reserve Bank have the authority to shut down and take over non-bank financial institutions like AIG?
https://www.garlic.com/~lynn/2009e.html#40 Architectural Diversity
https://www.garlic.com/~lynn/2009f.html#27 US banking Changes- TARP Proposl
https://www.garlic.com/~lynn/2009f.html#38 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#49 Is the current downturn cyclic or systemic?
https://www.garlic.com/~lynn/2009f.html#53 What every taxpayer should know about what caused the current Financial Crisis
https://www.garlic.com/~lynn/2009f.html#56 What's your personal confidence level concerning financial market recovery?
https://www.garlic.com/~lynn/2009f.html#65 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#5 Do the current Banking Results in the US hide a grim truth?
https://www.garlic.com/~lynn/2009g.html#52 Future of Financial Mathematics?
https://www.garlic.com/~lynn/2009h.html#22 China's yuan 'set to usurp US dollar' as world's reserve currency
https://www.garlic.com/~lynn/2009h.html#25 The Paradox of Economic Recovery
https://www.garlic.com/~lynn/2009h.html#29 Analysing risk, especially credit risk in Banks, which was a major reason for the current crisis
https://www.garlic.com/~lynn/2009i.html#40 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#57 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009j.html#35 what is mortgage-backed securities?
https://www.garlic.com/~lynn/2009j.html#38 what is mortgage-backed securities?
https://www.garlic.com/~lynn/2009n.html#47 Opinions on the 'Unix Haters' Handbook'

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Thu, 08 Oct 2009 10:34:57 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Winters Shows JPMorgan Path to Safety, Dimon Shows Him the Door
http://www.bloomberg.com/apps/news?pid=20601109&sid=afMhQfykyL6Y

from above:

... shunned the structured products and off-balance sheet vehicles that crippled global markets because they didn't make financial sense.

"I remember him explaining that they'd looked at these for years and couldn't understand how the economics worked," said John Fullerton, a JPMorgan executive who was one of six people assigned to untangle derivative trades that led to the demise of Long-Term Capital Management LP in 1998. "Despite the tremendous pressure all around them to do it, they didn't do it because the math didn't work."

... snip ...


re:
https://www.garlic.com/~lynn/2009o.html#21 Opinions on the 'Unix Haters' Handbook'.
https://www.garlic.com/~lynn/2009o.html#23 Opinions on the 'Unix Haters' Handbook'.

John Thain: Merrill's structured products were so complex that nobody understood them
http://www.finextra.com/fullstory.asp?id=20584

from above:
Former Merrill Lynch chief John Thain says that the structured products created by his firm were so complex that it could take up to three hours to model one tranche of a single CDO correctly when using "one of the fastest computers in the United States".

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Thu, 08 Oct 2009 11:33:02 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
Another guy and I kept saying the dotcom thing was a bubble and we should get out of the market, but it kept going up for several months after we said that.

When in doubt, I have a tendency to do nothing.


re:
https://www.garlic.com/~lynn/2009o.html#21 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#23 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#24 Opinions on the 'Unix Haters' Handbook'

there was a study that found some of the same investment bankers involved in the internet bubble were also involved in the current economic meltdown (as well as in the lobbying for the bank modernization act with its repeal of Glass-Steagall, and the commodities futures modernization act keeping over-the-counter activity unregulated ... Enron & AIG)

there was a joke about investment bankers putting money into an internet startup and then a 2yr roadmap to take it to IPO. this was repeated a large number of times. it was even better if the startup then failed (after IPO) ... since it kept the market open for the next new thing (possibly a $20m investment with $2B at IPO).

there was an analogous story about the new american culture of nothing succeeds like failure ... but in this case it was large system integrators & "beltway bandits" on large gov. (frequently technical & IT) projects. A failed project would mean another round of appropriations for the next attempt (much more profit than if projects were successful; the downside is that it is analogous to a bubble ... eventually the faulty/unfixed infrastructure actually fails, but in the meantime, lots of predatory entities have diverted large amounts of funds).

The Success of Failure:
http://www.govexec.com/management/management-matters/2007/04/the-success-of-failure/24107/

some past posts referencing the topic
https://www.garlic.com/~lynn/aadsm26.htm#59 On cleaning up the security mess: escaping the self-perpetuating trap of Fraud?
https://www.garlic.com/~lynn/aadsm27.htm#8 Leadership, the very definition of fraud, and the court of security ideas
https://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007l.html#46 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2007q.html#62 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007u.html#63 folklore indeed
https://www.garlic.com/~lynn/2007u.html#69 folklore indeed
https://www.garlic.com/~lynn/2008m.html#41 IBM--disposition of clock business
https://www.garlic.com/~lynn/2008m.html#55 With all the highly publicised data breeches and losses, are we all wasting our time?

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Some Recollections

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Some Recollections
Newsgroups: comp.databases.pick,alt.folklore.computers
Date: Thu, 08 Oct 2009 11:58:07 -0400
Clive <clive.hills@gmail.com> writes:
I've just written up some recollections about my past experiences with Reality/Pick. Since it's very quiet here perhaps I may be forgiven the hubris of daring to think they may be vaguely of interest and I'll post a link below :-

http://clive-hills.blogspot.com/2009/10/databases-and-i.html

Thank you, Clive


re:
https://en.wikipedia.org/wiki/IBM_6150_RT

from above:
One of the novel aspects of the RT design was the use of a microkernel. The keyboard, mouse, display, disk drives and network were all controlled by a microkernel, called Virtual Resource Manager (VRM), which allowed multiple operating systems to be booted and run at the same time. One could "hotkey" from one operating system to the next using the Alt-Tab key combination. Each OS in turn would get possession of the keyboard, mouse and display. Both AIX version 2 and the Pick operating system were ported to this microkernel. Pick was unique in being a unified operating system and database, and ran various accounting applications. It was popular with retail merchants, and accounted for about 4,000 units of sales.

... snip ...

The other way of looking at it was that the machine was originally targeted as a displaywriter follow-on ... running the closed cp.r operating system (and everything all written in pl.8).

When the displaywriter follow-on was killed, they looked around for someother product/market to push the machine and decided on the unix workstation market. they got the company that had done the AT&T unix port to IBM/PC (for pc/ix) to do a port to pc/rt.

the line was that the in-house existing group could implement the VRM (in pl.8) and the outside company could do the unix port to the abstract virtual machine layer (VRM ... i.e. it wasn't a "native" virtual machine) in less time than if the outside company did the port to the bare metal (doing the VRM did have the side-effect of giving the in-house PL.8 programmers something to do).

the counter-example was that a west coast group did a port of BSD unix to the bare PC/RT metal ... and that time/effort was much less than either the VRM or AIXV2 efforts (the BSD Unix port to bare metal was less time/effort than the VRM effort AND less effort than the AT&T unix port to the VRM abstract virtual machine interface).

There were also ongoing issues, like new devices requiring both VRM drivers (in PL.8) as well as AIX drivers (in C).

misc. past posts about 801, iliad, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

for a little drift ... recent post (in a.f.c.)
https://www.garlic.com/~lynn/2009o.html#11
mentioning Sowa and semantic network DBMS
http://www.jfsowa.com

Which was ported to aix.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. students behind in math, science, analysis says

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: U.S. students behind in math, science, analysis says
Newsgroups: alt.folklore.computers
Date: Thu, 08 Oct 2009 23:15:16 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
also has made cnn tv news

U.S. students behind in math, science, analysis says
http://www.cnn.com/2009/US/08/25/students.science.math/

however, this has been going on for a couple decades ... the article didn't quote the study that claimed improving US math & science education would contribute to a more robust US economy and GDP. past threads


re:
https://www.garlic.com/~lynn/2009m.html#69 U.S. students behind in math, science, analysis says

It's Sputnik, Stupid!; Is it too late for the U.S. to catch up with other countries in math and science education?
http://www.forbes.com/2009/10/08/science-education-china-technology-cio-network-sputnik.html

from above:
So, where is the U.S. 52 years later? As a society, we have unknowingly eaten most of our Sputnik-era technology seed corn.

...

I and the others in my math and science generation are now retiring, and we have failed to numerically and qualitatively replace ourselves. The deputy director at one of our most prestigious national laboratories told me two years ago that all of his top scientists would retire by 2012 and that he could not find qualified candidates to replace them.


... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. students behind in math, science, analysis says

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: U.S. students behind in math, science, analysis says
Newsgroups: alt.folklore.computers
Date: Fri, 09 Oct 2009 11:22:55 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
It's Sputnik, Stupid!; Is it too late for the U.S. to catch up with other countries in math and science education?
http://www.forbes.com/2009/10/08/science-education-china-technology-cio-network-sputnik.html


re:
https://www.garlic.com/~lynn/2009o.html#27 U.S. students behind in math, science, analysis says

there have been a number of past references about deteriorating competitive situation contributes to falling economic standing and standard of living ...

Whodunit? Sneak attack on U.S. dollar
http://news.yahoo.com/s/politico/20091008/pl_politico/28091

and some followup comments ...

Washington DC discovers new economic force: the World
http://financialcryptography.com/mt/archives/001192.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Justice Department probing allegations of abuse by IBM in mainframe computer market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Justice Department probing allegations of abuse by IBM in mainframe computer market
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 09 Oct 2009 18:05:33 -0400
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
That has nothing to do with whether IBM's licensing policies violated antitrust laws. The fact remains that IBM refuses to license, e.g., z/OS, on competitive systems.

major production platform that FLEX sold on was Sequent ... and then IBM bought Sequent ... and then stopped selling Sequent boxes.

FLEX had sold some on Compaq (later HP) ... but that seemed to be more for test/development.

Before IBM bought Sequent, we did some consulting for Chen when he was CTO at Sequent
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

Sequent & FLEX looked at providing FLEX on an Itanium-based Sequent box ... but Itanium then had performance issues and delays.

we had gotten involved with SCI effort before leaving IBM and then spent some time with various places doing SCI efforts ... including Sequent
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

above mentions DG AViiON and Sun using SCI as well as Sequent. There was also SGI and Convex. DG & Sequent were 64 four-processor (intel) boards interconnected with SCI (256 Intel processors). Convex (Exemplar) was 64 two-processor (HP RISC) boards interconnected with SCI (128 HP RISC processors).

Much earlier, Chen had been at Cray Research and was credited with the X-MP. He then left and formed his own supercomputer company ... with lots of funding from IBM (which was eventually acquired by Sequent):
https://en.wikipedia.org/wiki/Steve_Chen_%28computer_engineer%29

Sequent ran both NT and Dynix (their "enhanced" UNIX) system on their pre-NumaQ intel processor SMPs. The Sequent people in that period claimed to have done much of the NT SMP scale-up & parallelization work.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Page Faults and Interrupts

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Page Faults and Interrupts
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Oct 2009 00:38:15 -0400
dzs@LONGPELAEXPERTISE.COM.AU (David Stephens) writes:
I've always thought that a page fault in any operating system, including z/OS, would generate an interrupt. The task requiring the missing page would be put aside whilst RSM did the required I/O to the page datasets (unless the page was already in memory - in expanded storage, or a stolen page). However, I haven't been able to find any mention of this interrupt (Principles of Operation mentions the six interrupt types), and how it works.

Can anyone clear this up? Does a page fault generate an interrupt like a program exception? If so, what sort? If not, what happens?


re: Principles of Operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/CCONTENTS?SHELF=DZ9ZBK03&DN=SA22-7832-03&DT=20040504121320

see "Section 6.5 Program Interuption" & various things called "translation exception".

slightly older version ... 360/67 (from 1967):
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf

see pg. 17 & two program interrupts, "segment translation" and "page translation".

it is possible to have page translation exception program interrupt if the page invalid bit is on in the page table entry (possibly indicating that the page isn't in memory). it is also possible to have segment translation exception program interrupt if the segment invalid bit is on in the segment table entry (possibly indicating that the page table hasn't been built yet).

when i did table migration in the mid-70s ... i used the segment invalid bit in the segment table entry to indicate that the corresponding page table information wasn't available.

some old email references:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

in cp67, i had gotten the avg total pathlength to take a page-fault interrupt, select a page to be replaced, build & initiate the page fetch I/O operation, perform a task switch, take the page fetch i/o interrupt, clean up the operation and reschedule the original task ... down to around 500 instructions (that includes the prorated cost of doing a page write when a changed page had been selected for replacement ... between 1/3rd and 1/2 of the time).
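
a skeletal sketch of that sequence (invented names, and C rather than the original cp67 assembler), just to make the steps explicit:

#include <stdio.h>
#include <stdbool.h>

struct frame { int page_no; bool changed; };

/* stand-ins for the pieces of the real path */
static struct frame *select_replacement(void)        { static struct frame f = {42, true}; return &f; }
static void schedule_page_write(struct frame *f)      { printf("queue write of changed page %d\n", f->page_no); }
static void start_page_read(struct frame *f, int p)   { (void)f; printf("build & start fetch i/o for page %d\n", p); }
static void task_switch(void)                         { printf("switch to another runnable task\n"); }
static void mark_task_ready(int task)                 { printf("page arrived, task %d rescheduled\n", task); }

/* page-fault (program) interrupt: the page-invalid bit was on in the PTE */
static void page_fault(int task, int page_no)
{
    struct frame *f = select_replacement();  /* pick a frame to steal            */
    (void)task;
    if (f->changed)
        schedule_page_write(f);               /* changed page has to go out first */
    start_page_read(f, page_no);              /* build & initiate the page fetch  */
    task_switch();                            /* run something else meanwhile     */
}

/* page-fetch i/o completion interrupt */
static void page_io_done(int task)
{
    mark_task_ready(task);                    /* clean up and reschedule the task */
}

int main(void)
{
    page_fault(7, 123);
    page_io_done(7);
    return 0;
}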

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Justice Department probing allegations of abuse by IBM in mainframe computer market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Justice Department probing allegations of abuse by IBM in mainframe computer market
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Oct 2009 09:54:48 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
I'd say I'm sure IBM knows what they're doing, but based on what I've heard about how the company makes decisions, I doubt it.

It seems to me that IBM has a lot to gain and not much to lose by encouraging companies to support z/OS on smaller boxes. It's a market they don't sell to, so there are probably very few lost sales. Letting developers have cheaper systems can only encourage developers. Last but not least, letting small customers "buy into" mainframes cheaply will probably encourage them to stick with IBM as they grow.

Probably some suit in mainframe marketing is afraid he might lose one or two sales, and he's not looking at what's good for all of IBM in the long term.


re:
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market

but some of it goes back to the earlier litigation days and clone controllers. somewhat as a result of previous litigation, there was the 23jun69 unbundling announcement, starting to charge for software and services; however the justification was made that kernel software would still be free.
https://www.garlic.com/~lynn/submain.html#unbundle

recent posts with references to Future System effort:
https://www.garlic.com/~lynn/2009o.html#4 Broken Brancher
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#12 Calling ::routines in oorexx 4.0
https://www.garlic.com/~lynn/2009o.html#14 Microprocessors with Definable MIcrocode

this reference talks about major motivation for FS being clone controllers.
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

this reference (from the Ferguson & Morris book)
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??

makes reference to the distraction of FS (which was going to completely replace 360/370) allowing the 370 hardware & software product pipeline to go dry ... which contributed significantly to clone processors gaining a foothold in the market place (and also to the damage of the FS failure resulting in the old culture under the Watsons being replaced with sycophancy and make-no-waves under Opel and Akers).
https://www.garlic.com/~lynn/submain.html#futuresys

With the rise of clone processors, there was a change in the decision to not charge for kernel software ... and my (about to be released) resource manager was selected as the guinea pig ... i got to spend 6 months off & on with business planning people & lawyers working on policies for kernel software charging (this was made more complex during the couple years of transition when there were parts of the kernel that were free and parts that weren't ... with possibly complex dependencies between free and non-free kernel software). Besides the change to charging for kernel software (because of the rise of clone processors), the later OCO (object code only) decision was possibly another outcome.

As to clone controllers ... back as an undergraduate in the 60s ... I had to add ascii/tty terminal support to cp67. I tried to do it in such a way that it extended the "automatic terminal recognition" already in place for 2741 & 1052. It turned out that I tried to make the 2702 controller do something that it couldn't quite do. This was part of the motivation for the univ. to launch a clone controller project ... reverse engineer the channel interface, build a channel interface board for an Interdata/3 and program the Interdata/3 to emulate the 2702. There was a later article blaming four of us for the clone controller business.
https://www.garlic.com/~lynn/submain.html#360pcm

Perkin-Elmer acquired Interdata and the box was sold during much of the 70s & 80s under the Perkin-Elmer name. Even in the later 90s, I ran into the boxes at major financial transaction processor datacenter (that was handling large percentage of the merchant POS card swipe terminals in the US).

as to sycophancy and make no waves ... recent post about bringing down the wrath of the MVS organization
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher

when I first got phone call from POK ... I thot it might be about helping fix the software to handle all the error scenarios (that was resulting in MVS system failures) ... but it turned out to be about who was my management and what made me think I had any right to mention MVS problems.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Justice Department probing allegations of abuse by IBM in mainframe computer market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Justice Department probing allegations of abuse by IBM in mainframe computer market
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Oct 2009 11:40:03 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
With the rise of clone processors, there was change in decision to not charge for kernel software ... and my (about to be released) resource manager was selected for guinea pig ... i got to spend 6 months off & on with business planning people & lawyers working on policies for kernel software charging (this was made more complex during the couple years of transition when there were parts of kernel that were free and parts that weren't free and possibly complex dependency between free and not free kernel software). Besides the change to charging for kernel software (because of rise of clone processors), the later OCO (object code only) decision was possibly another outcome.

re:
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009o.html#31 Justice Department probing allegations of abuse by IBM in mainframe computer market

one of my hobbies was doing distributions of highly enhanced operating systems for internal locations. one of the long-term customers was the HONE system ... providing world-wide online sales&marketing support (by the mid-70s, mainframe orders couldn't even be submitted w/o having been processed by HONE applications)
https://www.garlic.com/~lynn/subtopic.html#hone

so in parallel with the resource manager and a bunch of other stuff ... I was also involved in SMP ... and kernel support for SMP ... a couple recent posts
https://www.garlic.com/~lynn/2009o.html#10 Microprocessors with Definable Microcode
https://www.garlic.com/~lynn/2009o.html#14 Microprocessors with Definable Microcode
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher

a large number of HONE applications were implemented in APL and as a result HONE was quite CPU intensive. One of the first production places for the (standard 370) SMP support was the consolidated US HONE datacenter (part of one of my internal releases). I've commented before ... that in the late 70s, the consolidated US HONE datacenter was a cluster (loosely-coupled) of SMPs ... possibly the largest single-system image operation in the world at the time.

Now, I had crammed a bunch of stuff into the resource manager product ... that wasn't strictly related to dynamic adaptive resource management (in fact nearly 90 percent of the code).

Now one of the issues in starting to charge for kernel software ... was 1) initial kernel software to be charged-for wouldn't involve direct hardware support, 2) kernel software that was directly required to support hardware would still be free, and 3) "free kernel software" couldn't have as a prerequisite "charged-for software", in order to work.

So the way that SMP hardware support was implemented ... required a bunch of stuff that I had already released in the (charged-for) resource manager product ... so when the decision was made to release the SMP support ... there was a problem with requiring the charged-for resource manager in order for SMP support to work (which was a violation of the policies for charged-for software). The resolution was to move 90% of the lines-of-code out of the "charged-for" resource manager ... into the free, non-charged-for kernel software ... allowing the SMP software support to ship w/o having a dependency on charged-for software (the price charged for the "new" resource manager stayed the same ... even tho it was only about 10% of the lines-of-code).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. house decommissions its last mainframe, saves $730,000

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: U.S. house decommissions its last mainframe, saves $730,000
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 10 Oct 2009 16:30:33 -0400
U.S. house decommissions its last mainframe, saves $730,000
http://www.networkworld.com/news/2009/101209-8-ways-the-american-information.html

from above:
The U.S. House of Representatives has taken its last mainframe offline, signaling the end of a computing era in Washington, D.C.

... snip ...

at one point congress and the white house were using them for at least EMAIL (PROFS). some of it possibly dates even back to:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

i was undergraduate in the 60s ... but doing lots of work on cp67 ... even getting requests from the vendor for specific kinds of enhancements. I didn't learn about the above guys until much later ... but in retrospect, some of the change requests could be considered of the kind that such customers would be interested in.

later I got blamed for computer conferencing on the internal network in the late 70s and early 80s (the internal network was larger than the internet from just about the beginning until possibly some late '85 or early '86).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Google Begins Fixing Usenet Archive

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Google Begins Fixing Usenet Archive
Newsgroups: alt.folklore.computers
Date: Sat, 10 Oct 2009 16:41:49 -0400
Google Begins Fixing Usenet Archive
http://www.wired.com/epicenter/2009/10/usenet_fix/

from above:
Google has pulled its Google Groups development team out of the basement broom closet and begun patching up its long-broken Usenet library, in response to our story Wednesday highlighting the company's neglect of the 700 million post archive.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Operation Virtualization

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Operation Virtualization
Newsgroups: alt.folklore.computers
Date: Sat, 10 Oct 2009 18:59:59 -0400
Operation Virtualization; VMware CTO discusses how virtualization changes the way we should think about operating systems.
http://www.forbes.com/2009/10/05/vmware-operating-systems-technology-virtualization-09-herrod.html

from above:
You're talking about a future where the operating system becomes just a file. Does that mean that operating systems are going to be playing a decreasing role in the future?

... snip ...

this is somewhat the virtual appliance story ... or what we used to call service virtual machines.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. students behind in math, science, analysis says

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: U.S. students behind in math, science, analysis says
Newsgroups: alt.folklore.computers
Date: Mon, 12 Oct 2009 10:14:48 -0400
ArarghMail910NOSPAM writes:
A geosynchronous orbit is 22,000 miles (some 35,400 kilometers) out there - I have known that figure for YEARS.

I got suckered into doing some HSDT stuff for SBS (consortium of ibm, aetna and comsat) which included getting involved in how computer communication interfaced to earth stations ... and all the stuff about latency going up to the satellite and back down (a couple hops if going between the west coast and europe), working with vendors building custom equipment to design spec, etc. I got an invitation to the cape launch party for 41-D, which took sbs4 part way up to orbit (it had to be released from the bay and had a rocket boosting it the rest of the way).

one of the vendors even mentioned that a specific large telecommunication company had approached them to build an identical set of earth stations to our specs (industrial espionage ... there have been periodic references to business ethics being an oxymoron).

41-d reference at nasa
http://science.ksc.nasa.gov/shuttle/missions/41-d/mission-41-d.html

misc. past posts mentioning HSDT
https://www.garlic.com/~lynn/subnetwork.html#hsdt

misc. past posts mentioning 41-d:
https://www.garlic.com/~lynn/2000b.html#27 Tysons Corner, Virginia
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004b.html#23 Health care and lies
https://www.garlic.com/~lynn/2005h.html#21 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2006m.html#16 Why I use a Mac, anno 2006
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006v.html#41 Year-end computer bug could ground Shuttle
https://www.garlic.com/~lynn/2007p.html#61 Damn
https://www.garlic.com/~lynn/2008m.html#19 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008m.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2008m.html#44 IBM-MAIN longevity
https://www.garlic.com/~lynn/2009i.html#27 My Vintage Dream PC
https://www.garlic.com/~lynn/2009k.html#76 And, 40 years of IBM midrange

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Young Developers Get Old Mainframers' Jobs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Young Developers Get Old Mainframers' Jobs
Date: 12 Oct, 2009
Blog: Mainframe Experts
Young Developers Get Old Mainframers' Jobs
http://itmanagement.earthweb.com/features/article.php/3842721/Young+Developers+Get+Old+Mainframers’+Jobs.htm

from above:
Last spring one of my co-workers went to college campuses to recruit prospective 'young mainframers.' Young mainframers? Isn't that an oxymoron?

... snip ...

not specifically mainframe ... but ...

It's Sputnik, Stupid!; Is it too late for the U.S. to catch up with other countries in math and science education?
http://www.forbes.com/2009/10/08/science-education-china-technology-cio-network-sputnik.html

from above:
So, where is the U.S. 52 years later? As a society, we have unknowingly eaten most of our Sputnik-era technology seed corn.

...

I and the others in my math and science generation are now retiring, and we have failed to numerically and qualitatively replace ourselves. The deputy director at one of our most prestigious national laboratories told me two years ago that all of his top scientists would retire by 2012 and that he could not find qualified candidates to replace them.


... snip ...

Some of these locations had in the past made a decision to move off mainframes because they had an increasing number of open (mainframe) job positions that they were unable to fill.

A decade ago ... there was the simultaneous internet bubble and Y2K remediation. Some number of people were transferring from mainframe to internet work because it paid a lot more money ... and new people were going straight into internet work for the same reason. Finance and large commercial shops were bidding up mainframers for their Y2K work. Lots of other businesses had to outsource their Y2K remediation work overseas because the domestic resources weren't available. That left a lot of businesses priced out of being able to compete for scarce resources (and deciding that they had to migrate to non-mainframes to take advantage of the skills that were coming out of schools).

The internet bubble burst and Y2K finished ... that left lots of people looking around for jobs (jobs that had gone overseas ... because that was the only solution during the simultaneous internet bubble and Y2K remediation ... and companies that had moved off mainframes because of the lack of a new generation).

There were statistics last year that the "baby boomer" bubble ... besides having been a "math & science" generation ... is four times larger than the previous generation and nearly twice as large as the following generation (that is why it was labeled "baby boomers"). During the height of the "baby boomer" working years they represented an enormous work force and enormous consumers (able to support a retirement population that was much smaller than they were).

With the "baby boomers" moving into retirement the ratio of workers to retirees decreases by a factor of eight times (baby boomers increase retirees by four times because there are four times as many ... and work force is nearly cut in half because the following generation is only half as large).

Aside from the fact that 1) schools weren't producing mainframers in numbers, 2) schools weren't even producing graduates with the general skills necessary to be competitive (science and math) ... and 3) the simultaneous internet bubble & Y2K remediation had forced many jobs overseas (because there weren't enough skills here at home).

Business/society is faced with a lack of basic skills in the domestic population, a lack of mainframe skills in (any) population, as well as a shrinking domestic work force. Besides the general implication of the ratio of workers to retirees falling by a factor of eight ... there can be specific issues, like a (corresponding) drop in the ratio of health care workers to retirees by a factor of eight.

There was another issue dating back to the 80s. I had sponsored John Boyd's briefings in the early 80s. One of the points he made was about the strategy of managing large groups. Going into WW2, the US was faced with very quickly deploying large numbers of people with little or no skills. To leverage the scarce skills that were available, a very rigid, top-down command and control structure was created (with the implicit assumption that the majority of the people didn't know what they were doing). The problem was that going into the 80s, a lot of the young officers that had learned their organization skills in WW2 were starting to permeate the ranks of US corporate executives ... and creating infrastructures with large numbers of people in a rigid, top-down command and control structure.

A year or two ago, there was a report that claimed the ratio of executive compensation to avg. employee compensation had exploded to 400:1 after having been 20:1 for quite a while (and 10:1 in most of the rest of the world). One of the possible explanations was that the top executives still had the WW2 point-of-view ... where they were the rare skills that knew what they were doing and the rest of the organization was unskilled and needed rigid top-down command and control.

misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

The first round of that I remember was when politicians cracked down on the medical conferences ... frequently offshore ... doctors spent 30 minutes in the hotel and the rest of the day on the golf course. That round pretty much eliminated having SHARE meetings offshore ... or any place that had a golf course nearby.

I was sponsoring John Boyd's briefings at IBM in the early 80s ... when he observed the shift in US executive culture as the former young WW2 officers came to dominate the top of corporate america. there were a number of studies about the longer-term consequences of those changes through the 90s and this century (although the failure of the Future System project also had long-term effects on IBM specifically)

Boyd is also credited with being responsible for the battle plan for desert storm, and there were comments that a major problem going into the current conflict was that boyd had died in 1997.

I considered that some of IBM's issues started to unravel with the 23Jun69 "unbundling" announcement ... some past posts
https://www.garlic.com/~lynn/submain.html#unbundle

and starting to charge for (application) software (they made the case that kernel software should still be free) and services (including SE time).

one of the major learning mechanisms was young SEs working as part of the SE group at the customer site ... sort of a "hands-on" apprentice experience. with the unbundling announcement (and starting to charge for SE time at the customer), that went out the window.

in an attempt to compensate, the DPD division deployed a number of (virtual machine) CP67 "HONE" systems to provide (online, remote) operating system experience for SEs from the branch offices.

The science center ... some past posts
https://www.garlic.com/~lynn/subtopic.html#545tech

had originally done the virtual machine CP40 (on a 360/40 with custom hardware modifications), which then morphed into CP67 (when the standard 360/67 with virtual memory hardware became available). The science center also did a port of apl\360 to cms for cms\apl (with lots of enhancements for operating in a large virtual memory environment and interfacing to cms facilities).

the DPD division then also started deploying online sales & marketing applications on HONE ... mostly written in cms\apl ... and shortly those applications came to dominate all use, and the original HONE purpose of SE online training & practice withered away.

pretty soon HONE clones were starting to crop up around the world (I got to do some number of those installations fresh out of college) and by the mid-70s, it wasn't even possible to submit a mainframe order without it first being processed by HONE applications.

a recent ibm-main mailing list post discussing some of unbundling
https://www.garlic.com/~lynn/2009o.html#31 Justice Department probing allegations of abuse by IBM in mainframe computer market

including that the Future System project was also somewhat an outcome of that environment. The above post includes a number of references to the effect of the Future System failure on IBM ... including quotes from the Ferguson and Morris book that studied the effect.

a couple days into desert storm, US News & World Report ran an article on Boyd titled "The Fight to Change How America Fights" (6May1991) ... and mentioned recent crops of cols. & majs. from the war colleges as Boyd's Jedi Knights. If there was an issue of the WW2 rigid, top/down command & control structure of massive unskilled resources starting to permeate corporate america ... it was even more firmly entrenched in the military. only recently have some of Boyd's ideas (& OODA-loops) started to show up in MBA programs.

part of a presentation that I made as an undergraduate at the fall68 SHARE meeting
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

I had been given responsibility for the univ. production system ... which supported both academic & administrative computing. For OS/360 I had increased thruput by a factor of nearly three for the typical student workload (this was before WATFOR came in to save the day). I also got to rewrite lots of the CP67 kernel.

The following summer, I got con'ed into doing a stint at Boeing, helping set up BCS as well as online computing. For a long time, I thought the Renton datacenter was one of the largest around. However, one of Boyd's biographies talks about him doing a year tour in 1970 running "spook base", which was described as a $2.5B windfall for IBM (but still not nearly enough to cover the cost of the Future System effort) ... misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

after graduating and joining the science center ... I had a hobby building, shipping, and supporting a highly enhanced internal operating system product for internal datacenters (including the world-wide sales & marketing HONE systems). Old email referring to migrating from cp67 to vm370 after the science center got a 370/155
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Above mentions asking for some part-time help of two BU co-op students with the activity.
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. house decommissions its last mainframe, saves $730,000

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: U.S. house decommissions its last mainframe, saves $730,000
Date: 12 Oct, 2009
Blog: Mainframe Experts
U.S. house decommissions its last mainframe, saves $730,000
http://www.networkworld.com/news/2009/100909-congress-mainframes.html?hpg1=bn

from above:
The U.S. House of Representatives has taken its last mainframe offline, signaling the end of a computing era in Washington, D.C.

... snip ...

at one point both congress and the white house were using them for at least EMAIL (PROFS). some of it possibly dates back to:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

i was an undergraduate in the 60s ... but doing lots of work on cp67 ... even getting requests from the vendor for specific kinds of enhancements. I didn't learn about the above folks until much later ... but in retrospect, some of the change requests could be considered the kind that such customers might be interested in.

not necessarily washington-location specific ... but a small sample of email addresses from recent mainframe mailing list participants

the ibm-main (any ibm mainframe) mailing list has recent email addresses from freddiemac, epa, nasa, customs.treas, irs, opm, nlm.nih, nsf, ssa, uscourts, usda

the VM (mainframe) specific mailing list still has email addresses from nih, hhs, usda

A decade or so ago ... I did something with NLM at NIH ... which was a mainframe running a custom RYO CICS-like operation from the late 60s.

When I was an undergraduate in the 60s ... the univ. library got an ONR grant for a computerized catalog ... part of the money was used to buy a 2321 (data cell). The project also got selected to be one of the beta-test sites for the original CICS product (in transition from having been developed at a customer site to being offered as a product). I got tasked to support and debug it.

In any case, some number of the things that had been done for NLM ... were similar to stuff I had done as an undergraduate.

Note that in addition to traditional "mainframes" ... UNISYS sold some rebranded Sequent boxes (large i86 SMPs) as a "new" kind of mainframe ... at least until IBM bought Sequent and then stopped offering the Sequent boxes. recent post mentioning Sequent in an ibm-main thread
https://www.garlic.com/~lynn/2009o.html#29

I had gotten blamed for computer conferencing on the internal network in the late 70s and early 80s (all predating PROFS). There was even an article about it in Nov81 Datamation. The internal network was larger than the arpanet/internet from just about the beginning until possibly late '85 or early '86 ... misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

A somewhat related outcome was that an official application and environment was set up for such activity (as opposed to some of the stuff I was doing informally). TOOLSRUN started out as a CMS EXEC2 application that supported options for a mailing-list kind of operation as well as a USENET mode of operation (clients could select which method they wanted).

Later something similar was developed for BITNET (the academic network in the US ... also EARN in europe ... that used similar technology to what was on the internal network) called LISTSERV. Lots of mailing list discussion groups still exist today from those BITNET LISTSERV origins (and mailing list/LISTSERV technologies have been ported to a number of non-mainframe platforms). Misc. past posts mentioning BITNET (&/or EARN)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

Another result was that a researcher was paid to sit in the back of my office and take notes on how I communicated. They also got copies of all my incoming and outgoing email as well as logs of all instant messages. Besides turning into a corporate research report ... it was also material for a Stanford PhD (joint language and computer AI) as well as some number of papers and at least one book. Misc. past posts mentioning computer mediated communication.
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Disaster recovery is dead; long live continuous business operations

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Disaster recovery is dead; long live continuous business operations
Date: 12 Oct, 2009
Blog: Mainframe Experts
Disaster recovery is dead; long live continuous business operations
http://searchcio.techtarget.com/news/article/0,289142,sid182_gci1370616,00.html

from above:
Disaster recovery is dead; long live continuous business operations Despite the fact that IT now has cloud computing and storage area networks and communications systems that can plug in anywhere, many companies still

... snip ...

I coined the terms geographic survivability and disaster survivability when we were marketing our HA/CMP product ... to differentiate from simple disaster/recovery ... some past HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

I was also asked to write a section for the corporation's continuous availability strategy document ... but when both Rochester and POK complained (that they couldn't meet the same objectives), the section was pulled. misc. past posts mentioning continuous availability
https://www.garlic.com/~lynn/submain.html#available

I was also doing cluster scale-up in conjunction with HA/CMP ... old post referencing Jan92 meeting in Ellison's conference room on the subject
https://www.garlic.com/~lynn/95.html#13

over the period of the following 6-8 weeks, the effort was transferred, announced as a supercomputer (for numerical intensive work only), and we were told we couldn't work on anything with more than four processors. some old email from the period
https://www.garlic.com/~lynn/lhwemail.html#medusa

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

The Web browser turns 15: A look back;

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Web browser turns 15: A look back;
Newsgroups: alt.folklore.computers
Date: Tue, 13 Oct 2009 07:36:27 -0400
The Web browser turns 15: A look back; Here is a look back at 15 years of wars, lawsuits, and standards the Web browser has brought us
http://www.infoworld.com/d/applications/web-browser-turns-15-look-back-358

from above:

The Web browser turns 15 on Oct. 13, 2009 -- a key milestone in the history of the Internet. That's when the first commercial Web browser -- eventually called Netscape Navigator -- was released as beta code. While researchers including World Wide Web inventor Tim Berners-Lee and a team at the National Center for Supercomputing Applications created Unix browsers between 1991 and 1994, Netscape Navigator made this small piece of desktop software a household name

... snip ...

recent posts discussing consulting on what is now called "electronic commerce" ... and getting to mandate stuff like multiple A-record support in the webserver-to-payment-gateway connection ... some past posts mentioning "payment gateway"
https://www.garlic.com/~lynn/subnetwork.html#gateway

... but it took another year to get multiple A-record support into the browser (a small sketch of the multiple A-record idea follows the links below):
https://www.garlic.com/~lynn/2009m.html#32 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2009n.html#41 Follow up
https://www.garlic.com/~lynn/2009n.html#43 Status of Arpanet/Internet in 1976?
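
roughly what "multiple A-record support" amounts to ... a client resolves all of the A records published for a host and tries each returned address in turn, rather than giving up when the first one is unreachable. the following is just an illustrative python sketch (hosts and names are made up ... it is not the webserver or browser code of the period):

# Sketch of "multiple A-record" client behavior: try every address the
# DNS returns for a host instead of failing when the first one is down.
# (Illustrative only -- not the actual browser or payment-gateway code.)
import socket

def connect_any(host, port, timeout=5.0):
    last_err = None
    # getaddrinfo returns one entry per address record published for the host
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)       # success: use this replica
            return sock
        except OSError as err:       # unreachable: try the next record
            last_err = err
    raise last_err or OSError("no usable addresses for " + host)

# example (hypothetical host): sock = connect_any("gateway.example.com", 443)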

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. house decommissions its last mainframe, saves $730,000

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: U.S. house decommissions its last mainframe, saves $730,000
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 13 Oct 2009 07:45:27 -0400
jchase@USSCO.COM (Chase, John) writes:
From CONgress' viewpoint: "It's public money. It doesn't belong to anybody, so we *have to* spend it."

My viewpoint: CONgress *is* organized crime.


re:
https://www.garlic.com/~lynn/2009o.html#33 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2009o.html#38 U.S. house decommissions its last mainframe, saves $730,000

slightly related ... recent post
https://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook'

referencing article The Success of Failure:
http://www.govexec.com/management/management-matters/2007/04/the-success-of-failure/24107/

where a culture has grown up among the large system integrators and beltway bandits in which they make more money off failures than they do off successes (i.e. failed projects result in another round of appropriations for additional attempts, producing more profit than any success would).

in the past there have been periodic references to statistics that the institution with the highest percentage of convicted felons was congress

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Outsourcing your Computer Center to IBM ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing your Computer Center to IBM ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 13 Oct 2009 09:12:55 -0400
graeme@ASE.COM.AU (Graeme Gibson) writes:
And before people throw too many stones at IBM ..

Let's say that Air NZ were to switch IT facilities providers, either now, or when the current contract term is up, what's the chance that they will do any better next time? I'd suspect that the different facilities providers, like IBMs, EDS/HP, CSC et al, at some level are themselves using common suppliers for things like, oh let's say, diesel powered generator sets, airconditioning, telecommunications, electricians, building security, plumbers and, dare I say it, IT contractors, systems programmers and so forth. So, choose whichever overarching supplier you will, underneath they're likely to have at least some exposures in common, especially in a small-ish community like New Zealand.


a decade or so ago ... one of the offspring had a college job working for an air freight forwarder ... and had access to a major res system for scheduling freight in planes (those containers that go into the belly of the plane). one of the issues was that they still took down the res system a couple times a month ... usually sunday nights (to do things like rebuild databases) ... but sometimes it was still offline monday morning.

we got invited in to one of the major res. systems to look at rewriting parts of it. the initial look was at "routes" (finding flts to get from origin to destination) ... and they had ten major things that they wanted to do (but couldn't). A couple months later, I came back with an implementation that ran 100 times faster for the things they currently did ... and also did all ten impossible things (so overall it was only about ten times faster).

then the hand-wringing started. It turned out that many of the things they couldn't do were because they had possibly 400 people involved in manual processes (like database rebuilding). changing the paradigm to do all ten impossible things eliminated those manual processes ... and the jobs of those 400 people.

part of the paradigm change came from having done work on chip design physical layout ... so handling the slightly over 4000 airports in the world ... and the something less than 500,000 unique flt segments (i.e. take-off/landings) from the full OAG (all airlines in the world) ... was fairly straight-forward. a small route-finding sketch follows.
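
for a rough sense of why that scale is manageable ... a sketch with made-up segment data (nothing to do with the actual implementation or OAG data): treat airports as graph nodes and flt segments as directed edges, and candidate routes fall out of an ordinary breadth-first search over a graph that fits comfortably in memory.

# Sketch: airports as graph nodes, flight segments as edges, routes found
# with a breadth-first search (fewest segments first). Segment data is
# illustrative only.
from collections import defaultdict, deque

segments = [("SFO", "ORD"), ("ORD", "JFK"), ("SFO", "DEN"),
            ("DEN", "JFK"), ("JFK", "LHR")]

graph = defaultdict(list)
for origin, dest in segments:
    graph[origin].append(dest)

def routes(origin, dest, max_legs=3):
    """Return all routes from origin to dest using at most max_legs segments."""
    found, queue = [], deque([[origin]])
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            found.append(path)
            continue
        if len(path) <= max_legs:          # still allowed to add a segment
            for nxt in graph[path[-1]]:
                if nxt not in path:        # avoid revisiting an airport
                    queue.append(path + [nxt])
    return found

print(routes("SFO", "LHR"))   # two three-segment routes via ORD and via DEN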

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Outsourcing your Computer Center to IBM ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing your Computer Center to IBM ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 13 Oct 2009 09:34:10 -0400
jmfbahciv <jmfbahciv@aol> writes:
I'd like to whack those wringing hands with a baseball bat. The implementation would free up 400 people's time to do other things that will produce more income. 400 people working on Sunday in a union shop is, what?, triple overtime to TECO data which will be done wrong most of the time.

re:
https://www.garlic.com/~lynn/2009o.html#42 Outsourcing your Computer Center to IBM ?

at the time, TPF (PARS/ACP) programmers were going for nearly a quarter mil (a scarce, highly specific skill).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Outsourcing your Computer Center to IBM ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing your Computer Center to IBM ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 13 Oct 2009 10:26:11 -0400
jmfbahciv <jmfbahciv@aol> writes:
I'd like to whack those wringing hands with a baseball bat. The implementation would free up 400 people's time to do other things that will produce more income. 400 people working on Sunday in a union shop is, what?, triple overtime to TECO data which will be done wrong most of the time.

re:
https://www.garlic.com/~lynn/2009o.html#42 Outsourcing your Computer Center to IBM ?
https://www.garlic.com/~lynn/2009o.html#43 Outsourcing your Computer Center to IBM ?

there were actually closer to a total of 800 people involved ... but I couldn't be sure whether all 800 would have been made obsolete by automating nearly all of the manual tasks.

one of the reasons that they could afford to pay the TPF programmers so much was that the business was structured in such a way (which i still don't quite understand) that the res. system made a larger gross profit than flying the planes ... the res. system could even turn a sizable profit when flying the planes was losing money.

stretching things a bit ... there is some analogy with the current financial crisis ... a large part of it because the people involved could get paid a percentage of the size of the deal ... regardless of whether there was a profit or a loss ... so they could push for the largest, riskiest deals ... since that maximized their compensation ... theoretically with no downside ... for instance there seems to be some uproar over the size of bonuses at goldman this year ... when they theoretically have profits ... but the bonuses seem to be about the same as last year when they lost money.

a report from last jan. was that the (direct) $10B tarp funds to goldman were a little less than the compensation they paid out (not including the tens of billions that went to AIG ... a large part of which was then turned over to goldman).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

The Web browser turns 15: A look back;

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Web browser turns 15: A look back;
Newsgroups: alt.folklore.computers
Date: Tue, 13 Oct 2009 17:09:38 -0400
Stan Barr <plan.b@dsl.pipex.com> writes:
[1] I've just had a look, I've still got Mosaic v1.13, Copyright 27 Jan 1994, on my later PPC Mac, and it still works (mostly...).

re:
https://www.garlic.com/~lynn/2009o.html#40 The Web browser turns 15: A look back;

old post about digging out early netscape distribution and reinstalling and running it
https://www.garlic.com/~lynn/2005k.html#7 Firefox Lite, Mozilla Lite, Thunderbird Lite -- where to find

from above ...
389181 Oct 30 1994 nscape09.zip

...

the above post also has the "internetMCI" announcement from Nov. 1994. MCI appeared to have actually funded much of the "commerce server" work at Netscape ... in the initial scenario it was a "mall" paradigm ... suitable for hosting companies (like MCI) ... and then later a single-store version.

we were brought in as consultants on being able to do payment transactions on the servers. some recent posts mentioning working on what is now frequently called "electronic commerce":
https://www.garlic.com/~lynn/2009n.html#6 OSS's Simple Sabotage Field Manual
https://www.garlic.com/~lynn/2009n.html#8 Malware lingers months on infected PCs
https://www.garlic.com/~lynn/2009n.html#36 The Compliance Spectrum...Reducing PCI DSS Scope
https://www.garlic.com/~lynn/2009n.html#41 Follow up
https://www.garlic.com/~lynn/2009n.html#43 Status of Arpanet/Internet in 1976?
https://www.garlic.com/~lynn/2009o.html#3 Sophisticated cybercrooks cracking bank security efforts
https://www.garlic.com/~lynn/2009o.html#22 Rogue PayPal SSL Certificate Available in the Wild - IE, Safari and Chrome users beware

i've also still got mosaic distributions from '94 including
mosaic-indy.gz        7/16/1994
Mosaic-2.5b2-indy.gz  12/27/1994
wmos20a5.zip          7/21/1994
wmos20a7.zip          10/30/1994

there was a new president of MIPS ... (by that time an SGI subsidiary) and they gave the executives personal indys ... I offered to order and configure the machine (we had previously reported to the "new president" when he worked for a different company in austin) ... and never actually had to give it up.

from README.Mosaic:
How To Download And Run NCSA Mosaic
-----------------------------------

NCSA Mosaic is an Internet-based global hypermedia browser, available free for academic, research, and internal commercial use.

If at any time you have questions or problems with NCSA Mosaic, please feel free to send electronic mail to:

mosaic-x@ncsa.uiuc.edu

The NCSA Mosaic anonymous FTP distribution site is ftp.ncsa.uiuc.edu. Program files are in directory /Mosaic.

Executable Binaries
...................

The easiest way to download Mosaic is to retrieve an executable binary from subdirectory Mosaic-binaries. The following binaries are distributed:

Mosaic-sun.gz          Sun 4, SunOS 4.1.x
Mosaic-sun-lresolv.gz  Sun 4, SunOS 4.1.x, no DNS
Mosaic-sgi.gz          Silicon Graphics, IRIX 4.x.
Mosaic-indy.gz         Silicon Graphics, IRIX 5.x.
Mosaic-ibm.gz          IBM RS/6000, AIX 3.2.
Mosaic-dec.gz          DEC MIPS Ultrix.
Mosaic-alpha.gz        DEC Alpha AXP, OSF/1.
Mosaic-hp700.gz        HP 9000/700, HP/UX 9.x

To download a binary, put your FTP session into binary mode (type 'binary', without the quotes), pull down the file, quit the FTP session, uncompress the binary (type, e.g., 'gunzip Mosaic-sun.gz' without the quotes), make the binary executable (type, e.g., 'chmod 755 Mosaic-sun'), and execute the binary.


... snip ...

from Mosaic-Security-Issues:
MOSAIC SECURITY ISSUES AND RESPONSES

1. Mosaic 2.2, and all previous version of the NCSA Mosaic for the X Windowing System have a serious security hole that allows telnet URLs to execute an arbitrary UNIX command. The immediate action was to inform people Mosaic 2.3 this bug has been fixed, for more information read about the details of the telnet URL problem.

2. There was once a concern with Mosaic using ghostview as a postscript viewer, because postscript can be insecure. The new version of ghostscript (Version 2.6.1) used by ghostview runs in _secure mode_ by default, so this is no longer an issue.

3. There is a way (involving reconfiguration of both client and server) to have Mosaic execute any arbitrary shell script that is passed over the network. This is a documented feature that cannot be activated accidentally, you should read about Executing Shell Scripts in Mosaic before activating this feature.

_THAT IS ALL!_ If there are any other security problems that any of you know of, _PLEASE MAIL US!_ If you post security concerns to the net, please be kind enough to be specific. Vague alarmist postings just make more busy work for us.

Eric Bina (ebina@ncsa.uiuc.edu)


... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. begins inquiry of IBM in mainframe market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: U.S. begins inquiry of IBM in mainframe market
Date: 13 Oct, 2009
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009o.html#31 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009o.html#32 Justice Department probing allegations of abuse by IBM in mainframe computer market

vm370 release 5 kernel had "free" base & HPO (a charged-for add-on).

vm/370 release 6 kernel had free base (being distributed and run with hercules) and "BSEPP" (low-end & mid-range "add-on") & "SEPP" (high-end more expensive add-on) charged/licensed portions.

VM/370 R6 reference ...
http://www.cbttape.org/vm6.htm

similarly MVS 3.8j reference ...
http://www.cbttape.org/mvs38.htm

going back even further OS/360 MVT
http://www.cbttape.org/os360.htm

VM/370 release 7 kernel was renamed VM/SP1 and the whole thing was charged for (licensed).

My resource manager was the guinea pig for charging/licensing for kernel code and initially shipped on vm/370 release 3. I had to spend six months off & on with the business people and the lawyers on policies for charging/licensing kernel code.

Note, besides the strict resource manager part ... I included a bunch of other code as part of the package.

One of the policies was that "free" kernel code couldn't have dependency on priced/licensed code. And at least during the transition, kernel code directly involved in hardware support would be free.

However, internal VM370 multiprocessor SMP support was done in such a way that it was dependent on all the "extra" stuff that I had included in the "resource manager". When it was decided to release the SMP support in VM370 release 4, there was a big problem ... the new multiprocessor SMP support was direct hardware support ... and therefore part of the "free" base. However, it was dependent on a bunch of code in a charged-for add-on.

Eventually the problem was resolved by moving about 90% of the code in my "charged-for" resource manager into the "free" base ... allowing multiprocessor SMP support to be released w/o requiring the charged-for addon (oh, and even tho 90% of the code from the resource manager was moved into the free kernel base, the resource manager price stayed the same).

I had originally done the SMP design starting in Jan '75 for a 5-way multiprocessor 370 ... where I was able to move a lot of kernel code into the microcode (including major pieces of multiprocessor support and dispatching ... a little like the later I432). That project got killed w/o ever being announced ... so i adapted the design to standard 370 multiprocessor hardware ... moving function back from microcode into standard 370 instructions.

misc. past posts mentioning the 5-way smp project from Jan '75.
https://www.garlic.com/~lynn/submain.html#bounce

about the same time ... spring of '75 ... endicott also con'ed me into working on the ECPS microcode assist (originally for virgil/tully ... i.e. 138/148) ... old post with some of the original kernel studies deciding on what to move into microcode:
https://www.garlic.com/~lynn/94.html#21

misc. past posts about the dynamic adaptive resource manager ... dating back to when I did the earlier version as an undergraduate in the 60s ... with the original version on cp67 (it was frequently referred to as the "fair share" scheduler because of the default adaptive resource policy).
https://www.garlic.com/~lynn/submain.html#fairshare

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. begins inquiry of IBM in mainframe market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: U.S. begins inquiry of IBM in mainframe market
Date: 14 Oct, 2009
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market

There was another slightly related thread between the resource manager and Amdahl. Amdahl had been selling into the education and science market ... but hadn't broken into any "big blue" accounts.

One really "big blue" account then told IBM that they were going to order Amdahl because something that the ibm branch manager had done to extremely offend them. I was asked to go on site for six months to obfuscate the issue why the customer was going to order Amdahl (obfuscate the circumstances and try and make it appear like technical issues were involved rather than bad relations between the customer and the branch manager).

I was well familiar with the customer from dealings at share meetings and several on-site visits (and knew that the customer was determined to order an Amdahl machine) ... nothing I could do was going to stop the order .... the Amdahl machine was going to look lonely in the huge datacenter otherwise filled with a large number of IBM machines ... maybe not quite as big as Renton or spook base ... but still large ... reference to a related thread in (linkedin) mainframe experts ... also here
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs

I declined to take the 6-month onsite offer ... since I knew that it was going to have no effect on the outcome. I was then told that if I didn't take this hit on behalf of the branch manager (obfuscating why the customer was ordering Amdahl), it would be the end of my career ... since the branch manager was a personal friend of the CEO.

After moving out to the west coast ... I would then regularly see several people from Amdahl ... if nothing else at regular monthly user group meetings. I even tried to help mediate between the unix group (GOLD) and the "RASP" group (ASPEN?)

misc. past posts mentioning GOLD &/or ASPEN
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)

With regard to no longer having a career, I had sponsored Boyd's briefings at IBM in the 80s .... recent reference
https://www.garlic.com/~lynn/2009o.html#38

Boyd is credited with the battle plan for desert storm ... there have been comments that one of the problems in the current conflicts is that Boyd had died in 1997. However, he (also) had managed to spend much of his career offending higher-ups. some URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

In any case, this old post
https://www.garlic.com/~lynn/2000e.html#35 War, Chaos, & Business (web site), or Col John Boyd

is supposedly a Boyd quote
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

... snip ...

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Wed, 14 Oct 2009 09:40:13 -0400
MSNBC just had an interview with the head of the chamber of commerce; the interviewer repeated most of the issues and asked the head of the chamber of commerce something about lobbying for stealing money ... the lobbying paid for by gov. bail-out money ... the chamber of commerce had recently received $20m from AIG ... and was lobbying against regulating over-the-counter derivatives (selling insurance w/o maintaining reserves to cover payouts, and relying on the gov. for bailouts). the head of the chamber of commerce eventually replied that all of these companies would eventually repay the money. the interviewer then made the statement that anybody could make a large amount of money if they used gov. funds to continue doing ever riskier deals ... w/o regard to consequences ... making huge amounts of money on the risky deals (and if things did go wrong again ... they could depend on the gov. to bail them out again).

part of this was the 1999 bank modernization act (repealing glass-steagall) and the 2000 commodity futures modernization act (precluding over-the-counter derivatives from being regulated)

now the commentary is about Paulson's (free) give-away to bail out the institutions for their bad behavior ("largest theft of money ever").

misc. past posts in this thread:
https://www.garlic.com/~lynn/2009n.html#47 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#49 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#56 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#58 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#62 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009n.html#68 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#21 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#23 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#24 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook'

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Wed, 14 Oct 2009 09:48:13 -0400
re:
https://www.garlic.com/~lynn/2009o.html#48 Opinions on the 'Unix Haters' Handbook'.

then somebody claimed that financial industry lobbying is one of the best investments ever ... for every dollar spent lobbying (which has run to billions), they have shown a return (courtesy of the gov) of roughly a quarter of a million dollars (something like a 259,000:1 return on investment).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

WSJ.com The Fallacy of Identity Theft

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: WSJ.com The Fallacy of Identity Theft
Date: 14 Oct, 2009
Blog: Financial Crime Risk, Fraud and Security
WSJ.com The Fallacy of Identity Theft
http://online.wsj.com/article/SB125537784669480983.html

we had gotten involved in doing a taxonomy of "identity theft" ... part of it was doing a merged taxonomy & glossary of various subjects: payments, security, financial, etc ... some refs
https://www.garlic.com/~lynn/index.html#glosnote

a major "sub-category" of "Identity theft" has been "account fraud" ... which is frequently just knowing the account number is sufficient for performing fraudulent transactions. Much lower rate of fraud (because it requires more work) ... has been opening new accounts using somebody else's identity (which also gets into some of the gov. "know your customer" mandates)

One scenario has been that if "account fraud" were made more difficult (it is effectively the current low-hanging fruit) ... there might be a big shift to fraudulently opening new accounts. Now some of the fraud related to opening new accounts is associated with real people ... but there is a growing amount of "synthetic ID" fraud ... opening accounts associated with identities that have no corresponding real people.

Many of the efforts related to "identity theft" have been to categorize different kinds of personally identifiable information (PII ... I was one of the co-authors of the financial industry x9.99 privacy standard) as to the amount of information hiding required (aka encryption or other mechanisms).

One of the alternatives we pursued was to categorize PII as to what the threat was ... and then look at possible countermeasures to the threats (which weren't necessarily restricted to just hiding the information).

An example was the X9.59 financial standard work from nearly 15yrs ago. We had been brought in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had also invented this technology called SSL they wanted to use. The result is now frequently called "electronic commerce". Somewhat as a result, we were invited to participate in the X9A10 financial standard working group, which had been given the requirement to preserve the integrity of the financial infrastructure for *ALL* retail payments. After looking at detailed end-to-end threat and vulnerability studies for various payment methods ... we came up with the X9.59 financial transaction standard. Instead of depending on hiding the account number (&/or other card data), the X9.59 financial standard slightly tweaks the paradigm and provides end-to-end strong integrity and authentication between the consumer and the consumer's financial institution ... w/o requiring any transaction or account data to be hidden or encrypted.

We were also tangentially involved in the original Cal. state data breach notification legislation. We had been asked to help word-smith the Cal. state electronic signature legislation, and some of the institutions involved were also heavily involved in privacy issues. They had done in-depth consumer surveys and found that the #1 privacy issue was "identity theft" ... primarily the "account fraud" kind (using skimmed, harvested, or otherwise collected account/transaction information to perform fraudulent transactions). A major source of the account/transaction information was data breaches, and there seemed to be little or nothing being done about them. It appeared that the institutions (responsible for the data breach notification legislation) felt that the publicity from breach notification would help motivate countermeasures.

this is getting lots of play recently

Identifying ID Theft And Fraud
http://www.sciencedaily.com/releases/2009/10/091014102201.htm

similar articles ...

FBI Director Nearly Hooked in Phishing Scam, Swears Off Online Banking
http://www.eweek.com/c/a/Security/FBI-Director-Nearly-Hooked-in-Phishing-Scam-Swears-Off-Online-Banking-616671/
FBI Chief Almost 'Phished' by Dangerous Teen Hackers - I expected more from the chief of the FBI
http://gadgets.softpedia.com/news/FBI-Chief-Almost-039-Phished-039-by-Dangerous-Teen-Hackers-5634-01.html
Wife bans FBI boss from banking online
http://www.finextra.com/fullstory.asp?id=20588
Security Fix - Phishing Scam Spooked FBI Director Off E-Banking
http://voices.washingtonpost.com/securityfix/2009/10/fbi_director_on_internet_banki.html?wprss=securityfix
FBI boss told by wife not to bank online
http://www.computerweekly.com/Articles/2009/10/12/238088/fbi-boss-told-by-wife-not-to-bank-online.htm

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

8 ways the American information worker remains a Luddite

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: 8 ways the American information worker remains a Luddite
Date: 14 Oct, 2009
Blog: Greater IBM
8 ways the American information worker remains a Luddite
http://www.computerworld.com/s/article/9139259/8_ways_the_American_information_worker_remains_a_Luddite

One of the hardest problems when we started the online phone book effort was getting past the site security officers.

We were sitting around at the regular friday after work, and Jim and I got into a discussion about what might get management and executives interested in using online computing, and hit on the online phone book (it would benefit the corporation if more employees, especially management, actually had familiarity using computers). Reference to celebrating Jim last year
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

We set down some guidelines (we had been at friday after work a few hrs) that the effort would take no more than one week of each of our time. A basic requirement was that it had to be fast enough that the answer would be on the 3270 screen faster than a manager could reach for the paper phone book and look a number up.

In any case, we started collecting softcopy of the information from each plant site and generating procedures to convert each location's softcopy to a standard online phonebook format. We ran into objections from several security officers at various plant sites. One position was that while the internal paper phone book was "internal use only" ... if we were to put the information online ... it should then be treated as "IBM Confidential".

there is also an audio recording of the above celebration ... URLs are mentioned in this post
https://www.garlic.com/~lynn/2008p.html#27 Father of Financial Dataprocessing

one of the panelists from tandem got off into talking about Jim doing an online telephone book at tandem. I then got up and talked about Jim having worked on an online telephone book earlier ... before leaving for Tandem.

for other topic drift ... in this old post
https://www.garlic.com/~lynn/2007.html#1

old email with references to Jim attempting to palm off stuff on to me
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Revisiting CHARACTER and BUSINESS ETHICS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Revisiting CHARACTER and BUSINESS ETHICS
Date: 14 Oct, 2009
Blog: Greater IBM
People would periodically remind me that business ethics is an oxymoron.

One of my hobbies during the 70s and part of the 80s was building, distributing, and supporting highly enhanced operating systems for internal locations. A major customer was the world-wide online sales & marketing support HONE system (early in my career I got overseas trips installing HONE systems as they were being cloned around the world).

Until the consolidation of all US HONE datacenters into a single location, Wilshire was the major HONE location that I would regularly visit. After consolidation, there were lots of system enhancements for scale-up ... multiple multiprocessors in a loosely-coupled single-system image (I've periodically claimed it was the largest such in the world at the time). Then, because of earthquake and natural disaster concerns ... US HONE was replicated in Dallas with load-balancing and fall-over between the two locations ... and then a 3rd location was added to the continuous availability operation in Boulder.

Lots of past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

slightly related thread/post ... archived here:
https://www.garlic.com/~lynn/2009o.html#46
https://www.garlic.com/~lynn/2009o.html#47

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

E-Banking on a Locked Down (Non-Microsoft) PC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: E-Banking on a Locked Down (Non-Microsoft) PC
Date: 15 Oct, 2009
Blog: Financial Crime Risk, Fraud and Security
E-Banking on a Locked Down (Non-Microsoft) PC
http://voices.washingtonpost.com/securityfix/2009/10/e-banking_on_a_locked_down_non.html

It can be a little difficult using one of the tax preparation programs w/o having a m'soft machine.

When we were doing the original thing that is now called "electronic commerce" ... there was also this thing called "payment gateway" ... that handled transactions between webservers and the payment network ... we did a bunch of compensating procedures (for the way the internet operated). misc. past posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

Possibly one of the reasons we were called in to the project ... was that two of the people in this jan92 meeting
https://www.garlic.com/~lynn/95.html#13
on high availability scale-up
https://www.garlic.com/~lynn/subtopic.html#hacmp

had moved on and were at the small client/server startup responsible for something called "commerce server" (the startup had also invented this thing called "SSL" they wanted to use).

In any case, part of deploying the payment gateway ... was multiple machines in a high availability configuration ... with multiple links into different parts of the internet backbone.

On the links ... we had packet filtering routers ... locked down to allow only payment transactions through the link ... and home-grown "firewalls". These were some old, surplus SUN "pancakes" with SUN/OS configured in such a way that the system was built and run off a read-only CDROM ... and the R/W hard disk was used just for the page file.

For the past couple of years ... an analogous solution has been floated for desktop virtualization ... instead of having live "R/O" media ... where compromises are limited to (transient) in-memory state ... a fresh virtual browser environment is generated ... which then dissolves when done (along with any compromises).

Of course virtualization can be a two-edged sword ... there have been security proposals for unique hardware ... that can be carried and used even in insecure environments (like internet cafes). If the unique hardware is identified by static values ... an attacker can leverage virtualization to emulate those static (unique) hardware values .... undermining any perceived security.

This was demonstrated decades ago when CPUID (or some other processor-unique characteristic) was being used to determine whether licensed software was being run only on the processor that the software was licensed for.
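
a minimal sketch of why such a check fails under emulation (all names are hypothetical ... this is not any particular licensing scheme): if the check just compares a value read from the "hardware" against the licensed value, an emulator or hypervisor that is free to report any value it likes defeats it.

# Sketch: a license check keyed to a static "hardware id" gives no real
# assurance once the hardware can be emulated. Names are hypothetical.
LICENSED_CPU_ID = "0017-ABCDE"   # id the software was licensed for

def read_cpu_id():
    """Stand-in for reading a processor-unique value (e.g. CPUID/serial)."""
    return "0042-ZZZZZ"          # what the real machine reports

def read_cpu_id_under_emulation():
    """A hypervisor/emulator can return whatever the guest expects to see."""
    return LICENSED_CPU_ID       # spoofed to match the licensed value

def license_ok(reader):
    return reader() == LICENSED_CPU_ID

print(license_ok(read_cpu_id))                  # False on the "wrong" machine
print(license_ok(read_cpu_id_under_emulation))  # True -- the check is defeated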

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Should SSL be enabled on every website?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Should SSL be enabled on every website?
Date: 15 Oct, 2009
Blog: Information Security
Should SSL be enabled on every website?
http://www.infosecisland.com/blogview/1449-Should-SSL-be-enabled-on-every-website.html

We were called in to consult with a small client/server startup that wanted to do payment transactions on their server ... the startup had invented this technology called SSL they wanted to use. As part of that effort we had to do an end-to-end look at the process ... including walkthrus of the business and security processes of these new things calling themselves Certification Authorities and manufacturing these things called "digital certificates".

Part of SSL is verifying whether or not the webserver that the user thinks they are talking to ... is the webserver they are really talking to. There is an implicit assumption that the user understands the relationship between the webserver the user thinks they are talking to and the URL the user provides to the browser. Then the browser uses SSL to assure the relationship between the user-provided URL and the webserver being talked to.

For this to work, 1) the user is required to understand the relationship between the URL that the user provides and the webserver the user thinks they are talking to ... and 2) the browser uses SSL to assure the relationship between the user-provided URL and the webserver being talked to.
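
a minimal sketch of the browser's half of that check, using python's ssl module as a stand-in for browser behavior (the host name is an assumption): all the machinery verifies is that the certificate presented by the server is valid for the host name taken from the user-supplied URL ... it says nothing about whether that URL is the site the user actually intended.

# Sketch: SSL/TLS can only tie the server's certificate to the host name
# taken from the URL -- it cannot tell whether that URL is the site the
# user actually meant. (Python ssl as a stand-in for browser behavior.)
import socket, ssl

def check_server_matches_url(url_host, port=443):
    ctx = ssl.create_default_context()        # verifies cert chain + hostname
    with socket.create_connection((url_host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=url_host) as tls:
            # handshake succeeded: the certificate is valid *for url_host*
            return tls.getpeercert()["subject"]

# example (assumed host): check_server_matches_url("www.example.com")
# if the user was handed the wrong URL to begin with, this check still passes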

This was almost immediately invalidated when merchants found that SSL cut their throughput by 85-95% ... and they dropped back to just using SSL for the payment transaction ... where the user clicks on a "button" provided by the unvalidated (and possibly fraudulent) website ... which then supplies the (SSL) URL to the browser.

Instead of the process verifying whether the webserver that the user thinks they are talking to is the webserver that they are really talking to ... it simply verifies that the webserver is the webserver it claims to be (i.e. there is no user involvement).

In effect, SSL is now simply being used to hide the information (payment transaction) being transmitted.

Somewhat as a result of having done "electronic commerce" ... in the mid-90s we were invited to participate in the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments (ALL, point-of-sale, attended, unattended, transit turnstyle, face-to-face, internet, debit, credit, ach, stored-value, high-value, low-value, ALL).

In the x9a10 financial standard working group ... there were detailed end-to-end threat & vulnerability studies of the various environments. This resulted in the x9.59 financial transaction standard. One of the things done in x9.59 was to slightly tweak the paradigm so that it is no longer necessary to hide (encrypt) the transaction to preserve the integrity of the financial infrastructure.
https://www.garlic.com/~lynn/x959.html#x959
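
a rough sketch of the paradigm tweak ... an HMAC from the python standard library is used here purely as a stand-in for x9.59's actual (digital signature based) authentication: the transaction travels in the clear, but tampering ... or reusing a harvested account number without the authentication key ... is detectable by the consumer's financial institution.

# Sketch of "integrity without hiding": the transaction is sent in the
# clear but carries an authentication code the consumer's bank can verify.
# HMAC here is only a stand-in for x9.59's signature-based authentication.
import hmac, hashlib

consumer_key = b"key registered with the consumer's financial institution"

def authenticate(transaction: str) -> str:
    return hmac.new(consumer_key, transaction.encode(), hashlib.sha256).hexdigest()

def bank_verifies(transaction: str, auth_code: str) -> bool:
    return hmac.compare_digest(authenticate(transaction), auth_code)

txn = "acct=4111111111111111;amount=29.95;merchant=example"   # not hidden
code = authenticate(txn)

print(bank_verifies(txn, code))                           # True: accepted
print(bank_verifies(txn.replace("29.95", "2900"), code))  # False: tampered
# knowing the account number alone is no longer enough to create a valid txn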

Now the major use of SSL in the world today ... is this earlier thing we did, frequently now called "electronic commerce", to hide the transaction information. With x9.59, it is no longer necessary to hide the transaction information ... which eliminates the major use of SSL in the world today.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

TV Big Bang 10/12/09

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TV Big Bang 10/12/09
Newsgroups: alt.folklore.computers
Date: Fri, 16 Oct 2009 08:27:20 -0400
William Hamblen <william.hamblen@earthlink.net> writes:
Better hurry on Kodachrome. The lab is closing.

I recently was in the studio of a commercial photographer who specialized in furniture ads. They'd stopped using film long ago. The demise of film in advertising was gradual. It started with publishers scanning images to make printing plates instead of using graphics arts cameras, filters and halftone screens to make printing plates. Once big digital sensors arrived film was on the outs.


I got pulled in during the early days of what was then the "berkeley 10m" telescope ... later they got funding from the keck foundation and it was renamed the keck 10m ... there are now two
https://en.wikipedia.org/wiki/W._M._Keck_Observatory

recent posts
https://www.garlic.com/~lynn/2009m.html#82 ATMs by the Numbers
https://www.garlic.com/~lynn/2009m.html#85 ATMs by the Numbers

there was some prototyping and testing at Lick, and I got some tours behind the scenes
http://mtham.ucolick.org/index.nonjs.html

they were planning on moving from film to CCDs ... the justification being that a CCD was 30-60 times more sensitive to photons than film. The downside was that CCDs were not (yet) very big ... at the time, they were testing with a 200x200 (40,000 cell) sensor, and the cells could be variable from moment to moment. The procedure was to take an "all white reference" reading for 30 seconds prior to taking the actual image ... to calibrate each cell's reading at that particular moment. At the time there were rumors that there might be a 2k-x-3k (6mpixel) sensor in existence somewhere in the motion picture industry (the minimum needed to equal 35mm film)
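
the "all white reference" step is essentially what is now called flat-field calibration ... a minimal numpy sketch of the general idea (made-up numbers, not the telescope's actual procedure): divide the raw image by the normalized reference frame so the per-cell sensitivity variations cancel out.

# Sketch of flat-field calibration: divide the raw image by the normalized
# "all white reference" frame so per-cell sensitivity differences cancel.
# (General idea only -- not the telescope's actual procedure or values.)
import numpy as np

rng = np.random.default_rng(0)

true_scene = rng.uniform(100, 200, size=(200, 200))   # what the sky "is"
cell_gain  = rng.uniform(0.7, 1.3, size=(200, 200))   # per-cell variation

flat = cell_gain * 1000.0        # uniform ("all white") reference exposure
raw  = cell_gain * true_scene    # actual image, distorted by cell gains

corrected = raw / (flat / flat.mean())   # normalize the flat, divide it out

# corrected now differs from true_scene only by a single overall scale factor
print(np.allclose(corrected, true_scene * (flat.mean() / 1000.0)))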

there seemed to be all sorts of politics with funding ... there were statements that they could have gotten NSF funding for the whole thing ... but if they took NSF funding ... then they would lose control of the observing schedule ... and NSF would dictate who/when/what use of the observatory.

of course now, you can get 10-12mpixel cameras for a couple hundred dollars.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Fri, 16 Oct 2009 09:29:08 -0400
re:
https://www.garlic.com/~lynn/2009o.html#48 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#49 Opinions on the 'Unix Haters' Handbook'

msnbc is on a roll.

yesterday afternoon they had the NY attorney general, who made the comment that the US chamber of commerce has been wrong on every major issue for (at least?) the past decade.

this morning they were going on about a draft bill passed yesterday that was to fix the commodity futures modernization act (which precluded regulating over-the-counter derivatives, resulting in Enron and AIG and other bad things)

They made a big deal that amendments to the (new) bill actually make things worse ... supposedly the bill creates an open (regulated) exchange for such trades ... but also gives the big financial institutions an exemption to decide whether they want to use the exchange or not (making it another emperor's new clothes scenario?) ... various statements that the dept. of treasury should be taken to task for (also) backing the amendments (for violating obama's campaign promises).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. begins inquiry of IBM in mainframe market

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: U.S. begins inquiry of IBM in mainframe market
Date: 14 Oct, 2009
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009o.html#47 U.S. begins inquiry of IBM in mainframe market

several years after the incident involving the large "big blue" customer ordering an Amdahl machine (the first such Amdahl order from a large commercial "true blue" customer) and no longer having a career ... I wrote an "open door" about my salary, including some amount of supporting documentation. A few weeks later, I got a written response from HR stating that a detailed review of my whole career had been done and I was making exactly what I was supposed to be making.

I then took a copy of my original "open door" and the HR response, and wrote a cover letter pointing out that I had recently been asked to interview new hires for a new group that would be working under my technical direction ... and the starting salary offered to new hires by HR was 1/3rd more than what I was currently making. Not only had my salary over my whole career not kept pace with inflation, it hadn't even kept pace with starting salaries for new hires. This time, I didn't get a written response, but a couple weeks later, I got a 1/3rd raise (putting me even with the starting salaries offered the new hires ... whom I was interviewing to work under my direction).

I made inquiries about the HR written response ... it was one of the times that somebody told/reminded me that business ethics is an oxymoron. There were also comments that the best I could hope for was to not be fired and to be allowed to do it again.

slightly related recent threads (in this group)
https://www.garlic.com/~lynn/2009o.html#51 8 ways the American information worker remains a Luddite
https://www.garlic.com/~lynn/2009o.html#52 Revisiting CHARACTER and BUSINESS ETHICS

long-winded past post discussing the salary open-door:
https://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer

and from truth is stranger than fiction ....

i was doing this stuff on cluster scale-up ... reference to jan92 meeting in Ellison's conference room here
https://www.garlic.com/~lynn/95.html#13

and some old email from the period on the effort:
https://www.garlic.com/~lynn/lhwemail.html#medusa

the email concentrates more on the numerical intensive aspects, but I was also doing a lot on scale-up of the distributed lock manager and scale-up of distributed DBMS caches.

in any case, the last email (in the above) is just before we got told that the scale-up effort was transferred and we couldn't work on anything with more than four processors (this was a few weeks after the jan92 meeting). Then a couple weeks later ... it was announced as a product in the numerical intensive market segment. A couple of the items from the press
https://www.garlic.com/~lynn/2001n.html#6000clusters1
https://www.garlic.com/~lynn/2001n.html#6000clusters2
along with numerous others in these old posts:
https://www.garlic.com/~lynn/2001n.html#70
https://www.garlic.com/~lynn/2001n.html#83

Now while the corporate product moved away from the commercial aspects ... there is folklore that at least one of the RDBMS vendors reverse engineered at least part of the DLM and started offering it on other vendor platforms.

I also had an offer to get paid to take a sabbatical (and not be able to return), bridging to retirement. After everything else, I took the offer.

...

Several months later (after accepting the sabbatical offer), after all the exit procedures (the really strange part) and the sabbatical had started ... I got a letter at home stating that I'd been promoted. Ever hear of somebody getting promoted after they are gone????

....

For a lot more topic drift ... caches & SMP ... HONE was really compute intensive (because the majority of applications were in APL) and I produced for them a vm370 release 3 system with SMP support ... so they could upgrade all their (loosely-coupled) uniprocessors to multiprocessors (well before the release 4 product with SMP was available). Now nominally, a two-processor 370 ran at .9 of the processor cycle of a uniprocessor (to accommodate various coordination between the two caches) ... so 2-processor 370 hardware was effectively 1.8 times a single processor ... and with other hardware and software SMP effects, was typically quoted as having 1.3-1.5 times the thruput of a uniprocessor. For HONE, I was able to play some games with preserving cache locality in the SMP configuration ... and could get better than twice the thruput of a single processor (because of the improved cache hit ratios). misc. past posts mentioning multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
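
purely as a back-of-envelope illustration of the numbers above (not from any measurement), a small C sketch of the 2-processor thruput arithmetic; the cache-affinity improvement factor is an assumed illustrative value:

/* back-of-envelope of the 2-processor 370 thruput numbers quoted above;
   the cache-affinity improvement factor is an assumed illustrative value */
#include <stdio.h>

int main(void)
{
    double cycle = 0.9;              /* each cpu at .9 of the uniprocessor cycle */
    double raw   = 2 * cycle;        /* 1.8x raw hardware capacity */

    printf("raw 2-processor hardware: %.1fx a uniprocessor\n", raw);
    printf("typical 1.3-1.5x thruput => %.0f%%-%.0f%% of raw capacity lost to SMP effects\n",
           (1 - 1.5 / raw) * 100, (1 - 1.3 / raw) * 100);

    /* HONE case: cache-affinity games improve the cache hit ratio; each cpu
       has to get better than 1/0.9 (~1.11x) the per-cycle work of the
       uniprocessor baseline for the aggregate to exceed 2x */
    double affinity = 1.15;          /* assumed illustrative improvement */
    printf("with cache affinity (assumed %.2fx per cpu): %.2fx aggregate\n",
           affinity, 2 * cycle * affinity);
    return 0;
}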

In any case, I played more kinds of "cache" games for DBMS scale-up in cluster environment.

As to life on sabbatical ... two of the other people at the Jan92 meeting later left and showed up at a small client/server startup responsible for something called a "commerce server". We were called in to consult because they wanted to do payment transactions on the server. The startup had also invented some technology called "SSL" and the result is now frequently called "electronic commerce". Part of the effort was something called a "payment gateway" (we sometimes refer to it as the original SOA) ... which acts as a go-between for the webservers on the internet and the payment network. misc. past posts mentioning the payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

and we used our HA/CMP (loosely-coupled cluster with no more than four processors) product in the implementation
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Rudd bucks boost IBM mainframe business

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Rudd bucks boost IBM mainframe business
Date: 15 Oct, 2009
Blog: Mainframe Experts
Rudd bucks boost IBM mainframe business
http://www.computerworld.com.au/article/321940/rudd_bucks_boost_ibm_mainframe_business&urlhash=n8ft&trk=news_discuss

possibly one of the motivations behind redirecting effort mentioned here into numerical intensive (from a thread in linkedin greater ibm)
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market

After HP acquired Convex ... one of the people from Austin (6000) was hired at HP to head up the superdome project (sort of integrating the convex stuff into the common line, or at least producing something new for the convex customer set). The initial proposal was that the effort would be done as an "internet startup" with participants getting equity in the effort (we periodically stopped by to see how it was going).
http://h20338.www2.hp.com/integrity/cache/342370-0-0-0-121.html

Convex had done Exemplar ... using SCI to support 64 boards with two HP RISC processors each (128 processors total). Sequent was doing something similar but using SCI to support 64 boards with four Intel processors each (256 processors total). IBM later acquired Sequent (somewhat analogous to HP having acquired Convex). We had been involved in some of the SCI stuff prior to departing IBM.

of course ... if it just says "integrity" ... there is also the tandem stuff
http://h20223.www2.hp.com/NonStopComputing/cache/76385-0-0-0-121.html

and (dec vax) vms moved to Itanium
http://h18004.www1.hp.com/products/blades/components/c-class-integrity-bladeservers.html

for some trivia ... who was the person responsible for some amount of "new" 370 architecture in the 3033 (like dual-address space) and one of the main architects for Itanium ... old reference
https://www.garlic.com/~lynn/2008g.html#60 Different Implementations of VLIW

& some recent posts mentioning Tandem
https://www.garlic.com/~lynn/2009o.html#2 IMS
https://www.garlic.com/~lynn/2009o.html#51 8 ways the American information worker remains a Luddite

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

TV Big Bang 10/12/09

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TV Big Bang 10/12/09
Newsgroups: alt.folklore.computers
Date: Sat, 17 Oct 2009 14:19:31 -0400
Michael Black <et472@ncf.ca> writes:
I seem to remember some of the kids putting bits of chalk in the brushes, that were in sections of whatever the material was so something could be embedded. The teacher goes to erase the board, and instead adds to the chalking.

the blackboard erasers were dry ... and chalk dust accumulated both on them and the blackboard. an after-school job would include banging erasers together to liberate all the chalk dust as well as going over the blackboard with a damp cloth ... attempting to remove the residual dust.

search for cleaning erasers turns up
http://ezinearticles.com/?Chalkboard-Erasers&id=354528

the above mentions construction of erasers with strips of felt bound on one side (which would make it possible to insert objects between the strips).

another URL turned up by search engine
http://www.ehow.com/how_4580071_easiest-way-clean-black-chalkboards.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

TV Big Bang 10/12/09

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TV Big Bang 10/12/09
Newsgroups: alt.folklore.computers
Date: Sat, 17 Oct 2009 21:37:44 -0400
hancock4 writes:
A camera's quality is more than merely the megapixel count; there are other factors as well.

re:
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09

it wasn't a comment about the (relatively) small variation in camera costs ... but that the influence of consumer electronics has brought what was possibly large tens (hundreds?) of thousands for a 2Kx3K (6mpixel) CCD down to much more affordable levels (at the time, rumor was that a one-of-a-kind 2Kx3K existed somewhere in the motion picture industry, vis-a-vis the 200x200 ... 40k pixel CCD that was used in some of the testing at Lick, which took 30 seconds of calibration with a white board before each image).

this references Keck using 2Kx2K (4mpixel) CCD
http://www.fairchildimaging.com/gallery/

this talks about an upgrade from 2Kx2K (4mpixel) to 3-CCD 2Kx4K (8mpixel)
http://www.ucolick.org/~kibrick/

above also mentions remote observing from UCSC campus ... which was basically what I had originally been brought in to look at.

some Keck CCD upgrade also mentioned here
http://adsabs.harvard.edu/abs/2004SPIE.5492....1M

now this (from 10Apr09)
http://www.projectmechatronics.com/tag/astronomer/

mentions Keck II now has a 67mpixel CCD ... the question is whether it actually is one large CCD ... or a composite of several smaller CCDs.

all of these (CCD) cameras are under $10K
http://www.sbig.com/sbwhtmls/large_format_cameras.htm

this describes how large Kodak CCDs need sophisticated cooling & electronics.
http://www.atik-cameras.com/html/atik_11000.html

now from 6oct09 ... a 3.2 gigapixel CCD camera being designed & built at SLAC
http://www.symmetrymagazine.org/breaking/2009/10/06/the-largest-ever-ccd-digital-cameras-will-explore-the-universe/

for computer trivia ... I used to go to monthly user group meetings at SLAC.

from above article:
Making large CCDs for astronomical purposes present all kinds of challenges beyond what you need to do for a home digital camera. For a start, any digital camera suffers from electronic noise, where extra electrons pop up and remain as extra charge on the camera surface, as if from a phantom photon. You'll see some digital cameras advertising their low-noise sensors, talking about this very problem. The issue is much more acute for LSST as the telescope will only be collecting small numbers of photons from many faint sources, so just a few stray electrons can ruin the image. Heat is enough to eject some unwanted electrons so the whole LSST camera needs to be cooled to liquid nitrogen temperatures to reduce noise.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

TV Big Bang 10/12/09

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TV Big Bang 10/12/09
Newsgroups: alt.folklore.computers
Date: Sat, 17 Oct 2009 22:15:55 -0400
hancock4 writes:
I don't know what PC projectors use as a light source. But the big lamps in the old slide projectors and overhead projectors used to burn out fairly often. A very basic in presentations was to keep a spare bulb on hand.

about a decade ago, we got brought in to start looking at issues around digital cinema (& digital projectors) ... eliminating film for movie theaters ... there is a huge hazardous chemical problem ... the heat of the bulb on film results in significant contaminants in the air (meeting health standards is expensive ... the cost of digital projectors is partially offset by eliminating the stuff involved in managing the fumes from film projection). The issue was about using the change to look at the whole end-to-end provisioning & infrastructure related to movie theaters ... distribution, theaters, revenue collection at theaters and electronic audits of audience size, theater remittance back to distributors ... and some other issues (including looking at encryption techniques as a countermeasure to piracy).

most of the people were related to studios. one issue raised was whether the additional electronic environment could also be used to do nearly real-time management of ticket proceeds ... remittance to studios, which was taking up to 180 days. improving the count accuracy (shaving the counts is apparently common) and getting remittance under 30 days ... was viewed as a significant opportunity.

That particular effort involved TI's DLP chip ... some discussion here:
http://www.smartcomputing.com/Editorial/article.asp?article=articles/archive/l0808/44l08/44l08.asp&guid=

wiki page:
https://en.wikipedia.org/wiki/Digital_Light_Processing

the above mentions the next generation of the technology being used in micro projectors.

digital cinema wiki (finally deploying in 2005).
https://en.wikipedia.org/wiki/Digital_cinema

above talks about many of the issues.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

TV Big Bang 10/12/09

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TV Big Bang 10/12/09
Newsgroups: alt.folklore.computers
Date: Sun, 18 Oct 2009 04:27:19 -0400
re:
https://www.garlic.com/~lynn/2009o.html#61 TV Big Bang 10/12/09

the meetings in '98 were held at the ritz carlton in marina del rey ... I chose the location partly because I could walk over to ISI and talk to the RFC editor.

at one meeting, i also gave a talk at ISI on why the Internet isn't business critical dataprocessing. I thought it was going to be just ISI ... but something like 50-60 graduate students from USC showed up.

some amount of the talk was about the compensating procedures that we had to do for the "payment gateway" (part of what is now commonly called electronic commerce) ... it wasn't just taking message formats from a physical circuit-based environment and dropping them into the anarchy of the Internet packet-based environment ... misc. past posts mentioning the payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

rfc editor home page:
http://www.rfc-editor.org/

isi home page:
http://www.isi.edu

ritz-carlton
http://www3.isi.edu/about-accommodations_marina_del_rey.htm

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

U.S. students behind in math, science, analysis says

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: U.S. students behind in math, science, analysis says
Newsgroups: alt.folklore.computers
Date: Sun, 18 Oct 2009 10:51:14 -0400
re:
https://www.garlic.com/~lynn/2009m.html#69 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#27 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#28 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says

The US's Reverse Brain Drain
http://slashdot.org/story/09/10/17/1948223/The-USs-Reverse-Brain-Drain

from above:
TechCrunch has a piece by an invited expert on the reverse brain drain already evident and growing in the US as Indian, Chinese, and European students and workers in the US plan to return home, or already have.

... snip ...

studies in the early 90s had found that half of advanced technical degrees from institutions of higher learning were going to foreign-born students ... and some US industries were being dominated by these graduates. there was a hypothesis that a drop in the US environment &/or improvement in their home environment ... could result in a tipping point (a discontinuity, not a gradual transition) where there would be an outflow of these graduates back home. i've conjectured in the past that the internet bubble would not have been possible w/o these graduates (and with the internet bubble and Y2K remediation occurring at the same time, there weren't enough resources in the country ... accelerating the movement of work offshore).

a few past posts mentioning the subject:
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
https://www.garlic.com/~lynn/2006g.html#21 Taxes
https://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007j.html#57 IBM Unionization
https://www.garlic.com/~lynn/2007r.html#36 Students mostly not ready for math, science college courses
https://www.garlic.com/~lynn/2008e.html#37 was: 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?
https://www.garlic.com/~lynn/2008n.html#27 VMware Chief Says the OS Is History

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

The new coin of the NSA is also the new coin of the economy

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The new coin of the NSA is also the new coin of the economy
Newsgroups: alt.folklore.computers
Date: Sun, 18 Oct 2009 11:07:37 -0400
from financial cryptography blog:

The new coin of the NSA is also the new coin of the economy
http://financialcryptography.com/mt/archives/001199.html

drawing connections between

Who's in Big Brother's Database?
http://www.nybooks.com/articles/23231

and

Great expectations
http://www.atimes.com/atimes/Global_Economy/KJ15Dj01.html

and

The Elliot Wave has arrived at stage 5, so it's all over for the dollar!
http://financialcryptography.com/mt/archives/001198.html

and

Clash of the clouds
http://www.economist.com/research/articlesBySubject/displayStory.cfm?story_id=14637206&subjectID=348909&fsrc=nwl

and resources for large datacenters (not just google, m'soft, apple, etc) ... a few past posts
https://www.garlic.com/~lynn/2006q.html#43 21st century pyramids--super datacenters
https://www.garlic.com/~lynn/2008g.html#3 It's Too Darn Hot
https://www.garlic.com/~lynn/2008n.html#68 VMware Chief Says the OS Is History
https://www.garlic.com/~lynn/2008n.html#79 Google Data Centers 'The Most Efficient In The World'
https://www.garlic.com/~lynn/2009m.html#81 A Faster Way to the Cloud

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

The new coin of the NSA is also the new coin of the economy

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The new coin of the NSA is also the new coin of the economy
Newsgroups: alt.folklore.computers
Date: Sun, 18 Oct 2009 11:19:31 -0400
re:
https://www.garlic.com/~lynn/2009o.html#64 The new coin of the NSA is also the new coin of the economy

... there have been reports that the mid-atlantic power distribution infrastructure (which includes the beltway and maryland, with lots of gov. installations) is one of the most vulnerable (in need of lots of work and upgrading) in the country ... misc past posts mentioning fraying infrastructures:
https://www.garlic.com/~lynn/2007q.html#18 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007q.html#19 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007q.html#60 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007q.html#62 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007r.html#25 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007r.html#53 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007r.html#58 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007r.html#59 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2007r.html#60 Fixing our fraying Internet infrastructure
https://www.garlic.com/~lynn/2008e.html#43 fraying infrastructure
https://www.garlic.com/~lynn/2008e.html#48 fraying infrastructure
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008j.html#80 dollar coins
https://www.garlic.com/~lynn/2008k.html#71 Cormpany sponsored insurance
https://www.garlic.com/~lynn/2008l.html#38 dollar coins
https://www.garlic.com/~lynn/2008n.html#87 STUDY: Lights Out In 2009?
https://www.garlic.com/~lynn/2008r.html#41 fraying infrastructure

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Need for speedy cryptography

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need for speedy cryptography
Newsgroups: sci.crypt
Date: Sun, 18 Oct 2009 15:29:53 -0400
Rob Blank <Rob.Blank@gmx.de> writes:
I am just about to hack some text together for my MSc thesis. I am just trying to motivate the need for "high"-performance implementations of cryptographic systems. Obviously, one good point to mention here is the impatient user: No one is happy when they have to wait for 15 seconds until a RSA decryption has been computed on their mobiles.

in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments (aka ALL: debit, credit, stored-value, ACH, point-of-sale, attended, unattended, high-value, low-value, internet, transit turnstile, i.e. ALL). Things started to converge on ECDSA.
https://www.garlic.com/~lynn/x959.html#x959

one of the issues in the mid-90s was that transit turnstiles have extremely tight power and elapsed time requirements for contactless card operation (a small fraction of a second, RF power from being near the reader).

RSA solutions for chipcard operation had added a large number of circuits (for a crypto accelerator) to cut the elapsed time of the operation to a few seconds (still way too long for the transit application) ... but the huge increase in circuits also greatly increased the power draw for those few seconds ... resulting in a contact requirement. ECDSA could be done w/o the power, extra circuits, and/or elapsed time penalty of RSA.

the standard solution also did a public key paradigm w/o digital certificates. some of the other public key payment specification efforts from the period were looking at taking the standard digital certificate approach (with RSA). The issue with those approaches was that the digital certificate paradigm added a factor of approx. one hundred times payload bloat to a typical payment transaction message (the RSA plus digital certificate paradigm resulted in 100 times processing bloat and 100 times payload bloat for a typical payment transaction).

some part of the standards group looked at coming up with a standard for "compressed" digital certificates (looking to help with the humongous 100 times payment transaction payload bloat problem) ... trying to get it down to only an enormous 10 times payment transaction payload bloat problem. I was able to show that with their techniques it was possible to compress a digital certificate to zero bytes ... putting the appended digital certificate paradigm on a level playing field with the certificate-less paradigm. some past posts mentioning certificate-less
https://www.garlic.com/~lynn/subpubkey.html#certless

possibly part of the motivation for RSA on chipcards in the late 80s and early 90s was the DSA requirement for a trusted secret random number source ... but that changed starting in the mid-90s ... being able to get a high integrity random number in low-power, extremely fast (at least if talking about ECC) and very inexpensive chips.

one of the issues regarding chip cost (even security chips ... say infineon from the dresden fab) is that in quantity, it approaches a fixed cost per wafer. cost per chip then becomes a matter of the number of chips per wafer. Holding the number of circuits constant ... the transition from 200mm to 300mm wafers and declining circuit size resulted in an enormous increase in chips/wafer. for a while, this was stalled (for smaller chips) by the technology used to cut chips from wafers (the cut swath area starting to exceed the aggregate chip area in the wafer). Somewhat driven by the market forces related to EPC RFID chips (i.e. inexpensive chips targeted at replacing barcodes for grocery items), new wafer cutting technology was developed that drastically reduced the cut swath area. An inexpensive, contactless, fast, low-power, semi-custom design security chip was done in a few hundred thousand circuits (well under a dollar given the chips/wafer ... compare that with a typical current generation processor chip with three orders of magnitude more circuits ... or more). A rough design of a fully custom chip reduced the circuits/chip by a factor of ten (roughly increasing the chips/wafer by another factor of ten).
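
a minimal C sketch of the chips/wafer & cost/chip arithmetic above; the wafer cost, die size, swath widths and usable-area factor are all made-up illustrative values (only the 200mm->300mm and wide->narrow swath relationships come from the discussion):

/* rough chips-per-wafer / cost-per-chip arithmetic for the discussion above;
   wafer cost, die size and cut-swath widths are made-up illustrative values */
#include <stdio.h>

static const double PI = 3.14159265358979;

static long chips_per_wafer(double wafer_diam_mm, double die_mm, double swath_mm)
{
    double usable = PI * (wafer_diam_mm / 2) * (wafer_diam_mm / 2) * 0.9;  /* assume ~90% usable */
    double pitch  = die_mm + swath_mm;       /* die plus the cut lost to the saw */
    return (long)(usable / (pitch * pitch));
}

int main(void)
{
    double wafer_cost = 3000.0;              /* assumed roughly fixed cost per wafer */
    double die = 2.0;                        /* small security chip, mm per side (assumed) */

    long old200 = chips_per_wafer(200, die, 2.0);   /* old cutting: swath comparable to the die */
    long new300 = chips_per_wafer(300, die, 0.2);   /* new (EPC RFID driven) narrow swath */

    printf("200mm wafer, wide swath  : %6ld chips, ~$%.2f/chip\n", old200, wafer_cost / old200);
    printf("300mm wafer, narrow swath: %6ld chips, ~$%.2f/chip\n", new300, wafer_cost / new300);
    return 0;
}

with those assumptions the narrow-swath 300mm case works out to a few tens of cents per chip ... i.e. the "well under a dollar" range mentioned above.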

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

I would like to understand the professional job market in US. Is it shrinking?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: I would like to understand the professional job market in US. Is it shrinking?
Date: 19 Oct, 2009
Blog: Greater IBM
x-posted from a.f.c ...

re:
https://www.garlic.com/~lynn/2009m.html#69 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#27 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#28 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says

The US's Reverse Brain Drain
http://slashdot.org/story/09/10/17/1948223/The-USs-Reverse-Brain-Drain

from above:
TechCrunch has a piece by an invited expert on the reverse brain drain already evident and growing in the US as Indian, Chinese, and European students and workers in the US plan to return home, or already have.

... snip ...

studies in the early 90s had found that half of advanced technical degrees from institutions of higher learning were going to foreign-born students ... and some US industries were being dominated by these graduates. there was a hypothesis that a drop in the US environment &/or improvement in their home environment ... could result in a tipping point (a discontinuity, not a gradual transition) where there would be an outflow of these graduates back home. i've conjectured in the past that the internet bubble would not have been possible w/o these graduates (and with the internet bubble and Y2K remediation occurring at the same time, there weren't enough resources in the country ... accelerating the movement of work offshore).

a few past posts mentioning the subject:
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
https://www.garlic.com/~lynn/2006g.html#21 Taxes
https://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007j.html#57 IBM Unionization
https://www.garlic.com/~lynn/2007r.html#36 Students mostly not ready for math, science college courses
https://www.garlic.com/~lynn/2008e.html#37 was: 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?
https://www.garlic.com/~lynn/2008n.html#27 VMware Chief Says the OS Is History

... again just using data/reports from the early 90s (i.e. it isn't an issue that isn't understood) ... half the 18yr olds were functionally illiterate (and the percentage increasing as society becomes more complex); when the Japanese started putting in plants in the US they had to require a minimum AA/2yr college degree to get workers with a high school level education; an increasing percentage of workers were getting compensation & benefits greater than the value of their work, with the deficit having to be made up in various ways ... and in the future it would be the majority of all workers. A study for the organization of state governors projected that if the science & math problem could be fixed (which it hasn't been), it would contribute something like 2% to annual GDP growth (again, this is all from the early 90s).

The mantra from silicon valley startups & business plans from (at least) the early 80s was that the golden child was somebody with a technology degree who then got an MBA.

A little x-over from this (long-winded) news thread in (linkedin) "Mainframe Experts" ... and having sponsored John Boyd's briefings in the early 80s
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs

... some MBA programs are now starting to use material from John Boyd (and OODA-loops).

I also referenced Boyd in the "U.S. begins inquiry of IBM in mainframe market" news thread in this group ... also archived here
https://www.garlic.com/~lynn/2009o.html#46
https://www.garlic.com/~lynn/2009o.html#47
https://www.garlic.com/~lynn/2009o.html#48

a slight Boyd'ism and OODA-loops digression ... the largest US auto builder had a C4 task force in the early 90s on how to remake themselves and brought in several technology vendors to participate. They went thru all the reasons that foreign manufacturers were succeeding (in the US) and they weren't. Among the items was that the Japanese had significantly cut the time to execute a product cycle (from the traditional 7-8 yrs that the US was on, to 2-3 yrs and dropping). The competitive benefit was that the Japanese were significantly more agile and responsive to changing technology, market conditions and buying habits (offline, I chided the mainframe brethren at the meetings that they were still on the US auto product cycle ... so it wasn't likely that they could offer much useful advice).

misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
and misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

Note that while the US auto industry was perfectly able to articulate all the factors and what needed to be changed ... it appeared that so many people (executives, unions, workers, etc) had a strong vested interest in the status quo ... that they were unable to change.

It wasn't limited to the auto industry ... in those years leading up to the red-ink ... we would periodically visit Somers and walk the halls talking to various people ... who were perfectly able to articulate all the factors and what needed to be changed ... but go back a month later and there was no change ... repeated month after month. In Somers, there was the additional flavor that some number appeared to be attempting to preserve the status quo until they had retired ... and then it would be somebody else's problem (with overtones that they enjoyed large additional compensation based on experience in a relatively static status quo ... which would be lost in a rapidly changing environment ... so they had a big incentive to maintain the status quo until their retirement).

the studies ... even from two decades ago ... were showing that the lack of technical skills (IP and inventions) cost at least 2% in GDP growth (conversely, fixing the technical skill base would have increased annual GDP growth by 2%).

the Boyd, OODA-loops, and Japanese product execution references were that it was necessary to do the whole end-to-end scene better AND faster ... IP & research completely thru to finished product coming off the line. I assume that is the motivation for Boyd & OODA-loops starting to show up in MBA programs.

the studies also covered the majority of the population becoming functionally illiterate and the skill level (math & science) required for even manufacturing jobs increasing (not being able to find qualified workers for even lower level jobs).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

The Rise and Fall of Commodore

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Rise and Fall of Commodore
Newsgroups: alt.folklore.computers
Date: Mon, 19 Oct 2009 09:03:27 -0400
The Rise and Fall of Commodore
http://news.slashdot.org/article.pl?sid=06/11/15/1457205&tid=172

old post/thread ... here in a.f.c.
https://www.garlic.com/~lynn/2007v.html#76 Why Didn't Digital Catch the Wave?

copied from the above post:

Total share: 30 years of personal computer market share figures
https://arstechnica.com/features/2005/12/total-share/

and has graph of personal computer sales 1975-1980
https://arstechnica.com/features/2005/12/total-share/3

and a graph from 1980 to 1984 ... with the only serious competitor to the PC in number of sales being the commodore 64
https://arstechnica.com/features/2005/12/total-share/4

and then from 1984 to 1987 the ibm pc (and clones) started to completely swamp the market
https://arstechnica.com/features/2005/12/total-share/5

in much the same way that the application developers were producing for the large install base ... the machine clone makers also started to move into the market segment. conjecture might include a larger profit margin in the PC market segment (vis-a-vis the commodore 64) as a contributing motivation for the clone makers (higher premium/value in the commercial business market).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

DHL virus

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DHL virus
Newsgroups: alt.folklore.computers
Date: Mon, 19 Oct 2009 10:54:58 -0400
greymausg writes:
Mine was probably read from some contacts address list by the virus.

or at least some virus ... there seems to be a thriving underground market in information.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

cpu upgrade

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: cpu upgrade
Newsgroups: bit.listserv.ibm-main
Date: 19 Oct 2009 07:53:43 -0700
mward@SSFCU.ORG (Ward, Mike S) writes:
Yes, I understand. If I wanted to change processors I would also go for the fastest cycle times. I remember a company that had a 400 mip(single engine) machine which then purchased a 600 mip 3 engine machine (200 mips per engine). (mips and engines are fictitious to protect the innocent :)) They were sadly disappointed because now the machine was slower even though they had more mips.

this has been a frequent major refrain in the PC industry (almost no progress in parallelizing technology for the past several decades) as they hit the GHz wall and started moving to multiple cores. There have been comments that nearly all of the embarrassingly parallel applications had already been done decades ago.
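
to put the (fictitious) numbers from the quoted scenario through simple amdahl's law style arithmetic ... a small C sketch where the serial/parallel split is purely an illustrative assumption:

/* why the "600 mip" 3-engine machine can be slower than the "400 mip"
   single-engine machine (the fictitious numbers from the quote above);
   the serial/parallel split is an illustrative assumption */
#include <stdio.h>

static double elapsed(double work_mi, double serial_frac,
                      double mips_per_engine, int engines)
{
    double serial   = work_mi * serial_frac       / mips_per_engine;
    double parallel = work_mi * (1 - serial_frac) / (mips_per_engine * engines);
    return serial + parallel;
}

int main(void)
{
    double work = 1e6;           /* million instructions of work (arbitrary) */
    double serial_frac = 0.6;    /* assume 60% of the work can't be spread across engines */

    printf("400 mips x 1 engine : %.0f sec\n", elapsed(work, serial_frac, 400, 1));
    printf("200 mips x 3 engines: %.0f sec\n", elapsed(work, serial_frac, 200, 3));
    return 0;
}

with a mostly-serial workload the per-engine speed dominates and the 3-engine box comes out slower ... matching the disappointment described in the quote.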

there was a big step forward with the compare&swap instruction. Charlie had invented it at the science center while working on CP67 fine-grain multiprocessor locking (the name compare&swap was chosen because CAS is charlie's initials). misc. past posts mentioning smp &/or compare&swap
https://www.garlic.com/~lynn/subtopic.html#smp

the initial foray into POK to get compare&swap added to 370 architecture was rebuffed; the favorite son operating system felt that nothing more than test&set (from 360 multiprocessing) was required. the challenge given the science center was to come up with a non-multiprocessor-specific use for compare&swap. the result was the examples of multi-threaded use that are still in the current principles of operation ... where the multi-threaded (aka multiprogramming) operation is independent of whether the environment is a single processor or a multiprocessor.
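
for flavor, a minimal modern-C (C11 atomics) sketch of the kind of multi-threaded, non-smp-specific compare&swap use described above ... this is not the principles-of-operation example itself, just the same fetch/compute/compare&swap retry-loop idea:

/* compare&swap used for a multi-threaded (not smp-specific) update; C11
   atomics standing in for the 370 instruction */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter = 0;

/* atomically add n without any lock: fetch the old value, compute the new
   one, then compare&swap; if another thread got in between, the CAS fails
   (and reloads "old" with the current value) and we simply retry */
static void add(long n)
{
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + n))
        ;   /* retry */
}

int main(void)
{
    add(5);
    add(37);
    printf("counter = %ld\n", atomic_load(&counter));
    return 0;
}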

starting at least by the early 80s ... compare&swap saw major uptake in transaction and multithreaded DBMS implementations (with the same or similar construct showing up on all the major hardware platforms) ... an example is the original relational/sql implementation ... misc. past posts mentioning system/r
https://www.garlic.com/~lynn/submain.html#systemr

misc. posts about cics/bdam
https://www.garlic.com/~lynn/submain.html#cics

including, when i was an undergraduate in the 60s, the univ. library project got selected to be a betatest for the original cics product release ... and I got tasked with supporting/debugging the application (and cics).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

"Rat Your Boss" or "Rats to Riches," the New SEC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: "Rat Your Boss" or "Rats to Riches," the New SEC
Date: 19 Oct, 2009
Blog: Financial Crime Risk, Fraud and Security
"Rat Your Boss" or "Rats to Riches," the New SEC
http://www.mahanylaw.com/mahanylaw/?p=105

there have been past references that the only significant part of sarbanes-oxley was the section on informants.

in the congressional hearings into the madoff ponzi scheme ... the person that had tried for a decade to get the SEC to do something about madoff observed that the SEC didn't have a tip line ... but had a 1-800 number for companies to complain about investigations ... and that tips turn up 13 times more fraud than audits.

the testimony in the congressional madoff hearings was that audits turned up 4% of fraud and tips turned up 52% of fraud (i.e. 13 times as much).

one issue is that if tips turn up over half the fraud ... and audits only turn up four percent .... what is the percentage of abuses vis-a-vis valid tips?? Every program can have abuses ... an issue is whether or not they are statistically significant (more or less abuse than exists in any program).

if there are aggregate trillions in fraud ... and tips catch over half ... it is still trillions. if there are only hundreds of thousands involved in invalid tips ... that is a seven order of magnitude difference (statistically insignificant, i.e. one out of ten million).

reports are that overcharging and fraud in medicare/medicaid run at least 10-15 percent (possibly one out of six or even one out of five or four). there are assumptions that economic stimulus funds will be worse (there is some level of controls and fraud units for medicare/medicaid, but so far none appear to be in place for economic stimulus funds).

making a big deal of statistically insignificant rates raises the question of whether it might be obfuscation, misdirection, and/or vested interests.

there is the possibility that some of the same people involved in medicare/medicaid fraud would get involved in scams with invalid paid-for whistle blowing ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

I would like to understand the professional job market in US. Is it shrinking?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: I would like to understand the professional job market in US. Is it shrinking?
Date: 19 Oct, 2009
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2009o.html#67 I would like to understand the professional job market in US. Is it shrinking?

From a different view-point, there was a report last year that claimed the baby boomer generation was four times larger than the previous generation and the following generation (after the baby boomers) was only a little more than half as large as the baby boomer generation (aka the reason it was called the baby boomer generation).

As a result, the baby boomer generation represented a large work force bubble as well as a consumption bubble. Baby boomers moving into retirement changes the ratio of workers to retirees by a factor of eight (increasing the number of retirees by a factor of four and cutting the number of workers nearly in half). That large a change will result in enormous changes in society.

There are various straight-forward implications ... if the ratio of workers to retirees declines by a factor of eight ... then things like the ratio of geriatric health workers to retirees are also likely to decline by a factor of eight.

The looming decline in the absolute numbers of workers is separate from the issue that there has also been a decline in the skill level of those workers.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

IBM Hardware Boss Charged With Insider Trading

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: IBM Hardware Boss Charged With Insider Trading
Date: 19 Oct, 2009
Blog: Greater IBM
IBM Hardware Boss Charged With Insider Trading
http://www.crn.com/it-channel/220601204

from above:
Robert Moffat, IBM (NYSE:IBM)'s no-nonsense hardware boss, has been charged in what federal authorities are calling the largest alleged hedge fund insider trading case ever.

... snip ...

misc. other insider trading mentioning ibm in headlines

Feds' insider trading wiretap snares IBM heir apparent
http://www.theregister.co.uk/2009/10/16/insider_trading_for_dummies/
IBM, Intel executives face insider trading charges
http://www.networkworld.com/news/2009/102209-anderson-book.html
IBM, Intel Executives Face Insider Trading Charges
http://www.pcworld.com/article/173831/ibm_intel_executives_face_insider_trading_charges.html
IBM, Intel execs arrested over alleged insider trading
http://www.theregister.co.uk/2009/10/16/ibm_intel_insider_trading
IBM, Intel Capital execs face insider trading charges
http://www.computerworld.com/s/article/9139487/IBM_Intel_Capital_execs_face_insider_trading_charges
IBM, Intel Execs Arrested Over Insider Trading
http://yro.slashdot.org/story/09/10/16/200207/IBM-Intel-Execs-Arrested-Over-Insider-Trading
Insider Trading Scandal Involves IBM & Intel
http://www.internetnews.com/breakingnews/article.php/3844286/Insider+Trading+Scandal+Involves+IBM++Intel.htm

another article from slashdot

Arrested IBM Exec Goes MIA On the Web
http://news.slashdot.org/story/09/10/17/1640237/Arrested-IBM-Exec-Goes-MIA-On-the-Web

i'm not sure about the comment about bio in the above ... since google still finds it here:
http://www-03.ibm.com/press/us/en/biography/10068.wss

ongoing insider trader news items

U.S. Said to Target Wave of Insider-Trading Networks
http://www.bloomberg.com/apps/news?pid=20601103
http://www.bloomberg.com/apps/news?pid=20601103&sid=ajxDWr3piK3M
IBM executive in insider trading case placed on leave
http://news.yahoo.com/s/afp/20091020/bs_afp/usfinancecrimeinsidersecibm
Intel Exec Allegedly Made $50K Through Insider Trading
http://www.crn.com/it-channel/220700220

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Back to the 1970s: IBM in mainframe antitrust suit again

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Back to the 1970s: IBM in mainframe antitrust suit again
Date: 19 Oct, 2009
Blog: Mainframe Experts
Back to the 1970s: IBM in mainframe antitrust suit again
http://www.dailyfinance.com/2009/10/09/back-to-the-1970s-ibm-in-mainframe-antitrust-suit-again/

archived posts in a recent related discussion thread in the ibm-main mailing list (the ibm-main discussion group mailing list was started with ibm customers on bitnet in the 80s):
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009o.html#31 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2009o.html#32 Justice Department probing allegations of abuse by IBM in mainframe computer market

and archived posts in similar recent discussion in (linkedin) Greater IBM
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009o.html#47 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market

part of the above is that the 23jun69 unbundling announcement (starting to charge for services and application software) was somewhat in response to various litigation ... some past posts discussing unbundling (note that the case was made to continue *NOT* charging for kernel software)
https://www.garlic.com/~lynn/submain.html#unbundle

There was then the future system effort ... which was going to completely replace 360/370 with something completely different. Even tho this was killed before ever being announced ... it resulted in letting 370 software & hardware product pipelines dry up ... which is claimed to have contributed to letting clone processors gain a market foothold. misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

The claim is that clone processors then contributed to the decision to transition to also charging for kernel software. My resource manager apparently was timed just so ... that it was selected as guinea pig for kernel software charging ... and I had to spend time off & on with business people & lawyers about policies for kernel software charging.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Status of Arpanet/Internet in 1976?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Status of Arpanet/Internet in 1976?
Newsgroups: alt.folklore.computers
Date: Mon, 19 Oct 2009 23:29:21 -0400
Morten Reistad <first@last.name> writes:
So did Primos. Full remote procedure calls, including locks. You could even access system libraries, like MIDAS+ (the Prime VSAM workalike); or roll your own.

Access controls were also a lot better than what NFS provides. And then, NFS scales a whole lot better.


and then there was (UCLA) LOCUS
https://en.wikipedia.org/wiki/LOCUS_%28operating_system%29

it was also the basis for aix/386 & aix/370 ... palo alto started out working with UCLA on Locus and had it up on a cluster including S/1 and some 68000 machines ... before doing the aix/386 & aix/370 product.

from long ago and far away ...

Date: 13 April 1983, 15:59:17 PST
To wheeler

Great. Sounds like it's a bit pre-mature for any technical input, so I'll hold onto my stuff for awhile. You may be interested to know that we have a signed contract with Gerry Popek's company for development of the LOCUS distributed UNIX system as an IBM product. We in Palo Alto will be directly involved in the architecture and coding required to bring LOCUS up on the PC, Series/1's, and virtual 370's. This will give us lots of real experience with the nitty gritty of portable distributed operating system kernels, large system implementation in C, the ins and outs of Berkeley UNIX, the UNIX source code management systems, etc. etc. Incidentally, we will be getting two VAX 11/750's here at the Scientific Center to serve as development vehicles, and we hope to demonstrate a single network involving both VAX's and IBM hardware running as a single system image by the end of this calendar year.


... snip ... top of post, old email index

I don't remember off-hand where the 68000 machines they had came from.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

I would like to understand the professional job market in US. Is it shrinking?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: I would like to understand the professional job market in US. Is it shrinking?
Date: 20 Oct, 2009
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2009o.html#67 I would like to understand the professional job market in US. Is it shrinking?
https://www.garlic.com/~lynn/2009o.html#72 I would like to understand the professional job market in US. Is it shrinking?

for the past couple of yrs there have been periodic reports that our math & science education is at or near the bottom of the "industrial" nations ... one of those sound-bites just went across the tv news that US math & science education also ranks below Kazakhstan.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Is it time to stop research in Computer Architecture ?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is it time to stop research in Computer Architecture ?
Newsgroups: comp.arch
Date: Tue, 20 Oct 2009 19:08:55 -0400
Robert Myers <rbmyersusa@gmail.com> writes:
Even though IBM and its camp-followers had to learn early how to cope with asynchronous events ("transactions"), they generally did so by putting much of the burden on the user: if you didn't talk to the computer in just exactly the right way at just exactly the right time, you were ignored.

some from the CTSS group went to the 5th flr and multics ... and some went to the 4th flr and the science center. in 1965, the science center did (virtual machine) cp40 on a 360/40 that had hardware modifications to support virtual memory. cp40 morphed into cp67 when the science center got a 360/67, which came standard with hardware virtual memory support.

the last week of jan68, three people from the science center came out and installed cp67 at the univ. where i was an undergraduate. over the next several months i rewrote significant portions of the kernel to radically speed things up. part of a presentation that i made at the fall68 SHARE user group meeting ... about both the speedups done for os/360 (regardless of whether or not it was running in a virtual machine or on real hardware) as well as rewrites of major sections of cp67.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

cp67 came standard with 2741 and 1052 terminal support (and did automatic terminal type recognition). the univ. also had ascii/tty machines (33s & 35s) and i got to add tty support. I tried to do this in a consistent way that did automatic terminal type recognition (including being able to have a single rotary dial-in number for all terminals). Turns out that the standard ibm terminal controller took a short-cut and couldn't quite do everything that i wanted to do. Somewhat as a result, the univ was motivated to do a clone controller project ... where the channel interface was reverse engineered and a channel interface board was built for an interdata/3 minicomputer ... and the interdata/3 was programmed to emulate the mainframe terminal controller (along with being able to do automatic baud rate detection)

my automatic terminal type recognition would work with the standard controller for leased lines ... the standard controller could switch the type of line scanner under program control ... but had the baud rate oscillator hard-wired to each port interface. This wouldn't work if i wanted to have a common pool of ports (next available selected from a common dialin number) for terminals that operated at different baud rates. some past posts mentioning clone controllers
https://www.garlic.com/~lynn/submain.html#360pcm

os/360 tended to have an operating system centric view of the world ... with initiation of things at the operating system ... and people responding at the terminal. cp67 was just the opposite ... it had an end-user centric view ... with the user at the terminal initiating things and the operating system reacting. one of the things i really worked on was being able to do pre-emptive dispatching and page fault handling in a couple hundred instructions (i.e. take a page fault, select a replacement page, initiate the page read, switch to a different process, handle the interrupt, and switch back to the previous process, all in a couple hundred instructions aggregate).

the univ. library had also gotten an ONR grant to do a computerized catalogue and was then also selected to be a betatest for the original CICS product release (cics is still one of the major transaction processing systems). i got tasked to support (and debug) this betatest. some past posts mentioning cics &/or bdam
https://www.garlic.com/~lynn/submain.html#cics

a little later, one of the things that came out of cp67 was the compare&swap instruction, which charlie invented when he was doing work on fine-grain locking in cp67 (compare&swap was selected because CAS is charlie's initials). the initial foray into POK trying to get it included in 370 architecture was rebuffed ... the favorite son operating system claiming that test&set from 360 SMP was more than sufficient. the challenge to the science center was to come up with a use of compare&swap that wasn't smp specific ... thus was born all the stuff for multithreaded implementation (independent of operation on a single processor or a multiple processor machine) ... which started to see big uptake in transaction processing and DBMS applications ... even starting to appear on other hardware platforms. misc. past posts mentioning smp and/or the compare&swap instruction:
https://www.garlic.com/~lynn/subtopic.html#smp

minor digression about a get-together last year celebrating jim gray ... it also references that he tried to palm off some amount of his stuff on me when he departed for tandem
https://www.garlic.com/~lynn/2008p.html#27 Father of Financial Dataprocessing
some old email from that period
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

DNSSEC + Certs As a Replacement For SSL's Transport Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: DNSSEC + Certs As a Replacement For SSL's Transport Security
Date: 20 Oct, 2009
Blog: Information Security
DNSSEC + Certs As a Replacement For SSL's Transport Security
http://www.infosecisland.com/articleview/1458-DNSSEC--Certs-As-a-Replacement-For-SSL%E2%80%99s-Transport-Security.html

Since the original DNSSEC proposal ... I've pointed out that it would be possible to use DNSSEC for trusted public key distribution w/o requiring digital certificates ... and use it for a certificate-less SSL/TLS implementation.

The standard certification authority process for an SSL domain name digital certificate requires a bunch of identification information from the digital certificate applicant. They then do a time-consuming, error-prone, and expensive identification operation matching the supplied information against what is on file with the domain name registry (as to the true owner of the domain name).

Part of DNSSEC suggests that domain name registrants also register a public key at the same time they register the domain name (as a countermeasure to various vulnerabilities like domain name hijacking). The CA industry has somewhat backed this because it improves the trust they can place in their process (they are vulnerable to a domain name hijacker that then applies for an SSL domain name certificate where all the supplied information validates correctly).

With registered public keys, the CA industry can require digital certificate applications to be digitally signed ... and they can replace a time-consuming, error-prone and expensive identification process with an efficient, inexpensive and reliable authentication process by doing real-time retrieval of the registered public key from the domain name infrastructure (to validate the applicant's digital signature).

This represents a catch-22 since if the CA industry could start doing real-time retrievals of public keys from the domain name infrastructure ... then possibly the rest of the world could also ... eliminating the need for digital certificates. misc. past posts mentioning the catch-22 for the CA industry
https://www.garlic.com/~lynn/subpubkey.html#catch22
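
a minimal sketch of the flow described above ... dns_lookup_registered_key() and verify_signature() are hypothetical stubs standing in for a DNSSEC-protected retrieval of the registered public key and a digital signature check (not any real resolver or crypto library API):

/* certificate application flow using a registered public key; both helper
   functions are hypothetical stubs for the sketch */
#include <stdio.h>
#include <string.h>

/* hypothetical: real-time retrieval of the public key the registrant filed
   with the domain name infrastructure when registering the domain */
static const char *dns_lookup_registered_key(const char *domain)
{
    (void)domain;
    return "registered-public-key-bytes";       /* canned value for the sketch */
}

/* hypothetical: check the applicant's digital signature over the application */
static int verify_signature(const char *application, const char *signature,
                            const char *public_key)
{
    (void)application; (void)public_key;
    return strcmp(signature, "signed-with-registered-key") == 0;
}

int main(void)
{
    const char *domain      = "example.com";
    const char *application = "ssl domain name certificate application";
    const char *signature   = "signed-with-registered-key";

    /* authentication (validate the signature against the registered key)
       replaces the expensive identification-matching step */
    const char *key = dns_lookup_registered_key(domain);
    if (verify_signature(application, signature, key))
        printf("application authenticated against the registered key for %s\n", domain);
    else
        printf("reject: signature does not verify\n");
    return 0;
}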

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Is it time to stop research in Computer Architecture ?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is it time to stop research in Computer Architecture ?
Newsgroups: comp.arch
Date: Wed, 21 Oct 2009 16:42:38 -0400
re:
https://www.garlic.com/~lynn/2009o.html#77 Is it time to stop research in Computer Architecture?

this morning there was a presentation about OpenSolaris with the top bullet item that it has recently gone "ticless" ... related to the high amount of overhead when running in a virtual machine even when idle (potentially with a large number of concurrent virtual machines all "tic'ing").

in the mid-80s ... I noticed the code in unix and commented that I had replaced almost identical code that was in cp67 in 1968 (some conjecture that cp67 might possibly trace back to ctss ... and unix might also trace its design back to ctss ... potentially via multics).

i've periodically mentioned that this was a significant contribution to being able to leave the system up 7x24 ... allowing things like offshift access, access from home, etc.

the issue was that the mainframes were "rented" and had usage meters ... and monthly charges were based on the number of hours run up on the usage meters. in the early days ... simple sporadic offshift usage wasn't enough to justify the additional rental logged by the usage meters.

the usage meters ran when cpu &/or i/o was active and tended to log/increment in couple-hundred-millisecond units ... even if only a few hundred instructions were "tic'ing" a few times per second (effectively resulting in the meter running all the time). moving to event based operation and eliminating the "tic'ing" helped enable the usage meter to actually stop during idle periods.

the other factors (helping enable transition to leaving systems up 7x24 for things like home dialin) were

1) "prepare" command for terminal i/o ... allowed (terminal) channel i/o program to go appear idle (otherwise would have also resulted in usage meter running) but able to immediately do something when there were incoming characters

and

2) automatic reboot/restart after failure (contributed to lights out operation, leaving the system up 2nd & 3rd shift w/o human operator ... eliminating those costs also).

on 370s, the usage meter would take 400 milliseconds of idle before coasting to a stop. we had some snide remarks about the favorite son operating system that had a "tic" process that ran exactly every 400 milliseconds (if the system was active at all, even otherwise completely idle, it was guaranteed that the usage meter would never stop).

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

OpenSolaris goes "tic'less"???

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OpenSolaris goes "tic'less"???
Newsgroups: alt.folklore.computers
Date: Thu, 22 Oct 2009 13:04:07 -0400
jmfbahciv <jmfbahciv@aol> writes:
We collected that data so the administrators had a measurement of all system usage.

re:
https://www.garlic.com/~lynn/2009o.html#79 OpenSolaris goes "tic'less"????

cp67 (and later vm370) had detailed accounting and system usage ... when there was a task-switch ... there were actual calculations of how long the previous task had executed, which were accumulated (for accounting reasons) ... information was also accumulated for overall system usage. there was also a system monitor task that ran around once every 5-10 minutes and extracted and archived the same information (so activity could be looked at in 5-10 minute increments). by the mid-to-late 70s there was a decade of such data for the science center (archived tapes) ... and several years of data for other internal systems.

We used the detailed, long term usage information (from a large number of different systems) for workload and system profiles when formulating the synthetic workloads for calibrating and benchmarking my resource manager ... misc. past posts
https://www.garlic.com/~lynn/submain.html#bench

(at least in the code I saw in the 80s & 90s, and I assume it is still much the same) the unix TICs sampled what was running and updated activity information from those samples (rather than direct accounting/measurement).

in cp67, the 360/67 high resolution location "80" (x'50') timer (in storage) was used. at the start of a task there was

st   rx,x'54'         store the new task's accounting value at 84/x'54'
mvc  x'4C'(8),x'50'   8-byte overlapping move, left to right, from 80/x'50' to 76/x'4C'
the new task's time accounting value went into location 84/x'54', and then the 8-byte overlapping move copied the current value of location 80/x'50' into 76/x'4C' AND the new accounting value (from location 84/x'54') into location 80/x'50'.

then the previous accounting value was accumulated for whatever had been running before ... as well as for overall system activity. this was even done for entry/exit to/from the wait/idle state.
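
for anyone who hasn't stared at MVC semantics: the trick works because MVC moves its 8 bytes one at a time, left to right, so the old timer bytes at x'50'-x'53' land at x'4C'-x'4F' before the new accounting value at x'54'-x'57' gets copied over the timer. a small C re-enactment (purely illustrative):

#include <stdio.h>
#include <string.h>

/* byte-at-a-time, left-to-right copy -- the defined behavior of MVC */
static void mvc(unsigned char *dst, const unsigned char *src, int len)
{
    for (int i = 0; i < len; i++)
        dst[i] = src[i];
}

int main(void)
{
    unsigned char low_core[0x60] = {0};

    memcpy(&low_core[0x50], "OLD!", 4);   /* current location-80 timer value */
    memcpy(&low_core[0x54], "NEW!", 4);   /* new task's value, just stored at x'54' */

    mvc(&low_core[0x4C], &low_core[0x50], 8);

    printf("x'4C': %.4s (previous timer value, saved)\n", (char *)&low_core[0x4C]);
    printf("x'50': %.4s (timer now holds the new task's value)\n", (char *)&low_core[0x50]);
    return 0;
}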

the high-resolution 360/67 location 80 timer turned out to be a problem when doing the clone controller. initial tests of the reverse engineered channel interface board (for the interdata/3) resulted in the machine "red lighting". It turned out that the interface board was requesting that the channel obtain (and hold) the memory bus. Location 80 timer updates also required obtaining the memory bus ... and would leave an update pending (until the memory bus was free). However, if the next timer tic came in to update location 80 while there was already a pending location 80 timer memory update, it would raise a hardware error ("red light"). The channel interface board had to be modified to make sure that the channel released the memory bus on a regular basis so the location 80 timer updates could complete. misc. past posts mentioning clone controller
https://www.garlic.com/~lynn/submain.html#360pcm

in the move from cp67 to 370 & vm370 ... all the system accounting and activity measurements moved to the 370 64-bit cpu timer facilities (a separate hardware facility with its own instructions ... not involving storage updates). description of the cpu timer facility in the current principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/4.6.4?SHELF=DZ9ZBK03&DT=20040504121320

One of the things that appears to have happened in UNIX ... the TIC'ing paradigm for sampling activity (rather than actual measurement) 1) went to shorter and shorter intervals (to improve sampling accuracy on faster machines) and 2) continued to be done (even if the system was otherwise idle and nothing was running).
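
for contrast with the task-switch measurement above, a sketch (again my own illustration, not actual unix kernel code) of the tick-sampling approach: the timer interrupt charges a whole tick to whoever happens to be running, so accuracy is tied to the tick rate and the tick keeps firing even on an otherwise idle system.

#include <stdint.h>
#include <stddef.h>

#define TICK_USEC 10000u                /* e.g. a 100Hz tick: 10ms per sample */

struct task {
    uint64_t cpu_used;                  /* accumulated in whole-tick units */
};

/* called from the periodic timer interrupt; current is NULL when idle */
void timer_tick(struct task *current)
{
    if (current != NULL)
        current->cpu_used += TICK_USEC; /* whole tick charged to whoever was sampled */
    /* a task that always gives up the cpu just before the tick is never
       charged; one caught at every tick is over-charged -- hence the push
       to shorter intervals on faster machines, and the tick fires whether
       or not anything is running */
}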

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

big iron mainframe vs. x86 servers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: big iron mainframe vs. x86 servers
Newsgroups: bit.listserv.ibm-main
Date: Thu, 22 Oct 2009 13:48:17 -0400
jbaker314@COMPORIUM.NET (John P. Baker) writes:
Give that staggering number of financial transactions processed on a daily basis, over 90% of which is done on large-scale IBM mainframes, is it not strange that you have never heard of a mainframe virus? IBM RAS and IBM Security (whether implemented via IBM RACF, CA ACF/2, CA-Top Secret Security, or some other External Security manager (ESM)) is what keep these systems running.

the original mainframe tcp/ip implementation was done in pascal and never experienced any buffer-length related problems ... some past posts about modifying mainframe tcp/ip so that, instead of taking a 3090 processor to get 44kbytes/sec ... it used only a small part of a 4341 processor and got channel speed thruput
https://www.garlic.com/~lynn/subnetwork.html#1044

The majority of internet-related exploits and vulnerabilities during the 90s were buffer-length related ... associated with the C language programming environment .... misc. past posts
https://www.garlic.com/~lynn/subintegrity.html#overflow
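
a tiny illustration of the kind of buffer-length mistake involved (my own example; a length-carrying string type, as in the pascal implementation, makes this particular error hard to even express):

#include <stdio.h>
#include <string.h>

/* classic C buffer-length bug: no check that input fits the buffer */
void unchecked(const char *input)
{
    char buf[16];
    strcpy(buf, input);                 /* overruns buf if input is >= 16 bytes */
    printf("%s\n", buf);
}

/* the same operation with an explicit length check */
void checked(const char *input)
{
    char buf[16];
    if (strlen(input) >= sizeof buf)
        return;                         /* reject over-length input instead */
    strcpy(buf, input);
    printf("%s\n", buf);
}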

the attack percentage started to shift in this decade to transfer of network files containing malicious code that then got executed (either automatic execution or social engineering prompting execution). I've done some word occurrence analysis of internet threat & vulnerability reports ... and advocated that the reporting centers ask for categorization ... since the reports have been free-form, making them more difficult to categorize.

there were some viruses of this kind in the 70s & 80s on mainframes, both on the internal network ... some internal network past posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
and bitnet/earn
https://www.garlic.com/~lynn/subnetwork.html#bitnet

lots of the financial stuff grew up in mainframe batch ... some past references/discussions (this one from the linkedin greater ibm group)
https://www.garlic.com/~lynn/2009o.html#51 8 ways the American information worker remains a Luddite
and a slightly older one from a year ago
https://www.garlic.com/~lynn/2008p.html#27 Father of Financial Dataprocessing

some amount of the transactions started moving "online" during the 70s & 80s ... but would only be partially performed ... with the completion of the process still being performed in mainframe batch (in the overnight batch window). In the mid-90s, there were several large financial institutions that worked on leveraging massive numbers of parallel "killer micros" to implement straight-through processing for these online transactions (actually taking them to completion). The issue was that growing business and growing global business was putting extreme pressure on the overnight batch windows (more work & decreasing time). However, the parallelization technology they were using added two orders of magnitude of overhead (compared to the mainframe batch) ... completely swamping any anticipated thruput increase (several projects were billions into the efforts before taking any serious look at the speeds&feeds, and then declared success and abandoned the efforts).
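
(purely illustrative numbers: if a transaction needed on the order of a millisecond of processing in mainframe batch, two orders of magnitude of added overhead makes it roughly 100 milliseconds per transaction; spreading that across 100 parallel "killer micros" only gets back to where the batch processing already was, leaving nothing of the anticipated thruput increase.)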

On the other hand, there was a lot of mainframe clustering, continuous availability and disaster survivability work done in the 70s and early 80s ... that never made it out as product. For instance, at the hillgang user group meeting yesterday ... there was a presentation about new single-system-image cluster support for z/VM. We had done that in the mid-to-late 70s for the HONE system (world-wide online marketing and sales support) ... and in the early 80s, the US HONE datacenter in california was replicated in Dallas and then Boulder (three site, load-balancing, and fail-over). misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

Long ago and far away, my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture. She was responsible for peer-coupled architecture ... but because of very little response at the time (except for IMS hot-standby), she didn't stay long.
https://www.garlic.com/~lynn/submain.html#shareddata

Later we started HA/CMP product with rs/6000s for both availability and cluster scale-up:
https://www.garlic.com/~lynn/subtopic.html#hacmp
reference to Jan92 meeting on cluster scale-up
https://www.garlic.com/~lynn/95.html#13
and some old email
https://www.garlic.com/~lynn/lhwemail.html#medusa
however, within a month of the Jan92 meeting, the cluster scale-up work was transferred, we were told we couldn't work on anything with more than four processors ... and then there was an announcement for (JUST) the numerical intensive marketplace:
https://www.garlic.com/~lynn/2001n.html#6000clusters1
https://www.garlic.com/~lynn/2001n.html#6000clusters2
along with numerous others in these old posts:
https://www.garlic.com/~lynn/2001n.html#70
https://www.garlic.com/~lynn/2001n.html#83

While we were out marketing ha/cmp, I coined the terms geographic survivability and disaster survivability (to differentiate from disaster recovery). I was also asked to write a section for the corporate continuous availability strategy document ... but it got pulled after both Rochester and POK complained that they couldn't (then) meet the objectives. misc. past posts on availability
https://www.garlic.com/~lynn/submain.html#available

not long after we left, we were asked to consult with a small client/server startup that wanted to do payment transactions on their server; the startup had also invented this technology called SSL that they wanted to use ... the result is now frequently called electronic commerce. Part of that electronic commerce effort was something called a "payment gateway" (which we sometimes call the original SOA) ... which acted as a gateway between internet webservers and the payment infrastructure. That original implementation leveraged lots of the HA/CMP technology ... and included lots of stuff we had worked on for internet availability and security. misc. past payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

OpenSolaris goes "tic'less"???

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OpenSolaris goes "tic'less"???
Newsgroups: alt.folklore.computers
Date: Thu, 22 Oct 2009 14:18:58 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
When I saw the unix code in the mid-80s ... I commented that I had replaced nearly the same code in cp67 in 1968 (conjecture that cp67 traced the design back to ctss and that unix may have also traced back to ctss ... possibly by way of multics).

re:
https://www.garlic.com/~lynn/2009o.html#79
https://www.garlic.com/~lynn/2009o.html#80

one of the things the original cp67 did in its periodic run-around doing "stuff" (including frequently and repeatedly looking at each task ... something that scaled poorly as the number of logged on users went up) ... was accumulate the time spent in that activity. On that early/original CP67 this was called "OVERHEAD", and with 30 active users it would account for ten percent of processor time. Besides eliminating the tic'ing (including when absolutely nothing was going on) ... and moving to more straight-forward event related processing ... all that (increasingly non-linear, non-scalable) "OVERHEAD" was eliminated.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Excerpt from Digital Equipment co-founder's autobiography "Learn, Earn and Return"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Excerpt from Digital Equipment co-founder's autobiography "Learn, Earn and Return"
Newsgroups: alt.folklore.computers
Date: Thu, 22 Oct 2009 17:39:49 -0400
Excerpt from Digital Equipment co-founder's autobiography "Learn, Earn and Return"
http://www.networkworld.com/news/2009/102809-fbi-national-data-breach-law-would.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970

Opinions on the 'Unix Haters' Handbook'

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinions on the 'Unix Haters' Handbook'.
Newsgroups: alt.folklore.computers
Date: Thu, 22 Oct 2009 22:36:56 -0400
re:
https://www.garlic.com/~lynn/2009o.html#21 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#23 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#24 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#25 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#48 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#49 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#56 Opinions on the 'Unix Haters' Handbook'

Tues morning I heard talk on the radio about a program ... but then missed the broadcast.

The Warning
http://www.pbs.org/wgbh/pages/frontline/warning/

from above:
Amidst the 1990s' bullmarket, there was one lone regulator who warned about derivatives' dangers -- and suddenly became the enemy of some of the most powerful people in Washington...

... snip ...

and ...

Interview: Brooksley Born
http://www.pbs.org/wgbh/pages/frontline/warning/interviews/born.html

... and older articles

Greenspan Slept as Off-Books Debt Escaped Scrutiny
http://www.bloomberg.com/apps/news?pid=20601109&refer=home&sid=aYJZOB_gZi0I

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of the Commodity Futures Trading Commission, to study regulating over-the-counter derivatives. In 2000, Congress passed a law keeping them unregulated.

... snip ...

Apparently Born was fairly quickly replaced by Gramm's wife ... while Gramm got legislation through congress that precluded regulation ... and then Gramm's wife resigned and joined Enron's board:

Gramm and the 'Enron Loophole'
http://www.nytimes.com/2008/11/17/business/17grammside.html

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and Mr. Gramm's wife, Wendy, served on the Enron board, which she joined after stepping down as chairwoman of the Commodity Futures Trading Commission.

... snip ...

where she served on the audit committee

Phil Gramm's Enron Favor
https://web.archive.org/web/20080711114839/http://www.villagevoice.com/2002-01-15/news/phil-gramm-s-enron-favor/

from above:
A few days after she got the ball rolling on the exemption, Wendy Gramm resigned from the commission. Enron soon appointed her to its board of directors, where she served on the audit committee, which oversees the inner financial workings of the corporation. For this, the company paid her between $915,000 and $1.85 million in stocks and dividends, as much as $50,000 in annual salary, and $176,000 in attendance fees,

... snip ...

25 People to Blame for the Financial Crisis; Phil Gramm
http://content.time.com/time/specials/packages/article/0,28804,1877351_1877350_1877330,00.html

from above:
He played a leading role in writing and pushing through Congress the 1999 repeal of the Depression-era Glass-Steagall Act, which separated commercial banks from Wall Street. He also inserted a key provision into the 2000 Commodity Futures Modernization Act that exempted over-the-counter derivatives like credit-default swaps from regulation by the Commodity Futures Trading Commission. Credit-default swaps took down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

article from last spring ...

If You Think the Worst Is Behind Banks, Read This
http://www.fool.com/investing/general/2009/05/12/if-you-think-the-worst-is-behind-banks-read-this.aspx

from above:
Don't confuse what that's saying: In terms of losses and writedowns, the next 18 months are expected to be worse than the preceding 18 months.

... snip ...

past frontline program on Gramm, Gramm-Leach-Bliley Act (GLBA) and repeal of Glass-Steagall:

wall street fix
http://www.pbs.org/wgbh/pages/frontline/shows/wallstreet/

misc. past posts mentioning above:
https://www.garlic.com/~lynn/2009b.html#60 OCR scans of old documents
https://www.garlic.com/~lynn/2009b.html#73 What can we learn from the meltdown?
https://www.garlic.com/~lynn/2009b.html#80 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#0 Audit II: Two more scary words: Sarbanes-Oxley
https://www.garlic.com/~lynn/2009c.html#10 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#20 Decision Making or Instinctive Steering?
https://www.garlic.com/~lynn/2009c.html#29 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#36 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#38 People to Blame for the Financial Crisis
https://www.garlic.com/~lynn/2009c.html#42 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#44 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#55 Who will give Citigroup the KNOCKOUT blow?
https://www.garlic.com/~lynn/2009c.html#65 is it possible that ALL banks will be nationalized?
https://www.garlic.com/~lynn/2009d.html#10 Who will Survive AIG or Derivative Counterparty Risk?
https://www.garlic.com/~lynn/2009d.html#59 Quiz: Evaluate your level of Spreadsheet risk
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009d.html#77 Who first mentioned Credit Crunch?
https://www.garlic.com/~lynn/2009e.html#8 The background reasons of Credit Crunch
https://www.garlic.com/~lynn/2009e.html#23 Should FDIC or the Federal Reserve Bank have the authority to shut down and take over non-bank financial institutions like AIG?
https://www.garlic.com/~lynn/2009f.html#27 US banking Changes- TARP Proposl
https://www.garlic.com/~lynn/2009f.html#31 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#38 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#43 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#49 Is the current downturn cyclic or systemic?
https://www.garlic.com/~lynn/2009f.html#53 What every taxpayer should know about what caused the current Financial Crisis
https://www.garlic.com/~lynn/2009f.html#65 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#5 Do the current Banking Results in the US hide a grim truth?
https://www.garlic.com/~lynn/2009j.html#21 The Big Takeover

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970



