List of Archived Posts

2010 Newsgroup Postings (02/23 - 03/11)

LPARs: More or Less?
LPARs: More or Less?
LPARs: More or Less?
"Unhackable" Infineon Chip Physically Cracked - PCWorld
Adventure - Or Colossal Cave Adventure
What is a Server?
Need tool to zap core
Need tool to zap core
Need tool to zap core
Adventure - Or Colossal Cave Adventure
Need tool to zap core
Crazed idea: SDSF for z/Linux
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
search engine history, was Happy DEC-10 Day
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
search engine history, was Happy DEC-10 Day
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
What's with IBMLINK now??
What's with IBMLINK now??
paged-access method
search engine history, was Happy DEC-10 Day
Item on TPF
Unbundling & HONE
HONE Compute Intensive
whither NCP hosts?
SHAREWARE at Its Finest
What was old is new again (water chilled)
HONE & VMSHARE
SHAREWARE at Its Finest
What was old is new again (water chilled)
Need tool to zap core
SHAREWARE at Its Finest
Need tool to zap core
Why does Intel favor thin rectangular CPUs?
What was old is new again (water chilled)
More calumny: "Secret Service Uses 1980s Mainframe"
Need tool to zap core
Agile Workforce
Byte Tokens in BASIC
PCI tokenization push promising but premature, experts say
search engine history, was Happy DEC-10 Day
Boyd's Briefings
Need tool to zap core
PCI tokenization push promising but premature, experts say
Impact of solid-state drives
z9 / z10 instruction speed(s)
z9 / z10 instruction speed(s)
z9 / z10 instruction speed(s)
LPARs: More or Less?
LPARs: More or Less?
LPARs: More or Less?
z9 / z10 instruction speed(s)
search engine history, was Happy DEC-10 Day
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
z9 / z10 instruction speed(s)
IBM Plans to Discontinue REDBOOK Series
Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
More calumny: "Secret Service Uses 1980s Mainframe"
z9 / z10 instruction speed(s)
z9 / z10 instruction speed(s)
z9 / z10 instruction speed(s)
More calumny: "Secret Service Uses 1980s Mainframe"
LPARs: More or Less?
LPARs: More or Less?
z9 / z10 instruction speed(s)
search engine history, was Happy DEC-10 Day
Entry point for a Mainframe?
search engine history, was Happy DEC-10 Day
Entry point for a Mainframe?
Entry point for a Mainframe?
Entry point for a Mainframe?
NSF To Fund Future Internet Architecture (FIA)
LPARs: More or Less?
LPARs: More or Less?
LPARs: More or Less?
Madoff Whistleblower Book
Entry point for a Mainframe?
history of RPG and other languages, was search engine history
Entry point for a Mainframe?
Entry point for a Mainframe?
history of RPG and other languages, was search engine history
Entry point for a Mainframe?

LPARs: More or Less?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 23 Feb 2010 11:55:13 -0500
dba@LISTS.DUDA.COM (David Andrews) writes:
One of you Old Ones (and I'm thinking of Shmuel in particular) correct me on this, but didn't bare MVT have a horrendous core fragmentation issue? My poor recollection is that HASP initiators essentially reintroduced "partitions" to MVT to help beat that problem.

especially for long running jobs. Boeing huntsville had installed duplex (2-processor SMP) 360/67 for tss/360 ... when tss/360 ran into delivery problems ... they would run it as two (partitioned) 360/65s under os/360. Their workload was long-running 2250 (vector graphics) design applications which had enormous storage fragmentation issues.

To address that, they modified MVT (release 13) to build 360/67 page tables and run in virtual memory mode ... there were no page faults or page i/o supported ... the virtual memory mode was just used as a countermeasure to the significant storage fragmentation problem (using virtual address tables to re-arrange memory addresses to be contiguous).
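
The trick can be sketched in miniature: a page table lets scattered real frames appear as one contiguous address range, with no paging involved at all. This is a hypothetical illustration, not the Boeing MVT code; the 4K page size and all names are mine.

```python
# Hypothetical sketch: scattered real 4K frames are made to appear
# contiguous by pointing consecutive virtual pages at them.
# No paging is involved -- every virtual page maps to a resident frame.

PAGE = 4096

def build_page_table(scattered_frames):
    """Map virtual pages 0..n-1 onto the given (non-contiguous) real frames."""
    return {vpn: frame for vpn, frame in enumerate(scattered_frames)}

def translate(page_table, vaddr):
    """Virtual address -> real address via the page table."""
    vpn, offset = divmod(vaddr, PAGE)
    return page_table[vpn] * PAGE + offset

# A job sees one contiguous region even though its frames are scattered.
pt = build_page_table([7, 2, 9])          # real frames 7, 2, 9
assert translate(pt, 0)        == 7 * PAGE
assert translate(pt, PAGE + 4) == 2 * PAGE + 4
```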

Later I was brought in for a summer at boeing seattle as part of setting up BCS (boeing computer services) ... and put in a cp67 360/67 (simplex) in the 360/30 datacenter at corporate hdqtrs (boeing field) ... which had primarily been doing corporate payroll. That summer, the 2-processor 360/67 was also moved to Seattle from Huntsville.

When 370 was initially announced, there was no virtual memory support ... and one of the IBM SEs on the boeing account wondered what the cp67 (virtual machine & virtual memory) path was on 370. some 370s did have a sort of virtual memory (a little analogous to current LPARs) ... used for emulators ... which was a mode that had a base/bound flavor of (contiguous) virtual memory (i.e. virtual memory up to the "bound" limit, with all addresses "relocated" by the "base" value). The boeing account SE did a hack to cp67 that used the base/bound on 370s (pre virtual memory) ... it didn't do paging but would swap the whole virtual machine address space in/out.
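
The base/bound style of relocation described above fits in a few lines. This is a hypothetical sketch; the parameter names and error handling are mine, not the 370 emulator mode's.

```python
# Hypothetical sketch of base/bound "relocation": a guest address is valid
# only up to the bound limit, and is relocated by adding the base value.

def relocate(vaddr, base, bound):
    """Guest (contiguous) virtual address -> real address."""
    if vaddr >= bound:
        raise MemoryError("address exceeds bound")
    return vaddr + base

# Two guests can occupy disjoint contiguous regions of real storage;
# swapping a guest in/out just moves its whole region and changes base.
assert relocate(0x100, base=0x40000, bound=0x20000) == 0x40100
```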

also, somewhat analogous to the "preferred v=r guest" ... recent reference (in the v=r case, the addresses were contiguous and the virtual address was same as the real address):
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?

a few recent posts mentioning BCS, boeing huntsville, etc:
https://www.garlic.com/~lynn/2010c.html#89 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010c.html#90 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010c.html#91 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010d.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 23 Feb 2010 12:23:59 -0500
dcartwright@YMAIL.COM (David Cartwright) writes:
At Monsanto Europe in Brussels about 1976 I wrote some mods to VM/370 to defeat Shadow Page Tables for V=R machines so we could run MVS under VM/370 without a crippling overhead. I sent that code out into the world on some (Waterloo?) VM Mods tape, but my own copy got dumped in some move down the years. Wish I had it now, it would go really nicely on Herc.

re:
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?

the stuff done on csc/vm ... that leaked out to at&t, had been about the same time ... slightly earlier.

the design of the shadow page tables followed the semantics of the hardware "look-aside buffer". the virtual machine has page tables that translate virtual addresses to what it thinks are real addresses. However, these are actually virtual addresses for the virtual machine. So when VM runs a virtual machine ... in virtual memory mode ... it is actually run with "shadow page tables". Shadow page table entries start out all invalid. The virtual machine immediately page faults; vm then has to look at the (virtual) page tables (in the virtual machine) to translate from the virtual memory address to the virtual machine address ... vm then looks in its own page table to translate from the virtual machine address to the real machine address. It is this "real machine address" that is placed into the shadow tables.
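
The double translation on a shadow fault can be sketched as composing the two page tables. This is a hypothetical sketch with toy single-level tables (real 370 translation used segment plus page tables); all names are mine.

```python
# Hypothetical sketch: on a shadow-table fault, translate twice --
# guest page table (guest-virtual -> guest-"real"), then VM's page table
# (guest-real -> machine-real) -- and cache the composed entry so the
# hardware can use it directly next time.

def shadow_fault(vpn, guest_pt, host_pt, shadow_pt):
    guest_real = guest_pt[vpn]          # what the guest thinks is real
    machine_real = host_pt[guest_real]  # where VM actually put that page
    shadow_pt[vpn] = machine_real       # composed entry used by hardware
    return machine_real

shadow = {}                             # shadow entries start out all invalid
assert shadow_fault(3, {3: 10}, {10: 42}, shadow) == 42
assert shadow == {3: 42}                # later accesses hit this entry
```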

The early, low & mid range 370s had a single STO stack ... every time there was a change in the virtual address space pointer ... the hardware lookaside buffer was cleared and all entries invalidated. Early VM370's shadow table operation had a similar design, a single STO stack: every time the virtual machine changed the virtual address space pointer, all the shadow page table entries were cleared and invalidated. Moving from SVS to MVS significantly aggravated this ... because MVS was changing the virtual address space pointer at the drop of a hat (and vm370 was going thru massive overhead constantly invalidating the shadow page tables every time MVS reloaded CR1).

370/168 had a 7-entry STO stack: a seven entry LRU queue of the most recently used STO values. Each hardware look-aside buffer entry had a 3-bit tag ... it was either one of the 7 currently valid STO entries ... or invalid. MVS's constant reloading/changing of CR1 was mitigated on a real 168 by the 7-entry STO stack (loading a new value into CR1 didn't do anything if the value was already one of the seven values in the STO stack). It wasn't until vm370 release 5 with the sepp option that vm370 finally shipped something equivalent to a multiple STO-stack (i.e. multiple shadow page tables being kept for a single virtual machine ... to try and minimize having to constantly clear all shadow page table entries every time MVS fiddled with CR1).
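
The 168-style behavior can be sketched as an LRU map of STOs to their tagged look-aside entries. This is a hypothetical model only (the real hardware used 3-bit tags on each buffer entry); class and method names are mine.

```python
# Hypothetical sketch of a 7-entry STO stack: look-aside entries are tagged
# with the STO they belong to, so reloading CR1 with a recently used STO
# costs nothing; only evicting an STO from the LRU stack flushes its entries.

from collections import OrderedDict

class StoStack:
    def __init__(self, size=7):
        self.stack = OrderedDict()        # STO -> set of tagged entries
        self.size = size

    def tlb_insert(self, sto, vpn):
        self.stack[sto].add(vpn)          # entry tagged with its STO

    def load_cr1(self, sto):
        """Returns the number of look-aside entries invalidated."""
        if sto in self.stack:
            self.stack.move_to_end(sto)   # already one of the seven: no flush
            return 0
        flushed = 0
        if len(self.stack) >= self.size:  # evict LRU STO, flush its entries
            _, entries = self.stack.popitem(last=False)
            flushed = len(entries)
        self.stack[sto] = set()
        return flushed

tlb = StoStack()
for s in range(7):
    tlb.load_cr1(s)
tlb.tlb_insert(0, 0x10)          # one cached entry tagged with STO 0
assert tlb.load_cr1(3) == 0      # already one of the seven: nothing flushed
assert tlb.load_cr1(99) == 1     # an eighth STO evicts LRU STO 0, flushing it
```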

The demise of FS saw a big need to get products back into the 370 product pipeline quickly. 3033 was one such effort ... take the 370/168 logic and remap it to slightly faster chips. There was also some activity to introduce some purely MVS microcode performance assists on 3033 ... one such involved cross-memory services. One of the issues with 3033 and cross-memory services was that the 3033 still had the 370/168 design with the 7-entry STO stack ... and cross-memory services was significantly increasing the number of STOs being used ... overrunning the seven entries ... with a corresponding big increase in look-aside buffer entry flushing (which netted out to worse performance; somewhat analogous to the shadow page table flushing that VM was constantly being forced to do).

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

"Unhackable" Infineon Chip Physically Cracked - PCWorld

From: lynn@garlic.com (Lynn Wheeler)
Date: 23 Feb, 2010
Subject: "Unhackable" Infineon Chip Physically Cracked - PCWorld
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010d.html#34 "Unhackable" Infineon Chip Physically Cracked

some past posts mentioning AADS chip strawman
https://www.garlic.com/~lynn/x959.html#aadsstraw

I had earlier claimed that I could effectively do the major features required of TPM in the much simpler AADS chip strawman w/o any changes.

some amount of the stuff related to the AADS chip strawman is described in these patents
https://www.garlic.com/~lynn/aadssummary.htm

... oh well

Black Hat Cracks Infineon SLE 66 CL PE Security Microcontroller
http://secureprocessing.wordpress.com/tag/electron-microscope/

from above:
The article discusses generally the methodology used by Tarnovsky to reverse-engineer the security IC. It includes a painstaking electron microscopic examination of the device (presumably with captured images), followed by insertion of micro-probes into the data busses.

... snip ...

Los Gatos had pioneered use of electron microscope for chip analysis in conjunction with debugging blue iliad.

Christopher Tarnovsky hacks Infineon's 'unhackable' chip, we prepare for false-advertising litigation
http://www.engadget.com/2010/02/12/christopher-tarnovsky-hacks-infineons-unhackable-chip-we-pre/

from above:
"Unless you have an electron microscope, small conductive needles to intercept the chip's internal circuitry, and the acid necessary to expose it." Those are some of the tools available to researcher Christopher Tarnovsky, who perpetrated the hack and presented his findings at the Black Hat DC Conference earlier this month.

... snip ...

Infineon Chip's Weakness Discovered:
http://www.securitypronews.com/insiderreports/insider/spn-49-20100203InfineonChipsWeaknessDiscovered.html

from above:
Christopher Tarnovsky, who works for Flylogic Engineering, employed electron microscopy to achieve the feat. Tim Wilson reports, "Using a painstaking process of analyzing the chip, Tarnovsky was able to identify the core and create a 'bridge map' that enabled the bypass of its complex web of defenses,"

... snip ...

There is the value of the attack (possibly fraud ROI for the attacker). One of the issues is whether the security paradigm employs a global shared-secret (that is in all chips) or a unique secret per chip ... as well as whether the infrastructure is online or offline (things that might possibly limit the fraudulent financial return as a result of the attack).

This can show up as the security proportional to risk theme ... if it costs the attacker $100,000 to attack the system ... and the possible fraud might be at least $100,000 ... then there may have to be additional compensating processes to act as a deterrent.

old usenet paper from 1999 describing attacks

Design Principles for Tamper-Resistant Smartcard Processors (including focused ion beam)
http://www.usenix.org/events/smartcard99/full_papers/kommerling/kommerling_html/

there is some issue that in the TPM form (i.e. on a PC motherboard), the chip might be used to protect DRM shared-secrets ... making the compromise scope more than a single chip, machine, or account.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Adventure - Or Colossal Cave Adventure

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adventure - Or Colossal Cave Adventure
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 23 Feb 2010 16:10:15 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
from above, URL for "Crowther's original source code for Adventure (as recovered from Don Woods's student account at Stanford)" (mar1977):
http://jerz.setonhill.edu/if/crowther/


I believe the above is relatively close to the original vm/cms fortran that I got not long later (although I don't have a copy so I can't be absolutely sure).

the earliest PLI version I know of was done by somebody at STL ... who I had provided a copy of the fortran version.

the folklore is that adventure use at STL reached such a level ... that management issued an edict that there was a 24hr amnesty period, after which anybody caught playing adventure during work would be severely disciplined.

there was a period when they wanted to change all the internal vm370 logon screens to include a note about use for work related activities only. there was a big push by a few to get it changed to say for management approved activities only .... a small distinction ... but it would allow playing games as a management approved activity.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What is a Server?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is a Server?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Wed, 24 Feb 2010 13:14:12 -0500
charlesm@MCN.ORG (Charles Mills) writes:
Yes, I know everyone is defensive because a fad usually referred to as "client/server" cost a lot of us our jobs about twenty years ago, but it's time to move on. "Server" is not the enemy; server is a wonderful part of computer system architecture.

in the very early days of SNA (a master/slave control infrastructure for humongous numbers of dumb terminals), my wife had co-authored peer-coupled network architecture (AWP39; for slight reference, APPN was AWP164) ... which the SNA group appeared to find threatening.

Later, when she was con'ed into going to POK to be in charge of loosely-coupled architecture ... and did Peer-Coupled Shared Data architecture ... some past posts
https://www.garlic.com/~lynn/submain.html#shareddata

she was having constant battles with the SNA organization ... there would be sporadic temporary truces, where she was allowed to use anything within the datacenter walls ... but SNA had to be used for anything crossing the machine room walls (also there was much more focus on tightly-coupled multiprocessors during the period ... so except for IMS hotstandby, her architecture didn't see much uptake until sysplex).

When the PC was announced ... the communication group's use of terminal emulation contributed significantly to early uptake of PCs (you could get a PC for about the same price as a 3270 terminal and, in a single desktop footprint, get terminal emulation as well as some local computing capability; it was a no brainer for businesses that already had 3270 terminal justification).

During this period, the communication group acquired quite a large terminal emulation install base ... but as PCs became more powerful ... there was more and more demand for more sophisticated operation.

Unfortunately the communication group was strongly defending their terminal emulation install base. In this period, we had come up with 3-tier architecture and were out pitching it to customer execs (and taking some amount of flak from the communication group), some amount of past posts mentioning 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

The disk division was also trying to address the opportunity with several products that would allow the mainframe to play major roles in the distributed processing world ... however, as my wife had earlier encountered ... the communication group would escalate to corporate and roadblock the disk group's efforts ... with the line that the communication group owned anything that crossed the walls of the machine room.

In the mean time, the terminal emulation paradigm was starting to represent an enormous stranglehold on the mainframe datacenter ... and data was starting to leak out to more user friendly platforms. The disk division was starting to see it (data leaking out of the mainframe datacenter) creep up into (low) double digit loss per annum. some topic drift, misc. past posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

At one point, a senior engineer from the disk division got a talk scheduled at the annual worldwide communication group conference. He opened the talk with the claim that the head of the communication group was going to be responsible for the demise of the disk division (because the stranglehold the communication group had on the mainframe datacenter, cutting it off from being used in more powerful ways, was accelerating the rate at which data was leaking out of the datacenter to other platforms). misc. past posts mentioning terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

this was a major factor in the disk division funding other kinds of things ... circumventing the communication group politics ... funding somebody else's product that would use mainframe disks in a much more effective way ... side-stepping communication group roadblocking of disk division product announcements. recent reference:
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?

a different trivial example was that the internal network was larger than the arpanet/internet from just about the beginning until possibly late 85 or early 86. a big explosion in the size of the internal network came in the late 70s and early 80s with lots of vm/4341 machines. Going into the mid 80s, the customer mid-range market was moving to workstations and large PCs (this can be seen in the big drop off in both 43xx sales and dec vax sales in the period). A big factor in the size of the internet overtaking the internal network ... was that workstations and PCs were appearing as network nodes (again because of the increasing size and power of the machines) ... while on the internal network, such machines were still being restricted to "terminal emulation". misc. past posts mentioning internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Wed, 24 Feb 2010 14:47:30 -0500
chrismason@BELGACOM.NET (Chris Mason) writes:
If it's a 360 Model 40, there are some nice tactile switches it's a pleasure to flip on the front of the machine.

If it's a 360 Model 30 there are some tacky dials on the front of the machine.

I'm afraid those were the only two models with *core* with which I ever got to play - and change data in storage!


some front panels
http://infolab.stanford.edu/pub/voy/museum/pictures/display/3-1.htm
http://infolab.stanford.edu/pub/voy/museum/pictures/display/FAA9020.html
https://en.wikipedia.org/wiki/File:IBM360-65-1.corestore.jpg
http://ibmcollectables.com/gallery/FabriTek65/HPIM0775
http://ibmcollectables.com/gallery/FabriTek65/HPIM0769_001
http://ibmcollectables.com/gallery/FabriTek65/HPIM0771

I got to play a lot with both 360/30 and then a 360/67 (front panel of 65 & 67 were essentially the same).

there was an incident with 370, before virtual memory was announced, where some virtual memory documents leaked to the press. there was a "watergate-like" investigation ... and then they went around putting serial numbers on the underside of the glass in all corporate copy machines ... so all copied pages would carry the serial number of the copy machine they were made on.

for Future System ... there was an idea to do softcopy "DRM" to minimize the leakage of documents. The vm370 development group did an extra-secure version of vm370 that was used inside the corporation for future system documents (they could only be "read" on a 3270 display).

One weekend, I had some dedicated machine time scheduled in the vm370 development group machine room ... and stopped by friday afternoon to make sure everything was prepared. they took me into the machine room ... and made some reference that even if I was left alone in the machine room, I wouldn't be able to access the FS documents.

It was just a little too much. i made sure the machine was disabled for login from all terminals ... and then did a one-byte patch to kernel memory ... and then everything was available (aka the one byte was in the password checking routine ... so that regardless of what was typed in, it would be accepted as a valid password).

i made some reference that the only countermeasure (against somebody with physical access) is completely disabling all mechanisms for compromising the operation of the system.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Wed, 24 Feb 2010 17:51:52 -0500
scott@slp53.sl.home (Scott Lurndal) writes:
I was told this was because of the DOJ anti-trust investigation. Is that what your 'watergate-like' refers to?

re:
https://www.garlic.com/~lynn/2010e.html#6 Need tool to zap core

the one i remember about the DOJ case was on document retention ... including computer printouts. there was a period when POK was starting to store the overflow in offices ... and one bldg was losing something like five offices/week to document retention (walking down a hall where everybody had been moved out, some of the offices completely filled with boxes and piles of paper) ... and there were starting to be issues with bldg. floor loading.

later i heard some reference to delivery of documents to DOJ ... they were scheduling freight trains with large numbers of boxcars filled with paper.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main
Date: 24 Feb 2010 19:18:36 -0800
glen.manages.mvs@GMAIL.COM (Glen Gasior) writes:
I had to call my dad to find out what was meant by core.

what is old is new again:
http://www.abc.net.au/science/articles/2010/02/24/2828135.htm
2010 International Conference On Nanoscience and Nanotechnology
http://www.ausnano.net/iconn2010/

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Adventure - Or Colossal Cave Adventure

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adventure - Or Colossal Cave Adventure
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 09:16:59 -0500
john.kington@CONVERGYS.COM (John Kington) writes:
WYLBUR was used at University of Cincinnati in the mid 80's when I was learning to program. Much better for an imperfect typist like me than the ancient keypunch machines that was the other alternative. I never heard of it being used in a commercial environment though.

... evolution of wylbur at nih
http://datacenter.cit.nih.gov/interface/interface206/if206-01.htm
also
http://datacenter.cit.nih.gov/interface/interface241/40years.html

we had done a customer call on NLM in the 90s. NLM was a sort of home-brew CICS system from the 60s with BDAM ... a couple of the guys responsible were still around and we got to gossip about the 60s (as an undergraduate in the 60s, the univ had an ONR grant to do a computer catalog and was selected as beta-test for the original CICS; one of my tasks was debugging CICS). misc. past posts mentioning cics &/or bdam
https://www.garlic.com/~lynn/submain.html#bdam

By the early 80s, NLM had reached the point where web search engines got nearly two decades later ... the number of items was so large that queries returned hundreds of thousands of items.

A new kind of interface, originally done on the Apple, called GratefulMed ... instead of getting back the actual responses, it got back the number of responses. GratefulMed managed query strategies ... the holy grail was a query that returned more than zero and less than a hundred (boolean logic queries tended to be bimodal out around six or seven tokens, switching from hundreds of thousands to zero).
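
The strategy can be sketched as a loop over hit counts. This is a hypothetical sketch; the term-dropping/adding heuristic and toy corpus are mine, not the actual GratefulMed logic.

```python
# Hypothetical sketch: the server returns only a hit *count*, and the
# client adds or drops query terms until the count lands in a usable
# window (more than zero, fewer than a hundred).

def refine(count_hits, terms, extra, lo=1, hi=99):
    """Adjust the AND-ed term list until the hit count falls in [lo, hi]."""
    while True:
        n = count_hits(terms)
        if lo <= n <= hi:
            return terms, n
        if n == 0 and len(terms) > 1:
            terms = terms[:-1]              # too narrow: drop the last term
        elif n > hi and extra:
            terms = terms + [extra.pop(0)]  # too broad: AND another term
        else:
            return terms, n                 # no better query available

# Toy corpus: count_hits ANDs terms over per-document keyword sets.
docs = [{"heart"}] * 150 + [{"heart", "valve"}] * 40
count = lambda ts: sum(all(t in d for t in ts) for d in docs)
terms, n = refine(count, ["heart"], ["valve"])
assert terms == ["heart", "valve"] and n == 40
```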

past posts in this thread:
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#64 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#67 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#68 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#75 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#77 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#82 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#4 Adventure - Or Colossal Cave Adventure

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 09:35:16 -0500
mike@MENTOR-SERVICES.COM (Mike Myers) writes:
Back in the '60s, the Field Engineering Division took over first-level support of OS/360, creating a new kind of Customer Engineer called a Program Support Representative (PSR). Their primary role was to examine a dump and determine if the problem was hardware or software related. If hardware, they would turn it over to a hardware customer engineer. If software, then they could attempt to fix or bypass the issue with a zap, if feasible. If not, then they would report it to development and try to work a temporary fix.

in the very early days of REX (before it was renamed and released to customers) ... I wanted to demonstrate that it wasn't just another pretty scripting language.

I selected that I would redo the IPCS dump reader (implemented in a large number of assembler LOCs) ... taking less than six weeks of my time over under three months ... it would have ten times the function and run ten times faster (some sleight of hand to make the rex implementation, doing ten times the function, run ten times faster than the assembler). part of the effort was to gather signatures of failure modes ... and build a library of automated scripts that would examine dumps for all known, recognizable failure-mode signatures.

I also made it capable of running against the live kernel as well as patching kernel storage.

For some reason I was never able to get it released as a replacement for the standard product ... but at one point nearly all internal datacenters and PSRs were using it.

Getting tired of waiting for approval for it ever to be released, I managed to get a presentation approved for BAYBUNCH ... where I went into detail on how I did the implementation. Within three months after that presentation ... there were at least two other implementations.

misc. past posts mentioning DUMPRX.
https://www.garlic.com/~lynn/submain.html#dumprx

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Crazed idea: SDSF for z/Linux

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Crazed idea: SDSF for z/Linux
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 12:41:17 -0500
John.McKown@HEALTHMARKETS.COM (McKown, John) writes:
This just occurred to me. I wonder if I'm suffering from lack of oxygen to the brain. But, as best as I can tell, SDSF is capable of accessing the SPOOL files for a non-active JES2 system. At least as I recall from the past, I did this. So I got to wondering. Suppose I have a z/Linux system running in the same complex. Perhaps under z/VM. It might be nice (FSVO nice), if I could logon to z/Linux and do SDSF ad least to the extent of being able to read SPOOL files. Of course, being a bit paranoid, I would only allow a READONLY access to the DASD containing the SPOOL data. And there is always the specter of security. There may be SPOOL files which I should not be able to even READ (like payroll or HIPAA reports or ...). So this may be a stupid idea. But the though is intriguing to me.

there had been significant enhancement of cms os/simulation ... including handling pretty much all (both r/o and r/w) os-formatted disks & files (the cms os/simulation had been less than 64kbytes of code ... but there were comments that it was really cost-effective compared to the size of the MVS os/simulation). However, this was before the shutdown of the vm370 group and their move to POK to support mvs/xa development (the person responsible left the company and remained in the boston area) ... and the enhancements appeared to evaporate.

A couple years later I was getting heavily involved in redoing assembler implementations in more appropriate languages ... recent reference to doing DUMPRX in rexx
https://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core

I also redid a spool file system implementation in pascal running in a virtual machine. The los gatos lab had done the original mainframe pascal implementation for vlsi tool implementation ... that pascal went thru several product releases, starting as an IUP before evolving into a program product.

That pascal was also used to implement the original mainframe tcp/ip product (and suffered from none of the buffer length exploits that are common in C language implementations). recent reference to that tcp/ip implementation:
https://www.garlic.com/~lynn/2010d.html#72 LPARs: More or Less?

My full (spool file system) implementation moved the complete system spool file implementation into virtual address space ... however, I packaged subset of the routines as independent utilities for doing spool file diagnostic (on normal systems). It is actually relatively straight-forward activity.

Total aside, part of the issue was that the internal networking technology ran thru the spool system (implemented as a service virtual machine, or virtual appliance) ... and for various implementation reasons would only get 20kbytes-40kbytes/sec sustained thruput (before controller caches, etc). In HSDT, some past posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

I needed multi-megabyte sustained thruput ... so I was having to add all sorts of things like lazy writes, contiguous allocation, multi-buffering, read-ahead, etc (however, lazy writes still required logged operation ... either they completed or had to be redone) ... total aside, some of this sort of thing recently shows up in enhancements for the ext4 filesystem ... minor reference to google upgrading to ext4:
http://arstechnica.com/open-source/news/2010/01/google-upgrading-to-ext4-hires-former-linux-foundation-cto.ars
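The lazy-write-plus-log requirement above (either the deferred write completes, or it can be redone) can be sketched with a toy redo log; all names here are illustrative, this is not the actual spool implementation:

```python
class LazySpoolWriter:
    """Toy model of lazy writes backed by a redo log: an update is
    appended to the log (which would be forced to disk) before the
    caller sees completion, so the deferred in-place data write can
    always be redone after a crash."""
    def __init__(self):
        self.log = []        # durable redo log (forced before the ack)
        self.disk = {}       # spool blocks actually written in place
        self.dirty = {}      # lazily-written blocks, still memory-only

    def write(self, block, data):
        self.log.append((block, data))   # log record first
        self.dirty[block] = data         # defer the real write
        return "ack"                     # caller sees completion now

    def flush(self):                     # background lazy writer
        self.disk.update(self.dirty)
        self.dirty.clear()
        self.log.clear()                 # updates now durable in place

    def recover(self):                   # after a crash: redo from log
        for block, data in self.log:
            self.disk[block] = data
        self.log.clear()
```

The point of the design is that the ack can be returned at memory speed while the in-place write is batched for contiguous allocation and multi-buffering.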

HSDT was also having some vendors, on the other side of the pacific, build some hardware. The friday afternoon before a vendor visit trip, the communication group distributed an announcement for a new "high-speed" discussion group in an internal forum ... with the following definitions:

low-speed: <9.6kbits
medium-speed: 19.2kbits
high-speed: 56kbits
very high-speed: 1.5mbits

monday morning in the vendor conference room on the other side of the pacific:

low-speed: <20mbits
medium-speed: 100mbits
high-speed: 200-300mbits
very high-speed: >600mbits

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 13:06:45 -0500
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
IBM was using BSL for OS/360, especially TSO. Nor was IBM the first to use a HLL to write a commercial operating system; Burroughs wrote MCP in various flavors of Algol.

some of the 7094/ctss people went to the science center on 4th flr of 545 tech sq (cp40/cms on a specially modified 360/40 with virtual memory hardware, morphed into cp67/cms when standard virtual memory became available on 360/67, invented GML ... which later morphs into SGML, HTML, XML, etc). others went to 5th flr of 545 tech sq and implemented multics ... all done in PLI. multics references:
https://www.multicians.org/multics.html

multics was the first to release a commercial relational database product, in 1976.

air force did a security study of multics in early 70s. a few years ago, ibm research did a paper revisiting the Multics security study ... recent post in this mailing list on the subject:
https://www.garlic.com/~lynn/2010b.html#97 "The Naked Mainframe" (Forbes Security Article)

a couple recent posts in other threads mentioning HLL:
https://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux

one of the things mentioned in Jim's departing "MIP Envy" was that the PLS group had been dissolved during the FS years ... and when reconstituted it only supported PLS on MVS
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure

... which made it difficult for us doing System/R
https://www.garlic.com/~lynn/submain.html#systemr

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Thu, 25 Feb 2010 13:59:11 -0500
Michael Wojcik <mwojcik@newsguy.com> writes:
As for programming languages that were actually meant to be used for real work, there are many sites devoted to determining which was worst, and those show plenty of languages substantially worse than RPG by any reasonable measure. (Obviously "worst programming language" is ultimately a subjective question, but there are grounds for agreement by reasonable observers.)

MUMPS seems to get a lot of votes as one of the worst languages.


but the language & system have hung on for quite awhile at various medical locations ... like VA hospitals; it offers arbitrary relations between any two items (many-to-many) ... something much more difficult to achieve in an RDBMS.
https://en.wikipedia.org/wiki/MUMPS

when we were doing ha/cmp ... we subcontracted a lot of work to a small startup (it grew to a couple hundred people) started by a couple people who had been working at project athena ... plus one of the people that had been involved in cp40 & cp67 (later was head of part of FS and then retired; at the time was also director of MGH/Harvard medical school dataprocessing). misc. past posts mentioning ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 14:18:05 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
some of the 7094/ctss people went to the science center on 4th flr of 545 tech sq (cp40/cms on a specially modified 360/40 with virtual memory hardware, morphed into cp67/cms when standard virtual memory became available on 360/67, invented GML ... which later morphs into SGML, HTML, XML, etc). others went to 5th flr of 545 tech sq and implemented multics ... all done in PLI. multics references:
https://www.multicians.org/multics.html


re:
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

for additional language trivia ... the boston programming center was on the 3rd flr (until the cp67 development group split off from the science center, morphed into the vm370 development group and absorbed the 3rd flr along w/all of the boston programming center; this was before outgrowing the 3rd flr and moving into the bldg. out in burlington mall that had been vacated when SBC was transferred to CDC).

Jean Sammet was in the boston programming center ... ref
https://en.wikipedia.org/wiki/Jean_E._Sammet

from above:
She was also a member of the subcommittee which created COBOL. Sammet was president of the ACM from 1974 to 1976.

... snip ...

and BPC also had produced CPS ... an online interactive system that supported PLI and BASIC and ran under os/360. CPS also had a special microcode performance assist for the 360/50 (when BPC was absorbed by the vm370 group, they did a version of CPS that ran in CMS ... somewhat akin to the work that was done on apl\360 for cms\apl; somewhere in boxes I have hardcopy of the CPS/CMS description).

old post referencing that some amount of CPS work had been subcontracted(?) to Allen-Babcock
https://www.garlic.com/~lynn/2008s.html#71 Is SUN going to become x86'ed??

reference:
http://www.bitsavers.org/pdf/allen-babcock/cps/

The above allen-babcock (IBM) documents reference other ibm documents by N. Rochester (who was also at the boston programming center when it was absorbed by the vm370 group).

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Thu, 25 Feb 2010 15:24:25 -0500
Michael Wojcik <mwojcik@newsguy.com> writes:
That was CLaM Associates? I think all those people had left IBM ACIS when I started there.

re:
https://www.garlic.com/~lynn/2010e.html#13 search engine history, was Happy DEC-10 Day

"C" had been at the science center in the 60s ... and was no longer there when I joined the science center in the 70s (was then also director of MGH/Harvard medical school dataprocessing).

"L" had been with ACIS over at project athena ... ibm & dec both funded project athena equally ... and both had an assistant director at project athena (for a time, the ibm assistant director at project athena was CAS ... who had invented the compare&swap instructure when at the science center), we periodically did audit/review of Project athena stuff (one week we were there, sat thru working out how kerberos would implement cross-domain support; i've recently mentioned that couple years ago sat thru a SAML implementation presentation and the SAML message flows looked identical to kerberos cross-domain). I have some memory that he was involved in the X-window 8514 display driver support in AOS ... both the PC/RT and the romp co-processor board that went into PS2.

"M" had previously been at the science center.

Science center had moved from 545tech sq ... down the street to 101 Main. When the science center was dissolved ... CLaM moved into the space vacated by the science center.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 25 Feb 2010 15:31:40 -0500
gahenke@GMAIL.COM (George Henke) writes:
I believe I read somewhere, a UNIX book, that multics was the forerunner of UNIX and inspired its creator at Bell Labs

re:
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010d.html#83 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

I'd commented in the 80s about tripping across some unix scheduler code that looked very similar to code in "release 1" cp67 that I had completely replaced as an undergraduate in the 60s ... possibly both inherited from something in CTSS (cp67 directly from ctss, and unix possibly indirectly by way of multics; DR may comment on this over in a.f.c.)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Fri, 26 Feb 2010 00:09:25 -0500
On 02/25/2010 09:21 PM, Clark Morris wrote:
Would Multics have been a good base for a modern operating system? Was killing Multics a mistake?

re:
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

well with multics on the 5th flr and virtual machines on the 4th flr ... there was a modicum of competition ... and i'm a little biased.

one of my hobbies in the past was doing my own product distribution ... at one point I had as many csc/vm internal installations as the total number of multics installations.

later on the kick about HLL ... recent refs:
https://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

i did it as theme for internal advanced technology conference (mar82)
https://www.garlic.com/~lynn/96.html#4a

... above included talk on running CMS applications under MVS ... also mentioned here
https://www.garlic.com/~lynn/2010d.html#66 LPARS: More or Less?

the idea was to do a micro-kernel base ... in a higher level language ... like some flavor of pascal ... a small focused effort, possibly starting with some existing assembler base and recoding it into another language.

cp67 had started as a pretty clean micro-kernel. one of the issues with vm370 and later flavors was that there were an increasing number of traditional operating system developers working on the platform ... and it started to take on characteristics of traditional operating system bloat. Hardware microcode engineers (as developers) always seemed to be better at preserving the microkernel paradigm.

So, one of the things that happened when tss/360 was decommitted was that the support/development group was radically reduced ... which tended to be reflected in a reduction in implementation bloat (aka at one point tss/360 supposedly had something approaching 1100-1200 people at a time when cp67/cms had 11-12 people). After awhile tss/370 was starting to take on some interesting characteristics. Something that further contributed to this was a project for AT&T to do a stripped-down tss/370 kernel (SSUP) with a unix api layered on top (also presented at the mar82 conference).

In any case, there was an activity to compare the size/complexity of the tss/370 ssup against vm370 ... as a candidate for a new kernel base ... converted to higher level language. a small piece of the analysis (vm/sp against the tss/370 kernel):


TSS versus VM/SP

                TSS     VM/SP
modules         109     261
LOC             51k     232k

... snip ...

other pieces of analysis:
https://www.garlic.com/~lynn/2001m.html#53

At one point there was a corporate get-together in the kingston plantsite cafeteria ... it was supposed to be a "VM/XB" meeting (play on next generation after XA); the cafeteria people misheard and put up a sign that said "ZM/XB meeting". The problem was that the corporation then decided it was going to be strategic ... and it grew into a couple hundred people writing specs (a FS moment?) ... an old email
https://www.garlic.com/~lynn/2007h.html#email830527
in this post
https://www.garlic.com/~lynn/2007h.html#57

above email also makes reference that i was sponsoring Boyd's briefing at ibm ... recent posts mentioning Boyd
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
other past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd1
misc. URLs from around the web mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

misc other past posts mentioning "ZM" investigation/effort
https://www.garlic.com/~lynn/2000c.html#41
https://www.garlic.com/~lynn/2001.html#27
https://www.garlic.com/~lynn/2001l.html#25
https://www.garlic.com/~lynn/2001n.html#46
https://www.garlic.com/~lynn/2002l.html#14
https://www.garlic.com/~lynn/2003e.html#56

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Fri, 26 Feb 2010 11:08:46 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
idea was to do a micro-kernel base ... in higher level language ... like some flavor of pascal ... small focused effort possibly starting with some existing assembler base and recoding into another language.

re:
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

this post mentions mainframe pascal originally done at the los gatos lab for vlsi tool development
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux

One of the two people responsible (for the original mainframe pascal) then leaves to do a startup for a clone 3270 controller; they were figuring that TSO response was so horrible (especially compared to CMS) that they could try to offload some amount of the TSO operations into the controller ... to try and improve the appearance of interactive response ... selling into the mainframe TSO market (I used to drop in periodically to see how they were doing). It never caught on ... and the person then shows up as VP of software development at MIPS. After SGI buys MIPS, he shows up as general manager of the business unit responsible for JAVA (pretty early in JAVA's life).

recent posts discussing some of this
https://www.garlic.com/~lynn/2010c.html#29 search engine history, was Happy DEC-10 Day

as well as this earlier post in this thread (GREEN, DOE/SPRING, etc)
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer

and old post with bits about (DOE/SPRING) "A Client-Side Stub Interpreter"
https://www.garlic.com/~lynn/2001j.html#32 Whom Do Programmers Admire Now???

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What's with IBMLINK now??

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: What's with IBMLINK now??
Newsgroups: bit.listserv.ibm-main
Date: 26 Feb 2010 08:52:42 -0800
John_J_Kelly@AO.USCOURTS.GOV (John Kelly) writes:
Here's my response form IBM FeedBack about it. I find that it goes away after a while but happens mostly with Firefox. When I get it, I go to IE and get in OK. I had an offline line email from someone else who's had the problem and they accepted the site and apparently got in.

it isn't so much that it is an invalid certificate ... it is an incorrect certificate. the whole point is that the domain name in the certificate is supposed to correspond to the URL that the browser is using. browsers have some rules about wild-card (fuzzy) matching between what is in the certificate and the URL they are using ... in general, the domain name has to EXACTLY match the URL ... or, for a wild-card, the trailing part (in the certificate) has to match the corresponding field in the URL used by the browser.
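A rough sketch of that exact-or-wildcard matching rule (simplified; the full browser rules, e.g. in RFC 6125, also cover certificates with multiple names, internationalized names, etc.):

```python
def cert_matches(cert_name: str, url_host: str) -> bool:
    """Match a certificate's domain name against the host from the
    browser's URL: exact match, or a '*.' wildcard whose trailing
    part matches and which covers exactly one leading label."""
    cert_name = cert_name.lower()
    url_host = url_host.lower()
    if not cert_name.startswith("*."):
        return cert_name == url_host      # exact match required
    suffix = cert_name[1:]                # e.g. ".ibm.com"
    head, sep, tail = url_host.partition(".")
    # one leading label, and the trailing part must match the suffix
    return sep == "." and ("." + tail) == suffix
```

So a "*.ibm.com" certificate covers "www.ibm.com" but not "ibm.com" itself ... which is one common way an "incorrect certificate" warning like the one described above gets triggered.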

long ago and far away, we were brought in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had invented this technology called SSL that they wanted to use (the result is now frequently called electronic commerce). As part of the effort, we had to do some in-depth review of the protocol and browser operation ... as well as business process walkthrus with some of the new operations calling themselves Certification Authorities. misc. past posts about ssl digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

it turns out that there were several security assumptions about how all the pieces actually fit together and worked ... in some number of cases, some of those security assumptions were almost immediately violated (which can be considered at the root of some number of current compromises).

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What's with IBMLINK now??

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: What's with IBMLINK now??
Newsgroups: bit.listserv.ibm-main
Date: 26 Feb 2010 09:21:46 -0800
mpace58@GMAIL.COM (Mark Pace) writes:
Same here. Working as expected with FF.

re:
https://www.garlic.com/~lynn/2010e.html#19 What's with IBMLINK now??

one issue is that people may be actually going to different machines/gateways

I've seen it with "ibm greater connection" ... there are various webhosting services ... with multiple physical locations around the world ... where the connection is directed to the "closest" facility. Lots of big corporations will outsource some part of their operation to such a facility (in part because they have these massive operations at several places around the planet).

at various times when something is going on ... i've had SSL certificates come back from the underlying webhosting facility ... rather than the SSL certificate for the ibm server that I'm trying to connect to.

there are lots of tricks played mapping a URL to multiple different physical pieces of hardware ... (like load balancing, server with the least number of internet hops, etc). sometimes maintenance on all these pieces can get out of sync (as well as the setup for the alias identities of the possibly different physical boxes).

complicating this was an item a month or so ago about reports of attacks on some number of (well-known) SSL servers.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

paged-access method

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 26 Feb, 2010
Subject: paged-access method
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE

The original way that VM370 did shared segments was all thru the "IPL" command. A special CMS kernel was saved that included an image of APL. The user's virtual machine was set up to automatically "IPL" cmsapl ... at logon time.

Some of the HONE configurators got some performance enhancement by being recoded in FORTRAN ... requiring HONE to drop out of APL into native CMS, run the fortran application, and then resume APL. This wasn't possible with the "IPL cmsapl" scenario ... w/o requiring the user to manually execute all of the associated commands.

so in this old email moving from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

part of the page-mapped stuff was a new mechanism for shared pages ... having IPL CMS ... with the cms image separate from the shared pages for the APL executable. HONE installed this ... to facilitate automatically being able to drop out of APL, execute the fortran application and then resume APL execution (transparent to the end user). It wasn't absolutely necessary to have pam-formatted disks to use this ... because the changes would also work with normal "saved" systems (however, all the changes were required, even if there weren't CMS page-mapped formatted filesystems).

Later the development group picked up a lot of the CMS changes (except for the paged-mapped filesystem changes) for additional shared stuff (shared editor, shared exec, misc. other stuff) ... and very small subset of the CP changes (w/o any of the paged-mapped filesystem changes) ... and released as part of VM370 release 3 DCSS.

misc. past posts mentioning page-mapped
https://www.garlic.com/~lynn/submain.html#mmap

The original way that I added the editor, exec and other stuff to an additional shared segment was using a facility where the segment could be floated/relocated to any available place in the virtual address space. This required that the code be address/location independent (the first thing that virtual CMS IPL did was move the additional shared segment(s) to an available place).

Old email about modifying BROWSE & FULIST to reside in shared segment ... as well swizzling addresses so they were location independent.

old post mentioning Theo
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema

includes part of email exchange with Theo about placing FULIST & BROWSE in shared segment
https://www.garlic.com/~lynn/2001f.html#email781010

Date 03/16/79 10:24:31
From: wheeler

I have modified BROWSE & FULIST3 to go into the shared relocating nucleus. One of the fall outs of that is FULIST3 can now call DMSINA a(ABBREV) directly rather than duplicating the code within the module. My changes are not very clean. I moved all variables to the bottom of the csect, duplicated them in the COMSECT dsect and then removed all labels from the copy in the csect. AT initialization time I do an MVCL from the csect to dsect to initialize the area. The relocating shared segment(s) are initialy positioned at x'30000' at IPL time, prior to relocation.


... snip ... top of post, old email index.

When the development group picked up the changes for release 3 DCSS ... they eliminated the "floating" characteristic. That change required an installation to have multiple different copies of the CMS additional shared segments ... each created for a (different) "fixed" virtual address location. The original implementation allowed everybody to share the same exact copy ... regardless of the size of the particular virtual machine. misc. past posts mentioning address location independent code:
https://www.garlic.com/~lynn/submain.html#adcon

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Fri, 26 Feb 2010 21:45:20 -0500
Charles Richmond <frizzle@tx.rr.com> writes:
Do *not* forget LISP and APL. These two languages were *not* used so much for "production" work, but they were developed with the 60's time frame.

a lot of apl was used for business modeling and what if things ... and a lot of stuff that today that are done with spreadsheets.

apl\360 was a closed interactive environment with its own terminal support, dispatching, swapping, etc ... done as a real memory system with typically 16kbyte to 32kbyte workspaces that were swapped as an integral unit. the science center migrated that to cms for cms\apl ... was able to discard all the stuff but the interpreter ... and open the workspace size up to virtual memory (allowing more substantial real-world applications to be implemented), adding APIs to system functions (like file read/write). One of the things that had to be re-done from apl\360 to cms\apl was storage management; apl had dynamically allocated new storage on every assignment ... until it exhausted the workspace ... and then would do garbage collection and start over; this had disastrous page-thrashing characteristics in large virtual memory operation.
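A toy model of that allocate-on-every-assignment scheme (names and structure here are illustrative, not from the actual APL implementation) shows why it touches the whole workspace before reclaiming anything ... under demand paging, every page gets referenced even when only a little data is live:

```python
class Workspace:
    """Toy apl\\360-style storage management: every assignment
    bump-allocates fresh cells; garbage collection runs only when
    the workspace is exhausted."""
    def __init__(self, size):
        self.size = size
        self.next_free = 0          # bump pointer
        self.live = {}              # name -> (addr, length)

    def assign(self, name, length):
        if self.next_free + length > self.size:
            self.garbage_collect()  # only now is dead storage reclaimed
        addr = self.next_free
        self.next_free += length    # the old value of `name` is now garbage
        self.live[name] = (addr, length)
        return addr

    def garbage_collect(self):
        # compact live values to the bottom of the workspace
        addr = 0
        for name, (_, length) in sorted(self.live.items()):
            self.live[name] = (addr, length)
            addr += length
        self.next_free = addr
```

Repeatedly assigning to the same variable marches the bump pointer through the entire workspace even though only one value is live ... fine in a real-memory, swap-as-a-unit design, disastrous when the workspace is a large demand-paged virtual memory.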

The internal HONE system was world-wide virtual machine based, online sales&marketing support ... with majority of the applications written in apl. In the early 80s, just the US-based HONE system had userids approaching 40,000. misc. past posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

I have little familiarity with LISP in the 60s & 70s ... but a little humor in this old email reference
https://www.garlic.com/~lynn/2003e.html#email790711

about MIT LISP machine people asking IBM for 801 risc chips for their engine ... and being offered 8100 instead.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Item on TPF

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Item on TPF
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sat, 27 Feb 2010 11:11:42 -0500
charlesm@MCN.ORG (Charles Mills) writes:
It also might be mentioned that there was an incentive to develop a quick-and-dirty DOS/360 that came from the shortage of machine time on the 7094 simulators (being used to develop OS/360) versus the amount of 360 hardware that was coming out of the factory but unusable due to the lack of an operating system.

there was less difference between 360/370 w/o virtual memory and 370 with virtual memory.

for 370 virtual memory there was a (distributed) development project between the science center and endicott to modify cp67 to support 370 virtual machines with virtual memory (i.e. 370 had some new instructions, and the format of the virtual memory tables and the definition of the control registers were different).

since the cp67 service had non-employee users (students and others from various institutions in the boston area: BU, MIT, Harvard, etc), the project went on with virtual cp67 in a virtual machine (to help keep information about virtual memory from leaking to the unauthorized). then, to test the virtual 370 operation, another version of cp67 was modified to conform to the 370 architecture (instead of 360/67 architecture). A year before the first engineering 370, the following was in general use:
360/67 real machine
  "cp67l" running on the real hardware
  "cp67h" running in a 360/67 virtual machine w/virtual memory (under cp67l)
  "cp67i" running in a 370 virtual machine w/virtual memory (under cp67h)
  "cms" running in a virtual machine under cp67i

when the first engineering 370 with virtual memory hardware became operational, a version of "cp67i" was booted on the machine to test its operation. The first boot failed, and after some diagnosis it turned out that the engineers had reversed the definition of two of the new opcodes; the cp67i kernel was quickly patched (for the incorrect opcodes) and cp67i came up and ran.

things were a little different for MVT->SVS. There were minimal changes to MVT to build a single virtual address space table ... and handle page faults and paging operations (not a whole lot of difference from running MVT in a large virtual machine ... with a minimum amount of "native" virtual memory support). The biggest change for MVT->SVS was that the application channel programs passed via EXCP/svc0 had to be translated, aka a shadow copy of the CCWs built with real addresses in place of the virtual addresses (along with the associated pinning/unpinning of the virtual pages at real addresses). To do this, a copy of the corresponding code from CP67 was borrowed (i.e. when cp67 runs virtual machine channel programs, it has to scan the virtual channel program, creating a "shadow" copy with real addresses in place of virtual addresses).
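
A minimal sketch of that CCW translation idea, assuming 4K pages and a toy page table (the real CP67/SVS code also handled TICs, status-modifier CCWs, page faults on unmapped pages, etc ... none of the data formats below are the actual ones): scan the virtual channel program, pin each referenced page, and emit a shadow copy with real addresses, data-chaining any buffer that crosses a page boundary.

```python
# Sketch of EXCP/CP67-style CCW translation: given a channel program with
# virtual data addresses, build a "shadow" copy with real addresses,
# splitting any CCW whose buffer crosses a page boundary into data-chained
# CCWs, and pinning the pages for the duration of the I/O. The page-table
# dict, flag encoding, and pin set are simplified illustrations.

PAGE = 4096
CD = 0x80   # chain-data flag (illustrative encoding)

def translate_ccws(virtual_ccws, page_table, pinned):
    """virtual_ccws: list of (opcode, virt_addr, flags, count) tuples."""
    shadow = []
    for op, vaddr, flags, count in virtual_ccws:
        remaining, cur = count, vaddr
        pieces = []
        while remaining > 0:
            vpage = cur // PAGE
            frame = page_table[vpage]     # real code would fault the page in
            pinned.add(vpage)             # keep page resident during the I/O
            offset = cur % PAGE
            chunk = min(remaining, PAGE - offset)
            pieces.append([op, frame * PAGE + offset, flags | CD, chunk])
            cur, remaining = cur + chunk, remaining - chunk
        pieces[-1][2] = flags             # last piece: original flags, no CD
        shadow.extend(tuple(p) for p in pieces)
    return shadow

page_table = {0: 7, 1: 3}                 # vpage -> real frame (assumed)
pinned = set()
# one READ whose 6000-byte buffer starts at virtual 0x800, crossing a page
shadow = translate_ccws([(0x06, 0x800, 0x00, 6000)], page_table, pinned)
for ccw in shadow:
    print(ccw)
```

The single 6000-byte virtual CCW comes out as two data-chained shadow CCWs, one per real page frame, with both pages pinned until the I/O completes.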

When 370s with virtual memory hardware started being deployed internally, "cp67i" was the standard system that ran on them for a long time (or "cp67sj" ... which was "cp67i" with 3330 & 2305 device support done by a couple engineers from san jose).

something different was done for 3081 and TPF. Originally 308x was never going to have a non-multiprocessor machine ... however, at the time, TPF didn't have multiprocessor support. The mechanism to run TPF on 3081 was to run vm370 with TPF in a virtual machine. In some number of TPF 3081 installations, that tended to leave the 2nd 3081 processor idle. The next release of vm then had some special modifications to improve TPF thruput ... it added a bunch of multiprocessor chatter overhead that allowed parts of vm kernel execution to run asynchronously with TPF operation ... this drove up virtual machine overhead by about 10% ... but increased overall TPF thruput ... by having the overhead executed on the otherwise idle processor. The problem was that the (additional multiprocessor overhead chatter) change degraded the thruput of all the multiprocessor customers that normally ran with all processors fully loaded.

Finally, the company started to offer 3083 (a 3081 w/o the 2nd processor frame), in large part for TPF. As mentioned in this post
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?

the straight-forward approach would be to leave out the "processor 1" frame ... but "processor 0" was built at the top of the box ... and there was some concern that the straight-forward solution would leave the box top-heavy.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Unbundling & HONE

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 27 Feb, 2010
Subject: Unbundling & HONE
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010e.html#21 paged-access method

it wasn't just branch people that used HONE, there were also installations for hdqtrs. When EMEA hdqtrs moved from the states to Paris, I was asked to help with the EMEA hdqtrs HONE installation in Paris.

HONE had started out as several (virtual machine) cp67 datacenters in the US to give SEs "hands-on" experience with guest operating systems after the 23jun69 unbundling announcements (including charging for SE time; a lot of SE experience had come from hands-on work at customer accounts; nobody figured out how not to charge for learning time). misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundle

This is recent post in (customer) ibm-main (mainframe) mailing list about adding 370 virtual machine support to cp67
https://www.garlic.com/~lynn/2010e.html#23

a subset of those changes (just the non-virtual memory, 370 instructions) were also installed on the HONE cp67 systems, for SEs to try guest operating systems released for 370 machines.

Science center had also done a port of apl\360 to cms for cms\apl. apl\360 installations had typically run 16kbyte-32kbyte workspaces ... the science center had to do a lot of work to open that up to large virtual memory operation (standard apl\360 workspace storage management resulted in severe page thrashing in a virtual memory environment). science center also added an API to access system services (like file read/write) ... which caused lots of heartburn for the APL purists.

The combination of significantly larger workspace sizes and being able to do things like file I/O ... allowed much larger real-world applications to be implemented and run. One of the very early examples of this was the business planning people in Armonk doing an APL business model that they would run on the Cambridge cp67 system. This required the business people to load some of the most sensitive and valuable corporate assets (detailed customer information) on the cambridge system, which required an extremely high level of security (especially with numerous students from places like BU, MIT, and Harvard also having access to the Cambridge system).

Similar APL applications were being deployed on HONE and came to dominate all activity and the original virtual guest operation for SEs disappeared.

After VM370 became available, HONE migrated from cp67 (& cms\apl) to vm370 (& apl\cms done at the palo alto science center). The APL purists also came out with "shared-variable" paradigm as an APL-way of doing system services APIs (as opposed to what had been shipped in cms\apl).

I had been providing HONE with "enhanced" cp67 ... but they dropped back to the relatively vanilla product when they moved to vm370 ... until I had also migrated to vm370 ... previously mentioned in these old emails
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

HONE Compute Intensive

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Feb, 2010
Subject: HONE Compute Intensive
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010e.html#21 paged-access method
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE

HONE's apl-intensive environment was extremely compute intensive.

In the mid-70s, the HONE datacenters were consolidated in silicon valley (in a bldg. across the back parking lot from the palo alto science center). PASC had also done the 370/145 APL microcode assist for apl\cms ... which gave APL about a factor of ten performance boost (APL applications on a 145 ran about as fast as on a 168 w/o microcode assist). HONE had multiple 168s for their operation and looked at possibly being able to move some number to 145s (with microcode assist). The problem was that a lot of the HONE APL applications were using the larger virtual memory and file i/o capability ... which also required the added channels and real storage of the 168s.

So one of the things I started working on during 1975 (after having done the move from cp67 to vm370) was a number of projects supporting multiprocessors. In addition to the other bits and pieces that were leaking out ... eventually it was decided to ship multiprocessor support in vm370 release 4.

There was a problem. While the original unbundling managed to make the case that kernel software should still be free ... that decision was starting to shift after the clone processors managed to get a market foothold during the Future System era ... FS was going to completely replace 360/370, and during that era the 370 product pipelines were allowed to go dry ... then FS was killed and there was a mad scramble to get stuff back into the 370 pipeline; misc. past posts mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

some of that accounts for picking up bits and pieces of my stuff in csc/vm, for vm370 release 3 (like the cp & cms changes for DCSS). it also contributed to the decision to take my resource manager (lots of stuff that had already shipped in the cp67 product and was dropped in the cp67->vm370 simplification) and release it as a separate kernel product. However, the resource manager got picked to be the guinea pig for starting to charge for kernel software, and I had to spend a bunch of time with business people and lawyers about kernel software charging.
https://www.garlic.com/~lynn/submain.html#unbundle

One of the issues was that I included a whole lot of stuff in the resource manager ... including a bunch of kernel structure that the multiprocessor design&implementation required. now, one of the policies for kernel software charging (during the transition period) was that direct hardware support would still be free, and "free software" couldn't be shipped that required priced software as a prerequisite.
https://www.garlic.com/~lynn/subtopic.html#fairshare

The resource manager then represented quite a problem for shipping multiprocessor support in vm370 release 4. Finally it was decided to move 90% of the lines of code ... that had been in the charged-for resource manager ... into the free kernel (since it was required for shipping "free" multiprocessor support) ... while leaving the price of the resource manager the same.
https://www.garlic.com/~lynn/subtopic.html#smp

In any case, one of the things I did for HONE was provide them with a release 3 version of the production multiprocessor support (well before release 4 of vm370 was shipping) ... so they could start upgrading their 168s to multiprocessor machines (needing all the compute power they could get) ... they were already putting together a max-ed out loosely-coupled single-system-image configuration in the consolidated US HONE datacenter (with fall-over and load-balancing; possibly the largest single-image complex in the world at the time).
https://www.garlic.com/~lynn/subtopic.html#hone

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

whither NCP hosts?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: whither NCP hosts?
Newsgroups: alt.folklore.computers
Date: Sun, 28 Feb 2010 14:22:15 -0500
Jonno Downes <jonnosan@gmail.com> writes:
So I reckon would be lots of fun to hack around with those old RFCs (i.e. get a C64 or something to talk Telnet over NCP), but obviously that only works if there's some endpoints to connect to. Does such a beast exist? Is there anything around (real or emulated) that talks NCP ? It seems like it would be possible to hook up some emulated IMPs talking to each other via an IP VPN, so I can't imagine I'm the first to have considered this, and if anyone has actually done it, I figure the may well be part of this newsgroup, so....

there were some number of NCPs during the 70s. The program that ran in the 3705 telecommunication front-ends for SNA was also called NCP (the term "network" in both SNA and NCP is possibly some term inflation ... since the host side was VTAM ... aka a dumb terminal controller).

On the ARPANET ... there was the IMP-to-IMP stuff that was the "network" ... and there was a "host protocol" ... that the hosts used to talk to the IMPs (some ambiguity as to the size of the arpanet/internet at the time of the 1/1/83 switch-over to internetworking protocol; talk about "255" ... which possibly was the number of hosts ... because there were other references placing the number of IMPs around 100; aka multiple hosts attaching to a single IMP).

One of the tcp/ip things in FTP is its use of two channels ... separate for control & data ... an attempt to map the host-to-host implementation into the internetworking implementation (there is an RFC about problems trying to do the FTP host-to-host out-of-band control stuff in an internetworking environment).

some of that is discussed in this old collection of internet related posts
https://www.garlic.com/~lynn/internet.htm

also, my RFC index:
https://www.garlic.com/~lynn/rfcietf.htm

in the RFCs listed by section, click on the Term (term->RFC#) ... and then click on either 1822, 1822l and/or AHIP in the Acronym fastpath ... aka

ARPANET host access protocol (1822) (1822L) (AHIP)
see also Advanced Research Projects Agency Network , host access protocol
1005 878 851 802 716 270


...

clicking on RFC number brings up the RFC summary in the lower-frame ... aka

1005
ARPANET AHIP-E Host Access Protocol enhanced AHIP, Khanna A., Malis A., 1987/05/01 (31pp) (.txt=68051) (Ref'ed By 1700)


...

clicking on the ".txt=nnn" field (in RFC summary) retrieves the actual RFC.

also click on "IMP" in the Acronym fastpath ... aka

interface message processor (IMP)
see also front end
2795 704 696 692 690 687 660 638 633 626 622 611 594 548 528 521 476 447 445 434 406 395 394 374 359 343 335 331 312 301 271 218 215 213 209 67 41 17 12 7


... snip ...

you can skip 2795 ... it is a more recent "IMP" April 1st RFC; but

704
IMP/Host and Host/IMP Protocol change, Santos P., 1975/09/15 (3pp) (.txt=7504) (Obsoletes 687)


... snip ...

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

SHAREWARE at Its Finest

Refed: **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: SHAREWARE at Its Finest
Newsgroups: bit.listserv.ibm-main
Date: 28 Feb 2010 13:14:27 -0800
jim.marshall@OPM.GOV (Jim Marshall) writes:
P.S. Getting the first of a new generation of IBM computer, the IBM 3032, made us a showplace besides being in the Pentagon. But 6 months later IBM shipped the first IBM 3033 to Singer up in New Jersey, we were obsolete and never got the IBM 3032-AP/MP we hoped would come.

3032 was basically 168-3 repackaged to use 303x channel director instead of the 168 external channel boxes.

in the wake of the demise of future system, there was a mad rush to get stuff back into the 370 product pipelines.

they took the 158 integrated channel microcode and split it out into a separate box (with a dedicated 158 engine), the 303x channel director.

the 3031 then becomes a 158 with just the 370 microcode plus a 2nd/separate 158 with just the integrated channel microcode (with the 158 engine no longer needing to be shared between executing the 370 microcode and the integrated channel microcode, 3031 thruput becomes almost as fast as a 4341).

the 3032 then becomes a 168 with 303x channel director (158 with just the integrated channel microcode) instead of the 168 external channel boxes.

the 3033 started out being the 168 logic mapped to chips that were 20 percent faster. The chips also had a lot more circuits per chip ... initially unused. In the 3033 product cycle there was some effort to redesign pieces of the 168 logic to take advantage of the higher circuit density ... finally getting the 3033 up to 50% faster than the 168.

there was eventually a 3033N sold that was crippled to be slower than the 168/3032 but could be field-upgraded to a full 3033.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What was old is new again (water chilled)

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was old is new again (water chilled)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sun, 28 Feb 2010 16:44:50 -0500
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
You don't consider SEPP (VM/SE) or BSEPP (VM/BSE) to be VM releases?

re:
https://www.garlic.com/~lynn/2010d.html#43 What was old is now new again (water chilled)

they were charged-for kernel add-ons to the free vm370 release 6 base.

as part of the 23jun69 unbundling announcement, there was the start of charging for application software (somewhat as the result of various litigation); however, they managed to make the case that kernel software was still free.
https://www.garlic.com/~lynn/submain.html#unbundle

also discussed in this recent posts:
https://www.garlic.com/~lynn/2010d.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE

with the demise of the future system project ... there was a mad rush to get items back into the 370 product pipeline ... also there have been claims that it was the distraction of the FS project (and the resulting lack of products) that allowed clone processors to get a foothold in the marketplace. misc. past FS posts
https://www.garlic.com/~lynn/submain.html#futuresys

part of the rush to get things back into the 370 product pipeline contributed to picking up various 370 things that I had been doing during the FS period for product release. some recent discussion (as well as the "Unbundling & HONE" post referenced above)
https://www.garlic.com/~lynn/2010e.html#21 paged-access method

One of those things was the resource manager. However, possibly because of the foothold that the clone processors were getting in the market, there was a decision to start charging for kernel software ... and my resource manager was selected as the guinea pig. Also discussed here
https://www.garlic.com/~lynn/2010e.html#25 HONE Compute Intensive

which shipped in the middle of the vm370 release 3 time-frame. As mentioned in the above ... vm370 multiprocessor support was to go out in vm370 release 4 ... but the design/implementation was dependent on lots of code in the resource manager. The initial policy for kernel charging was that hardware support would still be free (including multiprocessor support) and couldn't have a prerequisite that was charged for ... as in my resource manager. The eventual solution was to move approx. 90% of the code from the resource manager into the "free" base ... but leave the price of the "release 4" resource manager the same (as release 3, even tho it was only about 10% of the code).

for vm370 release 5, the resource manager was repackaged with some amount of other code ... including "multiple shadow table support" ... discussed here:
https://www.garlic.com/~lynn/2010e.html#1 LPARs: More or Less?

and renamed sepp (i.e. sepp was the charged-for software that fit on top of vm370 release 5). there was a lower priced "bsepp" subset.

So neither SEPP nor BSEPP was free ... they were both charged-for kernel add-on software for the free vm370 base.

When it came time for vm370 release 7, there was a transition to charging for all kernel software ... and everything was merged back into a single charged-for kernel, renamed VM/SP (and there were no more "vm370" releases).

vm370 release 6, being free, has been made available for Hercules (370 virtual machine on intel platform) ... but not any of the charged-for software ... my resource manager, SEPP, BSEPP, vm/sp, etc.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

HONE & VMSHARE

Refed: **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Feb, 2010
Subject: HONE & VMSHARE
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010e.html#21 paged-access method
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE
https://www.garlic.com/~lynn/2010e.html#25 HONE Computer Intensive

The following is with regard to mechanisms for searching the VMSHARE files ... current archive has keyword searches
http://vm.marist.edu/~vmshare/

Date: 09/02/83 13:44:58
From: wheeler

filenames are usually descriptive ... but not alwas. There is also copies of VMSHARE 291 on YKTVMV, KGNVM8, and KGNVMC, and the HONE machines (see $VMSHAR QMARK on the disk). Tymshare provides a keyword search mechanism if you are using the "real" vmshare system on their machine. Closest thing we have to that is HONE has the files under CMS/STAIRS and on SJRVM3 the files are available under EQUAL. Both STAIRS and EQUAL provide keyword search mechanisms of the files.

Brute force is to get a userid with direct access to the files and use SCANFILE to do a complete search of the whole disk using specific tokens.


... snip ... top of post, old email index.

QMARK was an internal convention ... something akin to README files. EQUAL was an internally developed computer conferencing system.

current URL mentioning STAIRS/CMS
http://www-01.ibm.com/software/data/sm370/about.html

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

SHAREWARE at Its Finest

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHAREWARE at Its Finest
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 01 Mar 2010 00:32:49 -0500
tony@HARMINC.NET (Tony Harminc) writes:
I don't know how similar the 158 and 155 really were (certainly very different front panel implementations), but it's interesting that the 303x got the microcoded channels, rather than the clunky but rock solid 28x0 hardwired ones.

re:
https://www.garlic.com/~lynn/2010e.html#27 SHAREWARE at Its Finest

as i've mentioned before ... happening to mention the 15min MTBF issue internally at the time brought down the wrath of the mvs organization on my head.

the 3081 channel was a lot like the 158 also.

some discussion about the 3081 ... another somewhat quick effort in the wake of the failure of FS;
http://www.jfsowa.com/computer/memo125.htm

misc. past posts mentioning future system
https://www.garlic.com/~lynn/submain.html#futuresys

misc. past posts getting to play disk engineer in bldgs. 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

i had been doing some timing tests on how short a "dummy" record was needed to do a track head switch (seek head) between two rotationally consecutive records on different tracks (involving both channel processing latency and control unit processing latency). The 168, 145, and 4341 could successfully do the switch with a shorter block than the 158, 303x, and 3081. There were also some number of OEM disk controllers that had lower latency and required a smaller dummy record ... less rotational delay needed to cover the processing latency of the seek-head operation.
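
the arithmetic behind those tests is simple: the dummy record has to be at least as long as the data that rotates past while the channel and controller process the seek-head. a back-of-envelope sketch, with all numbers (track capacity, rotation speed, latencies) assumed for illustration rather than measured values for any particular box:

```python
# How long must the dummy record between two rotationally consecutive
# records be? Long enough that the seek-head processing finishes before
# the start of the next record rotates under the head. All numbers below
# are assumed for illustration only.

def min_dummy_bytes(track_bytes, rpm, latency_us):
    bytes_per_us = track_bytes * (rpm / 60) / 1_000_000   # data rate
    return latency_us * bytes_per_us

# a lower-latency channel/controller path needs a shorter dummy record
print(round(min_dummy_bytes(13030, 3600, 300)))   # 235 bytes, slower path
print(round(min_dummy_bytes(13030, 3600, 120)))   # 94 bytes, faster path
```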

The 3830 disk controller had a horizontal microcode engine that was much faster than the vertical microcode engine (jib-prime) used in the 3880 disk controller. To compensate for the slower processing and also handle higher data rates ... there was dedicated hardware for the data flow ... separate from the control processing done by the jib-prime. Data-streaming was also introduced (no longer having to do a handshake for every byte transferred), helping both with supporting 3mbyte transfers and with allowing max. channel length to be increased from 200ft to 400ft.

There was a requirement at the time that the 3880 had to be within +/- 5 percent of the 3830 ... they ran some batch operating system performance tests in STL and it didn't quite make it ... so they tweaked the 3880 control to present the operation-complete interrupt ... before the 3880 had actually fully completed all the operations (to appear to be "within" five percent of the 3830). Then if the 3880 discovered something in error in its cleanup work ... it would present an asynchronous unit check. I told them that was a violation of the architecture ... at which time they dragged me into resolution conference calls with the channel engineers in POK. Finally they decided that they would save up the unit check error condition ... and present it as cc=1, csw-stored, unit check on the next SIO ("unsolicited" unit checks were a violation of channel architecture).

so everybody seemed to be happy. then one monday morning the bldg. 15 engineers called me up and asked me what i did over the weekend to trash the performance of the (my) vm370 system they were running. I claimed to have done nothing ... they claimed to have done nothing. Finally it was determined that they had replaced a 3830 that they were using with a string of 16 3330 cms drives ... with a 3880. While their batch os acceptance test didn't have a problem ... i had severely optimized the pathlength for i/o redrive (of queued operations) after an i/o completion. My redrive SIO managed to hit the 3880 with the next operation while the 3880 was still busy cleaning up the previous operation (they had assumed they could get the cleanup done faster than operating system interrupt processing). Because the controller was still busy, I would get cc=1, csw-stored, sm+busy (aka controller busy). The operation then would have to be requeued, and the system would go off to look for something else to do. Then, because the controller had signaled SM+BUSY, it was obligated to do a CUE interrupt. The combination of the 3880's slower processing and all the extra operating system processing gorp ... was degrading their interactive service by a severe 30 percent (which was what had prompted the monday morning call).
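
the interaction can be caricatured in a few lines of Python ... a toy model with invented timings, not measurements: once the OS redrive pathlength beats the controller's post-interrupt cleanup, every operation costs an extra rejected SIO (cc=1, SM+BUSY) plus a CUE interrupt.

```python
# Toy model of the 3880 early-interrupt race described above. The 3880
# presents the completion interrupt before its internal cleanup finishes;
# if the OS issues the next (queued) SIO during that cleanup window, the
# SIO is rejected with cc=1/SM+BUSY, the op is requeued, the controller
# later presents a CUE interrupt, and the SIO is re-issued. The
# microsecond values are invented parameters, not measurements.

def run(ops, cleanup_us, redrive_us):
    sios = cues = 0
    for _ in range(ops):
        sios += 1                        # SIO that (eventually) starts this op
        if redrive_us < cleanup_us:      # redrive lands inside cleanup window
            sios += 1                    # the rejected attempt (SM+BUSY)
            cues += 1                    # the obligatory CUE interrupt
    return sios, cues

print(run(1000, cleanup_us=300, redrive_us=500))  # (1000, 0) slow redrive
print(run(1000, cleanup_us=300, redrive_us=50))   # (2000, 1000) fast redrive
```

the faster redrive pathlength doubles the SIO count and adds a thousand extra interrupts ... the "extra operating system processing gorp" in the paragraph above.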

While the 3880 batch performance "acceptance" tests had eventually passed ... somewhat related to the earlier 15min MTBF issue, much nearer 3880 product ship, engineers had developed a regression test suite of 57 expected errors. old email notes that for all the errors in the regression test, mvs required reboot ... and in 2/3rds of the cases, there was no evidence of what required mvs to be rebooted. Recent post mentioning the issue
https://www.garlic.com/~lynn/2010d.html#59 LPARs: More or Less?

old posted email mentioning the problem:
https://www.garlic.com/~lynn/2007.html#email801015

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What was old is new again (water chilled)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was old is new again (water chilled)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 01 Mar 2010 10:38:48 -0500
shmuel+ibm-main@PATRIOT.NET (Shmuel Metz , Seymour J.) writes:
Wasn't VM/SP also an add-on to VMF/370 R6? I know for sure that MVS/SP was an addon to OS/VS2 3.8 + SU64 et al, and I vaguely recall that it wasn't until around ESA 4 that the free MVS base went away from the packaging.

re:
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010d.html#45 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)

Melinda Varian's home page
https://www.leeandmelindavarian.com/Melinda#VMHist

from her VM history document at above (VM/SP becomes a new base with all the sepp/bsepp stuff):

At the same time, we started seeing the results of IBM's new commitment to VM. VM System Product Release 1 came out late in 1980. VM/SP1 combined all the [B]SEPP function into the new base and added an amazing collection of new function (amounting to more than 100,000 lines of new code): XEDIT, EXEC 2, IUCV, MIH, SUBCOM, MP support, and more.

... snip ...

also from above:

VM/SP1 was just amazingly buggy. The first year of SP1 was simply chaotic. The system had clearly been shipped before it was at all well tested, but the new function was so alluring that customers put it into production right away. So, CP was crashing all over the place; CMS was destroying minidisks right and left; the new PUT process was delaying the shipment of fixes; and tempers were flaring. When the great toolmaker Jim Bergsten produced a T-shirt that warned VM/SP is waiting for you, his supply sold out immediately.

... snip ...

I still have the t-shirt.

somewhat related other old stuff

Date: 02/17/81 11:18:51
From: wheeler

re: cia vm/sp experience; YKT has seen similar performance problems going to SP. They have isolated a major compenent of it tho. It turns out that SP increased the default terminal I/O control block size for all terminals in a redesign for new terminals. This increase in size resulted in the block no longer being in the "sub-pool" size range. A storage request for a sub-pool size block can be satisfied in less than 30 instructions. A non sub-pool size block is allocated using the a best fit free storage allocation algorithm (i.e. everything on the chain must be scanned). A large system can easily have 1,000 blocks or more on the chain. It takes 5-6 instructions per block. Re-defining sub-pool sized blocks in DMKFRE (modification & re-assembly) resulted in almost returning overhead to release 6 days. There are other significant SP degradation hits that have to do only with real AP/MP operations.


... snip ... top of post, old email index.
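
the DMKFRE behavior the email describes ... constant-time sub-pool pops versus a best-fit scan of the whole free chain ... can be sketched like this (the size classes and cost model are illustrative; the instruction counts are the ones quoted in the email):

```python
# Sketch of the storage-allocation pathlength difference described in the
# email above: requests for "sub-pool" sizes are satisfied from a per-size
# free list in roughly constant time, while any other size falls through
# to a best-fit scan of the entire free chain. Size classes are assumed
# for illustration; instruction counts come from the email.

SUBPOOL_SIZES = {8, 16, 24, 32, 40, 48, 56, 64}   # illustrative size classes

def alloc_cost(size, chain_len):
    """Approximate 'instructions executed' for one storage request."""
    if size in SUBPOOL_SIZES:
        return 30                  # pop from the per-size sub-pool list
    return 6 * chain_len           # best-fit: ~5-6 instructions per block

# a terminal I/O control block that fits a sub-pool size, vs. the
# SP-enlarged one that no longer does, on a chain of 1,000 free blocks
print(alloc_cost(56, chain_len=1000))    # 30
print(alloc_cost(72, chain_len=1000))    # 6000 -- the SP degradation
```

re-defining the sub-pool sizes (the DMKFRE fix the email mentions) just moves the enlarged block back onto the cheap path.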

other old email on the subject
https://www.garlic.com/~lynn/2003c.html#email790329
in this post
https://www.garlic.com/~lynn/2003c.html#35 difference between itanium and alpha
and
https://www.garlic.com/~lynn/2006y.html#email791011b
in this post
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?

another old email discussing performance problems at same customer
https://www.garlic.com/~lynn/2001f.html#email830420
in this post
https://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?

the above is related to the custom MP changes made to support TPF ... which caused performance degradation for all other customers ... somewhat offset/masked by improvements/changes in 3270 i/o (modulo the storage subpool "bug") ... which didn't help at the above customer, since they weren't using 3270s ... but lots of ascii glass teletypes.

recent post mentioning changes done for TPF:
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF

misc. past posts also mentioning the above email
https://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC
https://www.garlic.com/~lynn/2006v.html#6 Reasons for the big paradigm switch
https://www.garlic.com/~lynn/2006y.html#10 Why so little parallelism?
https://www.garlic.com/~lynn/2006y.html#36 Multiple mappings
https://www.garlic.com/~lynn/2008d.html#42 VM/370 Release 6 Waterloo tape (CIA MODS)
https://www.garlic.com/~lynn/2008j.html#82 Taxes
https://www.garlic.com/~lynn/2008m.html#22 Future architectures
https://www.garlic.com/~lynn/2009p.html#37 Hillgang user group presentation yesterday
https://www.garlic.com/~lynn/2009s.html#0 tty
https://www.garlic.com/~lynn/2010d.html#14 Happy DEC-10 Day

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 01 Mar 2010 21:58:53 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
misc. past posts mentioning DUMPRX.
https://www.garlic.com/~lynn/submain.html#dumprx


re:
https://www.garlic.com/~lynn/2010e.html#6 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#7 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#8 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core

from long ago and far away (mentions dumprx, vm370 rel6 sepp, and the 3090 service processor):

Date: 31 October 1986, 16:32:58 EST
To: wheeler

Re: 3090/3092 Processor Control and plea for help

The reason I'm sending this note to you is due to your reputation of never throwing anything away that was once useful (besides the fact that you wrote a lot of CP code and (bless you) DUMPRX).

I've discussed this with my management and they agreed it would be okay to fill you in on what the 3090 PC is so I can intelligently ask for your assistance.

The 3092 (3090 PC) is basically a 4331 running CP SEPP REL 6 PLC29 with quite a few local mods. Since CP is so old, it's difficult, if not impossible, to get any support from VM development or the change team.

What I'm looking for is a version of the CP FREE/FRET trap that we could apply or rework so it would apply to our 3090 PC. I was hoping you might have the code or know where I could get it from (source hopefully).

The following is an extract from some notes sent to me from our local CP development team trying to debug the problem. Any help you can provide would be greatly appreciated.


... snip ...

recent post/thread mentioning 3090 service processor:
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)

a few recent posts about archiving stuff and an incident with the almaden datacenter tape library
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2010b.html#96 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2010d.html#65 Adventure - Or Colossal Cave Adventure

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

SHAREWARE at Its Finest

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SHAREWARE at Its Finest
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 02 Mar 2010 10:15:12 -0500
jim.marshall@OPM.GOV (Jim Marshall) writes:
I have been following some discussions on Adventure, StarTrek, and other games around back in the 20th Century. If you look on the CBT tape you will find a number of Computer games from back in the 1970s; WUMPUS, RoadRace, Eliza, Lunar Lander, etc. Thought it was about time to clue folks in on some events. Back in the 1970s the Air Force assigned me to the Pentagon to work on an IBM 360-75J & OS/MVT/HASPIII. Along the way we were blessed with the first IBM 303X shipped; namely a 3032 serial number 6. Along with it came all the DASD and tape plus an IBM 3850 MSS (35GB) with a bunch of Virtual IBM 3330 (100MB) drives.

re:
https://www.garlic.com/~lynn/2010e.html#27 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest

old email about AFDS looking at getting a couple hundred vm/4341s (each with about the thruput of a 360/75).
https://www.garlic.com/~lynn/2001m.html#email790404
in this post:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

other old email mentioning vm/43xx machines
https://www.garlic.com/~lynn/lhwemail.html#43xx

I had sponsored Col. Boyd's briefings at IBM ... misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

he made references to doing a stint in 1970 running spook base. One of his biographies makes reference to that stint ... with the comment that it was a $2.5B windfall for IBM.

recent post referencing that the disk division had a datacenter in bldg. 26 running numerous MVS machines ... however, it ran out of space for all the computing demand for all the tools. vm/4341s were starting to go into every nook&cranny ... w/o needing all the datacenter infrastructure required by the big iron. they started looking at doing something similar with mvs on 4341 ... however, one of the issues when looking at capacity planning was the low "capture ratio" that really messed up their numbers (MVS and VTAM pathlength by itself would nearly consume the 4341).
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?

Part of the problem was that the applications were using some MVS system services not supported by CMS (CMS having about 64kbytes of o/s simulation code). However, the Los Gatos lab found that with about another 12kbytes of o/s simulation glue code ... these large applications moved over (with the application able to use nearly all of the processor)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 02 Mar 2010 11:54:05 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
from long ago and far away (mentions dumprx, vm370 rel6 sepp, and the 3090 service processor):

Date: 31 October 1986, 16:32:58 EST
To: wheeler

Re: 3090/3092 Processor Control and plea for help


re:
https://www.garlic.com/~lynn/2010e.html#email861031
in
https://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core

funny thing about the wording in the above email: neither the person writing it nor his immediate management seemed to have realized that I had helped the manager that started the vm service processor for the 3090 (i.e. turnover and/or the transient nature of the positions).

the issue with the 3081 service processor was that a whole bunch of stuff had to be created from scratch (roll-your-own operation). recent post mentioning 3081 service processor
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)

the trade-off was between a more sophisticated environment for service processor operation that had to be invented/developed from scratch ... vis-a-vis an off-the-shelf, more sophisticated infrastructure that might possibly include some things that weren't absolutely required. the 3090 service processor was getting to the point where it wasn't practical to be inventing/developing everything from scratch.

one of the funnies in the 3081 service processor was that its disk drive was a 3310 FBA (versus the 3370 FBA used by vm for the 3092) ... and the 3081 service processor needed to do paging operations. the 3081 didn't have enough storage for all the microcode ... so there were some 3081 operations that involved the service processor paging microcode from the 3310 FBA device.

the 3090 engineers would point out that part of the performance difference between the 3081 and 3090 was that the 3090 had no critical performance paths that required paging microcode.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Why does Intel favor thin rectangular CPUs?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Why does Intel favor thin rectangular CPUs?
Newsgroups: comp.arch
Date: Tue, 02 Mar 2010 12:28:35 -0500
MitchAlsup <MitchAlsup@aol.com> writes:
I have been thinking along these lines...

Consider a chip containing CPUs sitting in a package with a small- medium number of DRAM chips. The CPU and DRAM chips orchestrated with an interface that exploits the on die wire density that cannot escape the package boundary.

A: make this DRAM the only parts of the coherent memory
B: use more conventional FBDIMM channels to an extended core storage
C: perform all <disk, network, high speed> I/O to the ECS
D: page ECS to the on die DRAM as a single page sized burst at FBDIMM speeds
E: an efficient on-CPU-chip TLB shootdown mechanism <or coherent TLB>

A page copy to an FBDIMM resident page would take about 150-200 ns; and this is about the access time of a single line if the whole ECS was made coherent!

F: a larger ECS can be built <if desired> by implementing a FBDIMM multiplexer


this was somewhat the 3090 in the 80s (but as a room full of boxes) ... modulo not quite doing i/o into/out-of expanded store. the issue was that the physical packaging couldn't get all the necessary real storage within the processor latency requirements.

there was a wide-bus, (relatively) very fast synchronous instruction that moved 4k bytes between processor storage and expanded store.

at the time, i complained about not being able to do i/o directly into/out-of expanded store.

there was something half-way in-between ... when attempting to support HIPPI, the standard 3090 i/o interface couldn't support the bandwidth ... so they hacked into the side of the expanded store bus for HIPPI I/O.
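the two-level arrangement can be sketched in miniature ... this is a toy model only; all names and sizes below are hypothetical illustration, not actual 3090 parameters:

```python
# toy model of the two-level storage arrangement described above:
# real storage too small to hold everything, with a synchronous
# 4k-page move operation to/from a larger expanded store.

PAGE = 4096

class ExpandedStore:
    """second-level storage, addressable only in 4k-page units."""
    def __init__(self, pages):
        self.capacity = pages
        self.frames = {}                  # page number -> 4k bytes

    def page_out(self, pno, data):
        # analogous to the synchronous move-to-expanded-store
        # instruction: the cpu waits for the whole 4k transfer
        if len(data) != PAGE:
            raise ValueError("expanded store moves 4k pages only")
        if pno not in self.frames and len(self.frames) >= self.capacity:
            raise MemoryError("expanded store full")
        self.frames[pno] = bytes(data)

    def page_in(self, pno):
        # synchronous move back into processor storage; note there is
        # no path for doing i/o directly into/out-of expanded store
        # (the complaint above)
        return self.frames.pop(pno)

es = ExpandedStore(pages=16)
es.page_out(7, bytes(PAGE))
page = es.page_in(7)
```

the point of the sketch is the granularity: everything crosses the boundary a whole page at a time, synchronously, which is why i/o couldn't simply target expanded store.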

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

What was old is new again (water chilled)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What was old is new again (water chilled)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 02 Mar 2010 17:03:37 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
somewhat related other old stuff
Date: 02/17/81 11:18:51
From: wheeler


re:
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010d.html#45 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#31 What was old is new again (water chilled)

the referenced "VM/SP is waiting for you" t-shirt (in Melinda's history) has a large, dark, ominous-looking vulture ... representing VM/SP.

VM/SP performance problems weren't solely the 3270 storage subpool problem nor solely the multiprocessor changes ... email on the subject from ten months later ...

Date: 12/08/81 07:21:58
From: wheeler

re: vm/sp performance; almost all heavily loaded, large systems that have gone to vm/sp that i'm aware of have experienced system degradation compared to whatever they were running previously. In most cases they have been able to bring performance up to an acceptable level thru various code twiddling. As far as I know nobody has found whatever problems are responsible for the system degradation; all the performance improvements would have been equally applicable to whatever level of the system they were running previously (i.e. 1) whatever bug(s) are still there, and 2) they could have gotten a much better performing system by making the modifications to a pre-SP level system).


... snip ...

... and
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest

Date: 06/17/81 08:16:55
From: wheeler

time to switch heads includes the complete propagation delay between the end of data transfer on the previous data block thru all the channel ccw fetches (transferring appropriate data down to the control unit, etc.) ... the channel has to get to the next data transfer ccw before the r/w heads get to the next data block. page format (for cp) of 4k blocks on a 3330 track only leaves room for about 100-byte dummy filler records. Now . . .

First thing you do in cp, if you know that you are chaining CCW "packages", is to eliminate the SET SECTOR from the TIC'ed-to ccw packages (by overlaying the set sector with a "copy" of the seek and tic'ing to the 2nd ccw in the package). Since the rotational delay is minimal, set sector only increases the delay time in the channel for fetching and transferring data.

Since the specs. on the 3330 require a dummy filler gap greater than what is available, IBM code didn't bother to support switching heads on the 3330. Boston Univ. (running on a 145) did some experimenting and found that head switching did in fact work on their machine. About 3 years ago we picked up that code (which includes a TEST mode where "hits&misses" are monitored). We found that on our 158 it was missing 80% of the time. Since our floor system also runs on several GPD machines, we checked its operation on 4341, 3031s, and 3033s. On the 4341 it worked fine; on 3031 and 3033 it had the same operating characteristics as the 158. I then talked to a customer with a 168 who did some further experimenting for me. He found that it would "hit" 100% of the time using IBM & CDC drives with the expanded filler. He also found that on Memorex drives he could "crank" the filler record down to 50 bytes and still get 100% hits.

The 4341 essentially has the same channel hardware as the 145 (in fact a standard 4341 with minimal microcode changes is used for 3meg. channel testing). The 158 channels are the "slowest" channels of the 370 line. The similarity between the 158 and the 303x is because of the channel directors. The same channel director hardware is shared across the whole 303x line. A channel director is essentially a stripped down 158 (i.e. 158 integrated channel microcode).

Anyway, most of the delay is not in the actual head switching but in all the delays back in the channel to process all the CCWs between the end of the previous data transfer and the start of the next one. I haven't bothered to actually figure out what would be the required filler block using channel directors. Might try at least double and start cutting it back down from there if you get constant "hits".


... snip ...

note the comment in the above about the 4341 being used for 3mbyte/sec channel testing. the other alternative for retrofitting (3mbyte/sec) 3380s to earlier machines was the "Calypso" speed-matching addition to the 3880 controller ... that was basically the start of ECKD ... and a long litany of various kinds of emulation to avoid having to move past CKD.
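the filler-record arithmetic from the quoted email can be sketched using published nominal 3330 figures (3600rpm, ~806kbytes/sec transfer rate) ... the channel-delay values plugged in below are made-up examples, not measurements:

```python
# back-of-envelope for the dummy filler record between 4k page
# blocks on a 3330 track: the filler has to represent at least as
# much rotational time as the channel needs to fetch/propagate the
# chained ccws between the end of one data transfer and the start
# of the next.

DATA_RATE = 806_000          # bytes/sec, nominal 3330 transfer rate

def filler_time_us(filler_bytes):
    """rotational time (microseconds) represented by a filler record."""
    return filler_bytes / DATA_RATE * 1e6

def min_filler_bytes(channel_delay_us):
    """smallest filler covering a given end-to-end channel delay."""
    return int(channel_delay_us * DATA_RATE / 1e6) + 1

# the ~100-byte filler mentioned above represents roughly 124
# microseconds of rotation; a slower channel path (e.g. thru a
# 158-style channel director) needing twice that delay would need
# roughly double the filler -- "try at least double and start
# cutting it back down", as the email suggests
hundred = filler_time_us(100)
double = min_filler_bytes(2 * hundred)
```

this is why the Memorex result in the email (100% hits with a 50-byte filler) implies a much faster end-to-end ccw propagation path: 50 bytes only buys about 62 microseconds of rotation.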

misc. past posts mentioning ECKD and/or Calypso:
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2007.html#29 Just another example of mainframe costs
https://www.garlic.com/~lynn/2007e.html#40 FBA rant
https://www.garlic.com/~lynn/2007e.html#46 FBA rant
https://www.garlic.com/~lynn/2007f.html#0 FBA rant
https://www.garlic.com/~lynn/2007g.html#39 Wylbur and Paging
https://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC chip more expensive?
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2008b.html#16 Flash memory arrays
https://www.garlic.com/~lynn/2008b.html#17 Flash memory arrays
https://www.garlic.com/~lynn/2008q.html#40 TOPS-10
https://www.garlic.com/~lynn/2009e.html#61 "A foolish consistency" or "3390 cyl/track architecture"
https://www.garlic.com/~lynn/2009k.html#44 Z/VM support for FBA devices was Re: z/OS support of HMC's 3270 emulation?
https://www.garlic.com/~lynn/2009k.html#74 Disksize history question
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

More calumny: "Secret Service Uses 1980s Mainframe"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More calumny: "Secret Service Uses 1980s Mainframe"
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 02 Mar 2010 17:21:38 -0500
there was something about how, after the secret service was absorbed into dept. homeland security (1mar2003), something like 1/3rd of the secret service budget found its way elsewhere.

subject of thread from last year (in this mailing list):

Secret Service plans IT reboot
http://fcw.com/Articles/2009/10/19/Web-Secret-Service-IT-modernization.aspx

from above:
According to the RFI, the service's existing IT infrastructure is outdated and at risk of failing. Forty-two mission-oriented applications run on a 1980s IBM mainframe with a 68 percent performance reliability rating, it said. In addition, data systems and IT security don't meet requirements, the service said.

... snip ...

misc. past posts in that thread;
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2009p.html#12 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2009p.html#13 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2009p.html#18 Secret Service plans IT reboot

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Wed, 03 Mar 2010 08:34:38 -0500
re:
https://www.garlic.com/~lynn/2010e.html#10 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
https://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core

... long after i had made a presentation at baybunch on how to implement dumprx ... more from the 3090 (3092) service processor group
Date: 23 December 1986, 10:38:21 EST
To: wheeler

Re: DUMPRX

Lynn, do you remember some notes or calls about putting DUMPRX into an IBM product? Well .....

From the last time I asked you for help you know I work in the 3090/3092 development/support group. We use DUMPRX exclusively for looking at testfloor and field problems (VM and CP dumps). What I pushed for back aways and what I am pushing for now is to include DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do not have the new XEDIT.

In any case, we (3090/3092 development) would assume full responsibility for DUMPRX as we release it. Any changes/enhancements would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be on vacation from 12/24 through 01/04.


... snip ... top of post, old email index.

RXDMPS was a small stub of code that dealt with the failure image file.

misc. past posts mentioning dumprx
https://www.garlic.com/~lynn/submain.html#dumprx

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Agile Workforce

From: lynn@garlic.com (Lynn Wheeler)
Date: 4 Mar, 2010
Subject: Agile Workforce
Blog: Greater IBM
early in my career, i was associated with a rapidly growing development effort ... they had a five-year plan that involved adding hundreds of people and a huge number of individual items spread over five years (before spreadsheets and other automated aids that managed such things).

i had done dynamic adaptive resource management (for computing resources, as an undergraduate) so i got stuck with managing the plan. there were also corporate politics involved ... there would be weekly calls from hdqtrs with people asking ridiculously trivial what-if questions about the plan ... things that represented small fractions of a percent in the overall scope ... which, given the resources of the group at the time, could have completely buried the whole group in coming up with answers. so i memorized the whole thing ... and got practiced to the point that i could answer the questions as fast as hdqtrs could pose their (ridiculously insignificant, trivial) questions.

much later there was a study of successful startups in silicon valley ... that claimed that the single most common characteristic of a successful startup was that they had completely changed their business plan at least once within the first two years (i.e. agility was much more important than accuracy).

I also sponsored John Boyd's briefings at IBM ... including his Organic Design for Command and Control.

More recently he was credited with the battle plan for the previous effort in the middle east ... and people involved in the current middle east activities have commented that one of the biggest problems was that John had died in the interim. More recent reference to John
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999


In addition to his OODA-loop theories beginning to show up in business and management programs ... there are starting to be references to his "to be or to do" line. His OODA-loop theories have also been used to rewrite Marine training, oriented towards a much more adaptable and agile work force.

His observation about the effect that rigid, top-down, ww2 command&control training has had on american business culture has also been used to explain the enormous bloat in executive compensation. There have been reports that the ratio of executive to worker compensation has exploded to 400:1, after having been 20:1 for a long time (and 10:1 in much of the rest of the world). The WW2 scenario has only a very few at the very top understanding what they are doing, with the great masses of workers unskilled and requiring the rigid, top-down command&control structure in order to accomplish anything. The very few with skills, amid huge masses of unskilled, is then used to justify the 400:1 compensation ratios.

misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Byte Tokens in BASIC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Byte Tokens in BASIC
Newsgroups: alt.folklore.computers
Date: Thu, 04 Mar 2010 12:32:15 -0500
Eric Chomko <pne.chomko@comcast.net> writes:
If you take Gates' coding ability as a standard of his wealth, then more than enough folks around here should be worth 100s of millions of dollars at the very least!

i think that it is more like wealth being inversely related to the time spent coding ... more akin to Boyd's "to be or to do"
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999


misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

PCI tokenization push promising but premature, experts say

From: lynn@garlic.com (Lynn Wheeler)
Date: 4 Mar, 2010
Subject: PCI tokenization push promising but premature, experts say
Blog: Financial Crime Risk, Fraud and Security
PCI tokenization push promising but premature, experts say
http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci1409257,00.html

from above ...

Credit card tokenization technology can help better protect credit card data, but a lack of industry standards and complexity issues pose problems for merchants, according to a panel at the 2010 RSA Conference

... snip ...

There were association mandates for this in the 90s ... the issue was that consumers would reference a dispute by account number and date/time ... not by token ... and there wasn't sufficient dataprocessing infrastructure to provide the mapping from account number and transaction date/time to the transaction-specific token (or transaction-id)

The underlying issue is that whatever value is used gets forced into a dual-purpose role ... both for the business process of managing the transaction ... as well as (effectively) something you know authentication. The token scenario attempts to introduce a one-time-use, per-transaction dual-purpose business handle/authentication value ... rather than fixing the paradigm of having the same value act as both the business transaction identifier and the authentication value.

The transaction-id scenario doesn't fix the underlying problem ... just attempts to somewhat reduce the scope of the vulnerability.
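the mapping problem described above can be sketched as a toy ... all names, fields, and values below are hypothetical illustration, not any actual association/PCI scheme:

```python
# toy token vault: a per-transaction random token decouples the
# authentication value from the business identifier ... but since
# consumers dispute by (account number, date/time), the processor
# side also needs a reverse index -- the "dataprocessing
# infrastructure" the 90s mandates lacked.

import secrets

class TokenVault:
    def __init__(self):
        self._by_token = {}          # token -> (account, timestamp)
        self._by_acct_time = {}      # (account, timestamp) -> token

    def tokenize(self, account, timestamp):
        # one-time-use random token: possession of the token alone
        # carries no authentication value beyond this transaction
        token = secrets.token_hex(16)
        self._by_token[token] = (account, timestamp)
        # without this second index, a dispute quoted by account
        # number and date/time can't be matched to the transaction
        self._by_acct_time[(account, timestamp)] = token
        return token

    def find_for_dispute(self, account, timestamp):
        return self._by_acct_time.get((account, timestamp))

vault = TokenVault()
tok = vault.tokenize("4111-1111-1111-1111", "2010-03-04T12:00")
```

the sketch shows the reduced-scope point: compromising a stored token exposes one transaction handle, not a reusable account-number-as-authenticator ... but the account number itself still exists and still does double duty upstream.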

Banks encouraged to implement decent multi-factor authentication to securely offer online banking
http://www.scmagazineuk.com/banks-encouraged-to-implement-decent-multi-factor-authentication-to-securely-offer-online-banking/article/164880/

from above:

Multi-factor authentication will solve the problems of online banking. In a blog posting on the threatpost website Roel Schouwenberg, a senior anti-virus researcher in Kaspersky Lab's global research and analysis team, claimed that

... snip ...

Eliminating the dual-use nature of the account number (being used for transaction related business processes as well as something you know authentication) ... goes a long way to fixing many of the vulnerabilities in the existing infrastructure.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Thu, 04 Mar 2010 18:15:29 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
Yes, but the system/z software isn't available - at any price. You can run it, but IBM won't sell it to you to run on a "non-approved" system. They used to partner with the authors of the Flex/es emulator (Rational Software?), but they yanked the agreement, leaving customers hanging. There was some talk about them (IBM) coming out with their own emulator, but I haven't heard more about it.

*Legally* all you can run on Hercules emulating system/z is 64-bit z/Linux (native, z/VM is also unavailable to host it).


fundamental software
https://web.archive.org/web/20240130182226/https://www.funsoft.com/

FSI had a deal with sequent as their major commercial platform offering.

steve chen was cto at sequent and we did a little consulting for him.

ibm then bought sequent ... and was going to continue supporting sequent numa stuff ... but then all that went by the wayside.

misc. past posts mentioning hercules and/or fundamental software
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#27 VM/SP sites that allow free access?
https://www.garlic.com/~lynn/2001n.html#22 Hercules, OCO, and IBM missing a great opportunity
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002i.html#31 : Re: AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#69 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#1 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#2 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#6 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#37 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#48 SHARE Planning
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002p.html#37 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002q.html#23 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003.html#21 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#72 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003i.html#9 IBM system 370
https://www.garlic.com/~lynn/2003k.html#7 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004g.html#29 [IBM-MAIN] HERCULES
https://www.garlic.com/~lynn/2004g.html#48 Hercules
https://www.garlic.com/~lynn/2004g.html#49 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#1 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004h.html#4 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004n.html#21 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005m.html#4 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006.html#14 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006c.html#30 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#3 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#15 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006d.html#19 Hercules 3.04 announcement
https://www.garlic.com/~lynn/2006u.html#16 IA64 and emulator performance
https://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ?
https://www.garlic.com/~lynn/2007i.html#11 Laugh, laugh. I thought I'd die - application crashes
https://www.garlic.com/~lynn/2007k.html#78 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules
https://www.garlic.com/~lynn/2007m.html#20 Patents, Copyrights, Profits, Flex and Hercules
https://www.garlic.com/~lynn/2007m.html#32 Patents, Copyrights, Profits, Flex and Hercules
https://www.garlic.com/~lynn/2007p.html#19 zH/OS (z/OS on Hercules for personal use only)
https://www.garlic.com/~lynn/2007u.html#23 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007u.html#29 Folklore references to CP67 at Lincoln Labs
https://www.garlic.com/~lynn/2007v.html#21 It keeps getting uglier
https://www.garlic.com/~lynn/2008d.html#48 VM/370 Release 6 Waterloo tape (CIA MODS)
https://www.garlic.com/~lynn/2008d.html#57 Fwd: Linux zSeries questions
https://www.garlic.com/~lynn/2008r.html#46 pc/370
https://www.garlic.com/~lynn/2008s.html#64 Computer History Museum
https://www.garlic.com/~lynn/2009.html#9 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2009c.html#2 history of comments and source code annotations
https://www.garlic.com/~lynn/2009c.html#21 IBM tried to kill VM?
https://www.garlic.com/~lynn/2009e.html#44 Architectural Diversity
https://www.garlic.com/~lynn/2009k.html#52 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#55 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#56 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#59 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#62 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#71 Hercules Question
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009s.html#26 PDP-10s and Unix
https://www.garlic.com/~lynn/2010.html#2 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#27 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2010d.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Boyd's Briefings

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 5 Mar, 2010
Subject: Boyd's Briefings
Blog: Greater IBM
re:
https://www.garlic.com/~lynn/2010e.html#39 Agile Workforce

The first time I was scheduling Boyd's briefing at IBM, I tried to do it thru employee education ... who initially agreed. However, after I provided more information about Boyd's briefings, they changed their mind. Effectively what they said was that the company spends a lot of money educating/training managers ... and presenting some of Boyd's ideas to ordinary employees might be counterproductive to all that management training. Employee education suggested that I might consider limiting Boyd's audience to just people in corporate competitive analysis departments.

Boyd made a couple of references to doing a year's stint running "spook base". One of his biographies mentions that "spook base" was a $2.5B windfall for IBM.

One of Boyd's WW2 examples was Shermans vis-a-vis Tigers .... Tigers having something like a 10:1 kill ratio ... but the US could produce massive numbers of Shermans and win by overwhelming numbers and logistics (the downside being some loss of tank-crew morale ... because they were being used as cannon fodder).

Boyd's counter example was Guderian's "verbal orders only" during the blitzkrieg ... Guderian wanted the local commander to feel free to make decisions on the spot w/o having to worry about a thick pile of CYA paper.

About that time, we were having a corporate audit. 6670s & sherpas were being deployed around the building in nearly every dept (dept. secretary or office supply area) for computer output. The machines (basically copier 3s with a computer interface) had the alternate paper drawer loaded with colored paper. The computer printer driver had been modified to print an output "separator" page from the alternate paper drawer. The separator page was mostly blank ... so the driver was enhanced to randomly select quotations from a file (to also print on the separator page). An after-hours sweep by the auditors for unsecured confidential information found an unclassified output on one of the 6670s, where the separator page had the definition of an auditor (people that go around the battlefield after a war, stabbing the wounded). They tried to lodge a complaint that we had placed it there on purpose.

It wasn't true, but we were having something of a disagreement with the auditors regarding demo programs. We had a large collection of demo programs that the auditors were classifying as games and wanted removed from all corporate computers. They were pushing for modifying the logon screen to say the system could be used for business purposes only. We were pushing for the logon screen to say it could be used for management-approved purposes. Demo programs were a valuable tool in educating people about the characteristics of online computing.

Boyd closed Organic Design for Command and Control ... after highlighting the pervasive command & control mentality (including in the business world) ... with the observation that it should be replaced with "leadership & appreciation".

misc. past posts mentioning Boyd
https://www.garlic.com/~lynn/subboyd.html

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Need tool to zap core

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need tool to zap core
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Fri, 05 Mar 2010 08:24:21 -0500
re:
https://www.garlic.com/~lynn/2010e.html#32 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#38 Need tool to zap core

old email mentioning SIE in vm/811 (811 was a code name for 370/xa, from the nov78 date on the architecture documents) ... aka the vmtool, which was (originally) only for internal mvs/xa development and was never going to be released to customers. Also some discussion of the difference between SIE on the 3081 and on Trout (trout was the codename for 3090)
https://www.garlic.com/~lynn/2006j.html#email810630
in this post
https://www.garlic.com/~lynn/2006j.html#27 virtual memory

the above also mentions that part of SIE's poor performance on the 3081 was that its microcode had to be "paged in" (from the 3310/piccolo by the service processor).

another old email mentioning SIE in 3090
https://www.garlic.com/~lynn/2003j.html#email831118
in this post
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

another reference to SIE on the 3090 still being an expensive instruction
https://www.garlic.com/~lynn/2007c.html#email860121
in this post
https://www.garlic.com/~lynn/2007c.html#49 SVC

the above discusses potentially disabling for I/O interrupts. as it mentions ... I had done something similar a decade earlier ... it would dynamically change based on the I/O interrupt rate crossing some threshold.
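a sketch of the idea ... run enabled for I/O interrupts under light load, but switch to running disabled (taking interrupts only at batch/drain points) when the observed rate crosses a threshold, with hysteresis so the mode doesn't flap. this is entirely hypothetical; the class name, rates, and threshold values are made up, not from the actual implementation:

```python
# Hypothetical sketch: switch between enabled-for-I/O-interrupts and
# disabled (batch-drained) modes based on the measured interrupt rate.
# All names and numbers are illustrative.

class InterruptPolicy:
    def __init__(self, high=1000.0, low=400.0):
        self.high = high        # interrupts/sec above which we run disabled
        self.low = low          # interrupts/sec below which we re-enable
        self.enabled = True     # start out enabled for I/O interrupts

    def observe(self, rate):
        """Update mode from the measured interrupt rate. Two thresholds
        (hysteresis) keep the mode from flapping around a single value."""
        if self.enabled and rate > self.high:
            self.enabled = False    # heavy load: take interrupts in batches
        elif not self.enabled and rate < self.low:
            self.enabled = True     # light load: take interrupts immediately
        return self.enabled

policy = InterruptPolicy()
print(policy.observe(1500))   # False: heavy load, run disabled
print(policy.observe(700))    # False: still above the re-enable threshold
print(policy.observe(100))    # True: light load again
```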

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

PCI tokenization push promising but premature, experts say

From: lynn@garlic.com (Lynn Wheeler)
Date: 5 Mar, 2010
Subject: PCI tokenization push promising but premature, experts say
Blog: Financial Crime Risk, Fraud and Security
re:
https://www.garlic.com/~lynn/2010e.html#41 PCI tokenization push promising but premature, experts say

we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server ... they had also invented this technology called "SSL" that they wanted to use ... the result is now frequently referred to as electronic commerce.

somewhat as a result, we were invited to participate in the x9a10 financial standard working group ... which had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments (aka a standard for POS, debit, credit, ach, stored-value, internet, attended, unattended, transit turnstile, online banking ... aka ALL ... it needed to be lightweight enuf to perform within the time & power constraints of a transit turnstile ... but with enough integrity that it could be used for very high-value transactions). the resulting x9.59 standard slightly tweaked the paradigm to also eliminate the breach vulnerability (part of the x9a10 financial standard working group effort was detailed end-to-end threat and vulnerability studies of the different environments). some references to x9.59
https://www.garlic.com/~lynn/x959.html#x959

at the time, the associations had a number of different groups doing a number of different specifications for various environments ... aka one specification for POS ... and different specification(s) for the internet.

some trivia ... that early electronic commerce involved an internet SSL connection between webservers and a payment gateway ... and then the payment gateway simulated a payment concentrator with a dedicated leased line into the acquiring processor. the first "protocol" implemented had been one already used by the acquirer and shift4 terminals.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Impact of solid-state drives

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Impact of solid-state drives
Newsgroups: comp.arch, comp.unix.programmer
Date: Fri, 05 Mar 2010 09:35:23 -0500
Chris Friesen <cbf123@mail.usask.ca> writes:
Of course it's better not to swap. However, given a specific machine and a specific workload, there may be no possible way to fit both the code and the data set into RAM at the same time.

If swapping out a page of code that gets executed extremely rarely allows the data set to fit in RAM and the app runs 10x faster, I'm all for paging executable code.


recent reference to somebody's decision to bias a page replacement algorithm toward non-changed pages ... (i.e. less effort, since a replaced page didn't have to be written out first; a valid copy was still on the paging device) ... it was that way for nearly a decade (most of the 70s) before they realized that they were replacing high-use, shared executable code before much lower-use, private data pages
https://www.garlic.com/~lynn/2010d.html#78 LPARs: More or Less?

there was some facetious reference about getting paid for doing it the wrong way ... so later can get award for fixing the problem.
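the clean-page bias can be illustrated with a tiny sketch (hypothetical; the page names and helper functions are made up, not from any actual implementation) ... shared executable pages are almost always clean, so "prefer clean" systematically picks them over cold, dirty private-data pages:

```python
# Hypothetical sketch: an eviction policy that prefers non-changed (clean)
# pages avoids a write-out, but since shared executable pages are almost
# always clean, it evicts high-use code ahead of low-use private data.

from collections import namedtuple

Page = namedtuple("Page", "name referenced changed")

resident = [
    Page("shared-code", referenced=True, changed=False),   # hot, clean
    Page("private-data", referenced=False, changed=True),  # cold, dirty
]

def pick_victim_prefer_clean(pages):
    # biased policy: any clean page beats any dirty page
    clean = [p for p in pages if not p.changed]
    return clean[0] if clean else pages[0]

def pick_victim_by_use(pages):
    # reference-based policy: evict a not-recently-referenced page first
    cold = [p for p in pages if not p.referenced]
    return cold[0] if cold else pages[0]

print(pick_victim_prefer_clean(resident).name)  # shared-code (the wrong choice)
print(pick_victim_by_use(resident).name)        # private-data
```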

more recent thread about performance issues related to paging microcode
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#44 Need tool to zap core

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Fri, 05 Mar 2010 11:33:43 -0500
gahenke@GMAIL.COM (George Henke) writes:
The current trend towards CMMI and the Six Sigma standard of quality, 6 standard deviations (3.4 defects in a million) instead of the typical 3 standard deviations (1 defect in 370), points to the demand for quality, excellence, and perfection in everything.
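as an aside, the defect figures quoted above can be checked against standard normal tail probabilities ... the conventional six-sigma "3.4 per million" number assumes the customary 1.5-sigma long-term drift allowance (i.e. a one-sided 4.5-sigma tail), while "1 in 370" is the two-sided 3-sigma tail:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# "three sigma": both tails outside +/- 3 sigma
three_sigma = 2 * normal_tail(3)   # ~0.0027, i.e. about 1 defect in 370
# "six sigma" as conventionally quoted: a one-sided tail at 4.5 sigma
# (the 6-sigma limit minus the customary 1.5-sigma drift allowance)
six_sigma = normal_tail(4.5)       # ~3.4e-6, i.e. 3.4 defects per million

print(f"3-sigma: 1 in {1/three_sigma:.0f}")          # 3-sigma: 1 in 370
print(f"6-sigma: {six_sigma*1e6:.1f} per million")   # 6-sigma: 3.4 per million
```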

Toyota doing Lean manufacturing
https://en.wikipedia.org/wiki/Lean_manufacturing

some of this has cross-over with Boyd's OODA-loops ... I had sponsored Boyd's briefings at IBM
https://en.wikipedia.org/wiki/OODA_loop

Toyota Production System
https://en.wikipedia.org/wiki/Toyota_Production_System

In the early 90s, one of the big-3 had a C4 task force looking at improving their competitive position ... and invited in some number of technology vendors to participate. They went thru the majority of the issues with respect to their current state and foreign competitors. One of the big issues was that a major foreign competitor had reduced the elapsed cycle to produce a (totally new) product (from idea to rolling off the line) from 7-8 years to 2-3 years (and was looking at dropping below 2 years). A big part of C4 was leveraging technology as part of drastically reducing the elapsed time for the product cycle.

I chided some of the mainframe brethren attending the meetings about being there to offer advice on reducing a product cycle from 7-8yrs to 2-3yrs (when they themselves were still on a long product cycle).

Within the domestic auto industry ... although they could very clearly articulate all the important issues ... the status quo was so entrenched that they found it difficult to change.

recent thread in greater ibm blog mentioning Boyd
https://www.garlic.com/~lynn/2010e.html#39 Agile Workforce
https://www.garlic.com/~lynn/2010e.html#43 Boyd's Briefings

misc. past posts mentioning Boyd &/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Fri, 05 Mar 2010 13:29:21 -0500
eamacneil@YAHOO.CA (Ted MacNEIL) writes:
Why? If you don't need the capacity, what's the issue? Would you rather pay full hardware & software costs for capacity you don't need?

Also, this way, IBM has to build just one processor chip.


re:
https://www.garlic.com/~lynn/2010e.html#47 z9 / z10 instruction speed(s)

i.e. the aggregate computing cost (for everybody) is actually less with a single part number ... than if there were a large number of different parts.

in the early 80s, a major analysis of vm/4341s going into every nook & cranny versus "big iron" in the datacenter showed the enormously greater "big iron" expense involved in adding capacity. this can somewhat also be seen in cloud computing's return to the old timesharing days.

having extra capacity already available at the customer site ... is analogous to having an on-site spare-parts depot &/or an on-site CE.

recent reference to the 3033N ... slower than the 168/3032 ... but able to be "field upgraded" to a full-speed 3033:
https://www.garlic.com/~lynn/2010e.html#27 SHAREWARE at Its Finest

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sat, 06 Mar 2010 09:43:54 -0500
jmfbahciv <jmfbahciv@aol> writes:
ROTFLMAO. A typing fo-paw?

re:
https://www.garlic.com/~lynn/2010e.html#47 z9 / z10 instruction speed(s)

yep ... oh well .. s/invented/invited/

note that it was in 1990 ... twenty years ago.

we would go by somers and discuss with some of the occupants that the company (especially the mainframe business) was facing similar issues; they would all essentially agree and (also) be able to clearly articulate the issues ... and then we would go back the next month, or a couple of months later ... and nothing had changed.

there seemed to be a strong sense that they were (also) trying to preserve the status quo until their retirement, leaving corrective action to somebody else.

then the company went into the red ... and some of the status quo and vested interests were forced to change (compared to the auto industry, which managed to preserve the status quo & vested interests across years and years in the red).

misc posts mentioning auto C4 &/or past protectionism import quotas
https://www.garlic.com/~lynn/2000f.html#43 Reason Japanese cars are assembled in the US (was Re: American bigotry)
https://www.garlic.com/~lynn/2003i.html#61 TGV in the USA?
https://www.garlic.com/~lynn/2004c.html#51 [OT] Lockheed puts F-16 manuals online
https://www.garlic.com/~lynn/2004h.html#22 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
https://www.garlic.com/~lynn/2006m.html#49 The Pankian Metaphor (redux)
https://www.garlic.com/~lynn/2007f.html#50 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#29 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007i.html#13 U.S. Cedes Top Spot in Global IT Competitiveness
https://www.garlic.com/~lynn/2007j.html#33 IBM Unionization
https://www.garlic.com/~lynn/2007n.html#31 IBM obsoleting mainframe hardware
https://www.garlic.com/~lynn/2008.html#84 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#22 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008c.html#68 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008e.html#31 IBM announced z10 ..why so fast...any problem on z 9
https://www.garlic.com/~lynn/2008f.html#50 Toyota's Value Innovation: The Art of Tension
https://www.garlic.com/~lynn/2008h.html#65 Is a military model of leadership adequate to any company, as far as it based most on authority and discipline?
https://www.garlic.com/~lynn/2008k.html#2 Republican accomplishments and Hoover
https://www.garlic.com/~lynn/2008k.html#50 update on old (GM) competitiveness thread
https://www.garlic.com/~lynn/2008k.html#58 Mulally motors on at Ford
https://www.garlic.com/~lynn/2008m.html#21 Fraud due to stupid failure to test for negative
https://www.garlic.com/~lynn/2008m.html#52 Are family businesses unfair competition?
https://www.garlic.com/~lynn/2008n.html#4 Michigan industry
https://www.garlic.com/~lynn/2008p.html#77 Tell me why the taxpayer should be saving GM and Chrysler (and Ford) managers & shareholders at this stage of the game?
https://www.garlic.com/~lynn/2008q.html#22 Is Pride going to decimate the auto Industry?
https://www.garlic.com/~lynn/2009i.html#2 China-US Insights on the Future of the Auto Industry
https://www.garlic.com/~lynn/2009i.html#3 IBM interprets Lean development's Kaizen with new MCIF product
https://www.garlic.com/~lynn/2009i.html#10 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#31 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2010b.html#14 360 programs on a z/10

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sat, 06 Mar 2010 10:30:45 -0500
rich@VELOCITYSOFTWARE.COM (Rich Smrcina) writes:
Most of the new support now centers around capacity (very large virtual machines) and virtual networking (virtual switch). There is a statement of direction for clustering (not sysplex) and guest migration (moving live machines between VM systems). All of this with the intent of supporting Linux.

note that some of the commercial (virtual machine) timesharing service bureaus had done moving live machines between VM systems in the early 70s; it was a combination of 7x24 operation and providing services to customers around the world ... addressing the problem that there was no down period for service ... no window where downtime/outages could be tolerated for things like preventive maintenance. They would migrate virtual machines as part of dynamically taking complexes offline (out of the cluster) for service. misc. past posts mentioning the virtual machine commercial timesharing services dating back to the 60s:
https://www.garlic.com/~lynn/submain.html#timeshare

The largest of the clustering ("single system image") operations in the late 70s (not limited to vm systems, but any mainframe complex, anywhere) was the US VM-based internal (worldwide) sales&marketing support HONE system. The US HONE centers had been consolidated in a single cal. bldg (they no longer occupy the bldg, but it is located next door to the new facebook bldg). This US datacenter was the largest (although other HONE clones had started to spring up in several places around the world), with load-balancing and fall-over recovery. Then because of earthquake concerns, in the early 80s, the cal. center was first replicated in Dallas and then with a 3rd in Boulder (with load-balancing and fall-over across the redundant centers). a few recent posts in "Greater IBM" discussing HONE:
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE
https://www.garlic.com/~lynn/2010e.html#25 HONE Compute Intensive
https://www.garlic.com/~lynn/2010e.html#29 HONE & VMSHARE

misc. other posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

The product's customers saw a big impact when POK got the development group shut down and all the people moved to POK to support MVS/XA development. Initially the product itself was also killed; Endicott managed to save the product mission ... but had to reconstitute a development group from scratch. This possibly contributed to the VM/SP quality mentioned in Melinda's history ... recent reference:
https://www.garlic.com/~lynn/2010e.html#31 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)

The 4331/4341 saw a big explosion in distributed machines connected via network ... so their (new) focus wasn't on the high end and/or clustering in a single location.

There was a research clustering project, eight 4341s with a 3088 ... that was eventually offered as a product ... but it ran into a couple of problems

1) there was already pressure from the high end (mvs, 3033, 3081s, etc), where customers found that multiple vm/4341s were much more cost effective than the big iron ... so anything that enhanced this was met with opposition.

2) the internal product had things like cluster-wide operations taking very small subsecond elapsed times. however, for product ship they were forced to move to SNA ... and all of a sudden simple cluster-wide coordination operations were taking nearly a minute elapsed time. This is similar to the on-going SNA battles that my wife faced when she had been con'ed into going to POK to be in charge of (high-end) mainframe loosely-coupled architecture. The battles with SNA (ephemeral temporary truces where she could use anything she wanted within datacenter walls, but SNA had to be used by anything crossing the walls of the datacenters) and very little uptake at the time (except for IMS hot-standby, until sysplex) meant that she didn't stay long ... misc. past posts mentioning her Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sat, 06 Mar 2010 16:38:51 -0500
"Dave" <g8mqw@yahoo.com> writes:
Well "VM" or "CP" is very different. As the XA/ESA/z machine can't be virtualized easily in software, I assume because of the need to make AMODE switches efficient, CP now relies on the SIE instruction to provide virtual machines. I gather this uses the same "hardware" that LPARs use. I guess you might consider this a "superset" of the VM Assists that were available on S/370...

XA wasn't as easily virtualized as 360 & 370 were ... hence the SIE instruction. SIE predates PR/SM and LPARs ... in that sense PR/SM and LPARs were leveraging the pre-existing infrastructure for microcode assists and SIE (and for some time, some of the assist microcode was mutually exclusive: either for VM use or for LPAR use ... but not both).

In the 360/370 scenario ... LPSW and interrupts loaded a new PSW ... which simultaneously switched address space and privilege/problem mode in a single operation (in MVS, a copy of the kernel appears in every address space ... while in cp/vm, the kernel and the guest address spaces are totally distinct).

SIE was able to change address spaces and privilege/problem mode in a single operation, as well as set a flag for privileged instructions ... basically an "assist mode" indicating that a privileged instruction is executed according to virtual machine rules (as opposed to real machine rules; basically each assisted privileged instruction has modified microcode). To simultaneously use the assists for LPARs and virtual machines ... each privileged instruction's microcode effectively needed to be further modified to recognize 1) real machine, no LPAR, no virtual machine, 2) LPAR, no virtual machine, 3) virtual machine, no LPAR, 4) both LPAR and virtual machine. From a microcode standpoint, LPAR+VM is similar to virtualizing SIE (i.e. running a guest VM system under a VM virtual machine).
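that four-way case analysis can be sketched as follows ... a hypothetical illustration only (the function and mode names are made up; actual microcode states look nothing like this):

```python
# Hypothetical sketch: a privileged instruction's behavior depends on whether
# the CPU is running under an LPAR, under a virtual machine (SIE), neither,
# or both. Mode names are illustrative, not actual microcode states.

def privop_rules(lpar, vm):
    if not lpar and not vm:
        return "real-machine rules"
    if lpar and not vm:
        return "LPAR rules"
    if vm and not lpar:
        return "virtual-machine rules"
    # both: akin to virtualizing SIE itself (a VM guest under an LPAR)
    return "virtual-machine-under-LPAR rules"

for lpar in (False, True):
    for vm in (False, True):
        print(f"lpar={lpar} vm={vm}: {privop_rules(lpar, vm)}")
```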

SIE had additional performance issues on the 3081 ... it started out purely as an internal tool supporting mvs/xa development ... originally never intended for shipping to customers (or production use) ... recent references to the 3081 SIE microcode being "paged" (i.e. the 3081 service processor paging SIE microcode from 3310 FBA ... representing something of a performance issue):
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#44 Need tool to zap core
https://www.garlic.com/~lynn/2010e.html#46 Impact of solid-state drives

the other case is the emulated implementations ... like Hercules ... implemented on the Intel platform; this could be considered analogous to the entry & mid-range mainframe implementations that had been done in vertical microcode.

There was a separate performance issue going from virtual MVT guests to virtual SVS/MVS guests ... since VM started out simulating the TLB (the hardware look-aside buffer implementing virtual memory) with shadow page tables. Managing the entries in the shadow page tables in software was enormously slower than the hardware overhead involved in managing TLB entries. There could also be pathological page replacement behavior ... with MVS approximating an LRU page replacement and VM also approximating an LRU page replacement, VM might select an MVS guest page for removal from real storage ... moments before the MVS guest decides that it is the ideal next page to use (VM deciding that since it hasn't been used, it can be removed from real storage, and MVS deciding that since it hasn't been used, it can be reassigned for some other use).
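a minimal sketch of the shadow page table composition described above (a hypothetical illustration; the tables, addresses, and function are all made up) ... the guest's table maps guest-virtual to guest-real, the host's maps guest-real to host-real, and VM maintains the composed mapping in software, invalidating it whenever either side changes:

```python
# Hypothetical sketch: shadow page tables compose the guest's mapping
# (guest-virtual -> guest-real) with the host's (guest-real -> host-real)
# into a shadow table (guest-virtual -> host-real) that plays the TLB role.
# Every change on either side forces shadow invalidation -- the software cost.

guest_table = {0x1000: 0x8000}   # guest-virtual page -> guest-real page
host_table  = {0x8000: 0x30000}  # guest-real page -> host-real page

shadow = {}                      # lazily filled composed mapping

def translate(gva_page):
    if gva_page in shadow:               # hit in the shadow table
        return shadow[gva_page]
    gra = guest_table.get(gva_page)      # walk the guest's table
    if gra is None:
        return None                      # guest-level page fault
    hra = host_table.get(gra)            # walk the host's table
    if hra is None:
        return None                      # host-level fault (VM paged it out)
    shadow[gva_page] = hra               # fill the shadow entry
    return hra

print(hex(translate(0x1000)))   # 0x30000
del host_table[0x8000]          # VM steals the frame ...
shadow.clear()                  # ... and must invalidate shadow entries
print(translate(0x1000))        # None: now a host-level fault
```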

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sat, 06 Mar 2010 16:51:59 -0500
re:
https://www.garlic.com/~lynn/2010e.html#50 LPARs: More or Less?

so this old email mentions getting two (BU) co-op students to help me with a bunch of code migration from cp67 to vm370
https://www.garlic.com/~lynn/2006v.html#email731212

the co-op students graduated that spring and one of them went to work for a VM-based online commercial timesharing service bureau (started by the head of MIT lincoln labs & some of the other cp67 people in the area).

a lot of the code he had worked on didn't ship for awhile ... so he recreated the implementations: shared-segment extensions, page migration, administrative address spaces (used to package kernel control blocks and migrate them to disk paging areas). Adding loosely-coupled shared device support allowed page & spool to be migrated off of devices that needed to be taken offline for service; a complete virtual machine description could also be moved out of one processor and back into a different processor (when a processor complex needed to go offline for service). That was done along with front-end load-balancing and single-system-image clustering support.

some of these vm-based timesharing companies moved up the value stream by offering online financial information to the financial community (stuff like 100? years of stock closing prices ... service delivery being done via the paged-mapped, shared-segment facility). much later, they show up offering the services on the web ... and this company even shows up in the financial crisis.

congressional hearings highlighted that the rating agencies played a major role in the current crisis by selling triple-A ratings for toxic CDOs. The unregulated (non-depository) loan originators had their source of funds enormously increased when they were able to pay for triple-A ratings w/o regard to the actual value. The hearings pointed out that the seeds of this conflict of interest were sown in the early 70s, when the rating agencies switched from buyers paying for the ratings to sellers paying for the ratings. Being able to pay for triple-A ratings not only enormously increased the source of funds to the unregulated (non-depository) loan originators but also eliminated any concern they had for loan quality or borrower qualification (since they could always get triple-A ratings and immediately sell off at a premium). Speculators sucked the loans up, since no-down, no-documentation, one-percent ARMs with interest-only payments were significantly less than real-estate inflation (with the speculation frenzy further fueling inflation; planning on flipping before the ARM adjusted; with 20 percent inflation in some markets and 1% interest-only payments, potentially nearly 2000% ROI).

This particular company (which started out as a vm-based commercial timesharing service bureau) bought the "pricing services division" from one of the rating agencies about the time that, per the congressional testimony, the rating agencies switched who paid for the ratings (creating the opening for the conflict of interest).

In early Jan2009, the same company showed up in the news as helping the fed. gov. price the troubled/toxic assets for purchase with TARP funds (possibly via the pricing services division purchased nearly 40yrs earlier?). This was before the gov. realized that the amount of appropriated TARP funds was a small drop in the bucket compared to just the total troubled assets being carried offbook by the four largest too-big-to-fail institutions (and the gov. was forced to invent other ways of using TARP funds and saving the economy).

past posts in this thread:
https://www.garlic.com/~lynn/2010d.html#58 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#59 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#60 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#61 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#63 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#70 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#72 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#73 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#78 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#81 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#0 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#1 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#2 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#51 LPARs: More or Less?

past posts mentioning TARP:
https://www.garlic.com/~lynn/2008r.html#61 The vanishing CEO bonus
https://www.garlic.com/~lynn/2008s.html#32 How Should The Government Spend The $700 Billion?
https://www.garlic.com/~lynn/2008s.html#33 Garbage in, garbage out trampled by Moore's law
https://www.garlic.com/~lynn/2008s.html#35 Is American capitalism and greed to blame for our financial troubles in the US?
https://www.garlic.com/~lynn/2008s.html#41 Executive pay: time for a trim?
https://www.garlic.com/~lynn/2009.html#73 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009.html#80 Are reckless risks a natural fallout of "excessive" executive compensation ?
https://www.garlic.com/~lynn/2009b.html#25 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#30 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#35 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#41 The subject is authoritarian tendencies in corporate management, and how they are related to political culture
https://www.garlic.com/~lynn/2009b.html#45 The recently revealed excesses of John Thain, the former CEO of Merrill Lynch, while the firm was receiving $25 Billion in TARP funds makes me sick
https://www.garlic.com/~lynn/2009b.html#49 US disaster, debts and bad financial management
https://www.garlic.com/~lynn/2009b.html#57 Credit & Risk Management ... go Simple ?
https://www.garlic.com/~lynn/2009b.html#59 As bonuses...why breed greed, when others are in dire need?
https://www.garlic.com/~lynn/2009c.html#10 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#11 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#16 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009d.html#74 Why is everyone talking about AIG bonuses of millions and keeping their mouth shut on billions sent to foreign banks?
https://www.garlic.com/~lynn/2009e.html#17 Why is everyone talking about AIG bonuses of millions and keeping their mouth shut on billions sent to foreign banks?
https://www.garlic.com/~lynn/2009e.html#23 Should FDIC or the Federal Reserve Bank have the authority to shut down and take over non-bank financial institutions like AIG?
https://www.garlic.com/~lynn/2009e.html#35 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#59 Tesco to open 30 "bank branches" this year
https://www.garlic.com/~lynn/2009e.html#79 Are the "brightest minds in finance" finally onto something?
https://www.garlic.com/~lynn/2009f.html#27 US banking Changes- TARP Proposl
https://www.garlic.com/~lynn/2009f.html#29 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#35 US banking Changes- TARP Proposl
https://www.garlic.com/~lynn/2009f.html#43 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#47 TARP Disbursements Through April 10th
https://www.garlic.com/~lynn/2009f.html#51 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009g.html#7 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#44 What TARP means for the future of executive pay
https://www.garlic.com/~lynn/2009i.html#44 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#54 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009j.html#36 Average Comp This Year At Top Firm Estimated At $700,000
https://www.garlic.com/~lynn/2009j.html#81 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009n.html#20 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009n.html#21 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009n.html#62 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009p.html#23 Opinions on the 'Unix Haters' Handbook
https://www.garlic.com/~lynn/2009q.html#47 Is C close to the machine?
https://www.garlic.com/~lynn/2009r.html#47 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2010.html#37 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#61 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#23 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#48 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#51 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#53 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#4 alphas was: search engine history, was Happy DEC
https://www.garlic.com/~lynn/2010d.html#13 search engine history, was Happy DEC-10 Day

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main
Date: 6 Mar 2010 20:16:38 -0800
lists@AKPHS.COM (Phil Smith III) writes:
If you look at carefully written PC software like, say, Steve Gibson's stuff (www.grc.com -- not a plug, just an example that comes to mind), you'll see incredibly rich and powerful stuff that fits in the palm of your PC's hand, so to speak. http://www.grc.com/freepopular.htm has dozens of apps, most in the 25K (yes, K!) range. So it isn't impossible to write tight software for PCs, just discouraged by the apparent lack of need and ubiquity of IDEs that produce bloat.

recent thread about redoing part of a res(ervation) system ... sized such that something like ten rs/6000 580s could have handled the full world-wide activity .... way outperforming a large datacenter of TPF systems (the same load was projected to take a couple hundred es9000 processors) .... and a current treo (xscale processor) theoretically has approx. the compute power of those ten 580s.
https://www.garlic.com/~lynn/2010b.html#80 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#19 Processes' memory

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Sun, 07 Mar 2010 10:01:08 -0500
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Oh, we had just about every other language imaginable, and probably a few that weren't. WATFOR, FORTRAN G, Assembler G, PL/I, Snobol4, LISP, UMIST, APL, Algol 60, Algol 68, Algol W, pl/360 (the misbegotten bastard child of Algol and assembly language)... In fact, there was one course whose sole purpose was to feed you a new language every two weeks. (I dropped out before I had to take that one.)

for some 4GL, Mathematica, & virtual machine topic drift:

A Brief History of Fourth Generation Languages
http://www.decosta.com/Nomad/tales/history.html

from above:
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base.

... snip ...
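The NOMAD one-liner quoted above is essentially a crosstab: one row per division, one column per month, cells summed. A minimal plain-Python sketch of what that single statement computes (the sales records here are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Hypothetical sales records: (division, month, amount) -- illustrative only.
sales = [
    ("East", "Jan", 100), ("East", "Feb", 150),
    ("West", "Jan", 200), ("West", "Feb", 50), ("East", "Jan", 25),
]

# Roughly what "PRINT ACROSS MONTH SUM SALES BY DIVISION" produces:
# months spread ACROSS the page, sales summed BY division.
table = defaultdict(lambda: defaultdict(int))
for division, month, amount in sales:
    table[division][month] += amount

months = ["Jan", "Feb"]
print("DIVISION " + " ".join(f"{m:>6}" for m in months))
for division in sorted(table):
    print(f"{division:8} " + " ".join(f"{table[division][m]:6}" for m in months))
```

Even this toy version is a dozen lines of general-purpose code for what the 4GL expressed in one declarative statement; the hundreds-of-lines Cobol comparison came from also hand-coding the I/O, sorting, and report formatting.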

RAMIS wiki
https://en.wikipedia.org/wiki/Ramis_Software

Focus wiki
https://en.wikipedia.org/wiki/FOCUS

Nomad wiki
https://en.wikipedia.org/wiki/Nomad_software

computer history museum item on NCSS, RAMIS, and NOMAD
http://www.computerhistory.org/collections/accession/102658182

for other drift, original relational/sql was (also) done on vm370, misc. past posts mentioning system/r
https://www.garlic.com/~lynn/submain.html#systemr

various other recent posts touching on computer language:
https://www.garlic.com/~lynn/2010.html#21 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010.html#77 Korean bank Moves back to Mainframes (...no, not back)
https://www.garlic.com/~lynn/2010.html#78 y2k10 problem with credit cards in Germany
https://www.garlic.com/~lynn/2010b.html#16 How long for IBM System/360 architecture and its descendants?
https://www.garlic.com/~lynn/2010b.html#86 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2010c.html#8 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#80 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

misc. past posts mentioning ramis, focus, nomad:
https://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#69 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003n.html#12 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2004e.html#15 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004j.html#52 Losing colonies
https://www.garlic.com/~lynn/2004l.html#44 Shipwrecks
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2006k.html#37 PDP-1
https://www.garlic.com/~lynn/2007c.html#12 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2007j.html#17 Newbie question on table design
https://www.garlic.com/~lynn/2007o.html#38 It's No Secret: VMware to Develop Secure Systems for NSA
https://www.garlic.com/~lynn/2007u.html#87 CompUSA to Close after Jan. 1st 2008
https://www.garlic.com/~lynn/2008s.html#66 Computer History Museum
https://www.garlic.com/~lynn/2009k.html#40 Gone but not forgotten: 10 operating systems the world left behind

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main
Date: 7 Mar 2010 07:32:56 -0800
for some 4GL, Mathematica, & virtual machine topic drift:

A Brief History of Fourth Generation Languages
http://www.decosta.com/Nomad/tales/history.html

from above:
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base.

... snip ...

RAMIS wiki
https://en.wikipedia.org/wiki/Ramis_Software

Focus wiki
https://en.wikipedia.org/wiki/FOCUS

from above:
All three products flourished during the 1970s and early 1980s, but Mathematica's time ran out in the mid-80s, and NCSS also failed, a victim of the personal computing revolution which obviated commercial timesharing (although it has since been revived in the form of ASPs and shared web servers).

... snip ...

and the latest buzz word: cloud computing.

Nomad wiki
https://en.wikipedia.org/wiki/Nomad_software

from above:
Nomad was claimed to be the first commercial product to incorporate relational database concepts. This seems to be borne out by the launch dates of the well-known early RDBMS vendors, which first emerged in the late 70s and early 80s -- such as Oracle (1977), Informix (1980), and Unify (1980). The seminal non-commercial research project into RDBMS concepts was IBM's System R, first installed at IBM locations in 1977. System R included and tested the original SQL implementation.

... snip ...

... system/r was originally developed on vm370 on a 370/145 in bldg. 28
https://www.garlic.com/~lynn/submain.html#systemr

and the first commercial relational DBMS was released on Multics in 1976 (the virtual machine stuff was done at the science center on the 4th flr, 545 tech sq, while multics was done on the 5th flr).

other SQL history ...

The 1995 SQL Reunion: People, Projects, and Politics
http://www.mcjones.org/System_R/SQL_Reunion_95/index.html

from above:
Jim Gray: In about 1972 Stonebraker got a grant to do a geo-query database system. It was going to be used for studies of urban planning. The project did do some geographic database stuff, but fairly quickly it gravitated to building a relational database system. The result was the INGRES system[20]. INGRES started in about 1972 and a whole series of things spun off from that: Ingres[21], Britton-Lee, and Sybase.

... snip ...

and computer history museum item on NCSS, RAMIS, and NOMAD
http://www.computerhistory.org/collections/accession/102658182

past posts mentioning various vm-based commercial online timesharing service bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

past posts in this thread:
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010d.html#80 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010d.html#83 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#12 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#16 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer
https://www.garlic.com/~lynn/2010e.html#18 Senior Java Developer vs. MVS Systems Programmer

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sun, 07 Mar 2010 11:37:33 -0500
jmfbahciv <jmfbahciv@aol> writes:
I think the hardest thing to do is get a project started. It seems to take one person whose enthusiasm cannot be quenched to do all the preliminary work required to setup a project. Bosses don't tend to do this. DECUS had an accounting session at every Fall and Spring meeting asking DEC to do "something" about the system usage gathering mechanism. Every customer had their own _unique_ solution which didn't mesh with anybody else's. No monitor developer was interested in doing the mundane thing called accounting. Most, as in 99%, considered the whole thing a PITA.

re:
https://www.garlic.com/~lynn/2010e.html#47 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#49 z9 / z10 instruction speed(s)

the preservation of the status quo went way beyond the difficulty of getting something new started ... there were (relatively) large numbers of people who felt they had a significant vested interest in the way things were; aka there was significant additional compensation for having 30yrs experience in that status quo ... which would not be so highly prized in a totally different environment ... so they tried to stall any changes until they retired, some analogy to rearguard action during retreat.

the auto industry seemed to survive a couple decades with significant amounts of red ink w/o having to make significant changes ... all of the major stakeholders involved in the industry trying to preserve the status quo ... more & more individuals seeing retirement just around the corner and trying to preserve seniority, privileges & status quo until after they are gone.

my dynamic adaptive resource manager was dependent on a lot of instrumentation that was also used for accounting. part of accounting is the creation & keeping of all the information ... somewhat separate from accounting as in billing for the resource usage. another application of the usage data was workload profiling and its evolution into things like capacity planning.
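That dual use of the same instrumentation can be sketched in a few lines: identical per-task usage samples feed both the accounting/billing totals and the workload profile used for capacity planning (record layout and numbers here are assumed, purely for illustration):

```python
from collections import Counter

# Hypothetical instrumentation samples: (dept, hour_of_day, cpu_seconds).
samples = [("physics", 9, 120.0), ("chem", 9, 30.0),
           ("physics", 14, 300.0), ("chem", 23, 10.0)]

billing = Counter()       # accounting: charge resource usage back to each dept
load_profile = Counter()  # capacity planning: when is the machine busy?
for dept, hour, cpu in samples:
    billing[dept] += cpu       # same sample ...
    load_profile[hour] += cpu  # ... two consumers

print(billing["physics"])                        # 420.0 cpu-seconds billed
print(max(load_profile, key=load_profile.get))   # 14 -- the peak hour
```

The point is that once the kernel is gathering fine-grained usage data for dynamic resource management, accounting and capacity planning come nearly for free from the same records.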

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

IBM Plans to Discontinue REDBOOK Series

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM Plans to Discontinue REDBOOK Series
Newsgroups: bit.listserv.ibm-main
Date: 7 Mar 2010 09:27:07 -0800
lists@AKPHS.COM (Phil Smith III) writes:
So it's not quite as clear from the beancounter side of the street, since those last are impossible to quantify.

But since both Timothy (on IBM-MAIN) and Alan (on IBMVM) have stated that there is no official statement of direction, then the good news is that it's not even a rumor at this point -- it's fiction. What we need to worry about is it becoming reality down the road.


problem is that bean counting frequently has a 3-month horizon ... which is easy to quantify ... the impossible part is trying to extend past the quarterly horizon .... and it may have to be constantly repeated with every new generation of bean counters.

HONE had a recurring problem for at least a decade ... somebody from the branch would be promoted to DP hdqtrs that included the HONE organization. At some point the branch person would become aware that HONE was not MVS (but virtual machine) based ... and decide that their mark on the corporation would be made by moving HONE off the VM platform to MVS. They would then direct the HONE organization to (stop whatever they were doing and) port everything to MVS ... it might take a year before there was enough evidence (for upper executives) that it wasn't practical for HONE to operate on the MVS platform ... and then the HONE organization would return to their normal duties for a couple months until that executive was promoted and somebody new came in to repeat the process.

misc. past posts mentioning HONE (world-wide online, vm-based, sales and marketing support)
https://www.garlic.com/~lynn/subtopic.html#hone

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
Newsgroups: bit.listserv.ibm-main
Date: 7 Mar 2010 12:52:16 -0800
lefuller@SBCGLOBAL.NET (Lloyd Fuller) writes:
I respect your knowledge, Lynn, but I cannot let that go by without saying something.

1. NCSS did not go away because of the PC revolution: they gave up after D&B bought them. I worked there at the time on VP/CSS. There are MANY things that we did with VP/CSS that even PCs, z/VM and z/OS still cannot do (look up PROTECT EXEC in the old VM requirements for example). The powers that be at NCSS decided that although it was profitable (very much so), since D&B required its subsidiaries to be number 1 or 2 in the market, they could not cut that and gave up on time-sharing.

2. NOMAD has not gone away. In fact Select Business Solutions would still be VERY happy to sell you a license today for NOMAD or UltraQuest.


re:
https://www.garlic.com/~lynn/2010e.html#55 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

the quotes from the wiki pages ... possibly look at getting the wiki pages updated.

there is also reference to the computer history museum item:
http://www.computerhistory.org/collections/accession/102658182

from above:
Five of the National CSS principals participated in a recorded telephone conference call with a moderator addressing the history of the company's use of RAMIS and development of NOMAD. The licensing of RAMIS from Mathematica and the reasons for building their own product are discussed as well as the marketing of RAMIS for developing applications and then the ongoing revenue from using these applications. The development of NOMAD is discussed in detail along with its initial introduction into the marketplace as a new offering not as a migration from RAMIS. The later history of NOMAD is reviewed, including the failure to build a successor product and the inability to construct a viable PC version of NOMAD.

... snip ...

i.e.
http://archive.computerhistory.org/resources/access/text/Oral_History/102658182.05.01.acc.pdf

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

More calumny: "Secret Service Uses 1980s Mainframe"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More calumny: "Secret Service Uses 1980s Mainframe"
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 10:03:05 -0500
Louis Krupp <lkrupp_nospam@indra.com.invalid> writes:
Interesting, but how does the word "calumny" apply?

re:
https://www.garlic.com/~lynn/2010e.html#37 More calumny: "Secret Service Uses 1980s Mainframe"

somebody/somewhere upthread was the original (ibm-main) post that started out with an abc news reference that apparently had just discovered the RFI from last year. there were some statements that the situation might be the result of some recent budgetary policy (as opposed to something that dates back possibly a decade or more).

Thread started late feb, just before 1mar2010 ... which was 7yrs after (1mar2003) the secret service was transferred to homeland security (from treasury). there have been past news references that after the 1mar2003 transfer something like 1/3rd of the secret service budget had disappeared into homeland security ... in a period when the secret service was being tasked with more activities (and having to make do with 2/3rds of the budget).

Secret Service Computers Only Work at 60 Percent Capacity; Agency Uses 1980s Mainframe System Is 'Fragile' and Cannot Sustain Tempo of Current or Future Operational Missions
http://abcnews.go.com/Politics/us-secret-service-outdated-computer-mainframe-system-1980s/story?id=9945663

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 10:24:20 -0500
jmfbahciv <jmfbahciv@aol> writes:
Yep. All of that was done. The usage reports in universities, billed to each department, also helped them justify the cost of buying their own computer.

None of that seems to be done these days. I've been wondering if this is one aspect of auld computing that won't be recycled.


re:
https://www.garlic.com/~lynn/2010e.html#47 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#48 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#49 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#53 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#56 z9 / z10 instruction speed(s)

when I was an undergraduate at the univ. datacenter ... the datacenter sort of went thru the opposite ... making the case to the state legislature that the datacenter should operate as an independent entity. the complaint was that up until then ... "powerful" depts could run roughshod over the datacenter ... using resources way in excess of any actual budget contribution (since the datacenter accounting was a purely fictitious matter).

the change at the legislature was that explicit funds showed up in the univ. budget for each dept and were actually transferred to the books of the univ. datacenter ... which had become an entity independent of the univ. the change put the datacenter on a much better fiscal footing ... actually having real budget from their users ... in proportion to actual resource usage ... which, in turn, allowed the datacenter to do budget planning and equipment ordering (up until then some depts could use an enormous amount of resources which the datacenter wasn't provisioned to handle ... which then resulted in less powerful depts having to make do with less than what they were budgeted for).

Up until then there was a disconnect between the funny money that the datacenter was billing for usage ... and the operating budget that the datacenter was given for purchases, leases, expenses, payroll, etc. With the budget/accounting change, the datacenter could actually provision the equipment/software/staff for providing the services that they were being paid to provide (in effect, the budget of the 2nd class depts had been misappropriated to provide services for the more powerful depts ... out of proportion to what they were paying).

I've recently posted about shortly after the univ. datacenter went thru the cycle with state legislature ... boeing did something similar with its dataprocessing operation.
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2010c.html#89 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010c.html#90 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010d.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#76 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#0 LPARs: More or Less?

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 11:00:41 -0500
tony@HARMINC.NET (Tony Harminc) writes:
I don't know about a total IT budget of $38k, but in 1975 licensed software was pretty much a novelty. The first priced version of MVS (or any other IBM OS except perhaps ACP/TPF?) had yet to appear, and most software was written in house. Some shops were using priced IBM software like PL/I Optimizer, or the COBOL equivalent, and bigger places ran non-free IMS and/or CICS. And there were priced (duh) products from other vendors, like Syncsort. But buying run-the-business application packages was pretty rare.

the 23jun69 unbundling announcement started charging for software, SE services, and other stuff (a result of various litigation). however, the corporation managed to make the case that kernel software should still be free ... misc. past posts mentioning unbundling
https://www.garlic.com/~lynn/submain.html#unbundling

the distraction of the FS project allowed the 360/370 product pipeline to go dry ... with the failure of FS ... some past posts
https://www.garlic.com/~lynn/submain.html#futuresys

there was a mad rush to get products back into the 370 product pipeline (both hardware & software; aka FS had been planned to replace 370 with something radically different). the lack of products in the pipeline was also used to explain how the clone processors were able to gain a foothold in the market. as a result, there was a decision to also start charging for kernel software.

I had been doing 360/370 work during the FS period (and even made some less than complimentary observations about the FS activity) ... somewhat as a result ... some of that stuff was picked up as part of the basic vm370 release 3. There was then a decision made to package up lots of my other stuff and release it as an independent "resource manager" ... and it was selected to be the guinea pig for kernel software charging. I got to spend lots of time with business, planning, and legal groups on the subject of kernel software charging. Part of the policy was that kernel software directly involved in hardware support was to remain free ... but other stuff could be charged for.

The "resource manager" was shipped as a separately charged-for product. The pricing of such software had to at least cover the cost of development (aka it couldn't be priced at less than the development cost). Getting close to release ... the work on deciding the price for the MVS resource manager had been done ... and the direction was that my resource manager couldn't be shipped at a lower price than what was being planned for the MVS resource manager (although the MVS development costs had been enormously greater ... all the work that I had done with regard to my costs for setting the price ... just went out the window).
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

During the transition period, the other kernel pricing policy was that free software couldn't have a prerequisite on charged-for kernel software. For vm370 release 4, there was a decision to release multiprocessor support (as support for hardware, it would be part of the free kernel base). The problem was that I had included in my resource manager ... a whole bunch of code that the multiprocessor support used. The resolution was that something like 90% of the code in my "release 3" resource manager ... was moved into the free release 4 kernel (w/o changing the price of the resource manager).
https://www.garlic.com/~lynn/subtopic.html#smp

More & more software (both kernel & non-kernel) was being charged for. vm/370 release 6 was still free ... but had a lower-priced kernel add-on (bsepp, for entry level and midrange customers) that was a subset of the higher-priced SEPP (which had absorbed my resource manager and added a bunch of other stuff).

As previously mentioned, VM/SP "Release 1" marked the end of the transition in kernel software pricing; BSEPP/SEPP were merged back into the base kernel ... and the whole thing became charged for (the name change also reflected this: instead of vm/370 "release 7" ... it was vm "system product" release 1 ... i.e. a charged-for product).

recent posts:
https://www.garlic.com/~lynn/2010d.html#14 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#27 HONE & VMSHARE
https://www.garlic.com/~lynn/2010d.html#39 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010d.html#60 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#79 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#24 Unbundling & HONE
https://www.garlic.com/~lynn/2010e.html#25 HONE Compute Intensive
https://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#29 HONE & VMSHARE
https://www.garlic.com/~lynn/2010e.html#31 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2010e.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010e.html#50 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#56 z9 / z10 instruction speed(s)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 15:30:40 -0500
eamacneil@YAHOO.CA (Ted MacNEIL) writes:
A contention which I disagree with. It's cheaper to build one type of chip/card, and use other methods to limit capacity, which is what software pricing is based on.

aka there are a lot of upfront & fixed costs ... but volume manufacturing techniques frequently drop the bottom out of per unit costs (i.e. per unit price can be dominated by the upfront & fixed costs; leveraging a common unit built in larger volumes can easily offset having multiple custom designed items).

it actually costs both the vendor and the customers quite a bit to physically change an item ... potentially significantly more than the bare bones per unit (volume) manufacturing cost ... as a result, having a large number of units prestaged is a trade-off of the extra volume manufacturing cost of each of the units against the vendor & customer change cost of physically adding/replacing each individual item.

it is somewhat the change-over to the 3rd wave (information age). Earlier, the cost ... and therefore the perceived value ... was mostly in the actual building of something. moving into the 3rd wave, much more of the value has moved to the design of something ... and volume manufacturing techniques have frequently reduced the per unit building cost as close as possible to zero.

They are now doing multi-billion dollar chip plants that are obsolete in a few years. Manufacturing cost is the actual creation of the wafer ... with thousands of chips cut from each wafer (motivating the move from 8in wafers to 12in wafers, getting more chips per wafer). The bare-bones cost of building one additional chip ... can be a couple pennies ... however, the chip price may be set at a couple hundred dollars (or more) in order to recover the cost of the upfront chip design as well as the cost of the plant.

It may then cost the vendor&customer, tens (or hundreds) of dollars to actually physically deploy each chip where it is useful.

An economic alternative is to package a large number of chips in a single deployment ... potentially at a loss of a few cents per chip ... in the anticipation that the extra chips might be needed at some point (possibly being able to eliminate the cost of actually having to physically deploy each individual chip).
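
the per-unit economics above can be sketched with some purely illustrative numbers (none of these figures are actual foundry costs; all are assumptions for the sake of the arithmetic):

```python
# Illustrative sketch of the per-unit economics above: the marginal cost
# of one more chip is pennies, but the price has to amortize the upfront
# design and plant costs. All figures here are made-up assumptions.

def per_chip_price(design_cost, plant_cost, volume, marginal_cost):
    """Price needed to recover upfront costs over a production volume."""
    return marginal_cost + (design_cost + plant_cost) / volume

# hypothetical: $500M design, $3B plant, 100M chips, $0.02 marginal cost
price = per_chip_price(500e6, 3e9, 100e6, 0.02)
# marginal cost is pennies, but the amortized price is around $35

# the 8in -> 12in wafer move increases wafer area (and roughly the
# chips per wafer) by (12/8)**2 = 2.25x, before edge-loss effects
area_ratio = (12 / 8) ** 2
```

doubling the production volume in the sketch roughly halves the amortized price ... which is the volume-manufacturing point being made above.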

note that the pharmaceutical industry has been going thru similar scenarios with brand drugs (with upfront development costs) and generic drugs.

something similar was used as justification for the FS project ... the corporate R&D costs were significantly higher than those of the vendors turning out clone controllers ... including the one I worked on as an undergraduate
https://www.garlic.com/~lynn/submain.html#360pcm

the distraction of the FS effort (and the drying up of 370 product pipelines) is then blamed for allowing clone processors to gain a market foothold.
https://www.garlic.com/~lynn/submain.html#futuresys

Some discussion here:
http://www.jfsowa.com/computer/memo125.htm

Article by corporate executive involved in FS effort:
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

quote from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.

... snip ...

There have been some comments that the baroque nature of the pu4/pu5 (vtam/3705ncp) interface did try to approximate the FS "high level of integration" objective.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

More calumny: "Secret Service Uses 1980s Mainframe"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: More calumny: "Secret Service Uses 1980s Mainframe"
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 15:34:57 -0500
Louis Krupp <lkrupp_nospam@indra.com.invalid> writes:
From thefreedictionary.com, a definition of "calumny":

1. A false statement maliciously made to injure another's reputation. 2. The utterance of maliciously false statements; slander.

The statement that a government program is underfunded isn't necessarily false or even particularly malicious.


re:
https://www.garlic.com/~lynn/2010e.html#37 More calumny: "Secret Service Uses 1980s Mainframe"
https://www.garlic.com/~lynn/2010e.html#59 More calumny: "Secret Service Uses 1980s Mainframe"

Go back and read the originating post in ibm-main ... it is possible that the person was reacting to any criticism of mainframes ... as opposed to whether or not the agency was underfunded.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main
Date: 8 Mar 2010 13:22:40 -0800
stephen@HAWAII.EDU (Stephen Y Odo) writes:
and that's the rub. why the requirement that it be used ONLY for research?

none of the other vendors have such restrictions.

that's one of the biggest reasons we're moving off of the mainframe ...


telcos faced something similar with dark fiber and the NSFNET backbone in the 80s ... (tcp/ip is the technology basis for the modern internet, the NSFNET backbone was the operational basis for the modern internet, and CIX was the business basis for the modern internet).

telcos have large fixed costs & expenses ... but recover costs based on usage. all the fiber going into the ground enormously increased capacity ... however w/o significant reduction in use charges ... people weren't going to use the extra capacity. however, any upfront reduction in the use charges ... w/o the bandwidth hungry applications ... would result in telcos operating at significant loss for possibly a decade (the period it would take for the new bandwidth hungry applications to evolve in an environment with drastically reduced fees).

the telcos leveraged the NSFNET backbone as a commercial-free technology incubator. The NSFNET backbone RFP was awarded for $11.2M ... and was for non-commercial use only. Folklore is that the resources put into the NSFNET backbone were closer to four times the RFP amount. The non-commercial use of the NSFNET backbone would limit the impact on telco revenue ... but at the same time the telcos could provide a large amount of extra resources for a non-profit educational technology incubator to promote the evolution of the bandwidth hungry applications.

misc. old email about NSFNET backbone
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

we had been working with NSF and various institutions leading up to the NSFNET backbone effort ... but then weren't allowed to bid on the RFP. The director of NSF attempted to help by writing a letter to the corporation asking for our help (there were also comments like what we already had running was at least five years ahead of all the NSFNET backbone RFP responses to build something new). Turns out that just aggravated the internal politics.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main
Date: 8 Mar 2010 13:35:55 -0800
stephen@HAWAII.EDU (Stephen Y Odo) writes:
Thus we paid regular price (less 15% academic institution discount). Which made it way too expensive for us. And paved the way for our migration to Solaris (which was FREE whether we used it for academic OR research OR administrative purposes).

I vaguely remember in the 60s ... the academic institution discount was 40% ... but that seemed to change with the 23jun69 unbundling announcement (in response to gov. litigation) ... along with starting to charge for application software, SE services, and other stuff.
https://www.garlic.com/~lynn/submain.html#unbundle

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

z9 / z10 instruction speed(s)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z9 / z10 instruction speed(s)
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 17:23:15 -0500
Jim Stewart <jstewart@jkmicro.com> writes:
Completed wafers are oftentimes stockpiled.

re:
https://www.garlic.com/~lynn/2010e.html#62 z9 / z10 instruction speed(s)

the scenario here is an environment where the vendor actually has the system, charging, and quite a bit of operational control over the resources being used.

in less structured environment there would likely be a whole lot of piracy ... akin to what goes on with software and content on lots of platforms (motivation behind a lot of DRM activity).

in the desktop environment there has been a software analogy, with lots of software being packaged with a new PC ... that requires paying a fee in return for activation. the increasing availability of high-speed broadband internet has mitigated that somewhat ... being able to download ... in lieu of requiring some sort of physical delivery (startrek transporter technology for similar delivery of physical chips isn't here yet).

when I was doing the AADS chip strawman ... with some amount of how stuff operated in the chip foundry ... misc. references
https://www.garlic.com/~lynn/x959.html#aads

got contacted about looking at possibility of doing something similar for common processor chips (countermeasure to copy-chips and grey market chips).

A little more topic drift: in the mid-90s, I had semi-facetiously commented that I would take a $500 milspec part and aggressively cost reduce it by 2-3 orders of magnitude while making it more secure. A very aggressive KISS campaign helped reduce circuits per chip ... but along with technology decreasing circuit size ... wafer area for cuts was exceeding chip wafer area (basically close to the EPC RFID chip scenario; further increases in chips/wafer required new technology for cutting wafers into chips ... that resulted in significantly lower wafer area loss). The KISS scenario was independent of the grey market/copy chip scenario.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Mon, 08 Mar 2010 18:22:29 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
History has shown that IBM can't "force" stuff on anybody. There is a long list of IBM products that were killed off because they didn't sell: OS/2 (unfortunately), MicroChannel, Token Ring, etc. etc.

at least T/R sold an amazing amount ... akin to PCs that were used for 3270 terminal emulation ... just into a market that no longer exists.

Standard 3270 terminals required point-to-point coax cables ... a cable from each terminal ... typically all the way back to the controller in the datacenter. A growing problem in many buildings was that the weight of all those coax cables was starting to exceed bldg. loading levels.

It was possible to remap PC 3270 terminal emulation to CAT4 T/R that used significantly lighter cable ... and only had to run to a MAU in a nearby wiring closet ... which could support a large number of terminals. Then only a single cable needed to run from the wiring closet down to the datacenter. The bldg. weight loading was enormously decreased.

Large numbers of bldgs. were rewired for t/r CAT4.

The problem with microchannel and T/R (and possibly some of OS/2) was terminal emulation paradigm hanging over all their heads.

The PC/RT was billed as an engineering workstation (in the unix market) with an AT compatible bus. The Austin group did their own PC/RT 4mbit T/R card for the AT bus ... trying to maximize the sustained card thruput.

The RS/6000 was upgraded to use microchannel ... however there was a corporate edict that RS/6000 had to help their corporate brethren and use PS2 microchannel cards. It turns out that the PS2 16mbit T/R microchannel card had lower per card thruput than the PC/RT 4mbit T/R AT bus card (reflecting the PS2 terminal emulation design point ... having 300-400 PS2s all sharing common 16mbit T/R).

The PS2 microchannel SCSI disk card had similar design point as well as the PS2 microchannel graphics cards. The joke was that with the edict in place ... except for strict numerical intensive activity ... the RS/6000 couldn't have any better thruput than PS2.

A partial solution to the corporate politics was RS/6000 730 ... which was a "wide" 530 ... that had a vmebus with vmebus graphics card.

Other vendors were then starting to sell high-speed microchannel ethernet cards that ran over T/R CAT4 (instead of ethernet cable, with individual cards able to sustain full media speed). When Almaden research center went in, they found that they got better thruput and lower latency with 10mbit ethernet over the bldgs. CAT4 than they got with 16mbit T/R (however, lots of commercial customers continued to use 16mbit T/R for a very long time).

The high-speed ethernet cards significantly helped client/server ... compared to the T/R terminal emulation design point. A server enet adapter card needed to sustain the aggregate activity of all its clients ... while the individual 16mbit T/R microchannel cards were designed to share small (1/300, 1/400) pieces of the 16mbit ... aka terminal emulation ... going into the datacenter mainframe.
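
the shared-media arithmetic behind the 1/300, 1/400 point above can be sketched (the 350-station count is illustrative, in the middle of the 300-400 range mentioned):

```python
# Rough arithmetic behind the shared-media point above: a few hundred
# stations sharing one 16mbit token ring each see only a small slice
# of the media, while an ethernet card on its own segment can sustain
# full media speed. The 350-station count is illustrative.

TR_MEDIA_BPS = 16_000_000

def per_station_share(media_bps, stations):
    """Even split of a shared medium across attached stations."""
    return media_bps / stations

tr_share = per_station_share(TR_MEDIA_BPS, 350)
# roughly 46 kbit/s per station ... plenty for 3270 terminal emulation
# traffic, but nothing like a client/server design point
```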

old post that was part of our 3-tier presentation comparing a 3-tier enet configuration against a terminal emulation T/R configuration
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/2005q.html#20 Ethernet, Aloha and CSMA/CD

other posts mentioning our pitching 3-tier architecture to customer executives
https://www.garlic.com/~lynn/subnetwork.html#3tier

and past posts mentioning terminal emulation paradigm
https://www.garlic.com/~lynn/subnetwork.html#emulation

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 19:54:47 -0500
Steve_Thompson@STERCOMM.COM (Thompson, Steve) writes:
This is the level of machine IBM killed when they pulled the plug on the FLEX/ES boxes.

And those boxes (FLEX/ES) were upgradeable (as I understand it) to be able to connect to the "standard" RAID boxes, and even have CTCA between them (once they had ESCON capability), so that you could grow into a "sysplex".

And what did such a box cost compared to the z10-BC?

That would have been a drop, plug and play environment (pretty much a turn-key system).


this is flex presentation done at mar2005 baybunch meeting (5 yrs ago)
http://www.baybunch.org/prezos/zbb.pdf

a major FLEX platform was sequent (before ibm bought sequent). we did some consulting for Steve Chen when he was CTO at sequent ... and there were customers that had escon attachments (ibm connectivity) for the sequent (numa) box (up to 256-processor intel shared memory multiprocessor). I know of at least one sequent numa customer (in the 90s, before sequent was bought by ibm) that had escon and 3590 tape drives.

sequent numa supported shared disk, raid, cluster, FCS (the early 90s "open systems" fibre channel standard; ibm's proprietary ficon is a flavor of FCS), etc ... all in the 90s before being bought by ibm.

2000 competitive analysis of (unix) clusters
http://h30097.www3.hp.com/dhba_ras.pdf

above includes discussion of sequent's cluster implementation (some number of loosely-coupled/clustered 256-processor tightly-coupled machines, say 4*256-way for a 1024 processor complex).

above ranking/comparison also includes our ha/cmp that we started in late 80s
https://www.garlic.com/~lynn/subtopic.html#hacmp

this old post discusses jan92 meeting regarding ha/cmp 128-way cluster operation (using FCS)
https://www.garlic.com/~lynn/95.html#13

meeting was just a couple weeks before project was transferred and we were told we couldn't work on anything with more than four processors. related email on ha/cmp scale-up and fcs:
https://www.garlic.com/~lynn/lhwemail.html#medusa

sequent had specialized in commercial unix markets.

(after departing) we were called in to consult with small client/server startup that wanted to do payment transactions on their server ... the startup had also invented this technology called "SSL" they wanted to use (the result is now frequently called "electronic commerce").

one of the things happening during this period was lots of servers were starting to experience heavy processor overload with web operation. the small client/server startup was growing and having to add increasing numbers of servers to handle their various kinds of web traffic. finally they installed a sequent system ... and things settled down.

it turned out that sequent had fixed the networking implementation that was absorbing the majority of server processing on other platforms. sequent explained that they had encountered the specific problem with commercial accounts supporting 20,000 (terminal) telnet sessions ... long before other platforms started experiencing the same networking problem with large numbers of HTTP/HTTPS connections (94/95 timeframe). Somewhat later, other platforms started distributing fixes for the tcp/ip processor overhead problem.

for other topic drift ... part of the effort for electronic commerce was deploying something called a "payment gateway" ... that took payment transactions tunneled thru SSL from webservers on the internet and passed them to the acquiring processor. misc. past posts mentioning payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

search engine history, was Happy DEC-10 Day

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: search engine history, was Happy DEC-10 Day
Newsgroups: alt.folklore.computers
Date: Mon, 08 Mar 2010 20:40:55 -0500
Peter Flass <Peter_Flass@Yahoo.com> writes:
In the early days of LANs, Token Ring and Ethernet were competitive. If T-R performance had been there, (and IBM hadn't caused it to be overpriced) I'm sure it would still be around. No technical reason why not.

re:
https://www.garlic.com/~lynn/2010e.html#67 search engine history, was Happy DEC-10 Day

aka T/R wasn't being sold as a LAN solution ... it was being sold as a solution to the 3270 terminal (emulation) cabling problem (including the weight). one might claim that it isn't around any more than terminal emulation is. t/r price was considered a bargain ... when the avg. labor price to run 3270 coax from machine room to desk was $1000.
https://www.garlic.com/~lynn/subnetwork.html#emulation

FDDI was also token ... and it is no longer around either (at one time considered the upward path for T/R).
https://en.wikipedia.org/wiki/Fiber_Distributed_Data_Interface

in mid-80s, on trip to far east related to hsdt project and some equipment that was being built:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

one of the vendors showed some stuff that they were doing for toyota. the copper wire wiring harness has been a major point of failure ... and they were working on an inexpensive dual, counterrotating (1 mbit/sec) LAN to replace the wiring harness. all the components in the vehicle have power distribution ... with a common command & control LAN mechanism (dual counterrotating, no single point of failure) that all switches and components would be on.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 08 Mar 2010 21:39:21 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
a major FLEX platform was sequent (before ibm bought sequent). we did some consulting for Steve Chen when he was CTO at sequent ... and there were customers that had escon attachments (ibm connectivity) for the sequent (numa) box (up to 256-processor intel shared memory multiprocessor). I know of at least one sequent numa customer (in the 90s, before sequent was bought by ibm) that had escon and 3590 tape drives.

re:
https://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?

small typo ... that is 3590 (not 3990) tape drive ... some old email about large mainframe customer with some sequent multiprocessor machines and sponsoring attaching 3590s to sequent machines.

Date: Mon, 30 Oct 1995 09:49:55 -0800
From: wheeler

In original sept. meeting we had thot we would have drives from IBM by 11/1/95 ... and would be able to loan (at least) one drive to sequent for development testing.

they currently have one 3590 drive for the project attached to a dynix 2.3 system. the 3590 driver (w/o stacker support) will be thru beta test on 11/1/95 ... but will continue thru various kinds of product test/support.

We require a 3590 driver (eventually w/stacker support) for a dynix 4.2 system. Sequent estimates approximately 7-10 days to port the 2.3 driver to 4.2 ... after availability of 3590 drive on dynix 4.2 level system.

We've tentatively estimated that we might have a loaner 3590 drive for them on or around mid. Dec.


... snip ... top of post, old email index.

more than year later:

Date: Wed, 11 Dec 1996 12:31:49 -0800
To: wheeler

1) The 3590 driver SILI handling is wrong. This makes varying sized blocks impossible to implement.

2) The writev() implementation cannot be used for scatter gather. A possible solution would be to write a special interface for direct QCIC DMA gathering of block fragments. To not have this means sever memory/cache overhead in reconstructing new blocks for record inserts and length changes. Sequent was asked to provide a cost for access to the PTX kernel code for purposes of estimating the effort to add code to support a true gather function for whole blocks on tape. The $$ figure was never given.


... snip ... top of post, old email index.

sequent wiki
https://en.wikipedia.org/wiki/Sequent_Computer_Systems

above mentions oracle parallel server high availability on sequent in 1993. there was some folklore that the design for the implementation had come from some other vendor's implementation.

it also discusses ibm purchase and then sequent is gone.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 09 Mar 2010 01:19:48 -0500
Mike <mhammoc@bellsouth.net> writes:
Hey Lynn, I remember giving that presentation to the Bay Bunch!!

Just for what it's worth, using today's Intel based processors and IBM's zPDT "software based system" (some might call it "emulator") we are getting well over 100 MIPS per core. (I suspect FLEX-ES version 8 sould also be in this range, if allowed on the market.) With a quad core processor, running 3 enabled processors, thats somewhere in the range of 300 - 350 MIPS in a "relatively" inexpensive system. zPDT is only available for developers today though. IBM is very cautions about making any comments about possible commercial availability. Mike Hammock mike@hammocktree.us


re:
https://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#70 Entry point for a Mainframe?

relatively off-the-shelf, white box, over-clocked, quiet (liquid cooling, low-noise fans, low-noise case), 64bit 4-core ... say $3k-$6k (tens of gbytes memory, six disks, terabyte or larger) ... done for the gaming market. possibly 25%-50% faster than stock chip.

processor cache is important. I've got a large DBMS implementation that runs faster on stock 1.7ghz chip with 2mbyte cache than it does on stock 3.4ghz chip that only has 512kbyte cache.

has anybody gotten hands on intel 6core gulftown with two threads per core? ... there is reference that some chips might be sold with only four cores enabled (lower price?).
https://en.wikipedia.org/wiki/Gulftown_%28microprocessor%29

what is the chance of beating 1000MIPS??
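
a back-of-envelope on the 1000-MIPS question, using the "well over 100 MIPS per core" zPDT figure quoted above ... the SMT scaling factor for the second thread per core is purely an assumption (a second hardware thread rarely doubles throughput):

```python
# Back-of-envelope on the 1000-MIPS question. The per-core MIPS figure
# comes from the zPDT comment quoted upthread; the SMT gain factor is
# an assumption, not a measured number.

mips_per_core = 100   # "well over 100 MIPS per core" zPDT figure
cores = 6             # gulftown core count
smt_gain = 1.25       # assumed benefit of a 2nd thread per core

estimated_mips = mips_per_core * cores * smt_gain
# ~750 MIPS at that per-core rate; clearing 1000 would need either a
# higher per-core rate (overclocking?) or much better SMT scaling
```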

note that the supercomputer market is starting to latch onto the GPUs developed for high-end graphics in the gaming market (starting to push thousand cores/GPU) ... it would be interesting to see if any emulator pieces could be mapped to GPU with hundreds/thousands of cores.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 09 Mar 2010 10:38:30 -0500
Anne & Lynn Wheeler <lynn@garlic.com> writes:
has anybody gotten hands on intel 6core gulftown with two threads per core? ... there is reference that some chips might be sold with only four cores enabled (lower price?).
https://en.wikipedia.org/wiki/Gulftown_%28microprocessor%29

what is the chance of beating 1000MIPS??


re:
https://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#70 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#71 Entry point for a Mainframe?

possibly $5/370-mip???

vendors have been sorting chips ... chips failing higher speed test, being classed at lower rate for lower price.

however, there seems to be some additional sorting ... apparently oriented towards overclocking & gaming marketing ... that pushes higher rates ... and is sold at a premium price. brand names are starting to offer boxes with such chips ... when it used to be just niche, offbrand players.

some of the reduced core chips aren't necessarily just about pricing ... sometimes it may be chip defects that would ordinarily have the whole chip going to the trashbin ... localized defects may be core specific ... with the rest of the chip still being useable.

other recent chip/foundary posts
https://www.garlic.com/~lynn/2010e.html#62 z9 / z10 instruction speed(s)
https://www.garlic.com/~lynn/2010e.html#66 z9 / z10 instruction speed(s)

late 70s & early 80s, single chip processors were starting to appear that drastically reduced the cost of building computer systems ... and lots of vendors were starting to move into the market. However, the cost of producing a proprietary operating system hadn't come down ... so overall costs weren't reduced that much and therefore the price that the system could be offered to customers at wouldn't come down.

I've frequently commented those economics significantly contributed to the move to unix offerings ... vendors could ship unix on their platform for enormously lower cost (similar to the cost reduction offered by single chip processors) compared to every vendor doing their own proprietary operating system from scratch.

A similar argument was used in the IBM/ATT effort moving higher level pieces of UNIX to a stripped down TSS/370 base (the cost of adding mainframe ras, erep, device support, etc ... being several times larger than the plain unix port). reference in this recent post mentioning the adtech conference i did (that included presentations on both the unix/ssup activity as well as running cms applications on mvs):
https://www.garlic.com/~lynn/2010e.html#17 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)

conference was also somewhat the origins for the VM/XB (ZM) effort (also mentioned in the above). the effort was then declared strategic, hundreds of people writing specs, and then collapsed under its own weight (somewhat a mini-FS). The strategic scenario was doing microkernel (somewhat akin to tss/370 ssup effort for ATT/unix) that had (at least) all the mainframe ras, erep and device support ... that could be used as common base for all the company's operating system offerings (the costs to the company in this area was essentially fully replicated for every operating system offering).

in the later 80s, having aix/370 (a project that ported UCLA's Locus unix-clone to both 370 & 386) run under vm370 meant aix/370 being able to rely on vm370 RAS (the cost of adding that RAS directly to aix/370 was many times larger than the simple port of locus to 370).

In recent years, increasing amounts of RAS is moving into intelligent hardware ... somewhat mitigating the duplication of effort in the operating systems.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

NSF To Fund Future Internet Architecture (FIA)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NSF To Fund Future Internet Architecture (FIA)
Newsgroups: comp.protocols.tcp-ip
Date: Tue, 09 Mar 2010 12:19:10 -0500
Albert Manfredi <bert22306@hotmail.com> writes:
I remember very well how IP was in principle going to be replaced by "something better," when the time was right. That was supposed to be the ISO network protocol suite, with the nice 160-bit address formats. Even the US DoD, which had played a major role in developing IP, was waiting to implement the new ISO suite. So, it's not like there was a huge "us vs them" attitude, except perhaps within the standard bodies themselves. And these standards bodies were NOT national in nature. At least, that's how I remember it. Proponents of the IP suite were certainly not ONLY Americans.

Then, ca. 1995 or 1996, the bottom literally fell out of the ISO effort. That's when the world's focus went 100 percent to IP.


I was involved in trying to take HSP to ansi x3s3.3 (ISO chartered US body for standards related to OSI level 3&4). ISO had guidelines about not working on standards for anything that didn't conform to OSI model. HSP was rejected because

1) supported internetworking ... something that doesn't exist in the OSI model (and a big reason for IP prevailing over OSI)

2) went from top of transport to LAN MAC (bypassing OSI transport/network interface)

3) went to LAN MAC interface ... which doesn't exist in OSI ... sits someplace in the middle of level 3/network.

there were jokes in the period about the standards process difference between IETF and ISO ... with IETF actually requiring interoperable implementations while ISO didn't require that a standard have proof that it was even implementable.

one of the big upticks for IP was the availability of the BSD implementation. there is old folklore about ARPA constantly telling CSRG that they couldn't work on networking ... and CSRG would "yes them to death" (but continue to work on networking).

we had been involved in various activities leading up to NSFNET backbone for T1. some old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

but for various corporate political reasons we weren't allowed to bid on the NSFNET backbone RFP. The Director of NSF wrote a letter to the corporation trying to change the situation ... but it just seemed to aggravate the internal politics (there were also comments that the backbone we already had running internally was at least five years ahead of all the NSFNET RFP bid submissions).

in fact, we had full T1 and higher speed links operational (and we claim that the example contributed to T1 being specified in the NSFNET backbone RFP). The winning bid ... didn't even put in T1 links ... but had 440kbit links ... and then apparently to somewhat seem to meet the RFP ... had T1 trunks with telco multiplexing the 440kbit links (we made sarcastic reference that some of the T1 trunks possibly were in turn multiplexed ... maybe even T5 ... so they might be able to theoretically claim a T5 network).

trivia ... what was bringing down the floor nets sunday night and monday morning before interop '88 opened?

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: alt.folklore.computers
Date: Tue, 09 Mar 2010 12:30:36 -0500
sidd <sidd@situ.com> writes:
Blackstone ?

re:
https://www.garlic.com/~lynn/2010e.html#52 LPARs: More or Less?

blackstone was the major reference in the news ... there was a much smaller reference to this other company (which had purchased the pricing services division from one of the rating companies in the early 70s); it wasn't clear whether they might be subcontracting various calculations.

minor references to it at the time:
https://www.garlic.com/~lynn/2009.html#21 Banks to embrace virtualisation in 2009: survey
https://www.garlic.com/~lynn/2009.html#31 Banks to embrace virtualisation in 2009: survey
https://www.garlic.com/~lynn/2009.html#32 What are the challenges in risk analytics post financial crisis?
https://www.garlic.com/~lynn/2009.html#42 Lets play Blame Game...?
https://www.garlic.com/~lynn/2009.html#52 The Credit Crunch: Why it happened?
https://www.garlic.com/~lynn/2009.html#77 CROOKS and NANNIES: what would Boyd do?
https://www.garlic.com/~lynn/2009.html#79 The Credit Crunch: Why it happened?

... for additional drift

reports are that there was $27T in toxic CDOs done during the period (only $5.2T was being carried offbook by the four largest, too-big-to-fail financial institutions when TARP was appropriated; aka courtesy of their unregulated investment banking arms by way of GLBA and repeal of Glass-Steagall).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
https://www.bloomberg.com/news/articles/2008-10-27/evil-wall-street-exports-boomed-with-fools-born-to-buy-debt

There would be several transactions along the toxic CDO value chain, with commissions at the various points ... possibly 15-20% aggregate commissions by the time the $27T made its way thru the labyrinth ... or maybe $5T.
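The aggregate-commission estimate can be sanity-checked with trivial arithmetic (the $27T figure and the 15-20% commission range are the post's own rough numbers, not audited figures):

```python
# Back-of-envelope check of the aggregate-commission estimate above.
# Inputs are the post's own rough estimates, not audited figures.
cdo_total_trillions = 27.0
low, high = 0.15, 0.20   # assumed aggregate commission range

print(f"${cdo_total_trillions * low:.2f}T - ${cdo_total_trillions * high:.2f}T")
# lands in the neighborhood of the "$5T" figure
```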

Wharton business school had an article that possibly 1000 people are responsible for 80 percent of the financial mess ... $5T would be enough to go around ... with enough left to keep the other players agreeable ... including congress. The amounts involved are so large that it would be enough to more than overcome any possible individual concerns about the effects on their institutions, the economy, and/or the country.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 09 Mar 2010 16:11:04 -0500
tony@HARMINC.NET (Tony Harminc) writes:
I'm still not convinced they are related. Hardware-level TLB management would still be there for the shadow tables. In the early days where the only TLB invalidating instruction was PTLB, which clobbered the whole thing, the trick would presumably lie in avoiding that instruction like the plague.

recent threads mentioning shadow tables
https://www.garlic.com/~lynn/2010e.html#1 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#2 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#28 What was old is new again (water chilled)

The "TLB" rules followed by shadow table operation also did implicit invalidation every time address space pointer changed.

the shadow tables operation followed the same rules as for the TLB. PTLB, ISTO, ISTE, & IPTE were all instructions in the original 370 virtual memory architecture. When the 370/165 hardware group ran into problems with the virtual memory hardware retrofit ... and wanted to drop several features in order to buy back six months in the schedule ... ISTO, ISTE, and IPTE were among the things dropped from the base architecture (leaving only PTLB ... i.e. every time any invalidation occurred, everything got invalidated).

Also, the original cp67 and vm370 "shadow table" only had a single "STO stack" ... analogous to the 360/67 and 370/145 TLB ... where every time the control register address space pointer changed (CR0 on 360/67, CR1 on 370/145) ... there was an implicit TLB purge (aka all TLB entries implicitly belonged to the same/single address space). The corresponding vm370 implementation was that all shadow table entries were invalidated/reset ... any time there was a ("virtual") CR0/CR1 change.

The 370/168 had a seven-entry STO-stack ... aka every TLB entry had a 3bit identifier (eight states: invalid, or belonging to one of seven address spaces; 370/145 TLB entries had a single bit, either valid or invalid). Loading a new CR1 value on the 370/168 didn't automatically purge the whole TLB ... it would check whether the new value was one of the already saved values ... and if there was a match ... it would continue. If the new address space value loaded into CR1 didn't match a saved value ... it would select one of the seven saved entries to be replaced ... and invalidate/reset all TLB entries that had the matching 3bit ID.
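The seven-entry STO-stack behavior described above can be sketched as a toy model (illustrative only; the replacement policy and tag details here are simplified assumptions, not the actual 370/168 logic):

```python
# Toy model of a 370/168-style STO-stack TLB: entries are tagged with the
# address space (STO) that owns them; loading CR1 with an already-saved
# STO keeps its entries, while loading an eighth STO evicts one saved
# value and invalidates all TLB entries tagged with it.
class StoStackTLB:
    def __init__(self, slots=7):
        self.slots = slots
        self.stos = []        # saved STO values, oldest first (assumed policy)
        self.tlb = {}         # virtual page -> (owning STO, real page)
        self.current = None

    def load_cr1(self, sto):
        """Simulate loading a new address-space pointer into CR1."""
        if sto in self.stos:
            self.stos.remove(sto)          # already saved: no purge needed
        elif len(self.stos) == self.slots:
            victim = self.stos.pop(0)      # replace one saved entry ...
            # ... and invalidate every TLB entry tagged with its ID
            self.tlb = {vp: e for vp, e in self.tlb.items() if e[0] != victim}
        self.stos.append(sto)
        self.current = sto

    def fill(self, vpage, rpage):
        """TLB reload after a miss in the current address space."""
        self.tlb[vpage] = (self.current, rpage)

    def lookup(self, vpage):
        e = self.tlb.get(vpage)
        return e[1] if e and e[0] == self.current else None
```

Switching back to a recently used address space keeps its entries; only the eighth distinct address space forces an invalidation, which is why MVS dual-address space traffic (below) hurt so much.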

The VM370 product didn't support multiple shadow tables until the priced kernel addon to VM370 release 5. MVS changed the CR1 value extremely frequently ... even w/o doing an explicit PTLB ... and every time, VM had to do a full invalidation of all shadow table entries ... corresponding to the similar implicit operation on the 370/145 (not having a multiple-entry STO-stack, at least up until the priced kernel add-on to vm370 release 5).

There was a somewhat analogous issue on real 3033 hardware with the introduction of dual-address space mode. The 3033 was effectively the same logic design as the 370/168 ... remapped to slightly faster chips ... and as such, the TLB had the same seven-entry STO-stack. When using dual-address space mode ... the increase in the number of different address space pointers was overrunning the 3033 (seven-entry) STO-stack and the frequency of (implicit) TLB entry invalidations went way up ... to the point that dual-address space was running slower than the common segment implementation.

Dual-address space mode was somewhat a subset retrofit of the 370-xa multiple address spaces. The common segment problem on 370 was that the MVS kernel was taking half the 16mbyte address space and the common segment started out taking only a single mbyte segment. The common segment was to address the pointer-passing paradigm from MVT&SVS days for subsystems ... which had resided in the same address space as the application. With the move to MVS, the subsystems were now in a different address space (from the calling applications), which broke the pointer-passing API paradigm. The solution was to have a common segment that was the same in applications and subsystems. The problem was the common segment grew with the subsystems installed and the applications using subsystems ... and larger installations had common segment areas pushing over five mbytes (threatening to leave only 2mbytes for application use).

The Burlington lab was a large MVS shop with large chip-design fortran applications and a very carefully crafted MVS that held the common segment area to one mbyte ... so the applications still had seven mbytes. However, increases in chip complexity were forcing the fortran applications over seven mbytes ... threatening to convert the whole place to vm/cms ... since the fortran applications under CMS could get very nearly the whole 16mbytes.

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

LPARs: More or Less?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: LPARs: More or Less?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Wed, 10 Mar 2010 12:23:03 -0500
re:
https://www.garlic.com/~lynn/2010e.html#75 LPARs: More or Less?

basically SIE greatly expanded the architecture definition for virtual machine mode ... as an addition/alternative to real machine mode ... aka the principles of operation defines a lot about how things operate in real machine mode; virtual machine mode makes various changes ... like what the rules are for supervisor & problem state when running in virtual machine mode. It greatly increased the performance of running in virtual machine mode (compared to full software simulation) ... modulo the 3081 choosing to have the service processor page the SIE microcode on a 3310 FBA device ... recent ref
https://www.garlic.com/~lynn/2010e.html#34 Need tool to zap core

now one of the big guest performance hits for vm370 was the transition from svs to mvs ... because the number of times that MVS changed CR1 exploded (compared to svs) ... requiring vm370 to flush the shadow page tables each time (the virtual machine changed its "virtual" cr1).

now one of the things that could be done in a SIE scenario was to change the operation of TLB miss/reload in virtual machine mode ... so that hardware performed a double lookup on TLB miss ... eliminating the requirement for shadow tables (instead of having to maintain the shadow table information ... which is a significant amount of overhead in the flush scenario ... whether explicit via a "virtual" PTLB or implicit via a change in the "virtual" CR1 value).
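The hardware double lookup can be sketched as composing the guest's translation with the hypervisor's (flat-dict "page tables" for illustration; real hardware walks multi-level segment/page tables):

```python
# Sketch of the double lookup on a TLB miss in virtual-machine mode:
# the guest table maps guest-virtual to guest-real, the host table maps
# guest-real to host-real, and composing the two on each miss removes
# the need for software-maintained shadow tables.

def tlb_miss_reload(gvpage, guest_table, host_table):
    greal = guest_table.get(gvpage)    # guest's own translation
    if greal is None:
        return None                    # guest page fault (reflect to guest)
    hreal = host_table.get(greal)      # hypervisor's translation
    if hreal is None:
        return None                    # host page fault (hypervisor pages it in)
    return hreal                       # load TLB with gvpage -> hreal
```

With this scheme a guest PTLB or "virtual" CR1 change only drops TLB entries; there is no shadow table to rebuild.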

As SIE began to blanket nearly every aspect of machine operation ... with a virtual machine flavor ... it greatly simplified the introduction of LPARs.

there used to be some SHARE thing about the greatly increasing MVS bloat ... one was a scenario about creeping processor bloat ... the possibility of waking up one day and finding MVS consuming all processor cycles with none left for applications.

This was somewhat the capture ratio scenario, where the amount of processor cycles even being accounted for was falling below 50%. The san jose plant site highlighted a 70% capture ratio on an MVS system dedicated to apl ... but the apl subsystem was doing nearly everything possible to avoid using any MVS system services as a method of improving thruput and performance. recent capture ratio mention
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest

The creeping bloat of common segment area size was similar ... threatening to totally consume the remaining 16mbyte virtual address space ... leaving nothing for applications to run. The dual-address space introduction in 3033 was an attempt to at least slow down the creeping common segment size bloat (threatening to totally consume all available virtual address space).

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Madoff Whistleblower Book

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com (Lynn Wheeler)
Date: 10 Mar, 2010
Subject: Madoff Whistleblower Book
Blog: Financial Crime Risk, Fraud and Security
Madoff Whistleblower Book: Claims He Uncovered State Street Fraud, Thought About Killing Madoff
http://www.huffingtonpost.com/2010/02/25/madoff-whistleblower-book_n_476820.html

from above:
In an explosive new book, Bernie Madoff whistleblower Harry Markopolos tells the inside story of how he uncovered the $65 billion fraud, claims that he exposed State Street's alleged fraud of pension funds and admits that he considered the idea of killing Madoff.

... snip ...

Some of this was his testimony in congressional hearings last year about trying for a decade to get the SEC to do something about Madoff. His concern for personal safety wasn't w/o reason. There was (at least one) case in the news of a big FBI investigation into Internet IPO manipulations ... when some investment banker was found on NJ(?) mudflats.

He seems to be having a lot more fun on the book tour ... during the congressional hearings, he sent his lawyer for TV interviews ... his lawyer would say he still believed that he had a chance of not surviving.

He was asked what he would do if he was put in charge of SEC ... and he said that he would fire everybody currently there (and there still has been no house cleaning). Something about nearly all are lawyers and have no training in financial forensics (and the industry likes it that way).

He has re-iterated several times that he believed that if Madoff had known about him, Madoff would have sent people after him (and his only choice was to get to Madoff before Madoff got to him). His latest comment was that he believed Madoff turned himself in to get into protective custody ... that Madoff had misappropriated money from some people who would likely do to Madoff what the author believed Madoff would have done to him (references to a lot of the people involved being sociopaths ... there are other references to being totally amoral).

misc. past posts mentioning Madoff:
https://www.garlic.com/~lynn/2009b.html#65 What can agencies such as the SEC do to insure us that something like Madoff's Ponzi scheme will never happen again?
https://www.garlic.com/~lynn/2009b.html#73 What can we learn from the meltdown?
https://www.garlic.com/~lynn/2009b.html#80 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#0 Audit II: Two more scary words: Sarbanes-Oxley
https://www.garlic.com/~lynn/2009c.html#4 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#20 Decision Making or Instinctive Steering?
https://www.garlic.com/~lynn/2009c.html#29 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#39 'WHO IS RESPONSIBLE FOR THE GLOBAL MELTDOWN'
https://www.garlic.com/~lynn/2009c.html#44 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009c.html#51 How to defeat new telemarketing tactic
https://www.garlic.com/~lynn/2009d.html#0 PNC Financial to pay CEO $3 million stock bonus
https://www.garlic.com/~lynn/2009d.html#3 Congress Set to Approve Pay Cap of $500,000
https://www.garlic.com/~lynn/2009d.html#37 NEW SEC (Enforcement) MANUAL, A welcome addition
https://www.garlic.com/~lynn/2009d.html#42 Bernard Madoff Is Jailed After Pleading Guilty -- are there more "Madoff's" out there?
https://www.garlic.com/~lynn/2009d.html#47 Bernard Madoff Is Jailed After Pleading Guilty -- are there more "Madoff's" out there?
https://www.garlic.com/~lynn/2009d.html#57 Lack of bit field instructions in x86 instruction set because of patents ?
https://www.garlic.com/~lynn/2009d.html#61 Quiz: Evaluate your level of Spreadsheet risk
https://www.garlic.com/~lynn/2009d.html#62 Is Wall Street World's Largest Ponzi Scheme where Madoff is Just a Poster Child?
https://www.garlic.com/~lynn/2009d.html#63 Do bonuses foster unethical conduct?
https://www.garlic.com/~lynn/2009d.html#73 Should Glass-Steagall be reinstated?
https://www.garlic.com/~lynn/2009d.html#75 Whistleblowing and reporting fraud
https://www.garlic.com/~lynn/2009e.html#0 What is swap in the financial market?
https://www.garlic.com/~lynn/2009e.html#15 The background reasons of Credit Crunch
https://www.garlic.com/~lynn/2009e.html#35 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#36 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
https://www.garlic.com/~lynn/2009e.html#40 Architectural Diversity
https://www.garlic.com/~lynn/2009e.html#53 Are the "brightest minds in finance" finally onto something?
https://www.garlic.com/~lynn/2009e.html#70 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2009f.html#2 CEO pay sinks - Wall Street Journal/Hay Group survey results just released
https://www.garlic.com/~lynn/2009f.html#29 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#31 What is the real basis for business mess we are facing today?
https://www.garlic.com/~lynn/2009f.html#38 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#43 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#45 Artificial Intelligence to tackle rogue traders
https://www.garlic.com/~lynn/2009f.html#47 TARP Disbursements Through April 10th
https://www.garlic.com/~lynn/2009f.html#49 Is the current downturn cyclic or systemic?
https://www.garlic.com/~lynn/2009f.html#51 On whom or what would you place the blame for the sub-prime crisis?
https://www.garlic.com/~lynn/2009f.html#65 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009f.html#67 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#1 Future of Financial Mathematics?
https://www.garlic.com/~lynn/2009g.html#5 Do the current Banking Results in the US hide a grim truth?
https://www.garlic.com/~lynn/2009g.html#7 Just posted third article about toxic assets in a series on the current financial crisis
https://www.garlic.com/~lynn/2009g.html#29 Transparency and Visibility
https://www.garlic.com/~lynn/2009g.html#33 Treating the Web As an Archive
https://www.garlic.com/~lynn/2009g.html#34 Board Visibility Into The Business
https://www.garlic.com/~lynn/2009g.html#76 Undoing 2000 Commodity Futures Modernization Act
https://www.garlic.com/~lynn/2009g.html#77 A new global system is coming into existence
https://www.garlic.com/~lynn/2009h.html#3 Consumer Credit Crunch and Banking Writeoffs
https://www.garlic.com/~lynn/2009h.html#17 REGULATOR ROLE IN THE LIGHT OF RECENT FINANCIAL SCANDALS
https://www.garlic.com/~lynn/2009h.html#22 China's yuan 'set to usurp US dollar' as world's reserve currency
https://www.garlic.com/~lynn/2009i.html#13 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#23 Why are z/OS people reluctant to use z/OS UNIX? (Are settlements a good argument for overnight batch COBOL ?)
https://www.garlic.com/~lynn/2009i.html#40 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#54 64 Cores -- IBM is showing a prototype already
https://www.garlic.com/~lynn/2009i.html#60 In the USA "financial regulator seeks power to curb excess speculation."
https://www.garlic.com/~lynn/2009j.html#12 IBM identity manager goes big on role control
https://www.garlic.com/~lynn/2009j.html#21 The Big Takeover
https://www.garlic.com/~lynn/2009j.html#30 An Amazing Document On Madoff Said To Have Been Sent To SEC In 2005
https://www.garlic.com/~lynn/2009l.html#5 Internal fraud isn't new, but it's news
https://www.garlic.com/~lynn/2009m.html#89 Audits V: Why did this happen to us ;-(
https://www.garlic.com/~lynn/2009n.html#13 UK issues Turning apology (and about time, too)
https://www.garlic.com/~lynn/2009n.html#49 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#23 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009o.html#71 "Rat Your Boss" or "Rats to Riches," the New SEC
https://www.garlic.com/~lynn/2009o.html#84 Opinions on the 'Unix Haters' Handbook'
https://www.garlic.com/~lynn/2009p.html#20 U.K. lags in information security management practices
https://www.garlic.com/~lynn/2009p.html#51 Opinions on the 'Unix Haters' Handbook
https://www.garlic.com/~lynn/2009p.html#57 MasPar compiler and simulator
https://www.garlic.com/~lynn/2009r.html#35 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009r.html#47 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009r.html#53 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009r.html#61 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009s.html#45 Audits VII: the future of the Audit is in your hands
https://www.garlic.com/~lynn/2009s.html#47 Audits VII: the future of the Audit is in your hands
https://www.garlic.com/~lynn/2010.html#37 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#82 Oldest Instruction Set still in daily use?
https://www.garlic.com/~lynn/2010c.html#34 Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#87 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010d.html#8 search engine history, was Happy DEC-10 Day

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main
Date: 10 Mar 2010 20:50:39 -0800
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
By the way, an awful lot of small businesses are opting for "Software as a Service" offerings and choosing not to own or host their own servers, of any type. If you want a zero-footprint z/OS machine -- that sure beats the MP3000! -- it's available. To extend the above analogy, you can buy Fedex, UPS, or USPS service and avoid renting, leasing, or owning your own trucks or bicycles. If the world is already heading in that SaaS direction -- and it sure looks that way -- then a z10 footprint makes even more sense.

a lot of the software as a service ... and cloud computing ... very analogous to oldtime online timesharing ... is partially being driven by super-efficient megadatacenters (coupled with ubiquitous high-bandwidth connectivity)

Microsoft: PUE of 1.22 for Data Center Containers
http://www.datacenterknowledge.com/archives/2008/10/20/microsoft-pue-of-122-for-data-center-containers/

Google: The World's Most Efficient Data Centers
http://www.datacenterknowledge.com/archives/2008/10/01/google-the-worlds-most-efficient-data-centers/

i was at a presentation that had a claim about google's very careful crafting of their servers ... resulting in them having price/performance about 1/3rd that of their next closest competitor

215 Data Centers to Participate in EPA Study
http://www.datacenterknowledge.com/archives/2008/06/30/215-data-centers-to-participate-in-epa-study/

ibm's entry in some of this
http://www.datacenterknowledge.com/archives/2009/12/02/ibm-steps-up-its-partner-driven-container-game/
http://www.datacenterknowledge.com/archives/2008/06/11/ibm-launches-modular-data-centers-containers/
http://www.datacenterknowledge.com/archives/2008/06/11/a-look-at-ibms-data-center-container/
http://www-05.ibm.com/lt/ibmforum/pdf/ibm_data_center_family_maite_frey.pdf
http://www-03.ibm.com/press/us/en/pressrelease/29058.wss
http://www.itjungle.com/tug/tug120607-story01.html
http://datacenterjournal.com/content/view/3392/41/
http://www.theregister.co.uk/2009/12/07/ibm_data_center_containers/

the "rack" references in the above seem to imply that it isn't a mainframe play

some survey here of emerging operations
http://perspectives.mvdirona.com/2009/04/25/RandyKatzOnHighScaleDataCenters.aspx

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

history of RPG and other languages, was search engine history

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of RPG and other languages, was search engine history
Newsgroups: alt.folklore.computers
Date: Thu, 11 Mar 2010 00:59:58 -0500
John Levine <johnl@iecc.com> writes:
The COBOL committee was in 1959, but it adapted several existing languages, notably Grace Hopper's FLOW-MATIC which was working by 1958. There were several generations of Algol, producing reports in 1958, 1960, and 1968. Algol 60 is the best known, with working compilers probably in 1961.

a little algol w topic drift ...

Early Computers at Stanford
http://forum.stanford.edu/wiki/index.php/Early_Computers_at_Stanford

from above:
IBM System/360 model 67

In May of 1967 an IBM System/360 model 67 replaced the IBM 7090, Burroughs B5500 and DEC PDP-1 in Pine Hall. There is some question about whether this machine was a model 65 or model 67, but John Sauter remembers seeing the lights of the 'Blaauw Box', the dynamic address translation module that is the difference between the models. Also, Glen Herrmannsfeldt and John Ehrman remember that it was always described as a model 67. However, despite the dynamic relocation capability, the model 67 was run as a model 65 using IBM's OS/360 MFT operating system.

The original development of WYLBUR and ORVYL were done on the model 67. John Sauter remembers a flyer featuring cartoon personages named Wylbur and Orvyl with the caption 'My brothers communicate'. MILTEN was used to support remote users equipped with IBM 2741 terminals. SPIRES was also originally written on the model 67. Nicklaus Wirth developed PL360 and Algol W on the model 67; Algol W later evolved into Pascal.

The IBM System/360 model 67 had 524,288 8-bit bytes of memory. It could perform an add in 1.5 microseconds, a multiply in 6.


... snip ...

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 11 Mar 2010 09:43:50 -0500
timothy.sipples@US.IBM.COM (Timothy Sipples) writes:
Agreed. There are a lot of similarities, but one difference is the ubiquity of the Internet. It's really an accident of history (telco monopolies) that the price-per-carried bit collapsed *after* the prices of CPU and storage did. So we went through (suffered?) an intermediate phase when computing architectures were principally constrained by high priced long distance networking (the "PC revolution" and then "Client/Server"). It's interesting viewing those phases through the rear view mirror. In many ways it's back to the future now.

re:
https://www.garlic.com/~lynn/2010e.html#78 Entry point for a Mainframe?

recent post/thread in tcp/ip n.g.
https://www.garlic.com/~lynn/2010e.html#73 NSF to Fund Future Internet Architecture (FIA)
and similar comments in this (mainframe) post/thread
https://www.garlic.com/~lynn/2010e.html#64 LPARs: More or Less?

about telcos having very high fixed costs/expenses; the significant increase in available bandwidth with all the dark fiber in the ground represented a difficult chicken/egg obstacle (disruptive technology). The bandwidth-hungry applications wouldn't appear w/o a significant drop in use charges (and even then could take a decade or more) ... and until the bandwidth-hungry applications appeared, any significant drop in usage charges would mean that the telcos would operate deeply in the red during the transition.

in the mid-80s, the hsdt project had a very interesting datapoint with communication group ... where we were deploying and supporting T1 and faster links.
https://www.garlic.com/~lynn/subnetwork.html#hsdt

The communication group then did a corporate study claiming that there wouldn't be customer use of T1 until the mid-90s (aka since they didn't have a product that supported T1, the study supported customers not needing T1 for another decade).

The problem was that 37x5 boxes didn't have T1 support ... and so what the communication group studied was "fat pipes" ... support for being able to operate multiple 56kbit links as single unit. For their T1 conclusions they plotted the number of "fat pipes" with 2, 3, 4, ..., etc 56kbit links. They found that number of "fat pipes" dropped off significantly at four or five 56kbit links and there were none above six.

There is always the phrase about statistics lying ... well, what the communication group didn't appear to realize was that most telcos had a tariff cross-over, with five or six 56kbit links costing about the same as a single T1 link. What they were seeing was that when customer requirements reached five 56kbit links ... the customers were moving to a single T1 link supported by other vendors' products (which was the reason for no "fat pipes" above six).
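The cross-over arithmetic is simple (the link speeds are real; the 5.5-link break-even point is an assumed stand-in for the actual tariffs):

```python
# Bandwidth comparison at the assumed tariff cross-over point:
# once ~5-6 56kbit links cost as much as one T1, the T1 delivers
# several times the bandwidth for the same money.
KBIT_56 = 56
T1_KBIT = 1544              # 24 x 64kbit DS0 channels
crossover_links = 5.5       # assumption: T1 tariff ~ 5-6 56kbit tariffs

aggregate = crossover_links * KBIT_56    # ~308 kbit/s of "fat pipe"
ratio = T1_KBIT / aggregate
print(f"one T1 = {ratio:.1f}x the bandwidth for roughly the same tariff")
```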

The communication group's products were very oriented towards the legacy dumb-terminal paradigm ... and not the emerging peer-to-peer networking operation. In any case, a very quick, trivial survey by HSDT turned up 200 customers with T1 links (as a counter to the communication group study that customers wouldn't be using T1s until the mid-90s ... because they couldn't find any "fat pipes" with more than six 56kbit links).

this is analogous to communication group defining T1 as "very high speed" in the same period (in part because their products didn't support T1) ... mentioned in this post:
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux

the various internal politics all contributed to not letting us bid on the NSFNET backbone RFP ... even when the director of NSF wrote a letter to the corporation ... and there were observations that what we already had running was at least five years ahead of the RFP bid responses (to build something new). misc. old NSFNET related email from the period
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 11 Mar 2010 10:37:18 -0500
gahenke@GMAIL.COM (George Henke) writes:
Actually a previous client, a large Wall St investment house that survived the recent crisis, has sooooooooo many blade servers in its data center they can't fit anymore in. So they have a pilot project to bring them up on LINUX under z/VM.

Hurray for server consolidation, finally, a software solution beating a hardware solution.


re:
https://www.garlic.com/~lynn/2010e.html#80 Entry point for a Mainframe?

the claim was that in the 90s ... having multiple applications co-exist in the same operating system required scarce, high-level skills (allocation, co-existence, capacity planning, etc) ... it was much easier and cheaper to throw hardware at the problem ... giving each application (and even each application instance) its own dedicated hardware.

rolling forward to a couple years ago, organizations found themselves with thousands, tens of thousands, or even hundreds of thousands of these dedicated servers ... all running at 5-10% utilization.

virtualization, dynamic load balancing and some other technologies came together to support server consolidation (sometimes 10 or 20 to one, running on essentially identical hardware). part of the issue was that only a very modest incremental skill level was required for server consolidation (as compared to trying to get lots of disparate applications to co-exist in the same system).
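The consolidation arithmetic is straightforward (the counts below are illustrative assumptions, not figures from the post):

```python
# 10-20:1 consolidation of dedicated servers running at 5-10% utilization:
# folding many nearly idle boxes onto one machine of the same class still
# leaves headroom.
servers = 15
avg_utilization = 0.06      # assumed 6% average per dedicated server

consolidated = servers * avg_utilization
print(f"{servers} servers at {avg_utilization:.0%} each -> "
      f"{consolidated:.0%} of a single equivalent box")
```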

lots of technologies are being pumped into the virtualization environment ... like dynamic migration of virtual machines to different hardware (even in different datacenters) for capacity and/or continuous operation reasons.

other posts in this thread:
https://www.garlic.com/~lynn/2010e.html#68 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#70 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#71 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#72 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#78 Entry point for a Mainframe?

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

history of RPG and other languages, was search engine history

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: history of RPG and other languages, was search engine history
Newsgroups: alt.folklore.computers
Date: Thu, 11 Mar 2010 13:30:10 -0500
paul c <toledobythesea@oohay.ac> writes:
What was ORVYL?

I can vaguely remember WYLBUR at a service bureau in the 1970's, we were told it had less machine cost than TSO, it was the first time I had my own terminal but I don't remember WYLBUR being used in very imaginative ways, mostly it just avoided the unit record equipment but even end users still had to know a little about JCL.


re:
https://www.garlic.com/~lynn/2010e.html#79 history of RPG and other languages, was search engine history

clicking on the "WYLBUR and ORVYL" field in the
http://forum.stanford.edu/wiki/index.php/Early_Computers_at_Stanford

goes to the orvyl manual page
http://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML

with more than you ever want to know ... basically wylbur is the editor ... and orvyl is the online interactive environment (all the gorp about executing things, running applications, etc).

i never used wylbur/orvyl ... but possibly users simply truncated the reference to just wylbur. this is a discussion of a terminal input/output routine that handles both the wylbur & tso environments:
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#82 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#9 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)

and "technically" how could there be reference to wylbur when wylbur didn't run applications. the referenced terminal input/output is assembler subroutine (for PLI adventure) ... that uses TPUT/TGET macros. CMS had simulation for TSO TPUT/TGET (and the specific source actually comes from a cms adventure pli distribution) ... so i'm guessing that orvyl may have also had TSO TPUT/TGET macro support.

There are also web references to CERN getting WYLBUR/ORVYL from SLAC ... and to WYLBUR having been done at SLAC (not sure whether they didn't differentiate between SLAC and stanford ... or just didn't know). SLAC and CERN were somewhat "sister" operations ... sharing a lot of dataprocessing applications. The CERN documents also describe CERN porting the WYLBUR editor to non-IBM platforms (or at least re-implementing an editor with the same syntax) ... aka

The user-friendly nature of the WYLBUR time-sharing system, which was developed at SLAC, was reflected in its beautifully handwritten and illustrated manual by John Ehrman.
http://cerncourier.com/cws/article/cern/29150/1/cernmain5_9-04

the above reference simply calls the whole thing WYLBUR (w/o differentiating between WYLBUR and ORVYL), as well as stating that it had been developed at SLAC (rather than down the road).

for other drift ... going in the opposite direction ... SLAC/VM hosted the first webserver outside CERN.
https://ahro.slac.stanford.edu/wwwslac-exhibit

other bits and pieces
http://slac.stanford.edu/spires/papers/history.html
https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web

--
42yrs virtualization experience (since Jan68), online at home since Mar1970

Entry point for a Mainframe?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Entry point for a Mainframe?
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Thu, 11 Mar 2010 13:52:04 -0500
ps2os2@YAHOO.COM (Ed Gould) writes:
In the mid 70's we had a T1 and we muxed it and IIRC we had 1 256K chunk and another chunk (sorry do not remember the speed) connected up to our 3745 and it worked really well (except a really strange bug which took us, with the help of chance, to figure out what the issue was). We were exercising it and kept it busy at least 20 out of 24 hours a day. I vaguely remember talking about the bug with IBM at the time (we were a small minority user of something like this at the time as IBM apparently only had a few people that seemed to know this part of NCP). It's not too surprising I guess that IBM really did not support a full T1 but if my memory (it's iffy here) is correct it had something to do with the speed of the 3745 as to why IBM couldn't support it. Since memory fades with time and I only remember small pieces we did seem to be on the bleeding edge at that time.

re:
https://www.garlic.com/~lynn/2010e.html#80 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2010e.html#81 Entry point for a Mainframe?

3705 in the 70s, 3725 in the 80s, 3745 later. this page has (some?) 3745 models withdrawn from marketing in sep2002
http://www.networking.ibm.com/nhd/webnav.nsf/pages/375:375prod.html
3745 wiki page
https://en.wikipedia.org/wiki/IBM_3745

3745 wasn't announced until 1988
http://www-03.ibm.com/ibm/history/history/year_1988.html

in the mid-80s, La Gaude had an experimental 3725 that was dedicated to running a single T1.
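for scale, the standard T1 framing arithmetic (textbook T-carrier figures, nothing specific to the 3725 or to the quoted configuration):

```python
# Standard T1/DS1 framing arithmetic (textbook T-carrier figures).
DS0_BPS = 64_000      # one voice-grade DS0 channel
CHANNELS = 24         # DS0s per T1 frame
FRAMING_BPS = 8_000   # 1 framing bit per 193-bit frame, 8000 frames/sec

t1_bps = DS0_BPS * CHANNELS + FRAMING_BPS
print(t1_bps)                  # 1544000, i.e. 1.544 Mbit/sec
print(256_000 // DS0_BPS)      # a 256k mux chunk = 4 DS0 channels
```

so a full T1 is roughly 27 times the 56kbit links that were typical of the period ... which gives some idea why a dedicated box was needed to drive one.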

the corporation did have a 2701 in the 60s that supported T1 ... but the communication group in the 70s acquired an increasingly narrow, myopic focus on dumb terminals; they also leveraged corporate politics to keep other business units out of areas that they thought even remotely touched on what they believed was their responsibility.

this shows up in the constant battles my wife had with the communication group when she was in POK, responsible for loosely-coupled architecture ... with only temporary truces under which she could use her own protocol for machine-to-machine communication within datacenter walls. some past posts mention her Peer-Coupled Shared Data architecture ... which, except for IMS hot-standby, saw little uptake until sysplex.
https://www.garlic.com/~lynn/submain.html#shareddata

the narrow focus on dumb terminals became increasingly rigid in the 80s ... even tho, early on, real 3270s were being replaced with more capable ibm/pcs running terminal emulation
https://www.garlic.com/~lynn/subnetwork.html#emulation

the (20yr old) 2701s were becoming increasingly long in the tooth during this period. in the mid-80s, the federal systems division did come up with the zirpel card for the S/1 that supported T1 (for special gov. bids).

however, for the most part, if other business units couldn't be kept out purely with the argument that only the communication group could produce communication-related products ... then there were always studies like the "fat pipe" argument that would be presented to corporate hdqtrs ... showing that customers didn't even want such products.

it was also the motivation for a senior engineer from the disk division getting a presentation slipped into the internal world-wide communication group annual conference ... where the opening statement was that the communication group was going to be responsible for the demise of the disk division.

I got HSDT involved with a babybell that had done NCP emulation on series/1 ... and I was deep into trying to put it out as a corporate product (and really got involved in interesting politics ... this is a scenario where truth is really stranger than fiction). In any case, I gave a presentation on the work at the fall '86 SNA architecture review board meeting in Raleigh ... the plan being to quickly put out a series/1 based version while porting to the RIOS chip (aka rs/6000).

the 3725 pieces of the numbers came from the official corporate HONE configurator (which sales & marketing used for selling to customers) ... part of the presentation to the fall '86 SNA architecture review board meeting in Raleigh
https://www.garlic.com/~lynn/99.html#67 System/1 ?

part of spring '86 common presentation on pu4/pu5 support in series/1
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

--
42yrs virtualization experience (since Jan68), online at home since Mar1970
