From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Any interesting PDP/TECO photos out there?
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 17:45:58 -1000

Lynn Wheeler <lynn@garlic.com> writes:
while still at univ, CSC came out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs) ... and I mostly played with it during my weekend dedicated time. Initially I spent most of the time rewriting CP67 pathlengths to improve OS/360 running in virtual machine; the OS/360 test stream ran 322secs on the real machine, but initially 856secs in virtual machine (534secs CP67 CPU). After a couple months I had CP67 CPU down to 113secs (from 534) ... and was asked to attend the CP67 "official" announcement at the spring '68 SHARE meeting in Houston ... where I gave presentations on both the (earlier) OS/360 optimization and the (more recent) CP67 optimization work (running OS/360 in virtual machine).
I then redid I/O: ordered arm seek queuing (in place of FIFO) and chained multiple 4k page transfers per I/O, optimizing transfers/revolution for 2314 (disk) and 2301 (drum, from 70-80/sec to 270/sec peak), along with optimized page replacement and dynamic adaptive resource management and scheduling (for multi-user CMS interactive).
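A rough sketch (in Python) of the ordered seek queuing idea; this is not the actual CP67 code, and the shortest-seek-first policy and class names are illustrative assumptions only:

import bisect

class FifoSeekQueue:
    def __init__(self):
        self.requests = []                  # (cylinder, payload) in arrival order
    def add(self, cylinder, payload):
        self.requests.append((cylinder, payload))
    def next_request(self, arm_cylinder):
        return self.requests.pop(0) if self.requests else None

class OrderedSeekQueue:
    def __init__(self):
        self.cylinders = []                 # pending cylinders, kept sorted
        self.pending = {}                   # cylinder -> list of payloads
    def add(self, cylinder, payload):
        if cylinder not in self.pending:
            bisect.insort(self.cylinders, cylinder)
            self.pending[cylinder] = []
        self.pending[cylinder].append(payload)
    def next_request(self, arm_cylinder):
        if not self.cylinders:
            return None
        # pick the pending cylinder nearest the current arm position
        i = bisect.bisect_left(self.cylinders, arm_cylinder)
        candidates = self.cylinders[max(i - 1, 0):i + 1]
        cylinder = min(candidates, key=lambda c: abs(c - arm_cylinder))
        payload = self.pending[cylinder].pop(0)
        if not self.pending[cylinder]:
            self.cylinders.remove(cylinder)
            del self.pending[cylinder]
        return (cylinder, payload)

Servicing the nearest pending cylinder (rather than strict arrival order) keeps total arm motion down when multiple requests are queued.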
CP67 initially had 1052&2741 terminal support with automagic terminal type identification. The univ had some ascii terminals (mostly tty33 ... but some tty35) ... so I added tty/ascii support (integrated with the terminal type identification). I then wanted to have a single dial-in phone number for all terminal types ("hunt group"). Didn't quite work, since the IBM telecommunication controller had taken a short cut and hard-wired terminal line speed.
This kicked off a univ clone telecommunication controller project, building a 360 channel interface board for an Interdata/3 programmed to simulate the IBM controller, with the addition of being able to do dynamic line speed. Later it was upgraded with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the line(/port) interfaces.
It was then sold as a 360 clone controller by Interdata and later
Perkin-Elmer, and four of us got written up for (some part of) the IBM
clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
other ascii info ... originally 360 was supposed to be an ascii machine, but
ascii unit record gear wasn't ready yet, so it was "temporarily" going to be
EBCDIC with the old BCD machines. "Biggest Computer Goof Ever":
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other history
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/History_of_CP/CMS
above (currently) has confusion about Future System and Gene
Amdahl. Amdahl had won the battle to make ACS 360-compatible ... and
then leaves IBM when ACS/360 was killed
https://people.computing.clemson.edu/~mark/acs_end.html
... before Future System started.
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://en.wikipedia.org/wiki/History_of_CP/CMS#Historical_notes
FS was completely different from 370 and was going to completely
replace 370; during FS, 370 projects were being killed off, and the lack
of new 370 products during the FS period is credited with giving the clone
370 makers (including Amdahl) their market foothold. When FS finally
implodes there is a mad rush to get stuff back into the 370 product
pipelines, including quick and dirty 3033&3081 efforts in parallel
... more information
http://www.jfsowa.com/computer/memo125.htm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
A decade ago, I was asked to track down the executive decision to add virtual memory to all 370s and found a staff member reporting to the executive. Basically the (OS/360) MVT storage management was so bad that regions typically had to be specified four times larger than used ... so that a standard 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified.
Mapping MVT to 16mbyte virtual memory (VS2/SVS) allowed the number of concurrent regions to be increased by a factor of four (capped at 15 by the 4-bit storage protect keys, i.e. 16 key values with key zero reserved for the system, a unique key for each concurrently running region) with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine).
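A rough arithmetic sketch of that argument (illustrative numbers only; the region size and residency fraction below are my own assumptions, not from the post):

# MVT on a 1mbyte 370/165: regions carved out of real storage at their
# specified (over-stated) size
REAL_STORAGE_KB = 1024            # standard 370/165 of the time
REGION_SPEC_KB = 256              # hypothetical region size as specified
ACTUAL_USE_FRACTION = 0.25        # "specified four times larger than used"

mvt_regions = REAL_STORAGE_KB // REGION_SPEC_KB
print("MVT regions:", mvt_regions)                     # -> 4

# VS2/SVS: regions live in a 16mbyte virtual address space; real storage
# only needs to back the pages actually touched
svs_regions = min(4 * mvt_regions, 15)                 # 4-bit storage key cap
resident_kb = svs_regions * REGION_SPEC_KB * ACTUAL_USE_FRACTION
print("SVS regions:", svs_regions, "resident approx", resident_kb, "KB")

With these assumed numbers, 15 regions touch roughly 960KB of real storage, which is how four times as many regions could run with little or no paging.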
pieces of that email exchange
https://www.garlic.com/~lynn/2011d.html#73
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Origin Of The Autobaud Technique
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 21:52:20 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:
thread from a few hrs ago ... in the late 60s, did it for the clone IBM 360
telecommunication controller we built using an Interdata/3 machine
(upgraded to Interdata/4 with a cluster of Interdata/3s) ... that
Interdata and then Perkin-Elmer sold.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
initial (virtual machine) CP/67 delivered to the univ had 1050&2741 terminal type support with automagic terminal type recognition. The univ had ascii TTY (mostly 33, but some 35), so I added ascii terminal support integrated with the automagic terminal type recognition (able to use the SAD CCW to switch the terminal type line scanner for each line/port). I then wanted a single dial-in number ("hunt group") for all terminal types ... but while the terminal type line scanner could be switched for each port, IBM had hard-wired the port line speed ... thus kicking off the univ project to build our own clone controller that also did "autobaud".
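A minimal sketch of the "autobaud" idea in Python (hypothetical timings and speed table, not the Interdata firmware): time the start bit of a known first character, e.g. the user hitting RETURN, and pick the closest supported speed.

SUPPORTED_SPEEDS = [110, 134.5, 150, 300]      # bits/sec the port can run

def guess_speed(start_bit_seconds):
    """Infer line speed from the measured width of one start bit."""
    measured_bps = 1.0 / start_bit_seconds
    return min(SUPPORTED_SPEEDS, key=lambda s: abs(s - measured_bps))

# e.g. a ~9.1ms start bit implies roughly 110 baud (a TTY33),
# while a ~3.3ms start bit implies roughly 300 baud
print(guess_speed(0.0091))   # -> 110
print(guess_speed(0.0033))   # -> 300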
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Origin Of The Autobaud Technique
Newsgroups: alt.folklore.computers
Date: Fri, 15 Nov 2024 22:08:35 -1000

Lynn Wheeler <lynn@garlic.com> writes:
trivia: at the turn of the century I had a tour of the datacenter that handled the majority of dial-up POS credit card swipe terminal calls east of the mississippi ... the telecommunication controller was a descendant of what we had done in the 60s ... some question whether the mainframe channel interface card was the same design we had done more than three decades earlier.
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CKD DASD
Date: 17 Nov, 2024
Blog: Facebook

CKD DASD for the original os/360 (not just disks, but also things like the 230x "drums" and the 2321 "data cell") .... "fixed-block architecture" was introduced in the late 70s and IBM CKD increasingly became CKD emulated on fixed-block disk (can be seen in the 3380 formulas for records/track, where record size has to be rounded up to a multiple of fixed cell size). Currently "CKD" is still required even though no CKD disks have been made for decades (not even emulated, everything being simulated on industry standard fixed-block).
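An illustrative sketch of that rounding effect (hypothetical cell size, overhead and track capacity; NOT the real 3380 capacity formula): with CKD emulated on fixed-block hardware, each record's data is rounded up to a whole number of fixed cells, so a record one byte over a cell boundary costs a whole extra cell.

import math

CELL_SIZE = 32              # hypothetical fixed cell size, bytes
TRACK_CELLS = 1480          # hypothetical usable cells per track
RECORD_OVERHEAD_CELLS = 15  # hypothetical per-record gap/count overhead, cells

def records_per_track(data_bytes):
    data_cells = math.ceil(data_bytes / CELL_SIZE)    # round up to cell multiple
    return TRACK_CELLS // (data_cells + RECORD_OVERHEAD_CELLS)

for size in (4096, 4097):
    cells = math.ceil(size / CELL_SIZE)
    print(size, "bytes occupies", cells * CELL_SIZE, "bytes on track;",
          records_per_track(size), "records/track")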
ECKD channel program architecture was originally introduced with "Calypso" ... the 3880 disk controller speed matching buffer, allowing 3380 3mbyte/sec disks to be attached to 370 1.5mbyte/sec channels.
1973, IBM 3340 "winchester"
https://www.computerhistory.org/storageengine/winchester-pioneers-key-hdd-technology/
trivia: when I 1st transferred to San Jose Research in 1977, I got to wander around IBM (and non-IBM) datacenters in silicon valley, including disk bldg14/engineering and bldg15/product-test across the street. They were running 7x24, prescheduled, stand-alone testing and said they had recently tried MVS, but it had 15min MTBF (in that environment), requiring manual re-ipl. I offered to rewrite the input/output supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity (downside was they wanted me to spend increasing amounts of time playing disk engineer). Bldg15 tended to get very early engineering processors for I/O testing ... and when they got the 1st engineering 3033 off the POK engineering floor, found that disk testing only took a percent or two of CPU. We scrounged up a 3830 disk controller and a string of 3330 drives for setting up our own private online service.
The person doing air-bearing simulation (part of thin-film head design)
https://en.wikipedia.org/wiki/Thin-film_head#Thin-film_heads
was getting a few turn-arounds a month from the SJR 370/195 (even with high priority designation). We set the air-bearing simulation up on the bldg15 3033 (only about half the MIPS of the 195) and they were able to get several turn-arounds a day.
other trivia: original 3380 had 20 (data) track spacings between each data track. They then cut the spacing in half for double the original capacity and then cut the spacing again for triple the capacity.
Mid-80s, the father of 801/RISC technology wanted me to help him with an idea for a "wide" disk head ... transferring 16 closely-spaced disk tracks (bracketed with a servo-track on each side) in parallel. Problem was the IBM 3090: its channels were 3mbyte/sec while the "wide-head" required 50mbyte/sec.
Then in 1988, an IBM branch office asked if I could help LLNL standardize some serial stuff they were working with. It quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. IBM mainframe eventually ships some of their serial stuff in the 90s as ESCON (17mbytes/sec), when it was already obsolete. Later some POK engineers become involved with FCS and define a heavy-weight protocol that radically reduces the native throughput, which eventually ships as FICON. The most recent public published benchmark I've found is the 2010 z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time an FCS was announced for (Intel) E5-2600 server blades claiming over a million IOPS (two such FCS with higher throughput than 104 FICON). Note that IBM pubs recommend that SAPs (system assist processors that do actual I/O) be capped at 70% CPU ... which would drop z196 throughput from "Peak I/O" 2M IOPS to 1.5M IOPS.
Around 1988, Nick Donofrio had approved HA/6000 proposal, initially
for NYTimes to migrate their newspaper system (ATEX) off (DEC)
VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national labs (already involved w/LLNL for "FCS") and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres, who have VAXcluster support in the same source base with unix). Early JAN1992, have a meeting with the Oracle CEO, where AWD/Hester tells Ellison we would have 16-system clusters by mid1992 and 128-system clusters by ye1992. Then mid-Jan, I update FSD about the work with national labs ... and FSD then tells the Kingston supercomputer group they would be going with HA/CMP for the gov. (supercomputing). Then late JAN1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work with anything that has more than four processors (we leave IBM a few months later). Possibly contributing, the mainframe DB2 group was complaining that if we were allowed to proceed, it would be years ahead of them.
1993 (count of program benchmark iterations compared to reference
platform):
ES/9000-982 (8 processors) : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; cluster scale-up 16-system/2016MIPS, 128-system/16,128MIPS
IBM CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
posts mentioning calypso and eckd
https://www.garlic.com/~lynn/2024c.html#74 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2015g.html#15 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#89 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2012o.html#64 Random thoughts: Low power, High performance
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010h.html#30 45 years of Mainframe
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009p.html#11 Secret Service plans IT reboot
https://www.garlic.com/~lynn/2007e.html#40 FBA rant
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 18 Nov, 2024
Blog: Facebook

Late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference, supposedly on 3174 performance, but opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing (mainframe) datacenters to more distributed-computing-friendly platforms and had come up with a number of solutions, which were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm).
The communication group stranglehold on mainframe datacenters wasn't
just disks, and a few years later IBM has one of the largest losses in
the history of US companies and was being reorg'ed into the 13 "baby
blues" (take-off on the AT&T "baby bells" breakup a decade earlier) in
preparation for breakup of the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
A senior disk division executive's partial countermeasure had been investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to stop in on some of his investments to see if we could provide any help.
Before leaving IBM, we would drop in on an executive I'd known since the 70s (with a top floor corner office in Somers) and would also stop by other people in the building to talk about the changes in the computer market, and mostly they could articulate the necessary IBM changes. Visits over a period of time showed nothing had changed (conjecture that they were trying to maintain the IBM status quo until their retirement).
posts mentioning communication group fighting off client/server and
distributed computing trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Two decades earlier, Learson tried (& failed) to block the
bureaucrats, careerists and MBAs from destroying Watsons'
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 18 Nov, 2024
Blog: Facebook

re:
The last product we did at IBM was HA/CMP. Nick Donofrio approved
HA/6000, initially for NYTimes to move their newspaper system from
(DEC) VAXCluster to RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs (including LLNL which I had already worked with for fibre-channel
standard and some other things) and commercial cluster scale-up with
RDBMS vendors (Oracle, Sybase, Informix, Ingres that had VAXCluster
support in the same source base with Unix). The S/88 product
administrator also starts taking us around to their customers and gets
me to write a section for the corporate continuous availability
strategy document (it gets pulled when both Rochester/AS400 and
POK/mainframe complain).
One of the San Jose distributed computing investments was Mesa Archival (a spin-off of the NCAR supercomputer system in Boulder), including a port to HA/CMP; another was porting LLNL's UNICOS LINCS supercomputer system to HA/CMP.
Early JAN1992, in a meeting with the Oracle CEO, AWD/Hester tells Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. I then update FSD with the HA/CMP work with national labs ... and they tell the Kingston supercomputing project that FSD was going with HA/CMP for gov. supercomputers. Late JAN1992, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we weren't allowed to work with anything that had more than four processors (we leave IBM a few months later). Possibly contributing was mainframe DB2 complaining that if we were allowed to go ahead, it would be years ahead of them.
1993 (MIPS benchmark, not actual instruction count but number of
program iterations compared to reference platform):
ES/9000-982 (8 processors) : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS; cluster scale-up 16-system/2016MIPS, 128-system/16,128MIPS
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 19 Nov, 2024
Blog: Facebook

IBM 5100
Trivia: Ed and I transferred to SJR from CSC in 1977 and I got to wander around lots of silicon valley datacenters. One of my hobbies after joining IBM/CSC was enhanced production operating systems for internal datacenters, and HONE was a long-time customer. In the mid-70s, the US HONE datacenters were consolidated in Palo Alto (across the back parking lot from PASC, which had moved slightly up the hill from Hanover; trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE consolidated datacenter) and I spent some of my time there.
NOTE: after the 23Jun1969 unbundling announcement, HONE was created to give branch office SEs online practice with guest operating systems running in CP67 virtual machines. CSC had also ported APL\360 to CMS for CMS\APL (redoing storage management from 16kbyte swapped workspaces to large virtual memory demand paged workspaces and adding APIs for system services like file I/O, enabling lots of real world apps) and HONE started using it for online sales&marketing support apps (for DPD branch offices, regional offices and hdqtrs), which eventually came to dominate all HONE use (and HONE clones started sprouting up all over the world; my 1st IBM overseas business trips were to Paris and Tokyo for HONE installs). PASC then did APL\CMS for VM370, and HONE after moving to VM370 leveraged PASC APL expertise (HONE had become the largest use of APL in the world).
trivia: PASC did the 370/145 APL microcode assist ... claim was it ran APL with throughput of 370/168.
... note it wasn't long before nearly all hardware orders had to be first processed by a HONE APL app before submission.
Los Gatos also gave me part of a wing with offices and lab space and I did the HSDT project there (T1 and faster computer links, both satellite and terrestrial; had a TDMA Ku-band satellite system with 4.5m dishes in Los Gatos and Yorktown and a 7m dish in Austin. Austin used the link for sending RIOS chip designs to the hardware logic simulator/verifier in San Jose ... claiming it helped bring in the RS/6000 design a year early). Los Gatos also had the IBMer responsible for magstripe (showed up on ATM cards) and who developed the ATM machine (in the basement there was still a vault where they had kept cash from all over the world for testing; also related, an early ATM machine across from fast food, where kids would feed tomato packets into the card slot).
At SJR I worked with Jim Gray and Vera Watson on the original
SQL/relational System/R .... and the Los Gatos VLSI lab had me help
with a different kind of relational database that they used with VLSI chip
design, "IDEA" ... that was done with Sowa (who was then down at STL)
http://www.jfsowa.com/
trivia: some of my files on the garlic website are maintained with an IDEA-like RDBMS that I redid from scratch after leaving IBM.
I was also blamed for online computer conferencing in the late 70s and early 80s on the internal network. It really took off spring of 1981, when I distributed a trip report of a visit to Jim Gray at Tandem (who had left SJR the fall before); folklore was that when the corporate executive committee was told, 5of6 wanted to fire me. Apparently for online computer conferencing and other transgressions, I was transferred to YKT, but left to live in San Jose, with offices in SJR/Almaden, LSG, etc ... but had to commute to YKT every couple weeks (monday in san jose, SFO redeye to Kennedy, bright and early Tues in YKT, return friday afternoon).
Almaden Research was mid-80s on the eastern hill of Almaden valley ... the Los Gatos lab was on the other side of the western hill from Almaden valley, on the road to the San Jose dump. LSG had a T3 Collins digital radio (microwave) on the hill above the lab with line-of-sight to the roof of bldg12 on the main plant site. HSDT got T1 circuits from Los Gatos to various places in the San Jose plant. One was a tail circuit to the IBM C-band T3 satellite system connecting to Clementi's E&S lab in Kingston that had whole boatloads of Floating Point Systems boxes.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
original sql/relational System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
some past 5100 posts
https://www.garlic.com/~lynn/2024f.html#45 IBM 5100 and Other History
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2022c.html#86 APL & IBM 5100
https://www.garlic.com/~lynn/2022.html#103 Online Computer Conferencing
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#116 Watch IBM's TV ad touting its first portable PC, a 50-lb marvel
https://www.garlic.com/~lynn/2018b.html#96 IBM 5100
https://www.garlic.com/~lynn/2017c.html#7 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2010c.html#28 Processes' memory
https://www.garlic.com/~lynn/2005m.html#2 IBM 5100 luggable computer with APL
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 19 Nov, 2024
Blog: Facebook

re:
one of the other places I got to wander was disk bldgs 14 (engineering) and 15 (product test) across the street from sjr/28. They were running 7x24, prescheduled, stand-alone test and mentioned they had recently tried MVS (but it had 15min MTBF, requiring manual re-ipl in that environment). I offered to rewrite the I/O supervisor to be bullet proof and never fail ... so they could do any amount of on-demand testing, greatly improving productivity (downside, they wanted me to spend increasing time playing disk engineer). Later I wrote an (internal only) research report about the I/O integrity work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head.
Bldg15 would get very early engineering processors, and got something like the 3rd or 4th 3033 machine. It turned out testing only took a percent or two of CPU, so we scrounged up a 3830 disk controller and a string of 3330 drives and set up our private online service. About that time somebody was doing air bearing simulation (part of designing the thin film disk head, initially for 3370) on the SJR 370/195 but was only getting a few turn-arounds a month. We set it up on the bldg15/3033 and could get multiple turn-arounds a day (even tho the 3033 was only about half the processing of the 195). Also ran 3270 coax under the street from bldg15 to my SJR office in 028.
1980, STL (since renamed SVL) was bursting at the seams and moving 300 people (and 3270s) from the IMS group to an offsite bldg. They had tried "remote" 3270 support, but found the human factors totally unacceptable. I get con'ed into doing channel extender support so they can place channel-attached 3270 controllers at the offsite bldg (with no perceived human factors difference between offsite and in STL). There was then an attempt to release my support to customers, but a group in POK playing with some serial stuff got it vetoed (afraid that if it was in the market, it would make it harder to release their stuff).
1988, an IBM branch office asks if I could help LLNL standardize some serial stuff they were working with; it quickly becomes the Fibre Channel Standard (FCS), including some stuff I had done in 1980 ... initially 1gbit/sec, full-duplex, aggregate 200mbytes/sec. Then POK gets their stuff released in the 90s with ES/9000 as ESCON, when it is already obsolete (17mbytes/sec).
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Transformational Change
Date: 19 Nov, 2024
Blog: Facebook

re:
Early 80s, I get the HSDT project, T1 and faster computer links (both
satellite and terrestrial), with some amount of conflict with the
communication group. Note, in the 60s, IBM had the 2701 telecommunication
controller supporting T1 (1.5mbits/sec) links; however, the move to
SNA/VTAM in the mid-70s and resulting issues seemed to cap controllers at
56kbit/sec. We were also working with the NSF director and were supposed to
get $20M to interconnect the NSF supercomputer centers, then congress
cuts the budget, some other things happen and finally an RFP is
released (in part based on what we already had running). From the
28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
Somebody had been collecting (communication group) email with
misinformation about supporting NSFNET ... copy in this archived post
(heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
Note 1972, Learson tries (and fails) to block bureaucrats, careerists,
and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
20yrs later IBM has one of the largest losses in the history of US
companies (and was being reorg'ed into the 13 "baby blues" in
preparation for breaking up the company).
recently posted related comment/replies
https://www.garlic.com/~lynn/2024f.html#118 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#119 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#122 IBM Downturn and Downfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook

4th gen programming language
even before SQL (& RDBMS), which was originally done on VM370/CMS as
System/R at IBM SJR,
https://en.wikipedia.org/wiki/IBM_System_R
with later tech transfer to Endicott for SQL/DS and, nearly a decade after the start of System/R, tech transfer to STL for DB2, there were other "4th Generation Languages"; one of the original 4th generation languages, Mathematica's RAMIS, was made available through NCSS (a '60s online commercial cp67/cms spin-off of the IBM cambridge science center; cp67/cms was the virtual machine precursor to vm370/cms)
NOMAD
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a
report that would have taken many hundreds of lines of Cobol to
produce. The product grew in capability and in revenue, both to NCSS
and to Mathematica, who enjoyed increasing royalty payments from the
sizable customer base. FOCUS from Information Builders, Inc (IBI),
did even better, with revenue approaching a reported $150M per
year. RAMIS moved among several owners, ending at Computer Associates
in 1990, and has had little limelight since. NOMAD's owners, Thomson,
continue to market the language from Aonix, Inc. While the three
continue to deliver 10-to-1 coding improvements over the 3GL
alternatives of Fortran, Cobol, or PL/1, the movements to object
orientation and outsourcing have stagnated acceptance.
... snip ...
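A rough Python equivalent of the NOMAD one-liner quoted above ("PRINT ACROSS MONTH SUM SALES BY DIVISION"), just to show what the 4GL collapsed into a single statement; the sample rows and column layout are hypothetical:

from collections import defaultdict

rows = [                        # (division, month, sales)
    ("East", "Jan", 100.0), ("East", "Feb", 120.0),
    ("West", "Jan",  90.0), ("West", "Feb", 130.0),
]

report = defaultdict(lambda: defaultdict(float))
months = []
for division, month, sales in rows:
    report[division][month] += sales        # SUM SALES
    if month not in months:                 # ACROSS MONTH column order
        months.append(month)

print("DIVISION " + " ".join(f"{m:>8}" for m in months))
for division in sorted(report):             # BY DIVISION
    print(f"{division:<8} " +
          " ".join(f"{report[division][m]:8.1f}" for m in months))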
other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica makes Ramis available to TYMSHARE for their
VM370-based commercial online service, NCSS does their own version
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to
Mathematica's RAMIS, the first Fourth-generation programming language
(4GL). Key developers/programmers of RAMIS, some stayed with
Mathematica others left to form the company that became Information
Builders, known for its FOCUS product
... snip ...
more spin-off of IBM CSC
https://www.computerhistory.org/collections/catalog/102658182
also some mention of the "first financial language" done in the 60s at IDC
("Interactive Data Corporation", another cp67/cms '60s online
commercial spinoff from the IBM cambridge science center)
https://archive.computerhistory.org/resources/access/text/2015/09/102702884-05-01-acc.pdf
https://archive.computerhistory.org/resources/access/text/2015/10/102702891-05-01-acc.pdf
as an aside, a decade later, IDC person involved w/FFL, joins with
another to form startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc
other trivia: REX (before being renamed REXX and released to customers) was also originally done on VM370/CMS
... and TYMSHARE offered commercial online VM370/CMS services
https://en.wikipedia.org/wiki/Tymshare
also started offering their VM370/CMS-based online computer
conferencing "free" to SHARE
https://www.share.org/
starting in Aug1976 as VMSHARE
http://vm.marist.edu/~vmshare
Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
CP67
https://en.wikipedia.org/wiki/CP-67
Cambridge Monitor System, renamed Conversational Monitor System for
VM370
https://en.wikipedia.org/wiki/Conversational_Monitor_System
IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
commercial online (virtual machine) services posts
https://www.garlic.com/~lynn/submain.html#online
some past 4th gen, RAMIS, NOMAD, NCSS, etc posts
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2023g.html#64 Mainframe Cobol, 3rd&4th Generation Languages
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#29 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018e.html#45 DEC introduces PDP-6 [was Re: IBM introduces System/360]
https://www.garlic.com/~lynn/2018d.html#3 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016e.html#107 some computer and online history
https://www.garlic.com/~lynn/2015h.html#27 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2014i.html#32 Speed of computers--wave equation for the copper atom? (curiosity)
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2013m.html#62 Google F1 was: Re: MongoDB
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#57 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#30 General Mills computer
https://www.garlic.com/~lynn/2012e.html#84 Time to competency for new software language?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#1 Deja Cloud?
https://www.garlic.com/~lynn/2011m.html#69 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2010q.html#63 VMSHARE Archives
https://www.garlic.com/~lynn/2010e.html#55 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2010e.html#54 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2006k.html#37 PDP-1
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Signetics 25120 WOM
Newsgroups: alt.folklore.computers, comp.arch
Date: Wed, 20 Nov 2024 17:02:24 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook

re:
besides the NOMAD refs in the post, there is lots more in the computerhistory ref ... for some reason the way FACEBOOK swizzles the URL, the trailing /102658182 gets lost (truncated to just the catalog url):
https://www.computerhistory.org/collections/catalog/102658182
Five of the National CSS principals participated in a recorded
telephone conference call with a moderator addressing the history of
the company's use of RAMIS and development of NOMAD. The licensing of
RAMIS from Mathematica and the reasons for building their own product
are discussed as well as the marketing of RAMIS for developing
applications and then the ongoing revenue from using these
applications. The development of NOMAD is discussed in detail along
with its initial introduction into the marketplace as a new offering
not as a migration from RAMIS. The later history of NOMAD is reviewed,
including the failure to build a successor product and the inability
to construct a viable PC version of NOMAD.
... snip ...
then points to
https://archive.computerhistory.org/resources/access/text/2012/04/102658182-05-01-acc.pdf
NCSS trivia: .... I was an undergraduate and the univ had hired me responsible for os/360 running on the 360/67. CSC came out to install CP67 Jan1968 (3rd install after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my (48hr) weekend dedicated time, the first couple months rewriting lots of CP67 to improve running OS/360 in a virtual machine. The OS/360 test stream ran 322secs stand-alone and initially 856secs in virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534) ... and was asked to attend the CP67 "official" announcement at the spring '68 SHARE meeting in Houston. CSC was then having a one week class in June; I arrive Sunday night and am asked to teach the CP67 class, the people that were supposed to teach it having given notice that Friday, leaving for NCSS.
IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: 4th Generation Programming Language
Date: 20 Nov, 2024
Blog: Facebook

re:
other CP/67 trivia ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/m
https://en.wikipedia.org/wiki/CP/M
before developing CP/m, kildall worked on IBM CP/67 at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
Opel and Gates' mother
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates,
CEO of then-small software firm Microsoft, to discuss the possibility
of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel
set up the meeting at the request of Gates' mother, Mary Maxwell
Gates. The two had both served on the National United Way's executive
committee.
... snip ...
more CP67 trivia: one of my hobbies after joining the science center was enhanced production operating systems for internal datacenters (and the internal branch office online sales&marketing support HONE was a long-time customer; eventually HONE clones were sprouting up all over the world and customer orders were required to be run through HONE APL apps before submitting).
In the morph of CP/67->VM/370, lots of features were simplified and/or dropped. I then started migrating stuff to a VM370R2-based system. I had an automated benchmarking system (I originally developed the "autolog" command for benchmarking scripts, but it then got adopted for lots of automated operational purposes) and started with that to get a baseline for VM370R2 before moving lots of CP67 to VM370. Unfortunately, VM370 wasn't able to finish the benchmarking scripts (w/o system crashes), so I had to add a bunch of CP67 kernel serialization and integrity stuff just to complete a set of benchmarks (for baseline performance numbers).
Then for internal production CSC/VM, I enhanced VM370R2 with a bunch of other CP67 work, including the kernel reorg needed for multiprocessing operation (but not the multiprocessor support itself). Then for a VM370R3-based CSC/VM I added multiprocessor support, originally for the consolidated US HONE datacenter. All the US HONE systems had been consolidated in Palo Alto (across the back parking lot from the Palo Alto Scientific Center; trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next to the former US HONE datacenter) ... upgraded with single system image, shared DASD, load-balancing and fall-over across all the systems. Then with the VM370R3-based CSC/VM they were able to add a 2nd processor to each system.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP/67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook

A co-worker at the cambridge science center was responsible for the science center CP67-based wide-area network, which morphs into the IBM internal network (larger than arpanet/internet from the beginning until sometime mid/late 80s, about the time the communication group forced the internal network to convert to SNA/VTAM), technology also used for the corporate sponsored univ BITNET ("EARN" in Europe) ... account from one of the inventors of GML (at CSC, 1969):
Edson passed Aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
Bitnet (& EARN) ref:
https://en.wikipedia.org/wiki/BITNET
In 1977, Ed and I had transferred out to San Jose Research ... and SJR
installed the first IBM gateway to a non-IBM network (CSNET) in Oct1982 (which
had gateways to arpanet and other networks, just before the arpanet
conversion to TCP/IP).
https://en.wikipedia.org/wiki/CSNET
1jan1983, the big arpanet conversion from host protocol and IMPs to
internetworking protocol (TCP/IP); there were about 100 IMPs and 255
hosts, while the internal network was rapidly approaching 1000 ... old
archived post with list of corporate world-wide locations that added one
or more hosts during 1983:
https://www.garlic.com/~lynn/2006k.html#8
SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY WITH THE
INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website ... blocked from converting internal network to
tcp/ip
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Also, I get the HSDT project in the early 80s, T1 and faster computer
links (both terrestrial and satellite), bringing conflicts with the
communication group. Note, in the 60s, IBM had the 2701 telecommunication
controller that supported T1, but then moves to SNA/VTAM in the 70s;
SNA/VTAM issues apparently cap controller links at 56kbits/sec. Was
working with the NSF director and was supposed to get $20M to
interconnect the NSF Supercomputing Centers, then congress cuts the
budget, some other things happen and then an RFP is released (in part
based on what we already had running). From the 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
Somebody had been collecting (communication group) email with
misinformation about supporting NSFNET (also about the time internal
network was forced to convert to SNA/VTAM) ... copy in this archived
post (heavily clipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
Note, late 80s, a senior disk engineer gets a talk scheduled at the internal,
world-wide, annual communication group conference, supposedly about
3174 performance, but opens the talk with the statement that the
communication group was going to be responsible for the demise of the disk
division; the disk division was seeing a drop in disk sales with data
fleeing mainframe datacenters to more distributed-computing-friendly
platforms. The disk division had come up with a number of solutions,
but they were constantly veto'ed by the communication group (with
their corporate responsibility for everything that crossed datacenter
walls, fiercely fighting off client/server and distributed computing
trying to preserve their dumb terminal paradigm). It wasn't just a
stranglehold on disks ... and a couple years later IBM has one of the
largest losses in the history of US companies and was being
reorganized into the 13 "baby blues" (somewhat a take-off on the AT&T "baby
bells" breakup a decade earlier) in preparation for breaking up the
company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet/earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Two decades earlier, Learson tried (& failed) to block the
bureaucrats, careerists and MBAs from destroying Watsons'
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook

re:
other trivia: mid-80s, the communication group was fighting the release of mainframe tcp/ip support; when that got reversed, they changed their strategy and (with their corporate strategic responsibility for everything that crossed datacenter walls) said it had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then did "fixes" to support RFC1044 and, in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
later, in the early 90s, they hired a silicon valley contractor to do an implementation of tcp/ip support directly in VTAM; what he initially demo'ed had much higher throughput than LU6.2. He was then told that everybody knows a proper TCP/IP implementation has much lower throughput than LU6.2 and they would only be paying for a proper implementation (trivia: he passed a couple months ago).
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 21 Nov, 2024
Blog: Facebook

re:
The Internet's 100 Oldest Dot-Com Domains
https://www.pcworld.com/article/532545/oldest_domains.html
my old post in internet group
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
and comment about IBM getting class-a 9-net (after interop88)
https://www.garlic.com/~lynn/2024b.html#35 Internet
with email
https://www.garlic.com/~lynn/2024b.html#email881216
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET And Science Center Network
Date: 22 Nov, 2024
Blog: Facebook

re:
GML was invented at the IBM cambridge science center in 1969; a decade later
it morphs into the ISO standard SGML, and after another decade morphs into
HTML at CERN. The first "web" server in the US was the Stanford SLAC (CERN
sister institution) VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
The NSF preliminary announcement mentions supercomputer software, and
ncsa.illinois.edu does mosaic
http://www.ncsa.illinois.edu/enabling/mosaic
then some of the people come out to silicon valley to do a mosaic
startup; ncsa complains about the use of "mosaic" and they change the
name to "netscape"
last product we did at IBM was HA/6000, I change the name to HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when we start doing technical/scientific cluster scale-up with national
labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres). Early Jan1992, we have a meeting with the Oracle CEO;
AWD/Hester tells Ellison that we would have 16-system clusters mid92
and 128-system clusters ye92. Late Jan1992, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told that we can't work on
anything with more than four processors (we leave IBM a few months
later).
Not long afterwards, we were brought in as consultants to a small client/server startup; two former Oracle people (that were in the Ellison/Hester meeting) were there responsible for something called the "commerce server" and wanted to do financial transactions on the server. The startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between the webservers and the financial payment networks. Afterwards I put together a talk on "Why Internet Wasn't Business Critical Dataprocessing" (and the Internet Standards RFC editor, Postel, sponsored my talk at ISI/USC) based on the work I had to do: multi-level security layers, multi-redundant operation, diagnostics, processes, procedures, and documentation.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some posts mentioning ha/cmp, mosaic, netscape, business critical
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: 60s Computers
Date: 22 Nov, 2024
Blog: Facebook

Amdahl wins the battle to make ACS 360-compatible; when it is canceled, Amdahl leaves IBM. Some folklore that ACS/360 was canceled because it might advance the state of the art too fast and IBM would lose control of the market.
I took a 2 credit-hr intro to fortran/computers and at the end of the semester was hired by the univ to reimplement 1401 MPIO in assembler for the 360/30. The univ was getting a 360/67 for tss/360 to replace the 709/1401 and got a 360/30 replacing the 1401 temporarily pending the 360/67. The univ shut down the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made monday classes hard). They gave me a bunch of hardware and software documents and I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc., and in a few weeks I had a 2000 card program. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360, running the 360/67 as a 360/65 (tss/360 never came to production). The 709 ran student fortran in under a second. Initially os/360 ran student fortran in over a minute. I install HASP, which cuts the time in half. I then redo STAGE2 SYSGEN to carefully place datasets and PDS members to optimize disk arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Time never got better than the 709 until I install Univ. of Waterloo WATFOR.
CSC then comes out to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly get to play with it during my weekend dedicated window, spending the first couple months rewriting lots of CP67 to improve running OS/360 in a virtual machine. The OS/360 test stream ran 322secs stand-alone and initially 856secs in virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534) ... and was asked to attend the CP67 "official" announcement at the spring '68 SHARE meeting in Houston. CSC was then having a one week class in June; I arrive Sunday night and am asked to teach the CP67 class, the people that were supposed to teach it having given notice that Friday, leaving for NCSS (one of the 60s virtual machine online commercial spin-offs of the science center).
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world: a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (some joke that Boeing was getting 360/65s like other companies got keypunches).
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning 709/1401, MPIO, Boeing, Renton, ACS/360:
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023f.html#19 Typing & Computer Literacy
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: PS2 Microchannel Date: 22 Nov, 2024 Blog: Facebook
There was a tight grip on microchannel and the communication group had performance-kneecapped the IBM microchannel cards (part of its fierce battle fighting off client/server and distributed computing, trying to preserve the dumb terminal paradigm). Note AWD IBU (advanced workstation division independent business unit) had done their own 4mbit token-ring card for the PC/RT (16bit AT bus) ... but for the microchannel RS/6000 they were told they couldn't do their own cards and had to use standard PS2 microchannel cards. Turns out that the $800 16mbit token-ring microchannel PS2 card had lower throughput than the PC/RT 4mbit token-ring card (and standard $69 10mbit ethernet cards had much higher throughput than both).
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 60s Computers Date: 22 Nov, 2024 Blog: Facebook
re:
Early 70s, IBM had the Future System project, completely different
from 370 and going to completely replace it ... and internal politics
were killing off 370 projects ... claim is that the lack of new (IBM)
370 products during the period is what gave the clone system makers
(including Amdahl) their market foothold (and IBM marketing had to
fall back on an enormous amount of FUD). some more FS detail
http://www.jfsowa.com/computer/memo125.htm
After joining IBM I continued to attend user group meetings (SHARE, others) and drop by customers. The director of one of the largest commercial "true blue" datacenters liked me to stop by and talk technology. At some point, the IBM branch manager horribly offended the customer and in retribution they ordered an Amdahl (a lonely Amdahl in a vast sea of blue). Up until then Amdahl had been selling into the univ/technical/scientific markets, but this would be the first "true blue" commercial install. I was then asked to go spend 6-12 months onsite at the customer. I talk it over with the customer and then decline IBM's offer. I'm then told that the branch manager is a good sailing buddy of the IBM CEO and if I didn't do this, I could forget having a career, promotions, and raises.
After transferring to San Jose Research in late 70s, would attend the monthly BAYBUNCH meetings hosted by SLAC. Earlier Endicott had roped me into helping with the VM370 ECPS microcode assist ... and in early 80s got permission to give presentations on how it was done at user group meetings (including BAYBUNCH). After (BAYBUNCH) meetings, we would usually adjourn to local watering holes and Amdahl people briefed me they were developing HYPERVISOR ("multiple domain") and grilled me about more ECPS details.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning future system, amdahl, ecps, baybunch, hypervisor
https://www.garlic.com/~lynn/2024f.html#30 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2015d.html#14 3033 & 3081 question
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2011p.html#114 Start Interpretive Execution
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The New Internet Thing Date: 22 Nov, 2024 Blog: Facebook
The New Internet Thing
We were doing cluster scale-up for HA/CMP ... working with national labs on technical/scientific scale-up and with RDBMS vendors on commercial scale-up. Then JAN1992, a meeting in Ellison's conference room with several Oracle people (including CEO Ellison) on cluster scale-up. Within a few weeks, cluster scale-up is transferred, announced as IBM supercomputer (for scientific/technical *ONLY*), and we were told we couldn't work on anything with more than four processors. A few months later, we leave IBM.
Later, two of the Oracle people that were in the Ellison HA/CMP meeting have left and are at a small client/server startup, responsible for something called "commerce server". We are brought in as consultants because they want to do payment transactions; the startup had also invented this technology called "SSL" they want to use, and the result is now frequently called "electronic commerce". I had responsibility for everything from the webserver to the payment networks ... but could only recommend on the server/client side. About the backend side, I would pontificate that it took 4-10 times the effort to take a well designed, well implemented and tested application and turn it into an industrial strength service. Postel sponsored my talk on the subject at ISI/USC.
Object oriented operating systems for a time were all the rage in the valley ... Apple was doing PINK and SUN was doing Spring. Taligent was then spun off and a lot of the object technology moved there ... but heavily focused on GUI apps.
Spring '95, I did a one-week JAD with a dozen or so Taligent people on the use of Taligent for business critical applications; there were extensive classes/frameworks for GUI & client/server support, but various critical pieces were missing. The JAD looked at what it would take to provide support for implementing industrial strength services (rather than applications). The resulting estimate was a 30% hit to their existing "frameworks" and two new frameworks specifically for industrial strength services. Taligent was also going thru evolution (outside of the personal computing, GUI paradigm) ... a sample business application required 3500 classes in Taligent and only 700 classes in a more mature object product targeted for the business environment.
old comment from a Taligent employee: "The business model for all this was never completely clear, and in the summer of 1995, upper management quit en masse"
I think that shortly after Taligent vacated their building ... the Sun Java group moved in.
About the last gasp for Spring was when we were asked in to consider taking on turning Spring into a commercial product (we declined) ... and then Spring was shut down and the people moved over to Java.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning Taligent, object, JAD
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2009m.html#26 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm24.htm#20 On Leadership - tech teams and the RTFM factor
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The New Internet Thing Date: 22 Nov, 2024 Blog: Facebook
re:
I was a consultant ... but had run into lots of SUN people before on
various projects. One was the VP of the group that the HA/SUN product
reported to. An early HA/SUN financial customer ran into a glitch
resulting in loss of customer records and I was brought in as part of
the after action review. The SUN VP opened with a pitch about HA/SUN
... that sounded just like an HA/CMP marketing talk I had created
nearly a decade before.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Late 80s, IBM branch offices asked me if I could help SLAC with what
becomes the SCI standard and LLNL with what becomes the FCS standard.
In 1995 (after having left IBM in 1992), I was spending some time at a
chip company and a (former SUN) SPARC10 engineer was there, working on
a high efficiency SCI chip and looking at being able to scale up to a
10,000 machine configuration running Spring ... he got me Spring
documentation and (I guess) pushed SUN about making me an offer to
turn Spring into a commercial product.
https://web.archive.org/web/20030404182953/http://java.sun.com/people/kgh/spring/
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
some SLAC SCI and SPRING posts
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013b.html#21 New HD
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2012f.html#94 Time to competency for new software language?
https://www.garlic.com/~lynn/2010f.html#47 Nonlinear systems and nonlocal supercomputing
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2008p.html#33 Making tea
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM SE Asia Date: 23 Nov, 2024 Blog: Facebook
I was introduced to Boyd in the early 80s and used to sponsor his briefings at IBM. One of his stories was about being vocal that the electronics across the trail wouldn't work and then (possibly as punishment) being put in command of "Spook Base" (about the same time I'm at Boeing). Some refs:
Boyd biographies claim "spook base" was a $2.5B "windfall" for IBM
(60s dollars). Other recent Boyd refs:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Before I graduated, I had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I think Renton was the largest datacenter in the world (a couple hundred million in 360 stuff, though still only about 1/10th of "spook base") ... 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (joke that Boeing was getting 360/65s like other companies got keypunches).
Late 80s, the commandant of the Marine Corps leverages Boyd for a makeover of the Corps (at a time when IBM was desperately in need of a makeover) and we continued to have Boyd conferences at Marine Corps Univ. through the last decade.
from Jeppeson webpage:
Access to the environmentally controlled building was afforded via the
main security lobby that also doubled as an airlock entrance and
changing-room, where twelve inch-square pidgeon-hole bins stored
individually name-labeled white KEDS sneakers for all TFA
personnel. As with any comparable data processing facility of that
era, positive pressurization was necessary to prevent contamination
and corrosion of sensitive electro-mechanical data processing
equipment. Reel-to-reel tape drives, removable hard-disk drives,
storage vaults, punch-card readers, and inumerable relays in
1960's-era computers made for high-maintainence systems. Paper dust
and chaff from fan-fold printers and the teletypes in the
communications vault produced a lot of contamination. The super-fine
red clay dust and humidity of northeast Thailand made it even more
important to maintain a well-controlled and clean working environment.
Maintenance of air-conditioning filters and chiller pumps was always a
high-priority for the facility Central Plant, but because of the
24-hour nature of operations, some important systems were run to
failure rather than taken off-line to meet scheduled preventative
maintenance requirements. For security reasons, only off-duty TFA
personnel of rank E-5 and above were allowed to perform the
housekeeping in the facility, where they constantly mopped floors and
cleaned the consoles and work areas. Contract civilian IBM computer
maintenance staff were constantly accessing the computer sub-floor
area for equipment maintenance or cable routing, with the numerous
systems upgrades, and the underfloor plenum areas remained much
cleaner than the average data processing facility. Poisonous snakes
still found a way in, causing some excitement, and staff were
occasionally reprimanded for shooting rubber bands at the flies during
the moments of boredom that is every soldier's fate. Consuming
beverages, food or smoking was not allowed on the computer floors, but
only in the break area outside. Staff seldom left the compound for
lunch. Most either ate C-rations, boxed lunches assembled and
delivered from the base chow hall, or sandwiches and sodas purchased
from a small snack bar installed in later years.
... snip ...
Boyd would claim that it was the largest air conditioned bldg in that part of the world.
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Move From Leased To Sales Date: 24 Nov, 2024 Blog: Facebook
re:
The move from leased to sales was in the 1st half of the 70s (predating Gerstner by 20yrs). I've commented that leased charges were based on the system meter that ran whenever cpu(s) and/or channel(s) were busy. In the 60s a lot of work was done with CP67 (precursor to vm370) for 7x24, online operation ... dark room, no operator, and the system meter would stop whenever there was no activity (but instant-on whenever characters were arriving ... analogous to large cloud megadatacenters today focused on no electrical use when idle, but instant-on when needed). Note cpu/channels had to be idle for 400ms before the system meter stopped; trivia ... long after the switch-over from leased to sales, MVS still had a timer event that woke up every 400ms ... which made sure the system meter never stopped.
1972, Learson tried (& failed) to block the bureaucrats, careerists,
and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
20yrs later, IBM had one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (take-off on
AT&T "baby bells" in AT&T breakup a decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.
First half of the 70s there was the Future System project, completely
different from 370 and going to completely replace 370; internal
politics during FS were killing off 370 projects, and the claim is
that the lack of new 370 products during FS was what gave the clone
370 system makers their market foothold (and IBM marketing had to fall
back on lots of FUD). It might be said that the switch from lease to
sales was motivated by trying to maintain/boost revenue during this
period. When FS implodes, there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts in parallel. More FS:
http://www.jfsowa.com/computer/memo125.htm
also "Future System" (F/S, FS) project
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive
... snip ...
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning leased charges based on system meter
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#61 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#23 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2019d.html#19 Moonshot - IBM 360 ?
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2017i.html#65 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016h.html#47 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016b.html#86 Cloud Computing
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015c.html#103 auto-reboot
https://www.garlic.com/~lynn/2014m.html#113 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014h.html#19 weird trivia
https://www.garlic.com/~lynn/2014g.html#85 Costs of core
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#4 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2012l.html#47 I.B.M. Mainframe Evolves to Serve the Digital World
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 2001/Space Odyssey Date: 24 Nov, 2024 Blog: Facebook
HAL ... each letter is one before the corresponding letter in IBM
IBM System 9000 (1982) M68k Laboratory Computer
https://en.wikipedia.org/wiki/IBM_System_9000
IBM ES/9000 (1990) ESA/390 mainframe
https://en.wikipedia.org/wiki/IBM_System/390#ES/9000
Amdahl won the battle to make ACS, 360 compatible. Then when ACS/360
is canceled, Amdahl leaves IBM and forms his own company. The
following also mentions some ACS/360 features that show up in ES/9000
in the 90s (recent IBM webserver changes seem to have obliterated lots
of mainframe history)
https://people.computing.clemson.edu/~mark/acs_end.html
... by former IBMer:
HAL Computer
https://en.wikipedia.org/wiki/HAL_Computer_Systems
HAL SPARC64
https://en.wikipedia.org/wiki/HAL_SPARC64
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Taligent Date: 25 Nov, 2024 Blog: Facebook
re:
The New Internet Thing
https://albertcory50.substack.com/p/this-new-internet-thing-chapter-27
Notes on Chapter 27
https://albertcory50.substack.com/p/notes-on-chapter-27
Grant Avery is now working for Taligent, a joint effort between Apple
(before Jobs returned) and IBM, which everyone involved would rather
forget about. Not on my watch!
IBM still thought they could finally, really, really beat Microsoft at
the PC game. OS/2 hadn't done it, so now they were doing
Object-Oriented with Apple, and it was going to be the thing that
everyone would get behind. Grant's job is to evangelize for it with
other big suckers companies, HP being the one he's pitching in this
chapter.
... snip ...
Taligent
https://en.wikipedia.org/wiki/Taligent
Taligent Inc. (a portmanteau of "talent" and "intelligent")[3][4] was
an American software company. Based on the Pink object-oriented
operating system conceived by Apple in 1988, Taligent Inc. was
incorporated as an Apple/IBM partnership in 1992, and was dissolved
into IBM in 1998.
... snip ...
We were doing cluster scale-up for HA/CMP ... working with national labs on technical/scientific scale-up and with RDBMS vendors on commercial scale-up. Early JAN1992, in a meeting with the Oracle CEO and several other Oracle people on cluster scale-up, AWD/Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. After a couple weeks, by the end of JAN1992, cluster scale-up is transferred, announced as IBM supercomputer (for scientific/technical *ONLY*), and we were told we couldn't work on anything with more than four processors. A few months later, we leave IBM.
Later, two of the Oracle people that were in the Ellison HA/CMP meeting have left and are at a small client/server startup, responsible for something called "commerce server". We are brought in as consultants because they want to do payment transactions; the startup had also invented this technology called "SSL" they want to use, and the result is now frequently called "electronic commerce". I had responsibility for everything from the webserver to the payment networks ... but could only recommend on the server/client side. About the backend side, I would pontificate that it took 4-10 times the effort to take a well designed, well implemented and tested application and turn it into an industrial strength service. Postel (Internet/RFC standards editor) sponsored my talk on the subject at ISI/USC.
Object oriented operating systems for a time were all the rage in the valley ... Apple was doing PINK and SUN was doing Spring. Taligent was then spun off and a lot of the object technology moved there ... but heavily focused on GUI apps.
Spring '95, I did a one-week JAD with a dozen or so Taligent people on the use of Taligent for business critical applications; there were extensive classes/frameworks for GUI & client/server support, but various critical pieces were missing. The JAD looked at what it would take to provide support for implementing industrial strength services (rather than applications). The resulting estimate was a 30% hit to their existing "frameworks" and two new frameworks specifically for industrial strength services. Taligent was also going thru evolution (outside of the personal computing, GUI paradigm) ... a sample business application required 3500 classes in Taligent and only 700 classes in a more mature object product targeted for the business environment.
old comment from a Taligent employee: "The business model for all this was never completely clear, and in the summer of 1995, upper management quit en masse"
I think that shortly after Taligent vacated their building ... the Sun Java group moved in (the General Manager of the business unit that Java reported to was somebody I had done some work with 15yrs earlier at IBM Los Gatos).
Late 80s, IBM branch offices had asked me if I could help SLAC with what becomes the SCI standard and LLNL with what becomes the FCS standard. In 1995 (after having left IBM in 1992), I was spending some time at a chip company and a (former SUN) SPARC10 engineer was there, working on a high efficiency SCI chip and looking at being able to scale up to a 10,000 machine distributed configuration running Spring ... he got me Spring documentation and (I guess) pushed SUN about making me an offer to turn Spring into a commercial product.
About the last gasp for Spring was when we were asked in to consider
taking on turning Spring into a commercial product (we declined)
... and then Spring was shut down and the people moved over to Java
https://web.archive.org/web/20030404182953/http://java.sun.com/people/kgh/spring/
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
other posts mentioning Taligent JAD
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2010g.html#59 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#15 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2009m.html#26 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm24.htm#20 On Leadership - tech teams and the RTFM factor
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Move From Leased To Sales Date: 26 Nov, 2024 Blog: Facebook
re:
note: Amdahl won the battle to make ACS "360" compatible; then when
ACS/360 is killed, he leaves IBM (prior to "FS") to form his own clone
370 company.
https://people.computing.clemson.edu/~mark/acs_end.html
After the FS implosion, 3033 started out as 168 logic remapped to 20% faster chips and 3081 is warmed-over FS technology (see memo125 ref; 3081 was going to be multiprocessor only). The 3081D (two processor aggregate) was slower than an Amdahl single processor. They then doubled the processor cache size for the 3081K ... aggregate about the same as an Amdahl single processor ... although IBM pubs had MVS two-processor throughput at only 1.2-1.5 times a single processor ... aka MVS on a single processor Amdahl had much higher throughput than on a 2CPU 3081K, even tho the aggregate CPU cycles were about the same, requiring lots more marketing FUD.
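To make that comparison concrete, a back-of-envelope restatement (normalized units purely for illustration, not actual MIPS figures):

  # take Amdahl single processor as 1.0; each 3081K CPU is roughly half that,
  # so the 2-CPU aggregate is about equal to the Amdahl single processor
  amdahl_1cpu   = 1.0
  k3081_cpu     = 0.5
  mvs_mp_factor = (1.2, 1.5)   # IBM pubs: MVS 2-CPU throughput vs one of its CPUs

  lo, hi = (f * k3081_cpu / amdahl_1cpu for f in mvs_mp_factor)
  print(f"MVS on 2-CPU 3081K ~ {lo:.2f}-{hi:.2f} of MVS on Amdahl single processor")
  # -> ~0.60-0.75, even though the raw aggregate cycles are about the same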
Also customers weren't converting/migrating to MVS/XA as planned and so tended to run 3081s in 370-mode. Amdahl was having more success because it had come out with its HYPERVISOR microcode ("multiple domain") and could run MVS and MVS/XA concurrently as part of migration (in the wake of the FS implosion, the head of POK had convinced corporate to kill IBM's virtual machine VM370, shut down the development group and transfer all the people to POK for MVS/XA; eventually Endicott manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch) ... it wasn't until almost a decade later that IBM responded with LPAR and PR/SM for the 3090.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts mentioning 3081, mvs/xa, amdahl hypervisor, lpar and pr/sm
https://www.garlic.com/~lynn/2024f.html#113 IBM 370 Virtual Memory
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018.html#97 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Unbundling, Software Source and Priced Date: 26 Nov, 2024 Blog: Facebook
re:
The last product I did at IBM started out as HA/6000, originally for
the NYTimes to move their newspaper system from VAXCluster to RS/6000.
I rename it HA/CMP when I start doing technical/scientific cluster
scale-up with national labs and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Informix, Ingres, which had VAXCluster
support in the same source base with Unix). Early Jan1992, had a
meeting with the Oracle CEO where AWD/Hester tells Ellison that we
would have 16-system clusters by mid92 and 128-system clusters by
ye92. I then update FSD (federal system division) about the HA/CMP
work with the national labs and they tell the IBM Kingston
Supercomputer group that they were going with HA/CMP for
gov. accounts. Late JAN1992, HA/CMP scale-up is transferred to
Kingston for announce as IBM Supercomputer (for technical/scientific
*ONLY*) and we are told we can't work on anything with more than four
processors. We leave IBM a few months later.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Not long later I am brought in as consultant to a small client/server startup. Two of the former Oracle people (that were in the JAN1992 Oracle Ellison meeting) are there, responsible for something called "commerce server", and want to do payment transactions on the server; the startup had also invented this technology they call "SSL" they want to use, and it is now frequently called "electronic commerce". I was given responsibility for everything between the webservers and the payment networks.
The payment networks had been using circuit based technologies and
their trouble desk standard included doing 1st level problem
determination within 5mins. The initial testing of "electronic
commerce" had experienced a network failure that after 3hrs was closed
as NTF (no trouble found). I had to bring the webserver "packet based"
operation up to the payment networks' standards. Later, I put together
a talk on "Why Internet Isn't Business Critical Dataprocessing" based
on the software, documentation, procedures, etc, that I had to do for
"electronic commerce", which the Internet Standards/RFC editor, Postel
https://en.wikipedia.org/wiki/Jon_Postel
sponsored at ISI/USC.
Other trivia: In the early 80s, I got the IBM HSDT effort, T1 and
faster computer links (both terrestrial and satellite), resulting in
various battles with the communication group (in the 60s, IBM had the
2701 telecommunication controller that supported T1 links, but the
transition to SNA/VTAM in the mid-70s and associated issues seemed to
cap links at 56kbits/sec). I was working with the NSF director and was
supposed to get $20M to interconnect the NSF Supercomputing
centers. Then congress cuts the budget, some other things happen and
finally an RFP is released (in part based on what we already had
running). From NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
Note the IBM transition in the 80s from source-available to
"object-code only", resulting in the OCO-Wars with customers ... some
of this can be found in the VMSHARE archives ... aka TYMSHARE had
offered their VM370/CMS-based online computer conferencing system,
free to the (mainframe user group) SHARE starting in Aug1976
... archives here:
http://vm.marist.edu/~vmshare
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
some Business Critical Dataprocessing posts
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Unbundling, Software Source and Priced Date: 26 Nov, 2024 Blog: Facebook
re:
OS/360 SYSGEN was mostly specifying (STAGE1) hardware configuration and system features ... which would be "assembled" with macros ... the macros mostly generated (STAGE2) several hundred cards of job control statements that selected which executables were to be moved/copied to the system disk(s). I started reworking the sequence of STAGE2 cards to order datasets and PDS (executable) members to improve/optimize disk arm seeks and multi-track searches.
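A rough illustration of the reordering idea (the dataset names and reference counts here are made up, and the real thing was done by hand-shuffling the generated card deck, not by a program):

  # sort the STAGE2 move/copy steps so the most heavily referenced system
  # datasets get allocated next to each other on the system pack,
  # shortening the average arm travel between references
  stage2_steps = [                      # (dataset, est. references)
      ("SYS1.SVCLIB",   9000),
      ("SYS1.LINKLIB", 25000),
      ("SYS1.PROCLIB",  4000),
      ("SYS1.MACLIB",    800),
  ]
  for dsn, refs in sorted(stage2_steps, key=lambda s: s[1], reverse=True):
      print(f"copy {dsn:<14} ({refs:>6} refs) -> next available cylinders")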
some recent posts mention sysgen/stage2, optimize, arm seek,
multi-track search
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#60 IBM 3705
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer System Performance Work Date: 27 Nov, 2024 Blog: Facebook
1979, the largest national grocery store chain was having severe store operation performance issues (12-13? regions partitioned across the systems). They had a large ibm dsd/pok multi-system shared dasd datacenter and apparently had all the usual DPD & DSD/POK performance experts through before they got around to asking me. I was brought into a large classroom with tables covered with system activity performance reports. After about 30mins I noticed that during the worst performance periods, the aggregate activity of a specific 3330 (summing across all the systems) flatlined between 6&7 physical I/Os per second. It turned out it was a shared 3330 with a large PDS dataset for the store controller applications. It had a three cylinder PDS directory and the disk was spending nearly all its time doing PDS directory full cylinder multi-track searches ... resulting in peak, aggregate store controller application load throughput of two/sec for all the stores in the country. The fix was two things: partition the applications into multiple PDS datasets on different disks and then replicate private sets for each system (on non-shared disks with non-shared controllers).
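The arithmetic behind that two/sec number is worth spelling out (3330 figures from memory: 3600 RPM and 19 data tracks per cylinder; the average-search and member-read allowances are my rough assumptions):

  REV_PER_SEC    = 3600 / 60.0     # 3330 rotation: 60 revolutions/second
  TRACKS_PER_CYL = 19
  DIR_CYLS       = 3

  # a full-cylinder multi-track search examines one track per revolution,
  # keeping device, controller and channel busy the whole time
  avg_search_cyls = DIR_CYLS / 2.0                  # scan half the directory on average
  search_secs = avg_search_cyls * TRACKS_PER_CYL / REV_PER_SEC   # ~0.48 sec
  member_io_secs = 0.04                             # rough seek + member read

  print(f"avg directory search ~{search_secs:.2f} sec")
  print(f"~{1.0/(search_secs + member_io_secs):.1f} member loads/sec possible")
  # ~2/sec; at 3-4 physical I/Os per load, that is the 6-7 I/Os/sec flatline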
Mid-80s, the communication group was fiercely fighting off client/server and distributed computing, including blocking the release of mainframe TCP/IP support. When that was reversed, they changed their tactic: since the communication group had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec throughput using nearly a whole 3090 processor. I then did changes to support RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
Turn of the century (after leaving IBM) I was brought into a large financial outsourcing datacenter that was handling half of all credit card accounts in the US and had 40+ systems (@$30M each, 40+ * $30M > $1.2B; the number needed to finish settlement in the overnight batch window), all running the same 450K statement COBOL program. They had a large group that had been handling performance care and feeding for decades but had gotten somewhat myopic. In the late 60s and early 70s lots of performance analysis technologies had been developed at the IBM Cambridge Scientific Center ... so I tried some alternate technologies for a different view and found a 14% improvement.
In the 60s, at univ I took a 2 credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for the 360/30 (I was given lots of hardware&software manuals and got to design and implement my own monitor, device drivers, error recovery, storage management, etc ... and within a few weeks had a 2000 card program). Univ. was getting a 360/67 for tss/360 replacing the 709/1401, and a 360/30 temporarily replaced the 1401 pending the 360/67. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production and the machine ran as a 360/65). Student Fortran ran under a second on the 709 but well over a minute on 360/67 os/360. I install HASP, cutting the time in half. I then start redoing the stage2 sysgen so that, instead of a starter system sysgen, it could be run in the production system with HASP, with statements reordered to carefully place datasets and PDS members for optimizing arm seek and multi-track searches ... getting another 2/3rds improvement down to 12.9secs ... never got better than the 709 until I install Univ. of Waterloo WATFOR.
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning (multi-track search) grocery store and univ work
as well as 450k statement COBOL for credit card processing
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2022c.html#70 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer System Performance Work Date: 28 Nov, 2024 Blog: Facebook
re:
CSC comes out to the univ. to install CP67 (3rd after CSC itself and MIT Lincoln Labs) and I would mostly play with it during my weekend dedicated time. I initially start out rewriting large parts to improve running OS/360 in a CP67 virtual machine. The test stream ran 322secs on the real machine, initially 856secs in virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534). I then start redoing other parts of CP67: the page replacement algorithm, thrashing controls, scheduling (aka dynamic adaptive resource management), ordered arm seek queuing, and multiple chained page request channel programs optimizing transfers/revolution (e.g. 2301 paging drum peak from 80/sec to 270/sec).
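Rough arithmetic for those 2301 numbers (device figures approximate, from memory: ~17.5ms per revolution, ~1.2mbyte/sec transfer rate):

  REV_MS  = 17.5                     # one 2301 drum revolution
  XFER_MS = 4096 / 1.2e6 * 1000      # ~3.4ms to transfer one 4k page

  # one page per channel program: pay ~half a revolution of rotational
  # latency before each transfer
  single_ms  = REV_MS / 2 + XFER_MS
  # multiple page requests chained in one channel program, ordered by
  # rotational position: transfers run nearly back to back
  chained_ms = XFER_MS

  print(f"single-transfer I/O: ~{1000/single_ms:.0f} pages/sec")    # ~80
  print(f"chained transfers:   ~{1000/chained_ms:.0f} pages/sec")   # ~290 theoretical, 270 peak observed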
After graduating and joining IBM CSC, one of my hobbies was enhanced production operating systems for internal datacenters. Then in the morph from CP67->VM370, a lot of stuff was simplified and/or dropped and starting with VM370R2, I began adding lots of it back in. 23jun1969 unbundling announce included charging for software (although they were able to make the case that kernel software was still free).
Then came the IBM Future System project, completely different from 370 and intended to completely replace 370; internal politics was killing off 370 efforts, and the claim was that the lack of new IBM 370 products during FS gave the 370 clone makers their market foothold (all during FS I continued to work on 360&370 stuff, periodically ridiculing FS). When FS finally implodes, there is a mad rush to get stuff back in the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
Also the rise of clone 370 makers contributed to the decision to start charging for kernel software, and a bunch of my internal production VM370 stuff was selected as the guinea pig. A corporate expert reviewed it and said he wouldn't sign off because it didn't have any manual tuning knobs, which were "state-of-the-art". I tried explaining dynamic adaptive, but it fell on deaf ears. I package up some manual tuning knobs as a joke and call it SRM ... a parody on the MVS SRM's vast array, with full source code and documentation; the joke (from operations research) was that the dynamic adaptive code had greater degrees of freedom than the SRM values and so could dynamically compensate for any manual setting. I package all the dynamic adaptive code as "STP" (from the TV adverts, "the racer's edge").
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement and thrashing control posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
some recent posts mentioning charging for kernel software
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#39 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024e.html#83 Scheduler
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#31 HONE &/or APL
https://www.garlic.com/~lynn/2024c.html#6 Testing
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#20 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#45 Wheeler Scheduler
https://www.garlic.com/~lynn/2023g.html#43 Wheeler Scheduler
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#54 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#110 APL
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: What is an N-bit machine? Newsgroups: comp.arch Date: Thu, 28 Nov 2024 11:45:38 -1000
jgd@cix.co.uk (John Dallman) writes:
360/67
https://bitsavers.org/pdf/ibm/360/functional_characteristics/A27-2719-0_360-67_funcChar.pdf
https://bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf
Before 370/xa, MVS was getting so bloated that they did a hack to the 3033 for 64mbyte real memory ... still 24bit (real & virtual) instruction addressing ... but they scavenged two unused bits in the virtual memory 16bit PTE, used to prefix the 12bit page frame numbers (4096 4096byte pages ... 16mbytes) into 14bit page frame numbers (16384 4096byte pages), aka translating 24bit (virtual) addresses into 26bit (real) addresses (64mbytes) ... pending 370/xa and 31bit.
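A minimal sketch of that translation (the PTE bit positions below are illustrative only; the point is prefixing two scavenged bits onto the 12bit frame number to get 14bit frames and 26bit real addresses):

  PAGE_SHIFT = 12                               # 4096-byte pages

  def translate(vaddr24, page_table):
      """24-bit virtual address -> up to 26-bit (64mbyte) real address."""
      vpage   = (vaddr24 >> PAGE_SHIFT) & 0xFFF # 12-bit virtual page number
      offset  = vaddr24 & 0xFFF
      pte     = page_table[vpage]               # 16-bit entry
      frame12 = (pte >> 4) & 0xFFF              # original 12-bit frame field (illustrative layout)
      extra2  = pte & 0x3                       # two scavenged "unused" bits
      frame14 = (extra2 << 12) | frame12        # 14-bit frame number
      return (frame14 << PAGE_SHIFT) | offset

  # example: virtual page 5 mapped to real frame 0x2ABC (above the 16mbyte line)
  pt = {5: ((0x2ABC & 0xFFF) << 4) | (0x2ABC >> 12)}
  print(hex(translate((5 << 12) | 0x123, pt)))  # -> 0x2abc123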
posts mentioning some hacks that had to craft onto 3033
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2014k.html#36 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2012e.html#80 Word Length
https://www.garlic.com/~lynn/2011f.html#50 Dyadic vs AP: Was "CPU utilization/forecasting"
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: What is an N-bit machine? Newsgroups: comp.arch Date: Thu, 28 Nov 2024 12:44:23 -1000
jgd@cix.co.uk (John Dallman) writes:
Amdahl had won the battle to make ACS, 360 compatible, then when ACS/360
was canceled, he left IBM and formed his own 370 clone mainframe
company.
https://people.computing.clemson.edu/~mark/acs_end.html
Circa 1971, Amdahl gave a talk in a large MIT auditorium and somebody in the audience asked him what justification he used to attract investors; he replied that even if IBM were to completely walk away from 370, there were hundreds of billions (of dollars) in customer-written 360&370 code that could keep him in business through the end of the century.
At the time, IBM had the "Future System" project that was planning on doing just that ... and I assumed that was what he was referring to ... however in later years he claimed that he never had any knowledge about "FS" (and had left IBM before it started).
trivia: during FS, internal politics was killing off 370 projects and
the claim is that the lack of new 370 products in the period is what
gave the clone 370 makers (including Amdahl) their market foothold.
some more info
http://www.jfsowa.com/computer/memo125.htm
when FS finally imploded, there was a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning Amdahl's talk at MIT
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2017j.html#66 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017g.html#22 IBM Future Sytem 1975, 1977
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014h.html#65 Are you tired of the negative comments about IBM in this community?
https://www.garlic.com/~lynn/2012l.html#27 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012f.html#78 What are you experiences with Amdahl Computers and Plug-Compatibles?
https://www.garlic.com/~lynn/2009p.html#82 What would be a truly relational operating system ?
https://www.garlic.com/~lynn/2008s.html#17 IBM PC competitors
https://www.garlic.com/~lynn/2007f.html#26 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2003i.html#3 A Dark Day
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SUN Workstation Tidbit Date: 28 Nov, 2024 Blog: Facebook
some people from stanford came to ibm palo alto science center (PASC) to ask if IBM would build/sell a workstation they had developed. PASC put together a review with several IBM locations and projects (including ACORN, eventually announced as the ibm/pc, the only one that made it out as a product). All the IBM locations and projects claimed they were doing something much better ... and IBM declines. The Stanford people then form their own company, SUN.
ibm unix workstation, 801/risc, iliad, romp, rios, pc/rt, rs/6000,
power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
some posts mentioning Stanford people asking IBM to produce
workstation they developed
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2023c.html#11 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#11 Open Software Foundation
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2022c.html#30 Unix work-alike
https://www.garlic.com/~lynn/2021i.html#100 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2017k.html#33 Bad History
https://www.garlic.com/~lynn/2017i.html#15 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#86 IBM Goes to War with Oracle: IT Customers Praise Result
https://www.garlic.com/~lynn/2014g.html#98 After the Sun (Microsystems) Sets, the Real Stories Come Out
https://www.garlic.com/~lynn/2013j.html#58 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2012o.html#39 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and Amdahl history (Re: What is an N-bit machine?) Newsgroups: comp.arch Date: Fri, 29 Nov 2024 08:24:46 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
a little history drift ... the IBM communication group had corporate strategic ownership of everything that crossed the datacenter walls and was fiercely fighting off client/server and distributed computing (trying to preserve its dumb terminal paradigm). Late 80s, a senior disk engineer got a talk scheduled at a world-wide, internal, annual communication group conference supposedly on 3174 performance but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division; the disk division was seeing data fleeing the datacenter to more distributed-computing-friendly platforms, with drops in disk sales. The disk division had tried to come up with a number of solutions, but they were constantly being vetoed by the communication group.
One of the disk division executives' (partial) countermeasures was investing in distributed computing startups that would use IBM disks (and they would periodically ask us to drop in on investments to see if we could help).
It wasn't just disks but the whole mainframe business, and a couple years
later IBM had one of the largest losses in the history of US
corporations and was being reorged into the 13 "baby blues" (a take-off
on the AT&T "baby bells" and its breakup a decade earlier) in preparation
for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
(corporate hdqtrs) asking if we could help with the breakup. Before we
get started, the board brings in the former AMEX president as CEO to try
and save the company, who (somewhat) reverses the breakup.
note AMEX had been in competition with KKR for the LBO (private-equity)
take-over of RJR and KKR wins; it then runs into some difficulties and
hires away the AMEX president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
later, as IBM CEO, he uses some of the same methods used at RJR:
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
In the 80s, IBM mainframe hardware was the majority of IBM revenue, but by the turn of the century it was a few percent of revenue and dropping. Around 2010-2013, mainframe hardware was a couple percent of IBM revenue and still dropping, although the mainframe group was 25% of revenue (and 40% of profit) ... aka software and services.
... IBM was turning into a financial engineering company
IBM deliberately misclassified mainframe sales to enrich execs, lawsuit
claims. Lawsuit accuses Big Blue of cheating investors by shifting
systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business - and
moving said sales to its strategic business segments - in violation of
securities regulations.
flash-back: mid-80s, the communication group had been blocking release of mainframe TCP/IP ... but when that was reversed, it changed its tactic and said that since it had strategic ownership of everything that crossed datacenter walls, it had to be released through them; what shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then add support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel media throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
posts mentioning communication group stranglehold on mainframe
datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some recent posts mentioning IBM becoming financial engineering
company
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#108 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024e.html#141 IBM Basic Beliefs
https://www.garlic.com/~lynn/2024e.html#137 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023f.html#22 We have entered a second Gilded Age
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023c.html#13 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#105 IBM 360
https://www.garlic.com/~lynn/2022f.html#105 IBM Downfall
https://www.garlic.com/~lynn/2022d.html#83 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022c.html#91 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2022c.html#46 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022b.html#115 IBM investors staged 2021 revolt over exec pay
https://www.garlic.com/~lynn/2022b.html#52 IBM History
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM and Amdahl history (Re: What is an N-bit machine?) Newsgroups: comp.arch Date: Fri, 29 Nov 2024 11:51:07 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
one of the last nails in the Future System coffin was a study by the IBM Houston Scientific Center: if 370/195 applications were rewritten for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a factor of 30 times slowdown).
After graduating and joining IBM, one of my hobbies was enhanced production operating systems for IBM internal datacenters ... and I was asked to visit lots of locations in the US, world trade, europe, asia, etc (one of my 1st and long-time customers was the world-wide, branch office, online sales&marketing support HONE systems). I continued to work on 360/370 all through FS, even periodically ridiculing what they were doing (it seemed as if the people were so dazzled by the blue-sky technologies, they had no sense of speeds&feeds).
I had done a page-mapped filesystem for CMS and claimed I learned what not to do from TSS/360 single-level store. FS single-level store was even slower than TSS/360, and S/38 was simplified and slower yet ... aka for the S/38 low-end/entry market there was plenty of headroom between its throughput requirements and the available hardware technology, processing power, disk speed, etc. S/38 had lots of canned applications for its market and a very high-level, very simplified system and programming environment (very much RPG oriented).
Early/mid 80s, my brother was regional Apple marketing manager and when he came into town, I could be invited to business dinners ... including arguing MAC design with developers (before announce). He had stories about figuring out how to remotely dial into the S/38 running Apple to track manufacturing and delivery schedules.
other trivia: late 70s, IBM had effort to move the large variety of internal custom CISC microprocessors (s/38, low&mid range 370s, controllers, etc) to 801/risc chips (with common programming environment). First half 80s, for various reasons, those 801/RISC efforts floundered (returning to doing custom CISC) and found some of the 801/RISC chip engineers leaving IBM for other vendors.
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CMS page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: What is an N-bit machine? Newsgroups: comp.arch Date: Fri, 29 Nov 2024 17:42:25 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
In the wake of the Future System mid-70s implosion, the head of POK (high-end mainframe) also convinced corporate to kill the (virtual machine) VM370 product, shut down the development group and transfer all the people to POK to work on MVS/XA (i.e. a lot of XA/370 changes were to address various bloat&kludges in MVS/370). Come the 80s, with 3081 and MVS/XA, customers weren't converting as planned, continuing to run 3081 370-mode with MVS/370. Amdahl was having more success; it had developed microcode hypervisor/virtual machine ("multiple domain") support and was able to run both MVS/370 and MVS/XA concurrently on the same (Amdahl) machine (note Endicott did eventually obtain VM370 product responsibility for the mid-range 370s, but had to recreate a development group from scratch).
Amdahl had another advantage: initially 3081 was two-processor only and 3081D aggregate MIPS was less than the single processor Amdahl machine. IBM doubles the processor cache sizes for the 2-CPU 3081K, having about the same aggregate MIPS as the Amdahl single CPU .... however at the time, IBM MVS documents had MVS two-processor (multiprocessor overhead) support only getting 1.2-1.5 times the throughput of a single processor (aka the Amdahl single processor getting full MIPS throughput while the MVS two-processor 3081K was losing lots of throughput to multiprocessor overhead).
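Back-of-the-envelope from the numbers above (illustrative arithmetic, not measurements): call the Amdahl single-engine rate M MIPS; the 3081K's two engines then total roughly M, i.e. about M/2 per engine. With MVS 2-CPU support delivering only 1.2-1.5 times a single engine, MVS throughput on the 3081K comes out around (M/2)*1.2 to (M/2)*1.5, i.e. 0.6M to 0.75M, versus roughly 1.0M for MVS on the Amdahl uniprocessor.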
POK had done a rudimentary virtual machine software system for MVS/XA testing ... which eventually ships as VM/MA (migration aid) and then VM/SF (system facility) ... however since 370/XA had been primarily focused on compensating for MVS issues ... 3081 370/XA required a lot of microcode tweaks when running in virtual machine mode and the 3081 didn't have the microcode space ... so switching in and out of VM/MA or VM/SF virtual machine mode had a lot of overhead "paging" microcode. It was almost a decade later before IBM was able to respond to Amdahl's hypervisor/multiple-domain with 3090 LPAR & PR/SM support.
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning Amdahl, hypervisor, multiple domain, 3081,
vm/ma, vm/sf, sie, page microcode, 3090, LPAR
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe User Group SHARE Date: 30 Nov, 2024 Blog: Facebook
recent similar post in usenet comp.arch group
Long winded warning: Future System project, 1st half of 70s was completely different than 370 and was going to completely replace 370; internal politics during FS was killing off 370 efforts and claim was that the lack of new 370 stuff gave clone 370 makers (including Amdahl) their market foothold (and IBM marketing having to fall back on lots of FUD). Then when FS imploded there was mad rush to get stuff back into the 370 product pipelines.
When I had graduated and joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters and I continued to work on 360&370 all during FS, even periodically ridiculing what they were doing (and the online branch office sales&marketing support HONE systems were 1st and long-time customers). In the CP67->VM370 morph, lots of features were dropped (including multiprocessor support) or greatly simplified. I then was adding lots of stuff back in starting with VM370R2 (including re-org of the kernel for multiprocessor support, but not actual multiprocessor support itself).
The 23jun1969 unbundling announcement included starting to charge for (application) software (but IBM managed to make the case that kernel software should still be free). With the rise of clone 370 makers, there was a decision to start charging for new kernel software, eventually transitioning to charging for all kernel software in the 80s (which was then followed by the OCO-wars), and a bunch of my internal stuff was selected as the "charged-for" guinea pig (released with VM370R3).
All the US HONE datacenters had been consolidated in Palo Alto (trivia: when FACEBOOK 1st moves into Silicon Valley, it was into a new bldg built next door to the former US HONE datacenter) and had been upgraded to the largest shared-DASD, single-system-image operation with load balancing and fall-over across the complex. I then put multiprocessor support into a VM370R3-based CSC/VM, initially for US HONE so they could add a 2nd processor to each system (16 CPUs aggregate, each SMP system getting twice the throughput of a single-processor system; a combination of very short SMP overhead pathlengths and some cache affinity hacks). When IBM wanted to release multiprocessor support in VM370R4, there is a problem. The kernel charging (transition) had a requirement that hardware support was (still) free and couldn't require charged-for software as a pre-req (the multiprocessor kernel re-org pre-req was in my VM370R3-based charged-for product). Eventually the decision was made to move something like 80%-90% of the code from my "charged-for" VM370R3-based add-on product into the free VM370R4 base (w/o change in price for my VM370R4-based charged-for product).
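A minimal sketch of the general idea behind the "cache affinity hacks" mentioned above (purely illustrative; this is not the VM370/CSC-VM dispatcher, and all names are made up): when an engine looks for work, prefer runnable tasks that last ran on that engine, so their cache working set is more likely still warm.

/* illustrative cache-affinity dispatch preference (names invented) */
#include <stdio.h>

#define NTASKS 64

struct task {
    int runnable;        /* 1 if ready to run */
    int last_cpu;        /* engine this task last ran on */
    int priority;        /* lower value = more favored */
};

struct task run_list[NTASKS];

/* pick work for 'cpu': best-priority task that last ran here, falling
 * back to the best-priority task overall; absolute preference for
 * affinity is a simplification to keep the sketch short -- a real
 * dispatcher would weigh affinity against priority and fairness */
int dispatch(int cpu)
{
    int best_affine = -1, best_any = -1;

    for (int i = 0; i < NTASKS; i++) {
        if (!run_list[i].runnable)
            continue;
        if (best_any < 0 || run_list[i].priority < run_list[best_any].priority)
            best_any = i;
        if (run_list[i].last_cpu == cpu &&
            (best_affine < 0 || run_list[i].priority < run_list[best_affine].priority))
            best_affine = i;
    }

    int pick = (best_affine >= 0) ? best_affine : best_any;
    if (pick >= 0) {
        run_list[pick].runnable = 0;
        run_list[pick].last_cpu = cpu;   /* remember for next time */
    }
    return pick;                          /* -1 if nothing runnable */
}

int main(void)
{
    run_list[0] = (struct task){ .runnable = 1, .last_cpu = 0, .priority = 5 };
    run_list[1] = (struct task){ .runnable = 1, .last_cpu = 1, .priority = 5 };
    printf("cpu 0 picks task %d\n", dispatch(0));  /* task 0: warm on cpu 0 */
    printf("cpu 1 picks task %d\n", dispatch(1));  /* task 1: warm on cpu 1 */
    return 0;
}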
As part of the FS implosion and mad rush back to 370, Endicott cons me
into helping with the ECPS microcode assist ... old post with the initial
analysis for ECPS
https://www.garlic.com/~lynn/94.html#21
and another group cons me into helping with a 16-CPU SMP multiprocessor (in part because I was getting 2-CPU SMP throughput twice 1-CPU throughput) and we con the 3033 processor engineers into helping in their spare time. Everybody thought it was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU SMP support (MVS documentation had 2-CPU SMP only getting 1.2-1.5 times the throughput of a single CPU); POK doesn't ship a 16-CPU SMP until after the turn of the century. He then directs some of us to never visit POK again and the 3033 processor engineers, "heads down" and no distractions.
The head of POK also convinces corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch). They weren't planning on telling the VM370 people ahead of time about the shutdown & move to POK, to minimize those that might escape the move. The information leaked early and several managed to escape (this was in the early days of DEC VMS and the joke was that the head of POK was a major contributor to VMS). There then was a witch hunt for the leak source; fortunately for me, nobody gave up the source.
POK executives were then going around to internal datacenters (including HONE) trying to browbeat them into moving off VM/370 to MVS. Late 70s, HONE started a sequence of 3-4 year-long programs unsuccessfully trying to move to MVS; then in the early 80s somebody decided that HONE was unsuccessful in moving to MVS because they were running my enhanced CSC/VM systems ... so HONE got a mandate to move to the vanilla VM370 product (then they would be able to move to MVS).
Early 80s, the transition to "charging for" all kernel software was complete and then begins the 2nd part: software becomes object-code-only and the "OCO-wars".
Later customers weren't converting to MVS/XA as planned but Amdahl was having more success, Amdahl had a purely microcoded HYPERVISOR ("multiple domain") and was able to run MVS and MVS/XA concurrently on the same machine (helping customers to migrate). POK had done a rudimentary virtual machine system for MVS/XA testing (never intended for customer release) ... which POK eventually ships as VM/MA (migration aid) and then VM/SF (system facility) ... however since 370/XA had been primarily focused to compensate for MVS issues ... 3081 370/XA required a lot of microcode tweaks when running in virtual machine mode and 3081 didn't have the space ... so switching in and out of VM/MA or VM/SF with virtual machine mode, had a lot of overhead "paging" microcode (it was almost a decade later before IBM was able to respond to Amdahl's hypervisor/multiple-domain with 3090 LPAR & PR/SM support).
Note: 3081s originally were only to be multiprocessor and the initial 3081D aggregate MIP rate was less than the Amdahl single processor MIP rate. IBM doubles the processor cache size for the 3081K, which gives it about the same aggregate MIP rate as the Amdahl 1-CPU systems (although the Amdahl 1-CPU had higher MVS throughput, since MVS 2-CPU overhead only got 1.2-1.5 times the throughput of single-CPU systems).
The VMMA/VMSF people then had a proposal for a few-hundred-person group to enhance VMMA/VMSF with the feature, function, and performance of VM/370 (for "VM/XA"). A possible alternative: an internal Rochester sysprog had added full 370/XA support to VM370 ... but the POK group prevails.
trivia: a corporate performance specialist reviewed my VM370R3-based charge-for product and said that he wouldn't sign off on release because it didn't have any manual tuning knobs, which were the current state-of-the-art. I tried to explain Dynamic Adaptive Resource Management/Scheduling (which I had originally done for CP67 as an undergraduate in the 60s). I then created some manual tuning knobs and packaged them as "SRM" (a parody on the vast array of MVS SRM tuning knobs) with full source, documentation and formulas ... and the dynamic adaptive resource management/scheduling was packaged as "STP" ("the racer's edge" from TV advertisements). What most people never caught was that, from operations research "degrees of freedom", the STP dynamic adaptive could compensate for any SRM manual setting.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67I, CSC/VM, and SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management/scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
some other recent posts mentioning OCO-wars
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#114 REXX
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2023g.html#99 VM Mascot
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#111 Copyright Software
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#59 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#41 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#23 IBM VM370 "Resource Manager"
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe User Group SHARE Date: 30 Nov, 2024 Blog: Facebook
re:
Note "SLAC First VM 3081" button better than 3033. Folklore is 1st 3033 order was VM370 customer and it was going to be great loss of face for POK, especially since they had only recently convinced corporate to kill the VM370 product ... and gov. regs required machines ship in the sequence they were ordered. They couldn't do anything about the shipping sequence, but they managed to fiddle the van delivery making a MVS 3033 the first "install" (for publications and publicity).
After transferring to SJR in the late 70s, I got to wander around lots of IBM (and other) datacenters in silicon valley ... as well as attending the monthly BAYBUNCH meetings hosted by SLAC. Early 80s, I got permission to give presentations on how ECPS was done ... normally after the meetings we adjourn to local watering holes and the Amdahl people tell me about being in the process of developing MACROCODE and HYPERVISOR ... and grill me about more details on ECPS.
I counter with SLAC/CERN, initially 168E & then 3081E ... sufficient
370 to run fortran programs to do initial data reduction along
accelerator line.
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
And SLAC had the first webserver in the US on their VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
I also got to wander around disk bldg 14/engineering and 15/product-test across the street; they were running 7x24, prescheduled, stand-alone mainframe testing. They mentioned they had recently tried MVS but it had 15min MTBF (requiring manual re-ipl) in that environment. I offer to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity (downside was they kept calling me to spend increasing amounts of time playing disk engineer). Bldg15 would get early engineering mainframes for disk I/O test and got the first 3033 outside POK engineering; I/O testing took only a percent or two of the 3033 CPU, so we scrounge up a 3830 controller and a 3330 string and set up our own online service.
Air bearing simulation was being run on the SJR 370/195 (part of thin-film disk head design, first shipping with 3370FBA), but only getting a few turn-arounds/month. We set it up on the bldg15 3033 (only half the 195 MIPS) and it was getting several turn-arounds/day. Also ran a 3270 coax underneath the street to my office in bldg28.
They then get an engineering 4341 and a branch office finds out about it and cons me into doing a (CDC6600 fortran) benchmark on it for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
posts about getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
some recent posts mentioning 4341 benchmark
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Applications That Survive Date: 01 Dec, 2024 Blog: Facebook
Undergraduate in the 60s, I took a two-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler ... the univ was getting a 360/67 replacing 709/1401 and pending the 360/67 (for tss/360), temporarily got a 360/30 replacing the 1401 (the 30 had 1401 emulation, so it was just part of an exercise in getting to know 360; I was given a pile of hardware&software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc; within a few weeks had a 2000 card assembler program). The univ. datacenter shut down for the weekend and I got the whole place to myself, although 48hrs w/o sleep made monday classes hard. Within a year of taking the intro class, the 360/67 came in and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as a 360/65) and I continued to have my weekend dedicated 48hrs.
Univ. had 407 plug-board (admin financial) application that had been redone in 709 cobol simulating 407 plug-board, including printing 407 sense switch values at the end ... that was ported to os/360. One day the program ended with different sense switch values. They stopped all production work while looking for somebody that knew what it meant; after a couple of hrs (shutdown) they decided to run it again to see what happens.
other trivia: the 709 (tape->tape) ran student fortran in under a second; it initially ran over a minute on the 360/67 (as 360/65) os/360. Installing HASP cut the time in half. I then start redoing STAGE2 SYSGEN to place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs; student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR. Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Then as part of disaster planning, they decide to replicate Renton at the new 747 plant up in Everett ... another large number of 360/65s (somebody joked that Boeing was getting 360/65s like other companies got keypunches).
Early 80s, I'm introduced to John Boyd and would sponsor his briefings
at IBM. He had lots of stories, including about being vocal that the
electronics across the trail wouldn't work and, possibly as punishment, he
is put in command of "spook base" (about the same time I'm at
Boeing). His biographies have "spook base" being a $2.5B "windfall" for IBM
(ten times Renton); Boyd would comment that the "spook base" datacenter had
the largest air conditioned bldg in that part of the world ... refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
misc. note: Marine Corps Commandant 89/90 leverages Boyd for make-over
of Corps ... a time when IBM was desperately in need of make-over and
a couple years later IBM has one of the largest losses in the history
of US companies and was being re-orged into the 13 "baby blues"
(take-off on AT&T "baby bells" and breakup a decade earlier) preparing
for breakup.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup. longer winded account
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
Boyd posts
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
recent posts mentioning univ 709/1401, 360/67, WATFOR, Boeing CFO, and
Renton
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: We all made IBM 'Great' Date: 01 Dec, 2024 Blog: Facebook
... periodically reposted in various threads
A co-worker at the cambridge science center was responsible for the science
center CP67-based wide-area network which morphs into the IBM internal
network (larger than arpanet/internet from the beginning until
sometime mid/late 80s, about the time the communication group forced the
internal network to convert to SNA/VTAM); the technology was also used for
the corporate-sponsored Univ BITNET ("EARN" in Europe) ... account
from one of the inventors of GML (at CSC, 1969)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Edson passed Aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
Early 80s, I got the HSDT project, T1 and faster computer links (both satellite and terrestrial) and some amount of battles with the communication group. In the 60s, IBM had the 2701 telecommunication controller that supported T1 links. Mid-70s, IBM moves to SNA/VTAM and various issues seem to cap controllers at 56kbit links. Mid-80s, I reported to the same executive as the person responsible for AWP164 (aka APPN) and I periodically needle him about coming over and working on "real" networking (TCP/IP). When they went for the APPN product announcement, the communication group vetoed it. The announcement then was carefully rewritten to not imply any relationship between APPN and SNA. Sometime later they rewrite history to claim APPN is part of SNA.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Also mid-80s, the communication group was blocking the mainframe TCP/IP announce (part of fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). When that was reversed, they then claimed that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be shipped through them. What was delivered got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then do RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel media throughput using only a modest amount of 4341 processor (something like 500 times improvement in bytes moved per instruction executed).
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
Late 80s, a senior disk engineer got a talk scheduled at the annual,
world-wide, internal communication group conference supposedly on 3174
performance but opens the talk with the statement that the communication
group was going to be responsible for the demise of the disk
division. They were seeing data fleeing mainframe datacenters to more
distributed-computing-friendly platforms with a drop in disk sales. They
had come up with several solutions that were all vetoed by the
communication group. Communication group mainframe datacenter
stranglehold wasn't just disks and a couple years later IBM has one of
the largest losses in the history of US companies and was being
re-orged into the 13 "baby blues" (take-off on AT&T "baby bells"
and breakup a decade earlier) in preparation for breaking up the
company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
Communication group protecting dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
a few past posts mentioning AWP164/APPN
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022e.html#34 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013j.html#66 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012k.html#68 ESCON
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: We all made IBM 'Great' Date: 02 Dec, 2024 Blog: Facebook
re:
Learson tries (and fails) to block the bureaucrats, careerists and
MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
more recent
https://www.garlic.com/~lynn/2024g.html#23 IBM Move From Leased To Sales
and after the turn of the century (and the former AMEX president leaves for
Carlyle), IBM becomes a financial engineering company
https://www.garlic.com/~lynn/2024g.html#26 IBM Move From Leased To Sales
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some recent posts mentions becoming financial engineering company
https://www.garlic.com/~lynn/2024f.html#121 IBM Downturn and Downfall
https://www.garlic.com/~lynn/2024f.html#108 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Back When Geek Humour Was A New Concept To Me ... Newsgroups: alt.folklore.computers Date: Mon, 02 Dec 2024 14:14:31 -1000
I (and others) keynote at NASA/CMU Dependable Computing workshop
When I first transfer out to SJR in the 2nd half of the 70s, I get to wander around IBM (and other) datacenters, including disk bldg14/engineering and bldg15/product-test across the street. They were running 7x24, prescheduled, stand-alone testing and mentioned that they had recently tried MVS ... but it had 15min MTBF (in that environment), requiring re-ipl. I offer to rewrite the I/O supervisor to make it bullet proof and never fail so they could do any amount of on-demand, concurrent testing, greatly improving productivity (downside was that they wanted me to increasingly spend time playing disk engineer). I do an internal research report about "I/O integrity" and happen to mention the MVS 15min MTBF. I then get a call from the MVS group; I thought that they wanted help in improving MVS integrity ... but it seems they wanted to get me fired for (internally) disclosing their problems.
1980, IBM STL was bursting at the seams and they were moving 300 people (& 3270s, from the IMS DBMS group) to an offsite bldg with dataprocessing back to the STL datacenter ... they had tried "remote" 3270 support and found the human factors unacceptable. I get con'ed into doing "channel extender" support so they can place channel-attached 3270 controllers at the off-site bldg with no perceptible difference in the human factors offsite and in STL. The vendor then tries to get IBM to release my support, but there is a group in POK that gets it vetoed (they were playing with some serial stuff and were afraid that if it was in the market, it would make it difficult to release their stuff). The vendor then replicates my implementation.
Roll forward to 1986 and the 3090 product administrator tracks me down.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
There was an industry service that collected customer mainframe EREP (detailed error reporting) data and generated periodic summaries. The 3090 engineers had designed the I/O channels predicting there would be a maximum aggregate of 4-5 "channel errors" across all customer 3090 installations per year ... but the industry summary reported a total aggregate of 20 channel errors for 3090s in the first year.
It turned out that for certain types of channel-extender transmission errors, I had selected simulating "channel check" (CC) in order to invoke channel program retry (in error recovery) ... and the extra 15 had come from customers running the channel-extender support. I did a little research (various different kernel software) and found that simulating IFCC (interface control check) would effectively perform the same kinds of channel program retry (and got the vendor to change their implementation from "CC" to "IFCC").
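A toy sketch of the decision just described (illustrative only; not actual channel or channel-extender code, and all names are invented): a recoverable link transmission error is surfaced to the host as a simulated IFCC rather than a channel check, so the host's error recovery still retries the channel program but the event isn't counted against the channel hardware error statistics.

/* illustrative mapping of link events to simulated host status (names invented) */
enum link_event  { LINK_OK, LINK_XMIT_ERROR, LINK_DEAD };
enum unit_status { STATUS_OK, STATUS_IFCC, STATUS_CC };

enum unit_status status_for_host(enum link_event ev)
{
    switch (ev) {
    case LINK_OK:
        return STATUS_OK;
    case LINK_XMIT_ERROR:
        /* recoverable transmission error: simulate IFCC so the host
         * retries the channel program (originally simulated as CC,
         * which inflated the 3090 channel-error counts) */
        return STATUS_IFCC;
    case LINK_DEAD:
    default:
        /* unrecoverable (for the sake of the example): report CC */
        return STATUS_CC;
    }
}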
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
some posts mentioning 3090 channel check ("CC") errors
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2019c.html#16 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016f.html#5 More IBM DASD RAS discussion
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2010i.html#2 Processors stall on OLTP workloads about half the time--almost no matter what you do
https://www.garlic.com/~lynn/2009l.html#60 ISPF Counter
https://www.garlic.com/~lynn/2008q.html#33 Startio Question
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2007l.html#7 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007f.html#53 Is computer history taught now?
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Apollo Computer Newsgroups: alt.folklore.computers Date: Mon, 02 Dec 2024 14:51:22 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
then went over to head up the Somerset (i.e. AIM; apple, ibm, motorola) single-chip processor effort, which I somewhat characterize as adding the motorola 88k risc multiprocessor cache coherency ... so you could then have large scalable clusters of multiprocessor systems (rather than just clusters of single-processor systems)
https://en.wikipedia.org/wiki/AIM_alliance
https://wiki.preterhuman.net/The_Somerset_Design_Center
https://en.wikipedia.org/wiki/IBM_Power_microprocessors#PowerPC
https://en.wikipedia.org/wiki/Motorola_88000
In the early 1990s Motorola joined the AIM effort to create a new RISC
architecture based on the IBM POWER architecture. They worked a few
features of the 88000 (such as a compatible bus interface[10]) into
the new PowerPC architecture to offer their customer base some sort of
upgrade path. At that point the 88000 was dumped as soon as possible
... snip ...
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Back When Geek Humour Was A New Concept To Me ... Newsgroups: alt.folklore.computers Date: Tue, 03 Dec 2024 07:58:21 -1000
Lynn Wheeler <lynn@garlic.com> writes:
AADS chip mentioned in NASA/CMU dependable workshop talk ... more
here in Assurance panel in trusted computing track at IDF:
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
and prototype chips used in NACHA pilot (23july2001)
https://web.archive.org/web/20070706004855/http://internetcouncil.nacha.org/News/news.html
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
AADS refs
https://www.garlic.com/~lynn/x959.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe User Group SHARE Date: 03 Dec, 2024 Blog: Facebook
re:
TYMSHARE in Aug1976 started providing their CMS-based online computer
conferencing system "free" to SHARE as VMSHARE, archives here
http://vm.marist.edu/~vmshare
which has some posts about the OCO-wars
TYMSHARE was one of the places I would wander around to (and see people at the monthly BAYBUNCH meetings hosted by SLAC). I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal network and systems (including the online branch office sales&marketing HONE systems). The biggest problem I had was with lawyers who were concerned that internal IBM employees would be contaminated by exposure to unfiltered customer opinions and statements.
On one visit they demo'ed ADVENTURE, which somebody had found on the Stanford PDP10 SAIL system and ported to CMS. I got a copy (with full source) to make available on internal IBM systems ... and would distribute the source to anybody that showed they got all the points ... and shortly versions with more points started appearing, as well as a PLI version.
Tymshare
https://en.wikipedia.org/wiki/Tymshare
online, virtual machine-based, commercial services:
https://www.garlic.com/~lynn/submain.html#online
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
recent posts mentioning tymshare vmshare &/or adventure:
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#9 4th Generation Programming Language
https://www.garlic.com/~lynn/2024f.html#125 Adventure Game
https://www.garlic.com/~lynn/2024f.html#106 NSFnet
https://www.garlic.com/~lynn/2024f.html#75 Prodigy
https://www.garlic.com/~lynn/2024f.html#11 TYMSHARE, Engelbart, Ann Hardy
https://www.garlic.com/~lynn/2024f.html#4 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#60 Early Networking
https://www.garlic.com/~lynn/2024e.html#20 TYMSHARE, ADVENTURE/games
https://www.garlic.com/~lynn/2024e.html#1 TYMSHARE Dialup
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024d.html#37 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#110 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024c.html#104 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#87 Dialed in - a history of BBSing
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#47 3330, 3340, 3350, 3380
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IT Career Memory Date: 03 Dec, 2024 Blog: Facebook
Last product I did at IBM was HA/CMP
very early 90s, I was on a Far East marketing tour, riding up a Hong Kong skyscraper bank bldg elevator with local IBMers and some customers. From the back of the elevator came a question from a young, newly minted local IBMer, "are you wheeler of the 'wheeler scheduler'" (dynamic adaptive resource management/scheduling for CP67 I had originally done as an undergraduate in the 60s, and then adapted for VM370 in the mid-70s). I say "yes"; he says, "we studied you at Univ of Waterloo" (I asked whether the joke embedded in the code was mentioned).
HA/CMP had started out as HA/6000, originally for the NYTimes to move their newspaper system (ATEX) from VAXCluster to RS/6000. I then rename it HA/CMP when I start doing technical/scientific cluster scale-up with the national labs and commercial scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres). Early Jan1992, we had a meeting between the Oracle CEO and an IBM executive where Ellison is told we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we're told we can't do anything involving more than four-system clusters (we leave IBM a few months later).
dynamic adaptive resource management/scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IT Career Memory Date: 04 Dec, 2024 Blog: Facebook
re:
After graduating and joining IBM Cambridge Science Center, one of my hobbies was enhanced production operating systems for internal datacenters. Then in the morph from CP67->VM370, a lot of stuff was simplified and/or dropped and starting with VM370R2, I began adding lots of it back in (for my internal CSC/VM production systems).
The 23jun1969 unbundling announce included charging for software (although they were able to make the case that kernel software was still free). Then came the IBM Future System project, completely different from 370 and intended to completely replace 370, and internal politics was killing off 370 efforts; the claim was the lack of new IBM 370 products during FS gave the 370 clone makers their market foothold (all during FS I continued to work on 360&370 stuff, periodically ridiculing FS). When FS finally implodes, there is a mad rush to get stuff back in the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts in parallel.
Also the rise of clone 370 makers contributed to the decision to start charging for kernel software, and a bunch of my internal production VM370 stuff was selected as the guinea pig. A corporate expert reviewed it and said he wouldn't sign off because it didn't have any manual tuning knobs, which were "state-of-the-art". I tried explaining dynamic adaptive, but it fell on deaf ears.
I package up some manual tuning knobs as a joke and call it SRM ... a parody of the vast array of MVS SRM tuning parameters ... with full source code and documentation. The joke (from operations research) was that the dynamic adaptive code had greater degrees of freedom than the SRM values and so could dynamically compensate for any manual setting; I package all the dynamic adaptive code as "STP" (from the TV adverts, "the racer's edge").
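aside: a toy sketch (invented names and numbers, nothing like the actual CP67/VM370 code) of the degrees-of-freedom point above ... if the scheduler re-measures actual consumption every interval and corrects toward its own fair-share target, a single manual knob can only nudge the result and the adaptive term compensates for whatever it is set to:

# toy illustration: the adaptive deficit term dominates any fixed knob
def adaptive_priorities(users, manual_bias, measured_use):
    target = 1.0 / len(users)               # equal fair-share target per user
    prio = {}
    for u in users:
        deficit = target - measured_use[u]  # re-measured every interval
        prio[u] = deficit + 0.01 * manual_bias.get(u, 0.0)  # knob only nudges
    return prio                             # higher value dispatched sooner

users = ["a", "b", "c"]
print(adaptive_priorities(users, {"a": 5.0}, {"a": 0.6, "b": 0.3, "c": 0.1}))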
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Dynamic Adaptive Resource Management/Scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
23Jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Implicit Versus Explicit "Run" Command Newsgroups: alt.folklore.computers Date: Wed, 04 Dec 2024 16:16:16 -1000Peter Flass <peter_flass@yahoo.com> writes:
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Trump, Bankruptcy, Russians Date: 05 Dec, 2024 Blog: Facebookremember Trump went bankrupt so many times they said no US bank would touch him ... his son then said it didn't matter because they can get all the money they need from the Russians ... we don't need no stinkin US banks, we have the Russians.
Eric Trump in 2014: 'We have all the funding we need out of Russia'
https://thehill.com/homenews/news/332270-eric-trump-in-2014-we-dont-rely-on-american-banks-we-have-all-the-funding-we
ERIC TRUMP REPORTEDLY BRAGGED ABOUT ACCESS TO $100 MILLION IN RUSSIAN
MONEY. "We don't rely on American banks. We have all the funding we
need out of Russia."
https://www.vanityfair.com/news/2017/05/eric-trump-russia-investment-golf-course
How Russian Money Helped Save Trump's Business. After his financial
disasters two decades ago, no U.S. bank would touch him. Then foreign
money began flowing in.
https://foreignpolicy.com/2018/12/21/how-russian-money-helped-save-trumps-business/
Trump's oldest son said a decade ago that a lot of the family's assets
came from Russia
https://www.businessinsider.com/donald-trump-jr-said-money-pouring-in-from-russia-2018-2
Here are 18 reasons Trump could be a Russian asset
https://www.washingtonpost.com/opinions/here-are-18-reasons-why-trump-could-be-a-russian-asset/2019/01/13/45b1b250-174f-11e9-88fe-f9f77a3bcb6c_story.html
The DOJ under Barr wrongly withheld parts of a Russia probe memo, a
court rules
https://www.npr.org/2022/08/20/1118625157/doj-barr-trump-russia-investigation-memo
Should William Barr Recuse Himself From Mueller Report? Legal Experts
Say Attorney General's Ties to Russia Are Troubling
https://www.newsweek.com/so-many-conflicts-so-little-time-1396435
some posts mentioning Trump, bankruptcy and Russians
https://www.garlic.com/~lynn/2024e.html#107 How the Media Sanitizes Trump's Insanity
https://www.garlic.com/~lynn/2023e.html#101 Mobbed Up
https://www.garlic.com/~lynn/2021.html#30 Trump and Republican Party Racism
some posts mentioning attorney general Bill Barr
https://www.garlic.com/~lynn/2023b.html#71 That 80s Feeling: How to Get Serious About Bank Reform This Time and Why We Won't
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021c.html#22 Fighting to Go Home: Operation Desert Storm, 30 Years Later
https://www.garlic.com/~lynn/2019e.html#81 38 people cited for violations in Clinton email probe
https://www.garlic.com/~lynn/2019e.html#78 Retired Marine Gen. John Allen: 'There is blood on Trump's hands for abandoning our Kurdish allies'
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Remote Satellite Communication Date: 05 Dec, 2024 Blog: Facebookdrift, I got HSDT in early 80s (T1 and faster computer links, both terrestrial and satellite) ... some battles with IBM communication group ... in the 60s they had 2701 telecommunication controller that supported T1 links ... but the move to SNA/VTAM in the mid-70s and associated issues seemed to cap controllers at 56kbit/sec links. Was also working with NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers; then congress cuts the budget, some other things happened and eventually an RFP was released (in part based on some stuff we already had running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
Along the way, NSF gave UC $120M(?) for Berkeley supercomputer center (although the regents' plan had the next new bldg going to UCSD and it morphed into the San Diego Supercomputing Center) and I was asked to give some briefings for Berkeley. One of the results was that in 1983 I was also asked to do some work with the Berkeley 10M telescope people ... with trips to Lick Observatory (east of san jose) ... they were also testing CCD as replacement for film. It was planned to be built on a mountain in Hawaii and they wanted to do remote viewing from the mainland. Figured duplex 1200baud control ... but about 800kbits/sec (one-way, broadcast) satellite (for images) ... this was before the Keck foundation grant and it becomes Keck 10m.
other trivia: IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Berkeley/Keck 10M and Lick observatory posts
https://www.garlic.com/~lynn/2024d.html#19 IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#68 Berkeley 10M
https://www.garlic.com/~lynn/2024b.html#51 IBM Token-Ring
https://www.garlic.com/~lynn/2023d.html#39 IBM Santa Teresa Lab
https://www.garlic.com/~lynn/2023b.html#9 Lick and Keck Observatories
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#88 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019c.html#50 Hawaii governor gives go ahead to build giant telescope on sacred Native volcano
https://www.garlic.com/~lynn/2019.html#47 Astronomy topic drift
https://www.garlic.com/~lynn/2018f.html#71 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#76 George Lucas reveals his plan for Star Wars 7 through 9--and it was awful
https://www.garlic.com/~lynn/2018c.html#89 Earth's atmosphere just crossed another troubling climate change threshold
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2016f.html#71 Under Hawaii's Starriest Skies, a Fight Over Sacred Ground
https://www.garlic.com/~lynn/2015g.html#97 power supplies
https://www.garlic.com/~lynn/2015.html#20 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2015.html#19 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2014h.html#56 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014g.html#50 Revamped PDP-11 in Honolulu or maybe Santa Fe
https://www.garlic.com/~lynn/2014.html#76 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#8 We're About to Lose Net Neutrality -- And the Internet as We Know It
https://www.garlic.com/~lynn/2012o.html#55 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2008f.html#80 A Super-Efficient Light Bulb
https://www.garlic.com/~lynn/2007t.html#30 What do YOU call the # sign?
https://www.garlic.com/~lynn/2005l.html#9 Jack Kilby dead
https://www.garlic.com/~lynn/2004h.html#8 CCD technology
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM RS/6000 Date: 07 Dec, 2024 Blog: Facebookre:
Last product had at IBM was HA/CMP, it was originally HA/6000 for the
NYTimes to move their newspaper system (ATEX) off VAXCluster to
RS/6000. We move project from Austin to Los Gatos in Dec1989 when
doing technical/scientific cluster scale-up with national labs (LLNL,
LANL, NCAR, etc) and commercial cluster scale-up (Oracle, Sybase,
Ingres, Informix that had VAXCluster in same source base with Unix)
... and rename it from HA/6000 to HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Early Jan1992, meeting with Oracle, IBM executive tells Oracle CEO we would have 16-system clusters mid92 and 128-system clusters ye92. However late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we are told we can't work on anything with more than four systems (we leave IBM a few months later).
I turned in all my IBM stuff, including my office 6000/520. However, we continued to do stuff helping a GPD/Adstar executive and, although I had left IBM, he had my (former) office 520 delivered to my home.
A couple years later I am brought into the largest airline res system to look at the 10 (impossible) things that they can't do. Am asked to first look at "ROUTES" (transactions finding airline flt segments getting from origin to destinations) and provided with softcopy of the full OAG (all scheduled commercial flts in the world). I do a unix implementation on my home 520 ... initially getting the existing functions running about 100 times faster (claim was existing design reflects 60s technology trade-offs; starting from scratch can make totally different technology trade-offs). Then add support for all ten impossible things. Two months later go back and demo everything and, based on 520 benchmarks, a ten-system 6000/990 cluster could easily handle every ROUTES request for every airline in the world. Then the hand-wringing started ... part of the existing implementation had several hundred people massaging the OAG data ... which was all eliminated.
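aside: not the actual ROUTES code, just a minimal sketch (invented flight data and connection rule) of the underlying problem the rewritten transaction had to answer ... given OAG-style flight segments, find itineraries from origin to destination with valid connections:

from collections import defaultdict

segments = [
    # (flight, origin, dest, depart, arrive) -- minutes from midnight, made-up data
    ("AA10", "SJC", "ORD", 8*60,  13*60),
    ("AA22", "ORD", "JFK", 14*60, 17*60),
    ("UA33", "SJC", "DEN", 9*60,  12*60),
    ("UA44", "DEN", "JFK", 13*60, 18*60),
]

MIN_CONNECT = 45  # assumed minimum connection time, minutes

def routes(origin, dest, max_legs=3):
    by_origin = defaultdict(list)
    for seg in segments:
        by_origin[seg[1]].append(seg)
    found = []
    stack = [([s], s[4]) for s in by_origin[origin]]
    while stack:
        path, arrive = stack.pop()
        last = path[-1]
        if last[2] == dest:
            found.append([s[0] for s in path])
            continue
        if len(path) >= max_legs:
            continue
        for nxt in by_origin[last[2]]:
            if nxt[3] >= arrive + MIN_CONNECT:   # legal connection
                stack.append((path + [nxt], nxt[4]))
    return found

print(routes("SJC", "JFK"))   # [['UA33', 'UA44'], ['AA10', 'AA22']]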
Mainframe/6000 comparison:
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 10-system cluster: 1,260MIPS
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM RS/6000 Date: 07 Dec, 2024 Blog: Facebookre:
Also in 1992, IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" (somewhat
take-off on AT&T "baby bells" breakup a decade earlier) in preparation
for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
During the IBM troubles of the early 90s, lots of stuff was being unloaded ... real estate, projects, etc ... including lots of VLSI design tools being turned over to an industry VLSI tools vendor ... and since SUN was the standard industry platform, they all had to run on SUN machines. I got a contract to port a 50,000-statement VS/Pascal app to SUN and in retrospect it would have been easier to rewrite it in "C" than to get it running in SUN pascal (which didn't look like it had been used for anything other than educational/instructional purposes). Also, while it was easy to drop by SUN hdqtrs, it turned out that they had outsourced pascal support to an organization on the opposite side of the earth.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM RS/6000 Date: 07 Dec, 2024 Blog: Facebookre:
Early 80s, a research co-worker leaves IBM and is doing lots of contracting work in silicon valley: fortran optimization for the HSPICE vendor, work for the senior engineering VP of a large VLSI chip shop (which had large IBM mainframes), redoing the AT&T mainframe C-compiler for VM370/CMS (fixing lots of bugs and improving code optimization), and porting a lot of Berkeley chip tools to CMS. One afternoon I get an hour-long phone call full of 4-letter words. An IBM marketing rep had come through and asked him what he was doing ... and he said Ethernet support so the front-end design workstations can use the backend mainframe. The rep then tells him that he should be doing token-ring support instead, or otherwise they might not find their mainframe support as timely as in the past. The next morning the senior VP of engineering holds a press conference and announces the company is moving everything off mainframes to SUN servers.
IBM then has taskforces to investigate why silicon valley isn't using mainframes ... but can't look at issue of marketing reps.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Creative Ways To Say How Old You Are Newsgroups: alt.folklore.computers Date: Sat, 07 Dec 2024 13:38:44 -1000Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Within a year of taking intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (and continued to have my 48hrs weekend dedicated time; TSS/360 never came to production, so it ran as 360/65). Student fortran ran less than a second on 709 tape->tape, but more than a minute w/os360. I install HASP and it cuts the time in half, and then redo stage2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. It never got better than 709 until I install univ. of waterloo WATFOR.
IBM Cambridge Science Center comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs). I mostly got to play with it during my dedicated weekend time, initially rewriting lots of pathlengths for running OS/360 in virtual machine. Test stream ran 322secs on bare machine but 856secs in virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534). I then redo scheduling, dispatching, page replacement and I/O: ordered arm seek disk queuing (replacing FIFO), and for the 2301 drum (fixed head per track), replacing FIFO single 4k page transfer I/O (about 70/sec) with multiple 4k transfers per I/O optimized for max transfers/revolution (peak 270/sec).
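aside: a minimal sketch (made-up cylinder numbers, not the CP67 code) of the ordered-seek idea just mentioned ... service queued requests in cylinder order in the current direction of arm travel instead of FIFO arrival order, cutting total arm movement:

def total_seek(start, order):
    pos, travel = start, 0
    for cyl in order:
        travel += abs(cyl - pos)     # cylinders the arm moves for this request
        pos = cyl
    return travel

def ordered(start, pending):
    up = sorted(c for c in pending if c >= start)
    down = sorted((c for c in pending if c < start), reverse=True)
    return up + down                 # sweep up first, then back down

pending = [183, 37, 122, 14, 124, 65, 67]   # queued cylinder requests (invented)
start = 53
print("FIFO    travel:", total_seek(start, pending))                 # 640 cylinders
print("ordered travel:", total_seek(start, ordered(start, pending))) # 299 cylinders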
trivia: 2303 & 2301 drums were similar ... except 2301 transferred four
heads in parallel with four times the transfer rate (1/4 number of
"tracks", each four times larger)
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_2301_drum
decade later (after having graduated and joined IBM) transferred to san jose research and get to wander around ibm and non-ibm silicon valley datacenters including disk bldg 14/engineering and 15/product-test across the street. They are doing 7x24, pre-scheduled, stand-alone testing ... mentioned that they had recently tried MVS but it had 15min MTBF (in that environment) requiring manual re-ipl. I offer to rewrite I/O supervisor to make it bullet proof and never fail, allowing any amount of concurrent on-demand testing (greatly improving productivity) ... downside was they keep calling wanting me to increasingly spend my time playing disk engineer.
Note 3350 disk drives had a few fixed-head/track cylinders (3350FH)
similar to the 2305 all fixed-head/track disks. The 2305 controller
supported "multiple-exposure", eight subchannel addresses allowing eight
active channel programs ... with hardware optimizing which one gets
executed. I wanted to add multiple exposure support for 3350FH ...
allowing transfers overlapped with disk arm seek movement.
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_2305
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3350
Then the VULCAN (electronic disk for paging) group in POK got it canceled because they were concerned that it would impact their market forecast. However, they were using standard processor memory chips ... and got told that IBM was already selling every chip it could make for processor memory at a higher markup ... and VULCAN itself got canceled ... but by then it was too late to resurrect the multiple exposure feature for 3350FH.
3350 Direct Access Storage, Models A2, A2F, B2, B2F, C2F
https://bitsavers.computerhistory.org/pdf/ibm/dasd/3350/GX20-1983-0_3350_Reference_Summary_Card_197701.pdf
bldg15 would get early engineering system models and got the 1st
engineering 3033 outside the POK engineering floor. Testing only took
a percent or two of CPU, so we scrounge up a 3830 controller and
string of 3330 disk drives for a private online service. Somebody was
running air bearing simulation (part of thin-film head design,
originally used for 3370FBA, then later 3380s), but was only getting a
couple turn-arounds/month on the SJR 370/195. We set it up on the
bldg15 3033 (only half the MIPS of the 195) and could get several
turn-arounds/day.
https://en.wikipedia.org/wiki/Disk_read-and-write_head#Thin-film_heads
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3380
trivia: original 3380 had 20 track spacings between each data track. They cut the spacing between data tracks in half (doubling data capacity, cylinders, and tracks) ... and then cut spacing again ... tripling data capacity. Then the father of 801/risc wants me to help him with an idea for a "wide" disk head: parallel transfers with sets of 16 closely-spaced data tracks (with servo tracks on either side of each 16-track set). A problem was it would have had 50mbyte/sec transfer at a time when mainframes only supported 3mbyte/sec.
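aside: rough arithmetic only (the per-track rate is an assumption, consistent with the 3mbyte/sec channel figure above) ... 16 tracks read in parallel at a 3380-class per-track data rate comes out right at the 50mbyte/sec mentioned:

per_track_mbyte_sec = 3.0    # assumed 3380-class per-track data rate
tracks_in_parallel = 16      # one "wide head" 16-track set
print(tracks_in_parallel * per_track_mbyte_sec, "mbyte/sec")   # 48.0, ~the 50mbyte/sec figure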
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning wide disk-head
https://www.garlic.com/~lynn/2024g.html#3 IBM CKD DASD
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024d.html#96 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#72 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2023g.html#97 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#84 Vintage DASD
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Compute Farm and Distributed Computing Tsunami Date: 08 Dec, 2024 Blog: FacebookFirst part of 70s, IBM had "Future System", which was completely different from 370 and was to completely replace it; during FS, internal politics was killing off 370 efforts (and the lack of new 370 is credited with giving clone 370 system makers their market foothold). When FS implodes there is mad rush to get stuff back into the 370 product pipelines, including kicking off 3033&3081 efforts in parallel.
3033 started out with 168 logic remapped to 20% faster chips. The 303x (external) channel director was 158 engine with just the (158) integrated channel microcode. A 3031 was two 158 engines, one with just the 370 microcode and a 2nd with just the integrated channel microcode, a 3032 was 168-3 reworked to use the 303x channel director for external channels. A 3033 could be configured with three channel directors (for 16 channels).
Jan1979, I've access to an engineering 4341 (same room with engineering 3033) and a branch office cons me into doing a (CDC 6600 Fortran) benchmark for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The 4341 looked like an office credenza: inexpensive, smaller footprint and environmentals, higher throughput than 3031 ... early 80s, large corporations were also ordering hundreds at a time for placing out in departmental, non-datacenter areas (inside IBM, conference rooms became scarce because so many had been turned into VM/4341 rooms), sort of the start of the distributed computing tsunami. A small cluster of 4341s also had higher aggregate throughput than a 3033, smaller footprint and environmentals, and was much cheaper.
Trivia: MVS looked at the huge uptake of distributed VM/4341s and wanted part of the market; the issue was that the only mid-range, non-datacenter disks were (3370) FBA ... and MVS didn't have FBA support. Eventually there is 3375 CKD emulation (no CKD disks have been made for decades, all being simulated on industry standard fixed-block disks), but it didn't do them much good; distributed vm/4341s were measured in scores of systems per support person, while MVS operation was still scores of support and operators per system.
60s, IBM science center and commercial online spin-offs of science center, put a lot of work into CP67 (precursor to vm370) for 7x24, dark room, unattended operation. It was also in the days of rented/leased mainframes with charges based on system meter that ran whenever any CPU or channel was busy. There was also work on (terminal) channel programs that allowed things to go idle, but were immediately operational when any characters arrived. Note: everything (cpu/channels) had to be idle for at least 400ms before system meter was stopped. trivia: long after IBM had switched from leased/rental to sales, MVS still had timer event that woke up every 400ms.
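aside: a toy model (the millisecond-tick granularity is an assumption) of the system-meter rule just described ... the meter keeps running while any CPU or channel is busy and only stops after 400ms of complete idle, so a timer that wakes up every 400ms keeps it running forever:

IDLE_THRESHOLD_MS = 400

def metered_ms(busy, horizon):
    # busy(t) -> True if any CPU or channel is busy at millisecond t
    metered, idle_run = 0, 0
    for t in range(horizon):
        if busy(t):
            idle_run = 0
            metered += 1
        else:
            idle_run += 1
            if idle_run <= IDLE_THRESHOLD_MS:
                metered += 1     # meter keeps running until 400ms of idle
    return metered

quiet = lambda t: False                  # truly idle system
wakeup_400ms = lambda t: t % 400 == 0    # periodic 400ms timer event
print(metered_ms(quiet, 10_000))         # 400   -- meter stops after 400ms of idle
print(metered_ms(wakeup_400ms, 10_000))  # 10000 -- meter never stops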
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
posts mentioning compute farm and distributed 4341s
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2014f.html#40 IBM 360/370 hardware unearthed
https://www.garlic.com/~lynn/2013j.html#86 IBM unveils new "mainframe for the rest of us"
https://www.garlic.com/~lynn/2013i.html#15 Should we, as an industry, STOP using the word Mainframe and find (and start using) something more up-to-date
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Compute Farm and Distributed Computing Tsunami Date: 08 Dec, 2024 Blog: Facebookre:
Les Comeau ... from CSC ... they modified 360/40 with virtual memory
for CP40/CMS
https://www.garlic.com/~lynn/cp40seas1982.txt
which morphs into CP67/CMS when 360/67s become available ... then VM370/CMS when they decide to add virtual memory to all 370s. Les had transferred to Gburg (living in a sail boat in Annapolis harbor) and was responsible for one of the FS sections (and he brings on my *future* wife) ... she claims that Les told her there was only one other person in FS that might actually know what they were doing. I had recently graduated and joined IBM CSC and continued to work on 360&370 all during FS and periodically would ridicule FS (a favorite was an analogy with a long-playing cult film down in central sq).
Later Evans asks her to review 8100/DPPX; not long after, 8100 is decommitted.
note: one of the last nails in FS coffin is analysis by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available technology, it would have throughput of 370/145 (about 30 times slowdown).
I would claim that after FS implodes and the mad rush to get stuff back into the 370 product pipelines ... ASDD disappears because they had to throw nearly the whole organization into the development breach.
trivia: one of my hobbies after joining IBM CSC was enhanced production operating systems for internal datacenters (and the branch office online sales&marketing support HONE systems were long time customers). In the morph of CP67->VM370 they simplify and/or drop lots of stuff, including SMP, tightly-coupled, multiprocessor support. I start adding stuff back into a VM370R2 base for CSC/VM, including kernel re-org for SMP operation, but not tightly-coupled itself. Then, on a VM370R3 base, I do multiprocessor support ... initially for HONE so they can add a 2nd processor to each system (and it was getting twice the throughput of a single processor, aka highly optimized pathlengths and some cache affinity hacks).
When FS implodes, one of the efforts was a 16-CPU SMP and I got roped into helping, along with the 3033 processor engineers working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") has (effective) 16-CPU support (MVS docs at the time had 2-cpu MVS only getting 1.2-1.5 times the throughput of a single cpu). Head of POK then invites some of us to never visit again (POK doesn't ship a 16-cpu SMP until after the turn of the century).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: FCS, ESCON, FICON Date: 09 Dec, 2024 Blog: Facebook1980, STL (since renamed silicon valley lab) was bursting at the seams and they were transferring 300 people/3270s from the IMS DBMS group to an offsite bldg, with dataprocessing support back to the STL datacenter. They had tried remote 3270 but found the human factors unacceptable. I then get con'ed into doing channel-extender support allowing placing channel-attached 3270 controllers at the offsite bldg ... with no perceptible difference in human factors between offsite and in STL. It turned out that channel-extender also improved system throughput by 10-15% (for the mainframe used by the offsite group). STL had spread 3270 controllers across the same channels as DASD and the channel-extender reduced channel busy for the same amount of 3270 traffic (improving DASD throughput) ... STL then considered using channel-extender for all their mainframes. Then there was an attempt to get my support released, but there was a group in POK playing with some serial stuff and they were afraid that if it was in the market, it would make it harder to get their stuff released.
1988, branch office asks me if I could help LLNL (national lab) get
standardized some serial stuff they were playing with, which quickly
becomes fibre-channel standard ("FCS", including some stuff I had done
in 1980, initially 1gbit, full-duplex, 200mbyte/sec aggregate).
https://en.wikipedia.org/wiki/Fibre_Channel
Then the POK people get their stuff released (decade after vetoing channel-extender) in the 90s with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). One of the Austin RS/6000 engineers had worked on SLA (sort of tweaked ESCON about 10% faster and full-duplex) and then wanted to do 800mbit SLA. We convince him to join the FCS standards group instead.
Note mid-80s, father of 801/risc asks me to help with disk "wide-head". Original 3380 had 20 track spacings between data tracks, that was cut in half for double the capacity/cylinders/tracks ... then cut again for triple capacity/cylinders/tracks. Wide-head would read/write 16 closely spaced data tracks in parallel (with servo tracks on both sides of data 16-track sets) for 50mbytes/sec ... problem was mainframe still 3mbytes/sec and ESCON only improved to 17mbytes/sec.
Then some POK engineers become involved with FCS and define
heavy-weight protocol that significantly reduced throughput which is
eventually released as FICON
https://en.wikipedia.org/wiki/FICON
The last product did at IBM was HA/CMP in Los Gatos lab, originally
started out HA/6000 for the NYTimes to move their newspaper system
(ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when start
doing technical/scientific cluster scale-up with national labs (LLNL,
LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors
(Oracle, Sybase, Ingres, Informix that have VAXCluster support in
same source base with unix). Also the IBM S/88 product administrator was
taking us around to their customers and got me to do a section in the
corporate continuous availability strategy document (but it got pulled
when both Rochester and POK complained).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
1990 there are 64-port non-blocking FCS switches and we have RS/6000
FCS adapters (for high-end HA/CMP) and 9333 for mid-range. I wanted
9333 to turn into interoperable fractional-speed FCS, but it turns into
SSA instead.
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
Early Jan1992 meeting with Oracle CEO, IBM Exec tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. However by late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on clusters with more than four systems (we leave IBM a few months later).
1993 mainframe/RS6000 (industry benchmark is no. program iterations
compared to reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS; 16CPU: 2BIPS; 128CPU: 16BIPS
The most recent public FICON benchmark I could find was z196 "Peak
I/O" getting 2M IOPS using 104 FICON. At the same time there were FCS
announced for E5-2600 server blades claiming over a million IOPS (two
such FCS have higher throughput than 104 FICON). Also note IBM docs
recommend that SAPs (system assist processors that do actual I/O) be
held to 70% CPU ... or about 1.5M IOPS. Other trivia: there have been
no IBM CKD disks manufactured for decades, all CKD simulated on
industry standard fixed-block.
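aside: the arithmetic implied by the figures above (numbers taken from that paragraph, nothing else assumed):

z196_peak_iops = 2_000_000        # z196 "Peak I/O" benchmark, using 104 FICON
ficon_count    = 104
sap_cpu_cap    = 0.70             # recommended SAP utilization cap
print(round(z196_peak_iops / ficon_count))   # ~19,231 IOPS per FICON
print(round(z196_peak_iops * sap_cpu_cap))   # 1,400,000 -- the "about 1.5M IOPS" cap
fcs_iops = 1_000_000              # "over a million IOPS" claimed per E5-2600 blade FCS
print(z196_peak_iops / fcs_iops)  # 2.0 -- two such FCS (each claiming *over* 1M) cover the 104-FICON peak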
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: FCS, ESCON, FICON Date: 09 Dec, 2024 Blog: Facebookre:
2nd half of 70s transferred to San Jose Research and got to wander
around IBM (and non-IBM) datacenters in silicon valley, including disk
bldgs 14/engineering and 15/product-test across the street. They were
doing 7x24, prescheduled, stand-alone testing and mentioned that they
had recently tried MVS, but it had 15min MTBF (in that environment)
requiring manual re-ipl. I offer to rewrite I/O supervisor to make it
bullet proof and never fail so they can do any amount of concurrent,
on-demand testing (greatly improving productivity) ... downside was
they wanted me to increasingly spend time playing disk
engineer. Product test got very early engineering processors for disk
I/O testing ... getting 1st engineering 3033 outside POK processor
development. Testing took only a percent or two of CPU so scrounge up
3830 controller and 3330 string setting up private online service
(including running 3270 cable under the street to my
office). Air-bearing simulation ... part of thin-film floating head
design ... was getting a few turn-arounds a month on SJR 370/195, so
we set it up on the bldg15 3033 (only half MIPs of 195 but) able to
get several turn-arounds/day ... first used in 3370 FBA
https://en.wikipedia.org/wiki/Thin-film_head#Thin-film_heads
posts mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
some posts mentioning air bearing simulation
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#38 IBM Mainframe User Group SHARE
https://www.garlic.com/~lynn/2024f.html#5 IBM (Empty) Suits
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#58 IBM 3031, 3032, 3033
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#9 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#74 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2012o.html#70 bubble memory
https://www.garlic.com/~lynn/2012o.html#59 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2011d.html#63 The first personal computer (PC)
https://www.garlic.com/~lynn/2009k.html#75 Disksize history question
https://www.garlic.com/~lynn/2009c.html#9 Assembler Question
https://www.garlic.com/~lynn/2007e.html#43 FBA rant
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Cloud Megadatacenters Date: 09 Dec, 2024 Blog: Facebook... large cloud operators have been saying (since around the turn of the century) that they assemble their own (server) systems for 1/3rd the cost of brand name servers. A few years ago, there was press that server component businesses were shipping half their product directly to large cloud operators ... and IBM (also) unloads its server business. A large cloud operator can have scores of large megadatacenters around the world, each megadatacenter with half a million or more server blades.
60s, there was lot of CP67 (precursor to vm370) work by the IBM Cambridge Science Center and the CSC spin-off, online, commercial CP67 operations for on-demand, 7x24, dark-room, unattended operation. This was back in the days when IBM rented/leased machines and charges were based on the system meter ... which ran whenever any CPU or channel was busy. There was also work on terminal channel programs that would go idle, but "instant on" when characters start arriving. Everything had to be idle for 400ms before the system meter would stop. Long after the switch to sales, MVS still had a timer event that woke up every 400ms, so the system meter would never stop.
Cloud operations so radically reduced server system cost that they could extensively over-provision with large numbers of systems for "on-demand" ... and electricity became a major megadatacenter expense. The equivalent of our 60s on-demand work becomes electricity use dropping to zero when idle, but instantly "on" when needed. Also lots of pressure on the server component makers to significantly reduce component electricity use (in addition to dropping to zero when idle, but instantly on). When the latest generation of components significantly improves electricity efficiency, it can justify a complete swap-out/swap-in of half a million blade servers. Large cloud operations have also used the threat of switching to ARM chips (electrical efficiency designed for battery operation) as leverage over Intel and AMD.
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Creative Ways To Say How Old You Are Newsgroups: alt.folklore.computers Date: Mon, 09 Dec 2024 21:04:15 -1000Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
trivia: there was 3344 ... which was 3350 hardware emulating multiple 3340 drives.
3370 was fixed-block architecture and first floating, thin-film heads
https://en.wikipedia.org/wiki/Thin-film_head#Thin-film_heads
large corporates were ordering hundreds of 4341s at a time for placing out in departmental areas (inside IBM, conference rooms were becoming scarce because so many were being converted into vm/4341 rooms) sort of the leading edge of the coming distributed computing tsunami.
MVS was looking at the explosion in vm/4341 distributed computing market ... but the only mid-range, non-datacenter drive was 3370FBA and MVS didn't have FBA support. Eventually they came out with 3375 w/CKD emulation ... but it didn't do MVS much good, customers were looking at scores of VM/4341 systems per support person ... and MVS was scores of support & operators per MVS system (note no CKD drives have been manufactured for decades, all being simulated on industry standard fixed block disks).
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
recent posts mentioning distributed computing tsunami
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024f.html#95 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#64 Distributed Computing VM4341/FBA3370
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#46 Netscape
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#29 Wondering Why DEC Is The Most Popular
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#51 VAX MIPS whatever they were, indirection in old architectures
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: FCS, ESCON, FICON Date: 09 Dec, 2024 Blog: Facebookre:
I got HSDT in the early 80s, T1 and faster computer links (both terrestrial and satellite) ... some amount of battles with the communication group. Note in the 60s, IBM had the 2701 telecommunication controller that supported T1. However in the mid70s, the change to SNA/VTAM and associated issues appeared to cap controllers at 56kbit/sec links.
Was working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputing Centers. An NSF $120M(?) grant to UC was supposedly for a Berkeley supercomputer center (but the regents' bldg plan had UCSD getting the next new bldg and it becomes the San Diego Supercomputing Center) and I was asked to brief Berkeley. In 1983, that seemed to get me asked to help with the Berkeley 10M and visits to Lick Observatory (east of san jose); they were also working on moving to CCD from film. Their plan was to build on top of a mountain in Hawaii and enable remote observing from the mainland (this was before the Keck Foundation grant and it becomes the Keck 10M/observatory)
Then congress cuts the budget, some other things happen and finally a
RFP is released (in part based on what we already had running). From
28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed, folklore is that 5of6 members of corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.
Mid-80s, the communication group was also fighting the release of mainframe TCP/IP. When that was overruled, they changed their tactic and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 CPU. I then modify it with RFC1044 support and in some tuning tests at Cray Research between a Cray and a 4341, got sustained channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).
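aside: a back-of-envelope check of the "bytes moved per instruction" comparison; the MIPS ratings and channel rate below are assumptions for illustration only, not numbers from the post:

base_bytes_sec  = 44_000            # 44kbytes/sec, nearly a whole 3090 CPU
base_ips        = 15e6              # assumed 3090 processor, ~15 MIPS
tuned_bytes_sec = 1_000_000         # assumed sustained 4341 channel rate, ~1mbyte/sec
tuned_ips_used  = 0.5 * 1.2e6       # assumed "modest amount" of a ~1.2 MIPS 4341

base  = base_bytes_sec / base_ips           # bytes moved per instruction, before
tuned = tuned_bytes_sec / tuned_ips_used    # bytes moved per instruction, after
print(round(tuned / base))    # ~570 with these assumed figures, same order as the ~500x above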
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
posts mentioning berkeley/keck 10M and lick observatory
https://www.garlic.com/~lynn/2024g.html#50 Remote Satellite Communication
https://www.garlic.com/~lynn/2024d.html#19 IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#68 Berkeley 10M
https://www.garlic.com/~lynn/2024b.html#51 IBM Token-Ring
https://www.garlic.com/~lynn/2023d.html#39 IBM Santa Teresa Lab
https://www.garlic.com/~lynn/2023b.html#9 Lick and Keck Observatories
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#88 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019c.html#50 Hawaii governor gives go ahead to build giant telescope on sacred Native volcano
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2016f.html#71 Under Hawaii's Starriest Skies, a Fight Over Sacred Ground
https://www.garlic.com/~lynn/2015g.html#97 power supplies
https://www.garlic.com/~lynn/2015.html#19 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2014h.html#56 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014g.html#50 Revamped PDP-11 in Honolulu or maybe Santa Fe
https://www.garlic.com/~lynn/2014.html#76 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#8 We're About to Lose Net Neutrality -- And the Internet as We Know It
https://www.garlic.com/~lynn/2012o.html#55 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2008f.html#80 A Super-Efficient Light Bulb
https://www.garlic.com/~lynn/2005l.html#9 Jack Kilby dead
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old You Are) Newsgroups: alt.folklore.computers Date: Wed, 11 Dec 2024 13:44:30 -1000Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
I had taken two credit hr intro to fortran/computers (univ. had 709 ibsys tape->tape with 1401 unit record front-end) and at the end of semester was hired to rewrite 1401 MPIO (709 unit record front end) for 360/30 os/360 (360/30 with 1401 emulation replaced 1401 temporarily pending arrival of 360/67 for tss/360, as way of getting some 360 experience).
Univ. shutdown datacenter on weekends and I would have the whole place to myself (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware&software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and within a few weeks had a 2000 card assembler program (place BPS loader on front with os/360 assembled txt deck behind) that could do concurrent card->tape and tape->printer/punch. I then modified it with an assembler option that either generated the stand-alone monitor or an OS/360 (system services) version. The stand-alone version took 30mins to assemble; the OS/360 version took an hour (each DCB macro took 5-6mins).
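aside: a hypothetical sketch (invented names, nothing like the real 2000-card assembler program) of the single-source/two-targets idea ... one copy loop, with a build-time option standing in for the assembler option that selected either stand-alone device handling or OS-provided services:

OS_HOSTED = False   # stands in for the assembler option described above

def standalone_read(deck):   # would be own device driver + interrupt handling
    return deck.pop(0) if deck else None

def os_hosted_read(deck):    # would call OS/360 access-method (DCB) services
    return deck.pop(0) if deck else None

read_card = os_hosted_read if OS_HOSTED else standalone_read

def card_to_tape(deck):
    tape = []
    while (card := read_card(deck)) is not None:
        tape.append(card)    # "write" each 80-byte card image to tape
    return tape

print(card_to_tape(["CARD 1".ljust(80), "CARD 2".ljust(80)]))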
I would compare (card->)tapes generated by the (360/30) emulated 1401 MPIO and by my rewritten 360 MPIO. The 709 output tape could have intermixed printer and punch output, and punch output could be intermixed BCD and "binary" (two 6-bit "bytes" in one 12-row column).
Within a year of taking the intro class, the 360/67 arrives and I was hired fulltime responsible for OS/360 (tss/360 didn't come to production so it ran as 360/65 with os/360). Student Fortran had run in under a second on the 709, but initially with os/360 ran over a minute. I install HASP and cut the time in half. I then start redoing OS/360 STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs ... never got better than 709 until I install univ. of waterloo WATFOR.
Then CSC came out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my weekend dedicated time. They were still assembling CP67 source on OS/360 ... collecting assembler output TXT decks in tray, place BPS loader on front and IPL tray of cards. The loaded program then would write the memory image to specified disk ... which then could IPL the disk. Within a few months after that, CSC had moved all source&assembly maintenance to CMS.
I work first on rewriting a lot of CP67 pathlengths for running OS/360 in virtual machine. Test OS/360 stream ran 322secs stand alone and initially 856secs in virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534) ... and was asked to attend the CP67 "official" announcement at the spring '68 SHARE meeting in Houston. CSC was then having a one week class in June; I arrive Sunday night and am asked to teach the CP67 class ... the people that were supposed to teach it had given notice that Friday, leaving for NCSS (online commercial CP67 spin-off of the science center).
IBM Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial online (virtual machine) services posts
https://www.garlic.com/~lynn/submain.html#online
some recent posts mentioning 709/1401 mpio, os/360, fortran, watfor,
csc, and cp67/cms
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old You Are) Newsgroups: alt.folklore.computers Date: Thu, 12 Dec 2024 07:44:42 -1000Lars Poulsen <lars@cleo.beagle-ears.com> writes:
SHARE Original/Founding Knights of VM
http://mvmua.org/knights.html
IBM Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
IBM System Mag article (some of their history details slightly garbled)
about history postings
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
System mag. sent a photographer to my home for a photo shoot for the
magazine article; the online & wayback machine copies didn't include
the photos. Referenced
https://www.garlic.com/~lynn/
besides archived posts ... several contained email ... references
https://www.garlic.com/~lynn/lhwemail.html
In early 80s, I was introduced to John Boyd and used to sponsor his
briefings at IBM, I felt quite a bit of affinity to him. Recent tome
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
The Commandant of the Marine Corps 89/90 leveraged Boyd for a make-over of the corps, at a time when IBM was desperately in need of a make-over. Boyd passed in 97; the USAF had pretty much disowned him, but the Marines were at Arlington and his effects went to Quantico ... there continued to be Boyd conferences at Quantico MCU.
trivia: HASP/MVTR18, I modified HASP, removed 2780 support to reduce
fixed storage requirement and put in terminal support and editor with
CMS-syntax (rewritten from scratch since HASP environment totally
different from CMS) for CRJE-like environment. old (archived) post
mentions more ... I had been asked to track down decision to add
virtual memory to all 370s, pieces of the email exchange ... wandering
into HASP & SPOOL
https://www.garlic.com/~lynn/2011d.html#73
late 70s & early 80s, I had been blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from the beginning until sometime mid/late 80s, about the time the IBM communication group forced the internal network to be converted to SNA/VTAM). It really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 participated, but claims were that upwards of 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). One of the outcomes was that a researcher was paid to sit in the back of my office for nine months taking notes on how I communicated, face-to-face, telephone, etc, and got copies of all my incoming and outgoing email and logs of all instant messages. The material was used for conference papers, talks, books and a Stanford PhD (joint between language and computer AI; Winograd was advisor on the computer side).
Early history was my father had died when I was in Junior High and, being the oldest, I got jobs after school and all day sat&sun. In the winter I chopped wood after dinner and got up early to restart the fire. In high school I worked for the local hardware store and would get loaned out to local contractors: concrete (foundations, driveways, sidewalks), framing, joists (floor, ceiling, roof), roofing, electrical, plumbing, wallboard, etc ... saved enough for freshman year in college (along with washing dishes in the dorm). The following summer I was foreman on a construction job, three nine-person crews ... it was a really wet spring and the project was way behind schedule and shortly I was doing 12hr days and 7 day weeks (more $$$/week than I made until long after I had graduated). Sophomore year I took the computer intro and started working with computers (lot more fun than construction jobs).
HASP/ASP/JES, NJE/NJI, posts
https://www.garlic.com/~lynn/submain.html#hasp
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: John Boyd and Deming Date: 12 Dec, 2024 Blog: LinkedinJohn Boyd and Deming
1990, the US auto industry had a C4 taskforce to look at completely remaking themselves, and because they were planning on heavily leveraging technology, they wanted reps from IT companies to participate; I was one of the people asked.
60s, low-end Japanese autos were taking an increasing part of the US market and the US industry asked for help from congress ... which placed import quota limits ... increasing US profits, with some assumption the US industry would use the profit increase to completely remake themselves ... but they just pocketed the money (in the 70s there was some call for a 100% unearned profit tax). The Japanese determined that at the quota limits they could sell that many high-end cars ... but had to completely redo their auto designs. The auto industry norm was taking 7-8yrs to go from initial design to rolling off the line (frequently two efforts in parallel offset 3-4yrs); with the need to completely change their product, the Japanese cut the elapsed time from 7-8yrs to 3-4yrs.
The combination (less import competition and competition shifted from low-end to high-end) also allowed US makers to nearly double car prices. However, since consumer income didn't double, they had to go from 36month loans to 60-72month loans. Then financial institutions were limiting auto loans to the warranty period ... so US automakers had to increase the warranty period to match the loan period ... and found they were being killed by warranty costs (they needed a trade-off, improving quality to the point that they could still make loads of money off doubling car prices while not being killed by warranty costs).
Roll forward to 1990: US auto makers had spun off their parts businesses as independent companies, and 7-8yr-old designs were finding some of the parts had changed and no longer "fit", needing delays to redo designs for currently available parts. Also the Japanese makers were cutting product development elapsed time in half again to 18-24months, enabling them to more rapidly respond to changes in technology and customer preferences.
How Toyota Turns Workers Into Problem Solvers
http://hbswk.hbs.edu/item/how-toyota-turns-workers-into-problem-solvers
To paraphrase one of our contacts, he said, "It's not that we don't
want to tell you what TPS is, it's that we can't. We don't have
adequate words for it. But, we can show you what TPS is."
We've observed that Toyota, its best suppliers, and other companies
that have learned well from Toyota can confidently distribute a
tremendous amount of responsibility to the people who actually do the
work, from the most senior, experienced member of the organization to
the most junior. This is accomplished because of the tremendous
emphasis on teaching everyone how to be a skillful problem solver.
... snip ...
TPS
https://en.wikipedia.org/wiki/Toyota_Production_System
Manufacturing Innovation Lessons
http://sloanreview.mit.edu/article/manufacturing-innovation-lessons-from-the-japanese-auto-industry/
C4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
past posts mentioning "Toyota Turns Workers Into Problem Solvers"
https://www.garlic.com/~lynn/2023e.html#46 Boyd OODA at Linkedin
https://www.garlic.com/~lynn/2023e.html#38 Boyd OODA-loop
https://www.garlic.com/~lynn/2023c.html#54 US Auto Industry
https://www.garlic.com/~lynn/2023.html#60 Boyd & IBM "Wild Duck" Discussion
https://www.garlic.com/~lynn/2022h.html#92 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#109 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#85 Destruction Of The Middle Class
https://www.garlic.com/~lynn/2022.html#117 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2021k.html#26 Twelve O'clock High at IBM Training
https://www.garlic.com/~lynn/2021h.html#26 Whatever Happened to Six Sigma?
https://www.garlic.com/~lynn/2020.html#12 Boyd: The Fighter Pilot Who Loathed Lean?
https://www.garlic.com/~lynn/2019e.html#7 ISO9000, Six Sigma
https://www.garlic.com/~lynn/2019c.html#30 Coup D'Oeil: Strategic Intuition in Army Planning
https://www.garlic.com/~lynn/2019c.html#20 The Book of Five Rings
https://www.garlic.com/~lynn/2019.html#55 Bureaucracy and Agile
https://www.garlic.com/~lynn/2019.html#5 One Giant Step for a Chess-Playing Machine
https://www.garlic.com/~lynn/2018f.html#65 Why General Motors Is Cutting Over 14,000 Workers
https://www.garlic.com/~lynn/2018e.html#60 Excess Management Is Costing the U.S. $3 Trillion Per Year
https://www.garlic.com/~lynn/2018e.html#50 OT: Trump
https://www.garlic.com/~lynn/2018e.html#25 Why You Should Trust Your Gut, According to the University of Cambridge
https://www.garlic.com/~lynn/2018d.html#82 Quality Efforts
https://www.garlic.com/~lynn/2018d.html#44 Mission Command Is Swarm Intelligence
https://www.garlic.com/~lynn/2018d.html#8 How to become an 'elastic thinker' and problem solver
https://www.garlic.com/~lynn/2018c.html#45 Counterinsurgency Lessons from Malaya and Vietnam: Learning to Eat Soup with a Knife
https://www.garlic.com/~lynn/2017k.html#24 The Ultimate Guide to the OODA-Loop
https://www.garlic.com/~lynn/2017i.html#32 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017i.html#2 Mission Command: The Who, What, Where, When and Why An Anthology
https://www.garlic.com/~lynn/2017h.html#8 Trump is taking the wrong approach to China on tech, says ex-Reagan official who helped beat Soviets
https://www.garlic.com/~lynn/2017g.html#100 Why CEO pay structures harm companies
https://www.garlic.com/~lynn/2017g.html#93 The U.S. Military Believes People Have a Sixth Sense
https://www.garlic.com/~lynn/2017g.html#59 Deconstructing the "Warrior Caste:" The Beliefs and Backgrounds of Senior Military Elites
https://www.garlic.com/~lynn/2017g.html#54 Boyd's OODA-loop
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Where did CKD disks come from? Newsgroups: comp.arch Date: Thu, 12 Dec 2024 15:39:15 -1000John Levine <johnl@taugh.com> writes:
program libraries were typically partitioned datasets (PDS) with the directory at the front. A channel program would do a multi-track search of the PDS directory looking for the directory entry for a program, read it ... then do a channel program to move the arm to that position and read the program. For each search compare, the search CCW would refetch the compare argument from processor memory ... for the duration of the search, the device, controller and channel would be locked.
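As a rough sketch (pseudo-CCWs written as Python tuples, not real channel-command encodings; the operation names are just illustrative), the two channel programs look something like:

# schematic only; not from any actual source
directory_search = [
    ("SEEK",             "cyl/head where the PDS directory starts"),
    ("SEARCH KEY EQUAL", "member name (argument re-fetched from processor memory each compare)"),
    ("TIC",              "branch back to the SEARCH until a key matches (multi-track: spills across tracks)"),
    ("READ DATA",        "matching directory entry, giving the member's TTR"),
]

member_load = [
    ("SEEK",             "cyl/head from the TTR just read"),
    ("SEARCH ID EQUAL",  "record id"),
    ("TIC",              "branch back to the SEARCH"),
    ("READ DATA",        "the program itself"),
]
# the device, controller and channel all stay busy/locked for the whole search loop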
the architecture became heavily ingrained into the batch operating systems. around 1980, I offered to provide FBA support to them. I was told that even if I provided it fully integrated and tested, I still needed a couple hundred million in incremental sales to cover education and documentation for the changes ... and since they were already selling every disk made ... it would just change sales from CKD to FBA (with no incremental sales) ... also I couldn't use (FBA) life-time savings in the business case.
late 70s, I had been brought into a large datacenter for a major national grocery store chain ... it had multiple systems in a loosely-coupled shared DASD configuration (stores grouped into multiple geographical regions mapped to different systems). They were having horrendous performance problems and most of the corporate specialists had been brought through before I was called.
They had a classroom with tables covered with large paper piles of performance activity data from all the systems. After about 30mins I noticed that during the worst performance periods, the aggregate I/O to one disk (summed across all the systems) peaked around 6-7/sec (3330, 19tracks/cyl, RPS) and asked what it was.
It turns out it was a shared disk (for all systems) that contained all the store applications ... and it was basically capped at doing two program loads/sec for the hundreds of stores across the country. It had a 3cyl PDS directory and would average a 1.5 cyl multi-track search for each application ... i.e. a full cyl. multi-track search of 19 tracks at 60revs/sec (.317secs) followed by 9.5 tracks (.16secs) ... during which time the disk was locked out for all systems, as were the controller and all drives on that controller. Once the PDS entry was found&read, it could use the information to move the arm for reading/loading the program.
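A quick back-of-the-envelope check of those numbers (a minimal sketch; the 19 tracks/cyl and 60 revs/sec figures are the 3330 numbers quoted above, and the 0.03 sec seek/read allowance is just an illustrative guess, not from the text):

TRACKS_PER_CYL = 19        # 3330
REVS_PER_SEC   = 60
REV_TIME       = 1.0 / REVS_PER_SEC

# average multi-track search: one full cylinder plus half a cylinder
full_cyl_search  = TRACKS_PER_CYL * REV_TIME           # ~0.317 sec
half_cyl_search  = (TRACKS_PER_CYL / 2) * REV_TIME     # ~0.158 sec
directory_search = full_cyl_search + half_cyl_search   # ~0.475 sec

# rough allowance for the seek + read of the member itself
# (hypothetical 0.03 sec, just to show it is small next to the search)
program_load = directory_search + 0.03

print(f"avg directory search: {directory_search:.3f} sec")
print(f"program loads/sec:    {1.0 / program_load:.1f}")
# -> roughly two program loads/sec, for hundreds of stores,
#    with device, controller and channel locked the whole time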
Somewhat ingrained in professional minds was that a typical 3330 would do 30-40 I/Os per sec ... individual system activity reports only showed the I/O activity counts for each specific system (with no data about aggregate disk I/Os across all systems or elapsed avg queued/waiting time).
I also was pontificating that between the 60s and early 80s, disk relative system throughput had declined by an order of magnitude (disks got 3-5 times faster while systems got 40-50 times faster). A disk division executive assigned the division performance group to refute the claim ... after a couple weeks they came back and said basically I was slightly understating the problem (this got respun for customer presentations on how to configure disks for improved system throughput).
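The arithmetic behind the order-of-magnitude claim is simple (sketch only, using mid-points of the factors quoted above):

disk_speedup   = 4     # disks got ~3-5 times faster
system_speedup = 45    # systems got ~40-50 times faster
relative = disk_speedup / system_speedup
print(f"disk throughput relative to system throughput: {relative:.2f} (~1/10)")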
posts about getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning DASD, CKD, FBA, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
recent posts mentioning 16Aug1984 SHARE63, B874 on dasd configuration
https://www.garlic.com/~lynn/2024f.html#9 Emulating vintage computers
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Where did CKD disks come from? Newsgroups: comp.arch Date: Thu, 12 Dec 2024 21:55:58 -1000John Levine <johnl@taugh.com> writes:
not who but why:
https://retrocomputing.stackexchange.com/questions/24074/how-did-reserve-tracks-work-on-early-hard-disks
Count Key Data was IBM's answer to unify three very different methods of
data storage (Disk, Drum, Cells) into a single interface, while at the
same time offloading basic tasks to hardware.
... snip ...
discussion "spare tracks" & "standard configuration"
https://archive.computerhistory.org/resources/access/text/2014/07/102739924-05-03-acc.pdf
Jack: That was the first time in a sense that you had an index, which is now
standard configuration.
Al: Right.
Jack: Standard configuration. The other thing that we did, that we
subsequently cursed a lot, was the idea of variable record length and count
key data as the format. I don't know if IBM's gotten rid of that yet.
... snip ...
... seems to imply collective "we" team/group
and more discussion of 1301&1311 to CKD 2311 (disk)
https://archive.computerhistory.org/resources/text/Oral_History/IBM_1311_2311/IBM_1311_2311.oral_history.2005.102657931.pdf
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: HTTP Error 522 and OS/360 Error 222 Date: 13 Dec, 2024 Blog: Facebooksome of the MIT CTSS/7094 people went to 5th flr to do Multics, others went to the IBM science center on the 4th flr and modified 360/40 with virtual memory and did cp40/cms. cp40/cms morphed into cp67/cms (when 360/67 standard with virtual memory became available, precursor to vm370/cms). CTSS/RUNOFF was redone for CMS as SCRIPT
trivia: In the late 70s, an IBM SE in LA had done NewScript for the TRS-80.
https://archive.org/details/NewScript_v7.0_1982_Tesler_Software_Corporation
He had also done ATM cash machine transaction implementation for a
California financial institution on VM370 370/158 that outperformed
ACP/TPF on 370/168.
other trivia: a co-worker was also responsible for the IBM internal
network (technology also used for the corporate sponsored
univ. BITNET), initially the science center CP67 wide-area network
(larger than arpanet/internet from the beginning until sometime
mid/late 80s, about the time the communication group forced the
conversion to SNA/VTAM) ... account by one of the GML inventors:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
One of the first mainstream IBM documents done in CMS SCRIPT was the 370 architecture redbook (for distribution in a red 3-ring binder). A CMS SCRIPT command line option either generated the 370 Principles of Operation subset or the full redbook with justifications, alternatives considered, implementation details, etc.
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc
https://www.garlic.com/~lynn/submain.html#sgml
IBM Internal Network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Where did CKD disks come from? Newsgroups: comp.arch Date: Fri, 13 Dec 2024 13:03:19 -1000EricP <ThatWouldBeTelling@thevillage.com> writes:
univ got a 360/67 with 2301 drum and a bank of 2314 drives (replacing 709/1401), originally for tss/360 ... which never came to production, so it ran as a 360/65 ... when it arrived, I was hired fulltime responsible for os/360.
univ library then got an ONR grant to do an online catalog and some of the money went for 2321 datacell ... also was selected as one of the betatest sites for original CICS transaction processing product (and CICS debugging/support was added to my tasks).
when IPL'ing, 2321 made distinct sound as os/360 read the volser of each cell ... whirl, kerchuk, whirl, kerchuk, ....
2303 drum ran off the same controller (2841) as used for 2311s ... 2301 was
similar to 2303 but read/wrote four tracks in parallel (four times the
transfer rate, 1/4 the number of "tracks", each "track" four times
larger, and it needed its own controller)
https://en.wikipedia.org/wiki/IBM_drum_storage
https://en.wikipedia.org/wiki/History_of_IBM_CKD_Controllers
IBM CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
IBM CICS &/or BDAM
https://www.garlic.com/~lynn/submain.html#cics
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Building the System/360 Mainframe Nearly Destroyed IBM Date: 15 Dec, 2024 Blog: FacebookBuilding the System/360 Mainframe Nearly Destroyed IBM. Instead, the S/360 proved to be the most successful product launch ever and changed the course of computing
Knew an IBMer who said his father was an economist that IBM brought into the gov. trial. His father told him that the other computer companies testified that they all had known by the late 50s that the single most important customer criterion was a compatible computer line; computer use was starting to rapidly increase and customers were finding that they frequently had to start again from scratch each time they moved to larger, higher throughput systems ... but IBM was the only computer maker with executives that managed to force the individual manufacturing plants to "toe the line".
trivia: I took a two credit hour intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ had 709/1401 (709 tape->tape with a 1401 unit record front end for the 709) and was getting a 360/67 for tss/360 (replacing 709/1401) ... temporarily getting a 360/30 replacing the 1401 pending 360/67 availability (the 360/30 had 1401 microcode compatibility, but the 360/30 was part of getting 360 experience). The univ. shut down the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a pile of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and within a few weeks had a 2000 card assembler program. This assembled under os/360, but ran as a stand-alone monitor; the txt deck was loaded/run with the BPS loader. I then added OS/360 system services, with an assembly option for either stand-alone (30min assembly) or running with OS/360 read/write (60min assembly, each DCB macro taking 5-6mins). Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (tss/360 never came to production so it ran as a 360/65).
Student fortran had run in under a second on the 709 but initially took over a minute on os/360. I installed HASP, cutting the time in half. I then started redoing STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. OS/360 never beat the 709 until I installed univ of waterloo WATFOR.
IBM CSC came out to install (virtual machine) CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my weekend 48hr dedicated time, initially rewriting lots of code to improve running OS/360 in a virtual machine. The OS/360 test stream ran 322 seconds on the bare/real machine, initially 856secs in a virtual machine (534secs CP67 CPU). After a couple months I got CP67 CPU down to 113secs (from 534). I then started redoing other parts of CP67: page replacement algorithm, thrashing controls, scheduling (aka dynamic adaptive resource management), ordered arm seek queuing (replacing pure FIFO), and multiple chained page request channel programs optimizing transfers/revolution (2301 paging drum peak went from 80/sec to 270/sec).
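Rough arithmetic on the drum numbers (a sketch; the ~60 revolutions/sec rotation rate is an assumed round figure for illustration, only the 80/sec and 270/sec rates come from the text):

revs_per_sec        = 60    # assumed, for illustration
single_request_peak = 80    # roughly one 4k page per I/O, per the text
chained_peak        = 270   # multiple chained page requests per channel program

print(f"single-request: ~{single_request_peak / revs_per_sec:.1f} pages/revolution")
print(f"chained:        ~{chained_peak / revs_per_sec:.1f} pages/revolution")
# i.e. chaining lets several queued 4k transfers ride each revolution,
# instead of paying a revolution (plus queuing) per page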
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement and thrashing control posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
posts mentioning gov. trial and single most important customer
criteria
https://www.garlic.com/~lynn/2022d.html#32 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2012e.html#105 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2003.html#71 Card Columns
https://www.garlic.com/~lynn/2001.html#73 how old are you guys
https://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
https://www.garlic.com/~lynn/94.html#44 bloat
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Building the System/360 Mainframe Nearly Destroyed IBM Date: 15 Dec, 2024 Blog: Facebookre:
Before I graduated, I was hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I thought the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed, constantly being staged in hallways around the machine room (somebody joked that Boeing was getting 360/65s like other companies got keypunches). Lots of politics between the Renton manager and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarged the room for a 360/67 for me to play with when I wasn't doing other stuff). When I graduated I joined IBM CSC (instead of staying w/Boeing CFO).
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters (the US online branch office sales&marketing support HONE system was a long time customer; it also rapidly expanded to clones world-wide). With my highly optimized paging, lots of them ran at 100%. I transferred out to San Jose Research in the 2nd half of the 70s and got to wander around lots of IBM (and non-IBM) silicon valley datacenters, including disk bldg14/engineering and bldg15/product-test across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min MTBF (in that environment), requiring manual re-ipl. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity.
Along the way, I pushed to add "multiple exposures" for 3350FH (fixed-head/track feature) ... somewhat like the 2305 fixed-head/track disk, allowing data transfer from the fixed-head area overlapped with arm seeks. However a POK "VULCAN" group that was working on an electronic paging device got it vetoed (afraid that it might compete with VULCAN). Eventually VULCAN was canceled, being told that IBM was selling every memory chip it made for processor system memory (at higher markup) ... but it was too late to resurrect multiple exposures for 3350FH. Then internally we started getting IBM "1655" electronic paging devices (relogo'ed from another company) that could simulate 2305 CKD (at 1.5mbytes/sec) or FBA (at 3mbytes/sec). Note MVS never did get FBA support and there hasn't been any real CKD manufactured for decades, all being simulated on industry standard fixed-block disks.
trivia: In the first half of the 70s, IBM had the "Future System"
effort, totally different than 370 and was going to completely replace
it. Internal politics during FS was killing off 370 efforts, and the
lack of new 370 systems during the period is credited with giving the
clone 370 makers their market foothold. I continued to work on
360&370 stuff all during FS, even periodically ridiculing what
they were doing. When FS eventually implodes, there was mad rush to
get stuff back into the 370 product pipelines, including kicking off
quick&dirty 3033&3081 efforts in parallel ... more details:
http://www.jfsowa.com/computer/memo125.htm
trivia2: currently, memory latency (for things like cache misses), when measured in count of processor cycles, is similar to 60s disk latency when measured in count of 60s processor cycles (i.e. memory is the new disk).
note 1972, Learson tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying Watson culture&legacy ...
20yrs later IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (take-off on
the AT&T "baby bells" breakup decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
2022 (linkedin) post with more details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Ecommerce Date: 15 Dec, 2024 Blog: Facebook... early ramp-up of NETSCAPE server workloads was met with the FINWAIT problem ... servers started spending 90+% of CPU running the FINWAIT list. For a while internal NETSCAPE was adding sun servers and trying to distribute workloads ... then they got a large Sequent server that had solved the FINWAIT problem some time earlier ... it was another six months before other vendors started distributing/deploying the FINWAIT fix.
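A minimal illustration of why a long FIN_WAIT list eats CPU, assuming (as a simplification, not the actual TCP stack code) that every lookup walks the list linearly; a keyed structure makes the cost vanish:

import time

# tens of thousands of lingering (addr, port) connections in FIN_WAIT
finwait_list = [("10.0.%d.%d" % (i // 256, i % 256), 1024 + i)
                for i in range(50_000)]
finwait_dict = {conn: True for conn in finwait_list}

target = finwait_list[-1]               # worst case for a linear scan

t0 = time.perf_counter()
for _ in range(1_000):
    _ = target in finwait_list          # O(n) scan per lookup
linear = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1_000):
    _ = target in finwait_dict          # O(1) hashed lookup
hashed = time.perf_counter() - t0

print(f"linear scans: {linear:.3f}s   hashed lookups: {hashed:.6f}s")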
I had been brought in as a consultant; two former Oracle people (that I had worked with when I was at IBM doing the HA/CMP product) were there responsible for something called the commerce server and they wanted to do payment transactions on the server ... now frequently called "electronic commerce". I had responsibility for everything between webservers and payment networks (payment gateways and associated protocol). Based on the procedures/documentation/software I had to do for "electronic commerce", I did a talk on "Why Internet Wasn't Business Critical Dataprocessing" that Postel (internet/rfc standards editor) sponsored at ISI/USC.
ecommerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some recent posts mentioning FINWAIT problem:
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
posts referencing Postel sponsored "Why Internet Isn't Business
Critical Dataprocessing" talk
https://www.garlic.com/~lynn/2024g.html#27 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#47 Postel, RFC Editor, Internet
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017j.html#31 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Building the System/360 Mainframe Nearly Destroyed IBM Date: 15 Dec, 2024 Blog: Facebookre:
I think at least half of the 360/67s (including lots of internal IBM machines) eventually were running CP/67
trivia ... before MS/DOS:
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle Computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle Computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67 at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
MTS folklore is that it started out being scaffolded off the MIT Lincoln Labs
(2nd CP67 installation after CSC itself) LLMPS
https://web.archive.org/web/20200926144628/michigan-terminal-system.org/discussions/anecdotes-comments-observations/8-1someinformationaboutllmps
other trivia: Boeing Huntsville got a 2cpu 360/67 with lots of 2250s for TSS/360 ... running it as two MVT systems ... but ran into the horrible MVT storage management early. They modified MVT13 to run in virtual memory mode (w/o paging), using it to offset some of the MVT storage management problems.
Note: early last decade I was asked to track down the decision to add
virtual memory to all 370s and found a staff member to the executive making
the decision; it was basically the same problem that Boeing Huntsville ran
into with MVT a few years earlier. Old archived post with pieces of the email
exchange:
https://www.garlic.com/~lynn/2011d.html#73
Ludlow was doing the initial VS2/SVS implementation on a 360/67 ... a little bit of code to build a single 16mbyte virtual memory and simple paging (very close to running MVT in a CP67 16mbyte virtual machine). The biggest programming effort was that channel programs passed to SVC0/EXCP contained virtual addresses, so copies of the passed channel programs had to be created with the virtual addresses replaced by real addresses (he borrowed CP67 CCWTRANS, crafted into SVC0/EXCP).
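A conceptual sketch of what that translation step has to do (illustrative Python, not actual CP67/SVS code; the page-table mapping and CCW tuples are made up):

PAGE = 4096

def translate_ccws(ccws, page_table):
    """ccws: list of (opcode, virtual_addr, count) tuples.
    page_table: virtual page number -> real page number (pages assumed
    already paged in and pinned for the duration of the I/O)."""
    shadow = []
    for op, vaddr, count in ccws:
        vpn, offset = divmod(vaddr, PAGE)
        real = page_table[vpn] * PAGE + offset
        # a real implementation also has to split a CCW whose data area
        # crosses a page boundary into data-chained CCWs, since contiguous
        # virtual pages need not be contiguous real pages
        shadow.append((op, real, count))
    return shadow

# toy example: two virtual pages mapped to scattered real frames
page_table = {0x10: 0x83, 0x11: 0x2f}
channel_program = [("READ", 0x10000, 4096), ("READ", 0x11000, 4096)]
print(translate_ccws(channel_program, page_table))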
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
a few posts mentioning MVT, Boeing Huntsville, 360/67, CCWTRANS
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Email Date: 16 Dec, 2024 Blog: FacebookSome of the MIT CTSS/7094 people went to the 5th flr for Multics, others went to the IBM science center: virtual memory hardware mods for the 360/40 and CP40/CMS (which morphs into CP67/CMS when the 360/67 standard with virtual memory becomes available, later morphing into CP67I & CP67SJ for internal 370s with virtual memory, and VM370), the CP67 "wide-area" network (morphs into the corporate internal network, technology also used for the corporate sponsored univ. BITNET), GML (a decade later morphs into ISO standard SGML and after another decade into HTML at CERN).
Electronic Mail and Text Messaging in CTSS, 1965 - 1973
https://multicians.org/thvv/anhc-34-1-anec.html
One of the inventors of GML in 1969, job had been to promote CP67
wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
After the decision was made to add virtual memory to all 370s, there was a joint, distributed (network) development project between Cambridge and Endicott to add 370 virtual machine emulation to CP67 ("CP67H") ... and then further CP67 mods ("CP67I") to run on the emulated virtual memory 370 virtual machines. CP67H ran in a 360/67 virtual machine at Cambridge to further isolate details of the unannounced 370 virtual memory (because there were professors, staff, and students from Boston area institutions also using the Cambridge CP67 machine). CP67I was regularly in use in a CP67H 370 virtual machine for a year before the first engineering 370 machine with virtual memory was operational (booting CP67I on the engineering 370 hardware was one of the original tests).
Edson was responsible for the CP67 wide-area network (morphing into the
corporate internal network, larger than ARPANET/Internet from the beginning
until sometime mid/late 80s, about the time the company communication
group forced the internal network conversion to SNA/VTAM).
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
Early 80s, I got the HSDT project, T1 and faster computer links (both satellite and terrestrial), and some amount of battles with the communication group. In the 60s, IBM had the 2701 telecommunication controller that supported T1 links. Mid-70s IBM moved to SNA/VTAM and various issues seemed to cap controllers at 56kbit links. Mid-80s I reported to the same executive as the person responsible for AWP164 (aka APPN) and I periodically needled him about coming over and working on "real" networking (TCP/IP).
Was also working with NSF director for interconnecting NSF Supercomputer Centers. At some point NSF gave UC a grant for UCB supercomputer center (regents plan was for UCSD getting next new bldg and it becomes San Diego Supercomputer instead) and I was asked to present HSDT to UCB, which seemed to prompt a 1983 request to do some work with Berkeley 10M and testing at Lick Observatory (east of San Jose) ... where they were also testing CCDs (as part of transition off film and supporting remote viewing). They eventually get a grant from Keck Foundation and it becomes Keck Observatory in Hawaii.
HSDT was supposed to get $20M to interconnect the NSF supercomputer
centers, but then congress cuts the budget, some other things happen
and eventually an RFP is released (in part based on what we already had
running). NSF 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
univ. bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
past posts mentioning CP67L, CP67H, CP67I, CP67SJ, 370 virtual memory
https://www.garlic.com/~lynn/2024f.html#112 IBM Email and PROFS
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: John Boyd and Deming Date: 17 Dec, 2024 Blog: Linkedinre:
AMEX was in competition with KKR for (private equity) LBO (reverse
IPO) of RJR and KKR wins. KKR runs into trouble and hires away AMEX
president to help with RJR.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
In 1992, AMEX spins off much of its dataprocessing and financial outsourcing in the largest IPO (up until that time) as "First Data"
Also in 1992, IBM has one of the largest losses in the history of US
corporations and was being reorged into the 13 "baby blues" (take-off
on the AT&T "baby bells" breakup decade earlier) in preparation
for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
tactics used at RJR
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
In the 1st decade of this century, KKR does a take-over of First Data in the largest (private equity) LBO (reverse IPO) up until that time (and later spins it off to Fiserv).
IBM: No Longer The Investing Juggernaut Of Old
https://web.archive.org/web/20220114051312/https://seekingalpha.com/article/4479605-ibm-no-longer-investing-juggernaut-of-old
stock buybacks used to be illegal (because it was too easy for
executives to manipulate the market ... aka banned in the wake of the
'29 crash)
https://corpgov.law.harvard.edu/2020/10/23/the-dangers-of-buybacks-mitigating-common-pitfalls/
Buybacks are a fairly new phenomenon and have been gaining in
popularity relative to dividends recently. All but banned in the US
during the 1930s, buybacks were seen as a form of market
manipulation. Buybacks were largely illegal until 1982, when the SEC
adopted Rule 10B-18 (the safe-harbor provision) under the Reagan
administration to combat corporate raiders. This change reintroduced
buybacks in the US, leading to wider adoption around the world over
the next 20 years. Figure 1 (below) shows that the use of buybacks in
non-US companies grew from 14 percent in 1999 to 43 percent in 2018.
... snip ...
Stockman and IBM financial engineering company:
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
other detail, in 1972 IBM CEO Learson tries (but fails) to block
bureaucrats, careerists and MBAs from destroying Watson culture&legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Email Date: 17 Dec, 2024 Blog: Facebookre:
nii national information infrastructure & testbed
https://en.m.wikipedia.org/wiki/National_Information_Infrastructure
The National Information Infrastructure (NII) was the product of the
High Performance Computing Act of 1991. It was a telecommunications
policy buzzword, which was popularized during the Clinton
Administration under the leadership of Vice-President Al Gore.[1]
... snip ...
was attending nii meetings at llnl. feds wanted testbed participants to do it on their own nickel ... got some of it back when Singapore invited all the US participants to be part of the fully funded Singapore nii testbed
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
recent posts mentioning nii national information infrastructure
https://www.garlic.com/~lynn/2024c.html#76 Inventing The Internet
https://www.garlic.com/~lynn/2023g.html#67 Waiting for the reference to Algores creation documents/where to find- what to ask for
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2023b.html#100 5G Hype Cycle
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#16 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#109 Network Systems
https://www.garlic.com/~lynn/2021k.html#84 Internet Old Farts
https://www.garlic.com/~lynn/2021h.html#25 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2017d.html#73 US NII
https://www.garlic.com/~lynn/2011o.html#25 Deja Cloud?
https://www.garlic.com/~lynn/2011n.html#60 Two studies of the concentration of power -- government and industry
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000d.html#80 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#79 When the Internet went private
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Creative Ways To Say How Old You Are Newsgroups: alt.folklore.computers Date: Wed, 18 Dec 2024 12:29:14 -1000John Ames <commodorejohn@gmail.com> writes:
Amdahl had won the battle to make ACS 360-compatible ... folklore was
that it was canceled because there was concern that it would advance
technology too fast and IBM would lose control of the market ... Amdahl then
leaves IBM ... starting his own mainframe-compatible computer company
https://people.computing.clemson.edu/~mark/acs_end.html
As the quote above indicates, the ACS-1 design was very much an
out-of-the-ordinary design for IBM in the latter part of the 1960s. In
his book, Data Processing Technology and Economics, Montgomery Phister,
Jr., reports that as of 1968:
• Of the 26,000 IBM computer systems in use, 16,000 were S/360 models
(that is, over 60%). [Fig. 1.311.2]
• Of the general-purpose systems having the largest fraction of total
installed value, the IBM S/360 Model 30 was ranked first with 12%
(rising to 17% in 1969). The S/360 Model 40 was ranked second with 11%
(rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]
• Of the number of operations per second in use, the IBM S/360 Model 65
ranked first with 23%. The Univac 1108 ranked second with slightly over
14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]
---
Richard DeLamarter in Big Blue reproduces an undated IBM profit table
that indicates that the "system profit" for the S/360 Model 30 was 32%
and for the Model 40 was 35%. The "system profit" for the Model 65 was
24%, and the Models 75 and 85 were lumped together at a negative 17%
(that is, a loss). [Table 20, p. 98] Thus, the business trend was that
the low-end to mid-range S/360 computers were where IBM was making its
profits.
In the midst of this environment in 1968, the ACS project experienced a
dramatic change of direction. The original architecture had been
designed for the perceived performance and floating-point-precision
needs of customers such as the national labs, much like the CDC
6600. However, Gene Amdahl repeatedly argued for a S/360-compatible
design to leverage the software investment that IBM was making in the
S/360 architecture and to provide a wider customer base.
....
Jan1979, I was doing lots of work with an engineering 4341 and a branch office conned me into doing a (6600) benchmark on the 4341 for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami) ... the 4341 ran it in 35.77secs compared to the 6600's 36.21secs.
also, a small 4341 cluster had much higher aggregate throughput than an IBM 3033, was much cheaper, and needed much less floor space, power, and cooling.
A decade later, doing HA/CMP out of the Los Gatos lab: it started out as HA/6000 for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I renamed it HA/CMP when I started doing technical/scientific cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Ingres, Informix, which had VAXCluster support in the same source base with UNIX).
Early Jan1992, had meeting with Oracle CEO where IBM AWD executive tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. Then cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) .... we leave IBM a few months later.
1993 mainframe/rs6000 comparison (benchmark no. program iterations
compared to reference platform):
ES/9000-982 : 8CPU 408MIPS, 51MIPS/CPU
RS6000/990 : 126MIPS; 16CPU: 2BIPS; 128CPU: 16BIPS
The executive we reported to (for HA/CMP) went over to head up Somerset
... AIM (Apple, IBM, Motorola) single chip RISC ... including Motorola
88k bus/multiprocessor support (the RS/6000 6-chip set didn't have the
cache&bus for multiprocessor support).
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
1999, single chip PowerPC 440 hits 1,000 MIPS (i.e. 1 BIPS).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
some posts mentioning 4341 for national lab
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#106 DOS descendant still lives was Re: slight reprieve on the z
https://www.garlic.com/~lynn/2015h.html#71 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014c.html#61 I Must Have Been Dreaming (36-bit word needed for ballistics?)
https://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#45 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
https://www.garlic.com/~lynn/2006y.html#21 moving on
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Email Date: 18 Dec, 2024 Blog: Facebookmy recent comments/replies about internal network/email predating arpanet
... aka science center CP67-based wide-area network (based in
Cambridge) from the 60s morphs into the IBM Internal Network
... larger than arpanet/internet from the beginning until sometime
mid/late 80s, about the time the communication group forced the
conversion of the internal network to SNA/VTAM (instead of
TCP/IP). The technology was also used for the corporate-sponsored
univ. BITNET
https://en.wikipedia.org/wiki/BITNET
which converted to TCP/IP (about the time of the forced conversion of
the internal network to SNA/VTAM).
SJMerc article about Edson (responsible for CP67 wide-area network, he
passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone
behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Note: I got HSDT in the early 80s, T1 and faster computer links (both terrestrial and satellite), which brought me into conflicts with the communication group (in the 60s, IBM had the 2701 telecommunication controller that supported T1; however, in the mid-70s with the change to SNA/VTAM, associated issues appeared to cap controllers at 56kbit/sec links).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Creative Ways To Say How Old You Are Newsgroups: alt.folklore.computers Date: Thu, 19 Dec 2024 08:38:11 -1000antispam@fricas.org (Waldek Hebisch) writes:
I had some dealings with the BPS loader in my 1st undergraduate programming job in the 60s (rewriting 1401 MPIO in assembler for 360/30).
univ was getting a 360/67 for tss/360 (replacing 709/1401); when it arrived, I was hired fulltime for os/360 (360/67 running as 360/65).
CSC then came out to install CP/67 (3rd installation after CSC itself and MIT lincoln labs) and I mostly got to play with it during my weekend time. Initially all CP/67 source was on OS/360: assemble, take the TXT output and arrange it in a card tray with the BPS loader at the front ... IPL the tray of cards, transfer to CPINIT which writes the core-image to disk ... and the CP67 system is ipl'ed from disk (the tray of cards could also be written to tape and the BPS loader IPL'ed from tape). Within a few months we got a CP67/CMS update with CP67 source kept in the CMS filesystem and assembled there. An EXEC could punch the equivalent of the card tray to the virtual punch, "transferred" to the virtual reader and virtual IPL'ed (where CPINIT writes the image to disk).
Later I was adding support for a CP67 "pageable" kernel, pieces of the kernel broken into <=4k chunks and "paged" ... reducing fixed memory requirements (especially for 256kbyte and 512kbyte memory 360/67). Problem was splitting pieces into <=4k chunks drove the number of ESD entries to more than the BPS loader max of 255 ... and I had to do some ugly hacks to keep the ESD entries to 255.
I also discovered that the BPS loader passed a pointer to its ESD loader table & count of entries (to CPINIT) ... and so added the ESD table to pageable kernel area (before it was written to disk).
Later after joining science center ... I found copy of BPS loader source in card cabinet in 545 attic ... and modified it to handle more than 255 entries. This continued to be used for VM370.
disclaimer: I never had a copy of the manuals & the only source was the BPS loader
(after joining IBM) ... so some of this was done by reverse
engineering. old archived post to afc & ibm-main
https://www.garlic.com/~lynn/2007f.html#1 IBM S/360 series operating systems history
some bitsavers
https://www.bitsavers.org/pdf/ibm/360/bos_bps/C24-3420-0_BPS_BOS_Programming_Systems_Summary_Aug65.pdf
wiki
https://en.wikipedia.org/wiki/IBM_Basic_Programming_Support
https://en.wikipedia.org/wiki/BOS/360
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Email Date: 19 Dec, 2024 Blog: Facebook
re:
my recent comments/replies internal network/email predating arpanet
https://www.garlic.com/~lynn/2024g.html#73 Early Email
https://www.garlic.com/~lynn/2024g.html#75 Early Email
RSCS/VNET originally on CP/67 then on VM370 ... some problems with internal MVT/HASP and then MVS/JES2. RSCS/VNET did a NJE spoofing line-driver for HASP/JES2 ... however HASP/JES2 had to be kept on edge/boundary nodes because
1) original code (had "TUCC" in cols68-71 and) used spare entries in the 255-entry pseudo-device table (typically only around 160-180 spare, and would trash traffic if the origin or destination weren't in the local table) when the internal corporate network was well past 256 nodes (JES2 was eventually raised to 999 nodes, but that was after the internal network had passed 1000 nodes)
2) NJE and job control fields were intermixed in the header, and traffic from an origin MVS node at a different release than the destination had a habit of crashing the destination MVS (as a result a large body of RSCS/VNET NJE spoofing code evolved that provided reformatting between origin MVS and destination MVS; there is an infamous case in the 80s where San Jose MVS was crashing Hursley MVS systems and the Hursley VM370 VNET/RSCS was blamed, because they hadn't been notified about and installed the NJE spoofing driver that handled reformatting from the modified San Jose format to the Hursley format).
Later in the 80s, they stopped shipping the VNET/RSCS drivers to customers and only shipped the NJE spoofing driver (even for VNET/RSCS->VNET/RSCS) ... although internally they kept using the native drivers (since they had higher throughput) .... at least until the internal network was forced to convert to SNA/VTAM.
At the time that ARPANET converted from HOST/IMP protocol (on 1jan1983) to TCP/IP ... there were approx. 100 IMP nodes and 255 hosts ... while the internal network was rapidly approaching 1000 nodes ... which it passed a few months later.
trivia:
As part of 80's HSDT funding, I was supposed to try and use some IBM content. Eventually I found the FSD Series/1 Zirpel T1 card for government customers replacing 60's 2701s (that were disintegrating). I then went to order a half dozen Series/1 and was told there was a year's backlog ... IBM had recently acquired ROLM and ROLM had placed a very large Series/1 order. I had known the ROLM machine room manager in their prior life at IBM ... and they offered me some of their Series/1 if I would help them with their testing processes.
Mid80s was having custom hardware built on the other side of the
Pacific. On Friday before leaving for trip to visit, I got a new
online forum announcement from Raleigh with the following definitions:
low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec
On Monday morning in a conference room on the other side of the Pacific:
low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec
About the same time, communication group was blocking release of
mainframe TCP/IP ... when that was overturned, they changed their tactic to it having
to be released by them (because they had corporate strategic
responsibility for everything that crossed datacenter walls). What
shipped got an aggregate of 44kbytes/sec using nearly a whole 3090
processor. I then did TCP/IP RFC1044 support and in some tuning tests
at Cray Research between Cray and IBM 4341, got sustained 4341 channel
throughput using only modest amount of 4341 processor (something like
500 times improvement in bytes moved per instruction executed).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HASP/ASP, JES/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
past posts mentioning series/1 zirpel T1 cards
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#60 Mainframe vs Server - The Debate Continues
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2010e.html#83 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2008l.html#63 Early commercial Internet activities (Re: IBM-MAIN longevity)
https://www.garlic.com/~lynn/2008e.html#45 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2007f.html#80 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
https://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The New Internet Thing Date: 20 Dec, 2024 Blog: Facebook
re:
Last product I did for IBM was HA/CMP; it started out as HA/6000 for the
NYTimes to move their newspaper system (ATEX) off VAXCluster to
RS/6000. I rename it HA/CMP when I start doing technical/scientific
cluster scale-up with national labs (LLNL, LANL, NCAR, etc) and
commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Ingres, Informix that have VAXCluster support in same source base with
UNIX)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Early Jan92, have meeting with the Oracle CEO, where IBM AWD executive Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told that we can't do anything with more than four-system clusters (and we leave IBM a few months later).
A couple yrs later, I was brought in as consultant to NETSCAPE (which had been renamed from MOSAIC when NCSA complained about use of the name). Two of the former Oracle people (that were in the Hester/Ellison meeting) are there responsible for something called "commerce server" and want to do payment transactions on the server. NETSCAPE had also invented this stuff they call "SSL" that they want to use; the result is now frequently called "electronic commerce".
I have responsibility for everything between webservers and payment networks (including the internet payment gateways). Note they had previously done a pilot with a national sporting goods vendor that did national advertising during sunday football games; it had an outage that was eventually closed as NTF (no trouble found) after 3hrs of analysis. The payment networks had a trouble desk requirement of 5min first level problem determination, using circuit-based analysis. One of my responsibilities was adapting troubleshooting analysis to the packet-network environment to meet the 5min analysis (as well as greatly improving availability).
I had no responsibility for the browser side, but did give some classes for mostly recent college graduates (that would become paper millionaires). One example was showing multiple A-record support ... including examples from RENO/TAHOE client applications ... but was told that was too hard (it took another year getting multiple A-record support into browsers); I would joke that if it wasn't in Stevens' book, they couldn't do it. Later I put together a talk about "Why Internet Wasn't Business Critical Dataprocessing" (based on what I had to do for payment transactions), that Postel sponsored at ISI/USC.
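As an aside, a minimal sketch (python) of the multiple A-record idea being taught: resolve the host, then try each returned address in turn until a connect succeeds. This is only illustrative of the technique, not the BSD RENO/TAHOE code being described, and the host/port in the usage comment are made up.

import socket

def connect_any(host, port, timeout=5.0):
    last_err = None
    # getaddrinfo returns every address (A record) published for the host
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)        # first server that answers wins
            return s
        except OSError as err:
            last_err = err             # remember the failure, try the next address
    raise last_err or OSError("no addresses for %s" % host)

# example: a webserver published with several A records for redundancy
# conn = connect_any("www.example.com", 80)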
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some posts mentioning multiple a-record support
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won
https://www.garlic.com/~lynn/2017b.html#21 Pre-internet email and usenet (was Re: How to choose the best news server for this newsgroup in 40tude Dialog?)
https://www.garlic.com/~lynn/2016g.html#49 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012o.html#68 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2010m.html#76 towards https everywhere and strict transport security (was: Has there been a change in US banking regulations recently?)
https://www.garlic.com/~lynn/2009n.html#41 Follow up
https://www.garlic.com/~lynn/2009m.html#32 comp.arch has made itself a sitting duck for spam
other recent posts mentioning NETSCAPE and MOSAIC
https://www.garlic.com/~lynn/2024g.html#16 ARPANET And Science Center Network
https://www.garlic.com/~lynn/2024f.html#73 IBM 2250 Hypertext Editing System
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2023f.html#18 MOSAIC
https://www.garlic.com/~lynn/2023e.html#61 Early Internet
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#58 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022.html#3 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2021g.html#84 EMV migration compliance for banking from IBM (1997)
https://www.garlic.com/~lynn/2021g.html#74 Electronic Signature
https://www.garlic.com/~lynn/2021c.html#86 IBM SNA/VTAM (& HSDT)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4300 and 3370FBA Date: 20 Dec, 2024 Blog: Facebook
4300 and 3370FBA FCS mid-1979 or so. Branch office found out I was using an engineering 4341 in Jan1979 and cons me into doing a CDC6600 fortran benchmark for a national lab looking at getting 70 for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami). Small vm/4341 clusters also had higher throughput than 3033, lower cost, much less floor space, power and environmentals.
80s, large corporations also ordering hundreds of VM/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, conference rooms were becoming scarce, so many converted to vm/4341 rooms. MVS looked at explosion in departmental/distributed and wanted part of the market, however the only new non-datacenter disks were FBA (and MVS never supported FBA). Eventually CKD-simulation comes out as 3375 ... but it didn't do MVS much good, customers were deploying scores of distributed vm/4341s per support person, while MVS still required scores of support+operators per system.
trivia: in the wake of the Future System implosion
http://www.jfsowa.com/computer/memo125.htm
and the mad rush to get stuff back into the 370 product pipelines,
Endicott cons me into helping with the ECPS microcode assist for
138/148 (which were carried over to 4331/4341). At the time they were
executing approx. ten native/microcode instructions per simulated 370
instruction, and 370 kernel instructions would map approx.
1-for-1 to native instructions, giving a ten times speedup. They had
6kbytes space for microcode and I was to identify the 6kbytes of
highest executed vm370 kernel code for redoing in microcode ... old
archived post with initial analysis showing highest executed 6kbytes
accounted for 79.55% of kernel execution.
https://www.garlic.com/~lynn/94.html#21
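For illustration only, a small sketch (python) of that kind of analysis: given per-routine sizes and percent of kernel execution from a profile, greedily pick the hottest code until the 6kbyte microcode budget is spent. The routine names and numbers below are hypothetical, not the actual ECPS data.

BUDGET = 6 * 1024

profile = [            # (routine, bytes, pct of kernel CPU) -- made-up numbers
    ("DISPATCH", 1200, 20.0),
    ("FREE/FRET", 900, 15.0),
    ("UNTRANS",  1400, 18.0),
    ("PAGING",   1600, 14.0),
    ("VIRT-IO",  1000, 12.0),
    ("MISC",     4000, 21.0),
]

def pick_hottest(profile, budget):
    # sort by execution density (percent per byte), take routines while budget lasts
    chosen, used, covered = [], 0, 0.0
    for name, size, pct in sorted(profile, key=lambda r: r[2] / r[1], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered

chosen, used, covered = pick_hottest(profile, BUDGET)
print(chosen, used, "bytes,", round(covered, 2), "% of kernel execution")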
Note that Endicott wanted to start including VM370 pre-installed on every 138/148 & 4331/4341 shipped (sort of like current mainframe LPAR-PR/SM microcode), but POK (high-end mainframe) got it vetoed ... it would have been an extreme embarrassment to the head of POK who had convinced corporate to kill the VM370 product, shut down the development group and transfer all the people to POK for MVS/XA (Endicott eventually managed to save the VM370 product mission, but had to recreate a development group from scratch).
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some posts 4341 cluster&distributed tsunamis
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2016d.html#65 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#64 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#45 PL/I advertising
https://www.garlic.com/~lynn/2012n.html#52 history of Programming language and CPU in relation to each
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM S/38 Date: 21 Dec, 2024 Blog: Facebook
After Future System imploded, S/38 included single-level-store (a la TSS/360 & MULTICS) and was a greatly simplified FS
One of the final nails in the FS coffin was analysis by the IBM Houston Scientific Center that if 370/195 applications were redone for FS machine made out of the fastest available hardware technology, it would have the throughput of 370/145 (aka about 30 times slowdown; aka there was a lot of throughput headroom between available technology and S/38 entry/low level market requirements).
trivia: my brother was Apple regional market manager (largest physical area CONUS) and when he came to town for business meetings, I could be included in business dinners (even arguing MAC design with MAC developers before MAC was announced). He also was able to dial into the S/38 that ran Apple to track manufacturing and delivery schedules.
AS/400 was to consolidate S/36, S/38, 8100, S/1, etc
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
https://en.wikipedia.org/wiki/IBM_AS/400#AS/400
Late 80s, the last product we did at IBM was HA/CMP, it started out
HA/6000 for the NYTimes to move their newspaper system (ATEX) from DEC
VAXCluster to RS/6000, I then rename it HA/CMP when I start doing
technical/scientific cluster scale-up with national labs (LLNL, LANL,
NCAR, etc) and commercial cluster scale-up with RDBMS vendors (Oracle,
Sybase, Ingres, Informix that had VAXCluster support in same source
base with Unix).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
The S/88 product administrator then starts taking us around to their
customers and had me write a section for the corporate continuous
availability strategy document (it gets pulled when both
Rochester/AS400 and POK/mainframe complain). The executive we reported
directly to, moves over to head up Somerset (AIM, apple, ibm,
motorola) to do single chip power/pc ... working with motorola, (M88k)
cache consistency was added for shared memory multiprocessor.
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
and
https://en.wikipedia.org/wiki/IBM_AS/400#The_move_to_PowerPC
Early Jan1992, had meeting with the Oracle CEO, where IBM AWD executive Hester told Ellison that we would have 16-system clusters mid92 and 128-system clusters ye92. Then late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we couldn't work with clusters that had more than four systems (we leave IBM a few months later).
I had continued to work on 360/370 all during FS, even periodically ridiculing what they were doing (which wasn't particularly career enhancing) ... even doing a paged-mapped filesystem for CP67/CMS (ported to VM370/CMS) that had much higher throughput than the regular filesystem ... but it was never released to customers ... the FS implosion gave such filesystems (even remotely single-level-store) a bad name (I had joked I learned what not to do observing TSS/360).
future system posts
https://www.garlic.com/~lynn/subtopic.html#futuresys
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
continuous availability, disaster
survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
ha/cmp post
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM PC/RT Date: 21 Dec, 2024 Blog: Facebook
801/risc ROMP running CP.r operating system written in PL.8 was going to be the DISPLAYWRITER follow-on. When that got canceled, they decide to pivot to the UNIX workstation market and got the company that had done the at&t unix port to ibm/pc (pc/ix) to do one for ROMP.
They had all these PL.8 programmers they had to do something with ... and made the claim that they could implement an artificial virtual machine interface in PL.8 ... and that the total resources for doing both the VRM and the unix (aix) port to VRM would be less than doing AIX directly to ROMP.
NOTE: later the IBM ACIS port of UCB BSD unix directly to ROMP was less resources than either the VRM or the AIX port (and way less than the two combined)
The VRM (PC/RT) was dropped for RS/6000 and AIX went directly to RIOS hardware
some more RS/6000 in this recent comment in group post
https://www.garlic.com/~lynn/2024g.html#82 IBM S/38
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp post
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4300 and 3370FBA Date: 22 Dec, 2024 Blog: Facebook
re:
we got some early topaz/3101 ... aka straight "glass teletype", before "block mode" support (sort of subset of 3270 "block mode") and then got download from japan and burned our own PROMs to upgrade our topaz/3101 with block mode support.
trivia: 360s were originally supposed to be ASCII machines ... but the
ASCII unit record gear wasn't ready yet ... so they ("temporarily")
went with old BCD gear (as "EBCDIC") and the rest is history ("EBCDIC"
became permanent) ... greatest "computer goof":
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Early 80s, 3081 ships; they supposedly were only going to be multiprocessor machines. Original 2-CPU 3081D aggregate MIPS was less than the single processor Amdahl ... so they doubled the size of the processor cache for the 3081K, for about the same aggregate MIPS as the single processor Amdahl (however MVS claimed its 2-CPU multiprocessor support only had 1.2-1.5 times the throughput of a single processor, so even with the same aggregate MIPS, the MVS multiprocessor support made throughput much less).
Note: IBM ACP/TPF didn't have multiprocessor support and they were afraid that the whole airline/transaction market would move to Amdahl ... and some very unnatural things were done to VM/370 multiprocessor support that improved ACP/TPF throughput running in virtual machine (on 3081), however it degraded the VM/370 multiprocessor throughput of nearly every other customer's multiprocessor by 10-15%. Then some tweaks were done to VM/370 3270 terminal support to try and mask the system throughput degradation.
However there was a large 3-letter gov agency that had been a long time CMS user (back to CP67/CMS days) which was all high-speed "glass" ASCII terminals, and I was asked to visit to try and help. I had done a modification to CMS terminal line-mode that, rather than doing a separate SIO for every terminal line write, did chained CCWs for all pending lines in a single SIO; this reduced their Q1/interactive transaction count by 1/3, from 60+ to 40+ per second, both cutting VM370 overhead and improving response (for all terminals including ASCII, not just 3270s).
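A rough model (python) of the effect of that change, under made-up arrival times and I/O cost: with one SIO per line, every queued line costs a start-I/O; with chained CCWs, everything pending at each I/O completion goes out in a single operation. CCW/SIO mechanics are abstracted away; only the operation-count comparison is the point.

def count_sios(arrival_times, io_time, chained):
    # arrival_times: when each output line becomes ready; io_time: cost per operation
    sios = 0
    pending = sorted(arrival_times)
    now = 0.0
    while pending:
        now = max(now, pending[0])           # wait until at least one line is queued
        if chained:
            # take every line already queued at the moment the SIO is issued
            batch = [t for t in pending if t <= now]
        else:
            batch = pending[:1]              # one line per SIO
        del pending[:len(batch)]
        sios += 1
        now += io_time                       # channel/device busy for this operation
    return sios

arrivals = [i * 0.002 for i in range(100)]   # 100 lines, 2ms apart (illustrative)
print("unchained:", count_sios(arrivals, 0.01, chained=False), "SIOs;",
      "chained:", count_sios(arrivals, 0.01, chained=True), "SIOs")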
Eventually IBM did come out with 3083 (3081 with one of the processors removed), primarily for the ACP/TPF market.
other 3081 ref
http://www.jfsowa.com/computer/memo125.htm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
misc posts mentioning ascii, multiprocessor, gov agency, etc
https://www.garlic.com/~lynn/2023f.html#87 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2022e.html#99 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#31 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#80 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#75 IBM ITPS
https://www.garlic.com/~lynn/2021g.html#90 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2016.html#81 DEC and The Americans
https://www.garlic.com/~lynn/2014g.html#105 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014f.html#21 Complete 360 and 370 systems found
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM S/38 Date: 22 Dec, 2024 Blog: Facebook
re:
2nd half 80s, branch office asks if I could help (stanford) SLAC with
what becomes SCI (also asked to help LLNL with what becomes FCS)
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
Decade later did some consulting for Steve Chen (Cray YMP) ... at the
time CTO at Sequent (before IBM buys them and shuts them
down). Sequent people claimed they had done all the windows NT
multiprocessor scale-up work, getting it running on a 256-CPU machine.
Sequent and DG do 64-board systems, each board with four i486s (256-processor shared memory multiprocessor); Convex does a 64-board system, each board with two HP snake risc processors (128-processor shared memory multiprocessor) ... some number of others doing SCI shared memory multiprocessors
Later, made several visits to adtech projects at a major silicon valley chip maker. They had a former SPARC10 engineer that was working on an inexpensive SCI chip and wanted to do a SCI distributed shared memory implementation supporting 10,000 machines.
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
FCS (& FICON) posts
https://www.garlic.com/~lynn/submisc.html#ficon
Some posts mentioning SCI
https://www.garlic.com/~lynn/2024e.html#90 Mainframe Processor and I/O
https://www.garlic.com/~lynn/2024.html#37 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023g.html#16 370/125 VM/370
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#70 Microprocessor Optimization Primer
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014d.html#18 IBM ACS
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old You Are) Newsgroups: alt.folklore.computers Date: Mon, 23 Dec 2024 07:13:27 -1000
Peter Flass <peter_flass@yahoo.com> writes:
SVC0/EXCP was called to execute channel programs built in user space (by user programs, applications, and/or libraries). With the decision to add virtual memory to all 370s, the channel programs passed to SVC0/EXCP had virtual addresses ... while channels required real addresses.
This is the same problem that CP67 had with channel programs from virtual machines ... and Ludlow, doing the initial conversion of MVT->VS2, borrowed CP67 CCWTRANS to craft into SVC0/EXCP, making a copy of the passed channel programs and replacing virtual addresses with real addresses.
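A hedged sketch (python) of the CCWTRANS idea: copy the channel program, translating each virtual data address to a real address through a page table, and pin each touched page for the duration of the I/O. The structures are hypothetical simplifications, not the actual CP67/VS2 code (the real code also has to split transfers that cross page boundaries via data chaining).

PAGE = 4096

class CCW:
    def __init__(self, op, addr, count):
        self.op, self.addr, self.count = op, addr, count

def translate_channel_program(ccws, page_table, fix_page):
    """Return a copy of ccws with real addresses; pin each page touched."""
    real_prog = []
    for ccw in ccws:
        vpage, offset = divmod(ccw.addr, PAGE)
        frame = page_table[vpage]           # virtual page -> real frame number
        fix_page(vpage)                     # keep page resident until I/O completes
        real_prog.append(CCW(ccw.op, frame * PAGE + offset, ccw.count))
    return real_prog

# toy example: virtual page 5 happens to sit in real frame 0x12
page_table = {5: 0x12}
fixed = []
prog = [CCW("READ", 5 * PAGE + 0x80, 80)]
real = translate_channel_program(prog, page_table, fixed.append)
print(hex(real[0].addr), fixed)             # -> 0x12080 [5]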
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Success Of Failure Date: 23 Dec, 2024 Blog: Facebook
Got a call summer of 2002 asking to submit a response to an unclassified BAA from IC-ARDA (since renamed IARPA) that was about to close (the BAA was basically that the agency didn't have the tools needed to do what was needed). We get a response in and have a couple meetings (a little stilted since we don't have clearances) demonstrating that we could do what was needed ... and then nothing (later heard the person that originally called us was re-assigned out on the Dulles access road). It wasn't until the "Success of Failure" articles a few years later that we got some idea of what was going on.
... note "success of failure" accelerated after the turn of the century with private equity buying up beltway bandits and gov. contractors and hiring prominent politicians to lobby congress to outsource gov. to their companies (laws were in place that blocked companies from directly using money from gov. contracts for lobbying, but this was a way of skirting those laws). They also cut corners to skim as much money as possible; for example, outsourcing of security clearances found companies doing the paperwork but not actually doing the background checks.
AMEX was in competition with KKR for private equity LBO of RJR and KKR
wins. Then KKR runs into trouble with RJR and hires away AMEX
president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
In 1992, IBM has one of the largest losses in the history of US
corporations and was being reorged into the 13 "baby blues" (take-off
on the AT&T "baby bells" breakup decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup and uses some of the same
tactics used at RJR
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
The former AMEX president then leaves IBM to head up another major
private-equity company
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."
.. snip ...
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
some posts mentioning BAA, IC-ARDA, IARPA
https://www.garlic.com/~lynn/2024f.html#63 Welcome to the defense death spiral
https://www.garlic.com/~lynn/2023e.html#40 Boyd OODA-loop
https://www.garlic.com/~lynn/2023d.html#11 Ingenious librarians
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022c.html#120 Programming By Committee
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2022b.html#114 Watch Thy Neighbor
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021f.html#68 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021d.html#88 Bizarre Career Events
https://www.garlic.com/~lynn/2019e.html#129 Republicans abandon tradition of whistleblower protection at impeachment hearing
https://www.garlic.com/~lynn/2019e.html#54 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019e.html#40 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2019.html#49 Pentagon harbors culture of revenge against whistleblowers
https://www.garlic.com/~lynn/2018e.html#6 The Pentagon Is Building a Dream Team of Tech-Savvy Soldiers
https://www.garlic.com/~lynn/2017i.html#11 The General Who Lost 2 Wars, Leaked Classified Information to His Lover--and Retired With a $220,000 Pension
https://www.garlic.com/~lynn/2017h.html#23 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2017f.html#101 Nice article about MF and Government
https://www.garlic.com/~lynn/2017c.html#47 WikiLeaks CIA Dump: Washington's Data Security Is a Mess
https://www.garlic.com/~lynn/2017c.html#5 NSA Deputy Director: Why I Spent the Last 40 Years In National Security
https://www.garlic.com/~lynn/2017b.html#35 Former CIA Analyst Sues Defense Department to Vindicate NSA Whistleblowers
https://www.garlic.com/~lynn/2017.html#64 Improving Congress's oversight of the intelligence community
https://www.garlic.com/~lynn/2016h.html#96 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2016f.html#40 Misc. Success of Failure
https://www.garlic.com/~lynn/2016b.html#62 The NSA's back door has given every US secret to our enemies
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2015h.html#32 (External):Re: IBM
https://www.garlic.com/~lynn/2015f.html#26 Gerstner after IBM becomes Carlyle chairman
https://www.garlic.com/~lynn/2015f.html#20 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
https://www.garlic.com/~lynn/2015.html#54 How do we take political considerations into account in the OODA-Loop?
https://www.garlic.com/~lynn/2014c.html#85 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#66 F-35 JOINT STRIKE FIGHTER IS A LEMON
https://www.garlic.com/~lynn/2014.html#12 5 Unnerving Documents Showing Ties Between Greenwald, Omidyar & Booz Allen Hamilton
https://www.garlic.com/~lynn/2013o.html#76 Should New Limits Be Put on N.S.A. Surveillance?
https://www.garlic.com/~lynn/2013o.html#57 Beyond Snowden: A New Year's Wish For A Better Debate
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Creative Ways To Say How Old You Are Newsgroups: alt.folklore.computers Date: Mon, 23 Dec 2024 15:47:45 -1000
Peter Flass <peter_flass@yahoo.com> writes:
the benchmark wasn't to show the 4341 was faster than the cdc6600 ... they were looking at price/performance, "small" foot print, "efficient" power & cooling ... aka 1979 "leading edge of the coming cluster supercomputing tsunami". A decade later doing HA/CMP, it was over a hundred RS/6000s (until IBM transferred HA/CMP cluster scaleup for announce as IBM supercomputer and we were told we couldn't work with anything involving more than four systems). Very shortly it will be 46yrs since that 4341 benchmark and now cluster supercomputing can be millions of "cores".
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4300 and 3370FBA Date: 23 Dec, 2024 Blog: Facebook
re:
also in the wake of the Future System implosion, I get roped into helping with a 16-CPU SMP, tightly-coupled, shared-memory multiprocessor (in the CP67->VM370 morph lots of stuff was dropped or simplified, including SMP support; I had recently added SMP support to VM370R3, originally for the US online sales&marketing support HONE systems, each getting 2* the throughput of a single CPU system) and the 3033 processor engineers are con'ed into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU SMP support (aka MVS docs were saying 2-CPU SMP only got 1.2-1.5 times the throughput of a single CPU system; POK doesn't ship a 16-CPU SMP until after the turn of the century). The head of POK then invites some of us to never visit POK again and tells the 3033 processor engineers, heads down and don't be distracted (contributing was that the head of POK had recently convinced corporate to kill VM370, shutdown the development group and transfer all the people to POK for MVS/XA ... Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch).
once 3033 was out the door, the processor engineers start work on 3090 (they would periodically invite me to sneak back into POK).
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67{L, H, I, SJ}, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Multics vs Unix Newsgroups: alt.folklore.computers Date: Mon, 23 Dec 2024 17:44:06 -1000
Grant Taylor <gtaylor@tnetconsulting.net> writes:
In the 80s, as mainframes got larger, there appeared CP/VM subset
functions implemented directly in hardware/microcode to partition
machines ... LPAR & PR/SM
https://en.wikipedia.org/wiki/Logical_partition
which now can be found on many platforms, not just IBM mainframes ...
heavily leveraged by large cloud datacenter operations
https://aws.amazon.com/what-is/virtualization/
How is virtualization different from cloud computing?
Cloud computing is the on-demand delivery of computing resources over
the internet with pay-as-you-go pricing. Instead of buying, owning, and
maintaining a physical data center, you can access technology services,
such as computing power, storage, and databases, as you need them from a
cloud provider.
Virtualization technology makes cloud computing possible. Cloud
providers set up and maintain their own data centers. They create
different virtual environments that use the underlying hardware
resources. You can then program your system to access these cloud
resources by using APIs. Your infrastructure needs can be met as a fully
managed service.
... snip ...
cloud megadatacenters
https://www.garlic.com/~lynn/submisc.html#megadatacenter
ibm cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67{L, H, I, SJ}, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Mon, 23 Dec 2024 22:06:06 -1000
John Levine <johnl@taugh.com> writes:
Melinda's virtual machine history info
https://www.leeandmelindavarian.com/Melinda#VMHist
Univ. had got 360/67 to replace 709/1401 but it ran as a 360/65 with os/360; I was still an undergraduate but was hired fulltime responsible for OS/360. Univ. shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made monday classes hard.
CSC then came out and installed CP67 (3rd after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend dedicated time. Initially I mostly concentrated on pathlengths to improve running OS/360 in a virtual machine. My OS/360 job stream ran 322secs on the real machine; initially it ran 856secs virtually (534secs CP67 CPU). After a couple months I had CP67 CPU down to 113secs (from 534). I then start redoing other parts of CP67: page replacement, dynamic adaptive resource management, scheduling and page thrashing controls, ordered arm seek queueing (from FIFO), multiple chained page transfers maximizing transfers/revolution (2301 paging drum improved from 80/sec to 270/sec peak), etc. Most of this CSC picks up for distribution in the standard CP67.
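As a small illustration of the ordered arm seek queueing point, a sketch (python) comparing total arm travel for FIFO order versus a simple elevator/SCAN-style ordering; the request cylinders and starting position are made up, and this is not a reproduction of the actual CP67 code.

def total_seek(start, cylinders):
    # sum of arm movement servicing requests in the given order
    travel, pos = 0, start
    for cyl in cylinders:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

def elevator_order(start, cylinders):
    # sweep upward from the current position first, then back down
    above = sorted(c for c in cylinders if c >= start)
    below = sorted((c for c in cylinders if c < start), reverse=True)
    return above + below

queue = [183, 37, 122, 14, 124, 65, 67]       # arrival (FIFO) order
print("FIFO travel:    ", total_seek(53, queue))
print("elevator travel:", total_seek(53, elevator_order(53, queue)))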
After graduation, I join CSC and one of my hobbies was enhanced production operating systems for internal datacenters (the world-wide, online, sales&marketing support HONE systems was early and long-time customer). After decision to add virtual memory to all 370s, the morph of CP67->VM370 dropped or simplified a lot of stuff. During 1974 and early 1975, I was able to get most of it back into VM370R2 and then VM370R3.
In the wake of Future System implosion, Endicott ropes me into helping
with VM/370 ECPS microcode assist for 370 138/148 ... basically
identify 6kbytes of highest executed VM370 kernel paths for moving
into microcode. 138/148 averaged 10 native instructions per emulated 370
instruction, and kernel 370 instructions would translate
approx. one-for-one into native ... getting a 10 times speed up. Old
archived a.f.c post with initial analysis
https://www.garlic.com/~lynn/94.html#21
Those 6kbytes of instructions accounted for 79.55% of kernel execution (moved to native, running ten times faster) ... a lot of it involved simulated I/O (it had to make a copy of the virtual channel programs substituting real addresses for virtual; the corresponding virtual pages also had to be "fixed" in real storage until the virtual machine I/O had completed)
Science Center was on 4th flr and Multics was on the 5th ... looking at some amount of Multics, I figured I could do page mapped filesystem with lots of sharing features (which was faster and much less CPU than the standard requiring I/O emulation). Note that "Future System" did single-level-store ala Multics and (IBM) TSS/360 ... I had joked I had learned what not to do from TSS/360. However when FS imploded it gave anything that even slightly related to single-level-store a bad reputation (and I had trouble even getting my CMS paged-mapped filesystem used internally inside IBM).
ibm cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67{L, H, I, SJ}, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource management, scheduling, etc
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement, page thrashing controls, etc
https://www.garlic.com/~lynn/subtopic.html#wsclock
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
paged-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
360 &/or 370 microcode posts
https://www.garlic.com/~lynn/submain.html#mcode
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Tue, 24 Dec 2024 08:40:50 -1000
re:
trivia: CSC CP67 had 1052&2741 support, but univ. had some number
of TTY/ASCII terminals, so I added TTY/ASCII support ... and CSC
picked up and distributed with standard CP67 (as well as lots of my
other stuff). I had done a hack with one-byte values for TTY line
input/output lengths. Tale of MIT Urban Lab having CP/67 (in tech sq bldg
across the quad from 545, multics & science center). Somebody down at
Harvard got an ascii device with 1200(?) char line length ... they modified
the CP67 field for max. lengths ... but didn't adjust my one-byte hack
(a sketch of that kind of truncation bug follows the quoted excerpt below).
https://www.multicians.org/thvv/360-67.html
A user at Harvard School of Public Health had connected a plotter to a
TTY line and was sending graphics to it, and every time he did, the
whole system crashed. (It is a tribute to the CP/CMS recovery system
that we could get 27 crashes in in a single day; recovery was fast and
automatic, on the order of 4-5 minutes. Multics was also crashing quite
often at that time, but each crash took an hour to recover because we
salvaged the entire file system. This unfavorable comparison was one
reason that the Multics team began development of the New Storage
System.)
... snip ...
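A hedged sketch (python) of that class of bug: the maximum-length field gets enlarged, but a separate one-byte copy of the length silently wraps for anything over 255. The names and field widths are illustrative, not the actual CP67 fields.

MAX_LINE = 1200                     # the enlarged maximum line length

def one_byte(length):
    return length & 0xFF            # the old "one-byte values" hack

line_len = 1200
assert line_len <= MAX_LINE         # the widened check passes...
print(one_byte(line_len))           # ...but the one-byte copy says 176
# downstream code using the truncated value mis-sizes the transfer or
# walks past its buffer -- and the system crashes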
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Big Blues: The Unmaking of IBM Date: 24 Dec, 2024 Blog: Facebook
"Big Blues: The Unmaking of IBM"
other works similar to "The Unmaking of IBM":
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
note 1972, Learson tried (& failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture&legacy
... Learson's Management Briefing, Number 1-72: January 18, 1972
pg160-163 (including THINK magazine article), 30 years of Management
Briefings, 1958-1988:
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
20yrs later IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (take-off on
the AT&T "baby bells" breakup decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board brings in the former AMEX president as CEO to try and save the company, who (somewhat) reverses the breakup.
How To Stuff Wild Ducks
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Tue, 24 Dec 2024 13:49:54 -1000
re:
Science Center and a couple of the commercial online CP67 service spin-offs in the 60s did a lot of work for 7x24, dark-room, unattended operation. Also, the 60s was when IBM leased/rented machines with charges based on the "system meter" that ran whenever any cpu or any channel (I/O) was busy ... and a lot of work was done allowing the system meter to stop when the system was otherwise idle (there had to be no activity at all for at least 400ms before the system meter would stop). One piece was special terminal I/O channel programs that would go idle (allowing the system meter to stop), but immediately start up whenever characters were arriving.
trivia: long after IBM had switched to selling machines, (IBM batch) MVS system still had a 400ms timer event that guaranteed that system meter never stopped.
Late 80s, for IBM Austin RS/6000, AIX filesystem was modified to journal filesystem metadata changes using transaction memory (RIOS hardware that tracked changed memory) ... in part claiming it was more efficient.
Got the HA/6000 project in the late 80s, originally for NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. AIX JFS enabled hot-standby unix filesystem take-over ... and some of RDBMS vendors supported (raw unix) concurrent shared disks.
Then IBM Palo Alto was porting journaled filesystem to machines that didn't have transaction memory and found that transaction journaling calls outperformed transaction memory (even when ported back to RS/6000).
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
virtual machine online commercial system posts
https://www.garlic.com/~lynn/submain.html#online
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe Channels Date: 25 Dec, 2024 Blog: Facebook
Smaller 360s had integrated channels: the same processor executed 360 microcode and channel microcode. 360/65 (& above) had separate external hardware channels. Same was true of 370: up through the 158 were integrated channels ... and 165 & above were separate external channels. Channel architecture: the processor passed a pointer to a channel program of CCWs in processor memory. The channel sequentially fetched a CCW from processor memory and finished executing it before fetching the next CCW. After FS imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 in parallel.
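A toy model (python) of that channel architecture: the CPU hands the channel the memory address of a channel program, and the channel fetches one CCW at a time, finishing each before fetching the next and following a command-chaining flag. The CCW format and addresses here are simplifications for illustration, not the real 360 layout.

memory = {}                                   # address -> CCW tuple

def store_ccw(addr, op, data_addr, count, chain):
    memory[addr] = (op, data_addr, count, chain)

def start_io(ccw_addr, device):
    """Crude channel: fetch/execute CCWs sequentially until chaining stops."""
    while True:
        op, data_addr, count, chain = memory[ccw_addr]     # fetch CCW from memory
        device(op, data_addr, count)                       # execute it to completion
        if not chain:
            break                                          # end of channel program
        ccw_addr += 8                                      # next CCW in the chain

# toy device and a two-CCW program: seek then read
def show(op, data_addr, count):
    print("device executes", op, "addr", hex(data_addr), "count", count)

store_ccw(0x1000, "SEEK", 0x2000, 6, chain=True)
store_ccw(0x1008, "READ", 0x3000, 4096, chain=False)
start_io(0x1000, show)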
They took the 158 integrated channel microcode and turned it into the 303x channel director, a separate hardware box. A 3031 was two 158 engines, one with just the integrated channel microcode and one with just the 370 microcode. A 3032 was 168 reworked to use the 303x channel director for external channels. A 3033 started out remapping the 168 logic to 20% faster chips.
Original 360 SIO (start I/O) instruction waited until all parties involved in the I/O had responded. 370 introduced SIOF (start I/O fast) which could finish early, before everything had responded (because it was rare that the system issued I/O to something non-existent) ... and then the I/O process just proceeded overlapped/asynchronously (a new special interrupt was defined for the case where something happened between the time the SIOF continued and when a SIO would have continued).
370/xa introduced SSCH (start subchannel) ... where even more things could continue asynchronously. These days have SAPs (system assist processors) where operating system turns over even more I/O to a separate, dedicated processor.
Original bus&tag was half-duplex and had an end-to-end handshake for every byte transferred; it nominally could handle up to 200ft (for slower speed I/O; for the 2305 fixed head disk at 1.5mbytes/sec the distance was much less for the slow 370/158 integrated channels). For 3380 3mbyte/sec transfer and up to 400ft, the protocol changed to allow multiple bytes transferred between end-to-end handshakes ("data streaming").
1988, branch office asks me if I could help LLNL with getting some
serial stuff (they were working with) standardized, which quickly
becomes FCS (fiber-channel standard, including some stuff I had done
in 1980), initially 1gbit transfer, full-duplex, aggregate
200mbyte/sec throughput
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
In the 90s, POK announces some fiber stuff they had been playing with
for at least a decade as ESCON (17mbytes/sec, when it was already
obsolete). IBM ESCON channel
https://en.wikipedia.org/wiki/ESCON
Then some of POK engineers become involved with FCS and define a
heavy-weight protocol that significantly cuts the native throughput
which is eventually announced as FICON.
https://en.wikipedia.org/wiki/FICON
The most recent public benchmark I've found is the z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time as "Peak I/O", there was a FCS announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than the 104 FICON). Note also IBM docs recommend SAPs (system assist processors that do actual I/O) be kept to 70% CPU (which would have been more like 1.5M IOPS, rather than 2M). Also, no CKD DASD have been made for decades, all being emulated on industry standard fixed-block disks.
trivia: as undergraduate in the 60s, univ got 360/67 for tss/360
replacing 709/1401 and when it came in, I was hired fulltime
responsible for os/360 (360/67 running as 360/65). Univ. shutdown
datacenter on weekends, and I would have it dedicated, although 48hrs
w/o sleep made monday classes hard. then CSC came out to install
(virtual machine) CP/67 and I mostly got to play with it during my
dedicated weekend time. CP67 came with 1052 & 2741 terminal support
and automagic terminal type identification (using telecommunication
controller SAD CCW to switch terminal type port scanner for each
line). Univ. also had some TTY/ASCII terminals and so I added TTY
support integrated with automagic terminal type id. I then wanted to
have single dial-in number for all terminal types ("hunt group"), but
didn't quite work because IBM hard-wired line-speed for each
line. This kicked off a univ. project to build a clone
telecommunication controller: building a channel interface board for an
Interdata/3 programmed to emulate the IBM controller with the addition
that it did dynamic line speed (auto baud rate). This was upgraded to
Interdata/4 for the channel interface with cluster of Interdata/3s for
the line interfaces ... and four of us get written up for (some part
of) IBM clone controller business ... Interdata and later Perkin-Elmer
selling as IBM mainframe controller.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
trivia2: The decision was made to add virtual memory to all 370s and Ludlow was doing initial MVT->VS2 transition on 360/67 (pending 370 with virtual memory) ... initially SVS, MVT running in 16mbyte virtual memory (similar to MVT running in CP67 16mbyte virtual machine). The biggest task was EXCP/SVC0, similar to CP67; channels required real addresses and the passed channel programs all had virtual addresses. Ludlow borrowed CP67 CCWTRANS for crafting into EXCP/SVC0 to make copy of each channel program replacing virtual addresses with real addresses.
360 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON (& FCS) posts
https://www.garlic.com/~lynn/submisc.html#ficon
some recent 303x channel director posts:
https://www.garlic.com/~lynn/2024g.html#55 Compute Farm and Distributed Computing Tsunami
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#12 370 Multiprocessor
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024b.html#11 3033
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#35 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#32 IBM Mainframe Lore
https://www.garlic.com/~lynn/2023f.html#27 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#3 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#55 IBM 3031, 3032, 3033
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#53 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#64 For aficionados of the /360
https://www.garlic.com/~lynn/2022d.html#47 360&370 I/O Channels
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Y2K Quarter Century Date: 25 Dec, 2024 Blog: FacebookY2K was viewed as a one-off event and it also occurred during the Internet bubble ... where Internet startups were competing for lots of people with any computer experience ... as a result lots of businesses went (were forced to go?) off-shore for their Y2K projects. After the turn of the century there were lots of stories about businesses continuing to use those off-shore operations for non-Y2K projects.
There was also a story about a large financial institution contracting out Y2K work to a company that turned out to be a front for a criminal organization (discovered later when unexplained stealth financial transactions were traced to backdoors in the software).
Risk, Fraud, Exploits, Threats, Vulnerabilities posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
some past posts mentioning Y2K and off-shore/over-seas
https://www.garlic.com/~lynn/2021e.html#60 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2017f.html#51 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017.html#0 Leap seconds
https://www.garlic.com/~lynn/2015c.html#74 N.Y. Bank Regulator Says Third-Party Vendors Provide Backdoor to Hackers
https://www.garlic.com/~lynn/2015b.html#21 Hackers stole from 100 banks and rigged ATMs to spew cash
https://www.garlic.com/~lynn/2014m.html#91 LEO
https://www.garlic.com/~lynn/2014k.html#63 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014h.html#25 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2012h.html#18 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2012f.html#95 How do you feel about the fact that India has more employees than US?
https://www.garlic.com/~lynn/2011n.html#18 Great Brian Arthur article on the Second Economy
https://www.garlic.com/~lynn/2011h.html#67 Happy 100th Birthday, IBM!
https://www.garlic.com/~lynn/2010o.html#41 60 Minutes News Report:Unemployed for over 99 weeks!
https://www.garlic.com/~lynn/2010i.html#53 Of interest to the Independent Contractors on the list
https://www.garlic.com/~lynn/2009o.html#67 I would like to understand the professional job market in US. Is it shrinking?
https://www.garlic.com/~lynn/2009o.html#63 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/2009k.html#18 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2009i.html#9 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009d.html#2 IBM 'pulls out of US'
https://www.garlic.com/~lynn/2008q.html#55 Can outsourcing be stopped?
https://www.garlic.com/~lynn/2008n.html#27 VMware Chief Says the OS Is History
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?
https://www.garlic.com/~lynn/2008.html#73 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2006g.html#21 Taxes
https://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?
https://www.garlic.com/~lynn/2004b.html#2 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2003p.html#33 [IBM-MAIN] NY Times editorial on white collar jobs going
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CMS Computer Games Date: 25 Dec, 2024 Blog: FacebookI regularly visited TYMSHARE .... which had started offering a CMS-based online computer conferencing system, free to the (IBM mainframe user group) SHARE starting AUG1976 as VMSHARE ... archives:
I also cut a deal with them to get a monthly tape dump of all VMSHARE (and later also PCSHARE) files ... for making them available on internal systems and the internal network (some resistance by people concerned about exposing internal employees to unfiltered customer comments).
On one visit, they demo'ed ADVENTURE, which somebody had found on a
Stanford SAIL PDP10 and ported to CMS. I got a copy with full Fortran
source, making it available internally.
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
Shortly, a PLI version appeared, as well as versions with lots more
points.
A couple of years later, the author of REXX did a multi-user client/server 3270 CMS spacewar game ... it leveraged the (internal) SPM service (originally CP67, ported to VM370, sort of a superset of the combination of VMCF, IUCV and SMSG) ... and RSCS/VNET had support so clients didn't have to be on the same system as the server. Almost immediately robot client players appeared, beating human players ... and the server was enhanced to increase power use non-linearly as intervals between commands dropped below normal human reaction times (trying to level the playing field).
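A toy sketch of that countermeasure (made-up threshold and scaling, not the original game's values):

# commands arriving faster than human reaction time cost non-linearly
# more "power"; the 0.25s threshold and quadratic penalty are invented
# here purely for illustration
HUMAN_REACTION_SECS = 0.25

def command_power_cost(base_cost, interval_secs):
    """Scale a command's power cost as the interval between commands shrinks."""
    if interval_secs >= HUMAN_REACTION_SECS:
        return base_cost
    ratio = HUMAN_REACTION_SECS / max(interval_secs, 1e-6)
    return base_cost * ratio ** 2          # non-linear penalty for robot-speed input

print(command_power_cost(1.0, 0.5))    # human-speed interval: 1.0
print(command_power_cost(1.0, 0.025))  # robot-speed interval: 100.0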
By which time we had a large collection of "demo" applications. Nearly every other internal system had "For Business Purposes Only" on the VM370 logon screen ... but we had "For Management Approved Uses" ... in a corporate audit of the installation, one demand was to remove all "games" from the systems. We responded that they were "demo" applications and that we had management approval.
a few posts mentioning vmshare, adventure, spacewar
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RSCS/VNET Date: 25 Dec, 2024 Blog: FacebookRSCS/VNET was originally done in 60s for CP67 ... for science center wide-area network ... item from one of the CSC 1969 GML inventors
which morphs into the internal corporate network (first CP67 then transitioning to VM370, larger than arpanet/internet from the beginning until sometime mid/late 80s, about the time the internal network was forced to convert to SNA/VTAM). It was also used for the corporate sponsored univ BITNET. HASP & later JES2 code originally had "TUCC" in cols 68-71 and used spare entries in the 255-entry pseudo device table for network nodes, typically somewhere around 160-180. The first problem was that it would trash traffic where the origin and/or destination node wasn't in the local table (and the internal network had very early passed 255 entries). The implementation also intermixed network fields and job control fields, and traffic between systems at slightly different releases could crash destination MVS/JES host systems. As a result MVS/JES systems were kept at the network edge/boundary behind VM370 systems.
RSCS/VNET had a clean layered implementation and did an NJE emulation driver to connect NJE (aka MVS/JES) systems into the internal network and generate the necessary fields for the immediately connected MVS/JES (if the traffic didn't originate from an MVS/JES system). Then there was a whole slew of further updates: if the originating transmission came from an MVS/JES, try to recognize the version and if necessary reorganize the fields for the immediately connected MVS/JES version (to keep it from crashing). There was an infamous case where a newly modified San Jose MVS/JES was crashing Hursley MVS/JES systems and the Hursley RSCS/VNET was blamed (because it hadn't got the changes for fiddling the changed San Jose format for the Hursley MVS/JES).
Note JES was eventually enhanced to handle up to 999 node definitions, but it was after the internal network had already passed 1000 nodes.
Eventually marketing eliminated native RSCS/VNET drivers in the customer release .... but the internal network continued to use the native drivers because they had higher throughput (at least until the internal network was forced to convert to SNA/VTAM).
Edson responsible for science center wide-area network, internal
network, BITNET, etc (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
SJMerc article about Edson (responsible for CP67 wide-area network, he
passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone
behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
trivia: in the 60s, at univ I took a 2 credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30 (I was given lots of hardware&software manuals and got to design and implement my own monitor, device drivers, error recovery, storage management, etc ... and within a few weeks had a 2000 card program). Univ. was getting a 360/67 for tss/360 replacing 709/1401, and a 360/30 temporarily replaced the 1401 pending the 360/67. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as 360/65). Student Fortran ran under a second on the 709 but well over a minute on 360/67 os/360. I installed HASP, cutting the time in half. I then started redoing stage2 sysgen (so it could be run in the production system with HASP, instead of the starter system) with statements reordered to carefully place datasets and PDS members for optimizing arm seek and multi-track searches ... getting another 2/3rds improvement, down to 12.9secs ... never better than the 709 until I installed Univ. of Waterloo WATFOR.
CSC came out to install CP67 (3rd install after CSC itself and MIT
Lincoln Labs) and I mostly got to play with it during my weekend
dedicated time. For MVTR18, I modified HASP, eliminated 2870 RJE
support (to cut down fixed real storage) and added terminal support
and editor that implemented CMS edit syntax (but completely rewritten
since totally different environment) for CRJE support. See recent
comment for more related info
https://www.garlic.com/~lynn/2024g.html#95 IBM Mainframe Channels
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Terminals Date: 25 Dec, 2024 Blog: Facebooktrivia: as undergraduate in the 60s, univ got 360/67 for tss/360 replacing 709/1401 and when it came in, I was hired fulltime responsible for os/360 (360/67 running as 360/65). Univ. shut down the datacenter on weekends, and I would have it dedicated, although 48hrs w/o sleep made monday classes hard. Then CSC came out to install (virtual machine) CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend time. CP67 came with 1052 & 2741 terminal support and automagic terminal type identification (using the telecommunication controller SAD CCW to switch the terminal-type port scanner for each line). Univ. also had some TTY/ASCII terminals and so I added TTY support integrated with the automagic terminal type id.
I then wanted to have a single dial-in number for all terminal types
("hunt group"), but it didn't quite work because IBM had hard-wired the
line-speed for each line. This kicked off a univ. project to build a
clone telecommunication controller: a channel interface board for an
Interdata/3 programmed to emulate the IBM controller, with the
addition that it did dynamic auto baud-rate detection per line. This
was upgraded to an Interdata/4 for the channel interface with a
cluster of Interdata/3s for the line interfaces ... and four of us get
written up for (some part of) the IBM clone controller business
... Interdata and later Perkin-Elmer selling it as an IBM mainframe
controller.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
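A rough sketch of the dynamic auto baud-rate idea in general (illustrative only, not the Interdata firmware): time the start bit of the first received character and pick the nearest standard line speed.

STANDARD_BAUDS = [110, 134.5, 150, 300, 1200, 2400]   # candidate line speeds

def guess_baud(start_bit_usecs):
    """Map a measured start-bit width (microseconds) to the nearest standard rate."""
    measured = 1_000_000.0 / start_bit_usecs           # implied bits/sec
    return min(STANDARD_BAUDS, key=lambda b: abs(b - measured))

print(guess_baud(9091))   # ~110 baud (tty33/35 class)
print(guess_baud(7435))   # ~134.5 baud (2741 class)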
I didn't get home terminal (2741, Mar1970) until after graduating and joining IBM CSC.
Early 80s, I got the HSDT project, T1 and faster computer links, and
some conflicts with the communication group. In the 60s, IBM had 2701
controllers that supported T1 links. With the company adopting
SNA/VTAM mid-70s, issues appeared that capped links at 56kbits/sec.
Part of HSDT funding supposedly called for some IBM content. I
eventually found FSD had done S/1 Zirpel T1 cards for the
gov. installations that were finding their 60s 2701s falling apart.
Was also working with the NSF director and was supposed to get $20M to
interconnect the NSF supercomputer centers. Congress cuts the budget,
some other things happen and eventually an RFP is released (in part
based on what we already had running). 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
With equipment being built on the other side of the Pacific, on the
Friday before leaving for a visit, there was a Raleigh email
announcement of a new forum with the definitions:
low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec
On Monday morning, in a conference room on the other side of the Pacific, the definitions were:
low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec
trivia: in the 60s, IBM rented/leased computers, with charges based on
a "system meter" that ran whenever any cpu or channel was busy. CSC and
a couple CP67 commercial online spinoffs of CSC did a lot of work for
7x24, dark room, unattended operation ... including processing and
channel programs that would allow the "system meter" to stop during
idle periods (special terminal channel programs that would release the
channel with no activity, but come instantly on with arriving
characters). The "System Meter" needed 400ms of complete idle before it
would stop .... long after IBM had switched to selling computers, MVS
still had a 400ms timer event that guaranteed the system meter never
stopped.
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
(virtual machine) commercial online services
https://www.garlic.com/~lynn/submain.html#online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Thu, 26 Dec 2024 09:55:24 -1000Peter Flass <peter_flass@yahoo.com> writes:
when CMS was updating filesystem metadata (allocated blocks, allocated files, location of files and associated records), it always wrote to new disk record locations ... and the last thing was to rewrite the master record, switching from the old metadata to the new metadata in a single record write.
Around the time of the transition from CP67/CMS to VM370/CMS, it was found that IBM 360&370 (CKD) disk I/O had a particular failure mode during power failure: system memory could have lost all power while the CKD disk and channel still had enough power to finish a write operation in progress ... but since there was no power to memory, it would finish the write with all zeros and then write the record error check based on the propagated zeros. CMS was enhanced to have a pair of master records; updates would alternate between the two, with basically a version number appended at the end (so any partial zero write wouldn't be identified as the most recent & valid).
This was later fixed for fixed-block disks ... where a write wouldn't start until it had all the data from memory (i.e. countermeasure to partial record writes with trailing zeros) ... but CKD disks and other IBM operating systems (that didn't have FBA disk support) tended to still be vulnerable to this particular power failure problem.
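A minimal sketch of the alternating-master-record scheme described above (the record layout with a trailing version number is invented here for illustration):

# a torn write that propagates zeros can't produce a record that both
# validates and carries the highest version number, so the surviving
# alternate master still wins
def write_master(records, payload, version):
    slot = version % 2                       # alternate between the two master slots
    records[slot] = {"payload": payload, "version": version, "valid": True}

def read_master(records):
    """Pick the valid master record with the highest version number."""
    candidates = [r for r in records if r and r.get("valid")]
    return max(candidates, key=lambda r: r["version"], default=None)

masters = [None, None]
write_master(masters, "metadata v1", 1)
write_master(masters, "metadata v2", 2)
masters[1] = {"payload": "", "version": 0, "valid": False}   # simulate a torn write of the slot for v3
print(read_master(masters)["payload"])                       # still "metadata v2"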
other trivia: 60s, IBM rented/leased computers, charges based on "system meter" that ran whenever any cpu or channel was busy. CSC and a couple of the CSC CP67 commercial online spinoffs, did a lot of work for 7x24, dark room, unattended operation, optimized processing and channel programs so "system meter" could stop during idle periods (including special terminal channel programs that would release the channel with no activity, but instantly on with arriving characters).
"System Meter" needed 400ms of complete idle before it would stop .... long after IBM had switched to selling computers, MVS still had a 400ms timer event that would guarantee system meter never stopped.
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
(virtual machine) commercial online services
https://www.garlic.com/~lynn/submain.html#online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Token-Ring versus Ethernet Date: 26 Dec, 2024 Blog: FacebookThere was a story that IBM did T/R over shielded twisted-pair wiring for 3270 terminals because the coax cable from the datacenter to each 3270 terminal was starting to exceed some bldg load limits (shielded twisted pair was much lighter than coax, and a T/R MAU could run a hierarchical tree structure, so each individual 3270 terminal's wiring didn't have to run all the way back to the datacenter).
IBM AWD PC/RT workstation (PC/AT bus) did some of their own cards, including 4mbit T/R card. Then for AWD RS/6000 workstation with microchannel, AWD was told they couldn't do their own cards, but had to use PS2 microchannel cards. However the communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm) and severely performance kneecapped the PS2 microchannel cards i.e. the PC/RT (AT-bus) 4mbit T/R card had higher card throughput than the PS2 microchannel 16mbit T/R card (joke that PC/RT server with its 4mbit T/R card would have higher throughput than RS/6000 microchannel 16mbit T/R server). The new IBM Almaden Research bldg had been heavily provisioned with CAT wiring assuming 16mbit T/R cards. However they found that the $69 twisted-pair 10mbit Ethernet card over CAT wiring had higher throughput than the $800 16mbit T/R microchannel card and further the 10mbit Ethernet CAT wiring had higher aggregate LAN throughput and lower latency than 16mbit T/R LAN.
Aggravating that, the aggregate card cost for a 300-workstation environment was $20,700 for Ethernet versus $240,000 for 16mbit T/R ... a difference of $219,300. For that difference you could get at least five high-performance TCP/IP routers, each with 16 10mbit Ethernet LAN interfaces ... T/R: 300 workstations sharing a single 16mbit LAN, versus (5*16=) 80 10mbit LANs with only four workstations sharing each 10mbit LAN. Various of the high-performance TCP/IP routers could also be configured with T1 and T3 telco interfaces, IBM mainframe channel interfaces, non-IBM mainframe channel interfaces, FDDI 100mbit interfaces, etc.
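Back-of-the-envelope check of those numbers:

workstations = 300
ethernet_total = workstations * 69        # $20,700
token_ring_total = workstations * 800     # $240,000
difference = token_ring_total - ethernet_total   # $219,300
lans = 5 * 16                             # five routers x 16 Ethernet interfaces = 80 LANs
per_lan = workstations / lans             # 3.75, i.e. roughly four workstations per 10mbit LAN
print(ethernet_total, token_ring_total, difference, lans, per_lan)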
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
misc. other posts mentioning AWD token-ring/ethernet
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022b.html#85 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#49 IBM Downturn
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2018f.html#25 LikeWar: The Weaponization of Social Media
https://www.garlic.com/~lynn/2018d.html#24 8088 and 68k, where it went wrong
https://www.garlic.com/~lynn/2017h.html#15 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017g.html#73 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017f.html#111 IBM downfall
https://www.garlic.com/~lynn/2017d.html#21 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016c.html#83 opinion? Fujitsu USA
https://www.garlic.com/~lynn/2015h.html#108 25 Years: How the Web began
https://www.garlic.com/~lynn/2014m.html#128 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#88 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2013n.html#79 wtf ? - was Catalog system for Unix et al
https://www.garlic.com/~lynn/2013i.html#4 IBM commitment to academia
https://www.garlic.com/~lynn/2013g.html#84 Metcalfe's Law: How Ethernet Beat IBM and Changed the World
https://www.garlic.com/~lynn/2013b.html#32 Ethernet at 40: Its daddy reveals its turbulent youth
https://www.garlic.com/~lynn/2012n.html#70 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012.html#92 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011i.html#60 Speed matters: how Ethernet went from 3Mbps to 100Gbps... and beyond
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010e.html#67 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009r.html#15 Small Server Mob Advantage
https://www.garlic.com/~lynn/2008e.html#21 MAINFRAME Training with IBM Certification and JOB GUARANTEE
https://www.garlic.com/~lynn/2006l.html#35 Token-ring vs Ethernet - 10 years later
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Fri, 27 Dec 2024 09:35:21 -1000Lynn Wheeler <lynn@garlic.com> writes:
RS/6000 AIX with Journal Filesystem released in 1990
https://en.wikipedia.org/wiki/IBM_AIX
AIX was the first operating system to implement a journaling file
system. IBM has continuously enhanced the software with features such as
processor, disk, and network virtualization, dynamic hardware resource
allocation (including fractional processor units), and reliability
engineering concepts derived from its mainframe designs.[8]
In 1990, AIX Version 3 was released for the POWER-based RS/6000
platform.[16] It became the primary operating system for the RS/6000
series, which was later renamed IBM eServer pSeries, IBM System p, and
finally IBM Power Systems.
... snip ...
Nick Donofrio approved HA/6000 in 1988 (it required the journal filesystem that would be part of the RS/6000 1990 release) ... and it started at the IBM Los Gatos lab Jan1989 (I rename it HA/CMP when I start doing technical/scientific cluster scaleup with national labs, LLNL, LANL, NCAR, etc, and commercial cluster scaleup with RDBMS vendors, Oracle, Sybase, Ingres, Informix).
27 Years of IBM RISC
http://ps-2.kev009.com/rootvg/column_risc.htm
1990 POWER
IBM announces its new RISC-based computer line, the RISC System/6000
(later named RS/6000, nowadays eServer pSeries), running AIX Version
3. The architecture of the systems is given the name POWER (now commonly
referred to as POWER1), standing for Performance Optimization With
Enhanced RISC. They where based on a multiple chip implementation of the
32-bit POWER architecture. The models introduced included an 8 KB
instruction cache (I-cache) and either a 32 KB or 64 KB data cache
(D-cache). They had a single floating-point unit capable of issuing one
compound floating-point multiply-add (FMA) operation each cycle, with a
latency of only two cycles and optimized 3-D graphics capabilities.
The model 7013-540 (30 MHz) processed 30 million instructions per
second. Its electronic logic circuitry had up to 800,000 transistors per
silicon chip. The maximum memory size was 256 Mbytes and its internal
disk storage capacity was 2.5 GBytes.
Links: (for URLs see web page)
RISC System/6000 POWERstation/POWERserver 320
RISC System/6000 POWERstations/POWERservers 520 AND 530
RISC System/6000 POWERserver 540
RISC System/6000 POWERstation 730
RISC System/6000 POWERserver 930
AIX Version 3
AIX Version 3 is announced.
Links: (for URLs see web page)
AIX Version 3 (Februari, 1990)
Overview: IBM RISC System/6000 and related announcements
... snip ...
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: John Boyd and Deming Date: 28 Dec, 2024 Blog: Linkedinre:
Nick Donofrio approved HA/6000 in 1988 (required the journal filesystem that would be part of RS/6000 1990 release), originally for NYTimes to move their newspaper system (ATEX) off DEC Vaxcluster to RS/6000 (1990 also asked to be the AWD rep to C4). 1988, branch office also asked if I could help LLNL standardize some serial stuff they were working with, which quickly becomes fiber channel standard (including some stuff I had done in 1980, "FCS" ... initially 1gbit, full-duplex, 200mbytes/sec aggregate).
HA/6000 started at the IBM Los Gatos lab Jan1989. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs, LLNL, LANL, NCAR, etc, and commercial cluster scale-up with RDBMS vendors, Oracle, Sybase, Ingres, Informix. Early Jan1992, in a meeting with the Oracle CEO, IBM AWD Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. MidJan92, I update FSD about the lots of work with national labs and FSD tells the Kingston supercomputer group that they were going with HA/CMP. Late Jan92, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told that we can't work on anything with more than four processors (we leave IBM a few months later).
trivia: POK IBM gets around to shipping their fiber stuff as ESCON in the 90s (when it is already obsolete; 17mbytes/sec). Later some POK engineers become involved with FCS and define a heavy-weight protocol that significantly cuts the native throughput and is released as FICON. The latest public benchmark I can find is the z196 "peak I/O" that gets 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS w/higher throughput than 104 FICON).
The S/88 product administrator also started taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain).
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
FCS/FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: CP/67 Multics vs Unix Newsgroups: alt.folklore.computers Date: Sat, 28 Dec 2024 17:04:15 -1000Grant Taylor <gtaylor@tnetconsulting.net> writes:
(AT&T unix port) AIXV2 and (UCB BSD port) AOS ran on PC/RT. They then
added a bunch of BSD'isms for AIXV3 for the 1990 RS/6000 (RIOS power
chipset). Then AIM (apple, IBM, Motorola) & Somerset start, for a
single chip power/pc.
https://en.wikipedia.org/wiki/IBM_RS/6000
https://www.ibm.com/docs/en/power4?topic=rs6000-systems
so most of non-AIX systems are going to be for power/pc ... and then
power & power/pc eventually merge.
https://en.wikipedia.org/wiki/IBM_Power_microprocessors
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Y2K Quarter Century Date: 29 Dec, 2024 Blog: Facebookre:
from an old (y2k) "century" forum
Date: 7 December 1984, 14:35:02 CST
To: Distribution
1.In 1969, Continental Airlines was the first (insisted on being the
first) customer to install PARS. Rushed things a bit, or so I hear.
On February 29, 1972, ALL of the PARS systems canceled certain
reservations automatically, but unintentionally. There were (and
still are) creatures called "coverage programmers" who deal with such
situations.
2.A bit of "cute" code I saw once operated on a year by loading a byte
of packed data into a register (using INSERT CHAR), then used LA
R,1(R) to bump the year. Got into a bit of trouble when the year 196A
followed 1969. I guess the problem is not everyone is aware of the
odd math in calendars. People even set up new religions when they
discover new calendars (sometimes).
3.We have an interesting calendar problem in Houston. The Shuttle
Orbiter carries a box called an MTU (Master Timing Unit). The MTU
gives yyyyddd for the date. That's ok, but it runs out to ddd=400
before it rolls over. Mainly to keep the ongoing orbit calculations
smooth. Our simulator (hardware part) handles a date out to ddd=999.
Our simulator (software part) handles a date out to ddd=399. What we
need to do, I guess, is not ever have any 5-week long missions that
start on New Year's Eve. I wrote a requirements change once to try to
straighten this out, but chickened out when I started getting odd
looks and snickers (and enormous cost estimates).
... snip ... top of post, old email index
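A small illustration of item 2 above (treating the packed-decimal year byte as binary and adding one):

# the two-digit year is stored as a packed decimal byte (0x69 for '69');
# loading it into a register and doing a binary add-one (the LA R,1(R)
# trick) yields 0x6A, which prints as "6A" ... hence "196A" after 1969
year_byte = 0x69                 # packed decimal digits "6" and "9"
year_byte += 1                   # binary increment, not decimal
print(f"19{year_byte:02X}")      # -> 196A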
trivia: I was blamed for online computer conferencing on the internal
corporate network (larger than arpanet/internet from just about the
beginning until sometime mid/late 80s) in the late 70s and early 80s
(it really took off spring 1981 when I distributed a trip report of a
visit to Jim Gray at Tandem) ... folklore is that when the corporate
executive committee was told, 5of6 wanted to fire me. One of the
outcomes was officially sanctioned moderated forum groups. From
IBMJargon ... copy here
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual Storage Date: 29 Dec, 2024 Blog: Facebookearly last decade, I was asked to track down the executive decision to add virtual storage(/memory) to all 370s ... and found a staff member to the executive. Basically MVT storage management was so bad that region sizes typically had to be specified four times larger than used ... so that a common 1mbyte 370/165 was only able to run four concurrent regions ... insufficient to keep the system busy and justified. Adding virtual memory allowed the number of concurrently running regions to be increased by a factor of four with little or no paging (capped at 15 because of the 4bit storage protect key) ... similar to running MVT in a CP67 16mbyte virtual machine.
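Spelling out the arithmetic in the paragraph above (my restatement of the claim, nothing more):

real_regions = 4                              # what a 1mbyte 370/165 could run under MVT
virtual_regions = min(real_regions * 4, 15)   # 4x more regions, capped at 15 by the 4-bit protect key
print(virtual_regions)                        # 15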
Ludlow was doing the initial implementation for VS2/SVS using a
360/67 (pending engineering 370 with virtual memory support) ... a
little bit of code to create a single 16mbyte virtual memory table and
simple paging. The biggest issue was EXCP/SVC0: now the channel
programs being passed had virtual addresses, while channels required
real addresses ... he borrows CP67 CCWTRANS to craft into EXCP/SVC0,
making copies of the passed channel programs and replacing the virtual
addresses with real. Old archived post (previously posted to the
bit.listserv.ibm-main newsgroup) with pieces of the email exchange
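A grossly simplified sketch of that channel-program copy idea (illustrative only, not the actual CCWTRANS; the real code also handled command/data chaining, TICs, page fixing, IDALs, etc):

PAGE_SIZE = 4096

def translate_channel_program(ccws, page_table):
    """ccws: list of (opcode, virtual_addr, count); page_table: vpage -> real frame addr."""
    translated = []
    for opcode, vaddr, count in ccws:
        frame = page_table[vaddr // PAGE_SIZE]          # look up (and in real life, fix) the page
        real_addr = frame + (vaddr % PAGE_SIZE)
        translated.append((opcode, real_addr, count))   # copy with the real address substituted
    return translated

# toy example: one CCW, virtual page 5 mapped to a real frame at 0x26000
print(translate_channel_program([(0x06, 5 * PAGE_SIZE + 0x100, 80)], {5: 0x26000}))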
trivia: undergraduate in the 60s, I had taken a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ was getting a 360/67 for tss/360 replacing 709/1401 and temporarily replaced the 1401 with a 360/30 (getting 360 experience) pending arrival of the 360/67. The 360/67 arrived within a year of taking the intro class and I was hired fulltime responsible for os/360 (tss/360 never came to production). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I think the Renton datacenter was the largest in the world, 360/65s arriving faster than they could be installed (boxes constantly staged in hallways around the machine room; joke that Boeing was getting 360/65s like other companies got keypunches).
Boeing also brings the 2-CPU 360/67 up to Seattle from Boeing Huntsville ... which Huntsville got for TSS/360 with lots of 2250 graphic displays for CAD/CAM work ... but ran as two 360/65 MVT systems. Huntsville had run into the MVT storage problem early and had modified MVT-R13 to run in virtual memory mode but w/o paging; just remapping the virtual addresses partially compensated for the MVT storage problems (a precursor to the decision to add virtual storage to all 370s).
VS2/SVS is upgraded to VS2/MVS to get around the 4bit storage protect key's 15-region cap, keeping regions separated by putting each in its own separate virtual address space (this ran into a separate limitation, accelerating the requirement for 31bit addressing).
other related posts
https://www.garlic.com/~lynn/2024g.html#72 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024f.html#90 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#81 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#2 Colours on screen (mainframe history question)
https://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
https://www.garlic.com/~lynn/2016c.html#9 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012f.html#10 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012d.html#33 TINC?
https://www.garlic.com/~lynn/2010m.html#16 Region Size - Step or Jobcard
https://www.garlic.com/~lynn/2010h.html#21 QUIKCELL Doc
https://www.garlic.com/~lynn/2009r.html#49 "Portable" data centers
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006m.html#29 Mainframe Limericks
https://www.garlic.com/~lynn/2006c.html#2 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2001h.html#14 Installing Fortran
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual Storage Date: 29 Dec, 2024 Blog: Facebookre:
In the late 70s I'm working with Jim Gray and Vera Watson on the original SQL/relational implementation (System/R) at San Jose Research, and in fall of 1980 Jim Gray leaves IBM for TANDEM and palms off some stuff on me. A year later, at the Dec81 ACM SIGOPS meeting, Jim asked me to help a TANDEM co-worker get his Stanford PHD that heavily involved a GLOBAL LRU page replacement algorithm (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford to not award a PHD for anything involving GLOBAL LRU). Jim knew I had detailed stats on the CP67 Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local).
Early 70s, the IBM Grenoble Science Center had a 1mbyte 360/67 (155 4k pageable pages) running 35 CMS users and had modified "standard" CP67 with a working set dispatcher and local LRU page replacement ... corresponding to the 60s academic papers. I was then at Cambridge, which had a 768kbyte 360/67 (104 4k pageable pages, only 2/3rds the number of Grenoble) running 80 CMS users, with similar workload profiles, similar response, and better throughput (with twice as many users) running my "standard" CP67 that I had originally done as undergraduate in the 60s. I had loads of Cambridge benchmarking data, in addition to the Grenoble APR73 CACM article and lots of detailed performance data from Grenoble.
Late 70s and early 80s, I had (also) been blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), which really took off spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem; only about 300 actively participated but claims were that 25,000 were reading (folklore that when the corporate executive committee was told, 5of6 wanted to fire me). IBM blocked me from responding to Jim's request for the local/global paging info for nearly a year, until fall of 1982 (I hoped that they believed it was punishment for online computer conferencing and not that they were meddling in an academic dispute).
some refs:
L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, V35N5
R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981
P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80
J. Rodriquez-Rosell, The design, implementation, and evaluation of a working set dispatcher, CACM16, APR73
D. Hatfield J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971
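For flavor, a minimal sketch of a global "clock" replacement pass (one hand sweeping a single pool of frames shared by all users), in the spirit of the global LRU approximation discussed above; illustrative only, not the CP67 code:

class Frame:
    def __init__(self, owner, vpage):
        self.owner, self.vpage, self.referenced = owner, vpage, True

def select_victim(frames, hand):
    """Advance the clock hand; clear reference bits until an unreferenced frame is found."""
    while True:
        frame = frames[hand]
        if frame.referenced:
            frame.referenced = False               # give it another trip around the clock
            hand = (hand + 1) % len(frames)
        else:
            return frame, (hand + 1) % len(frames)

frames = [Frame("userA", 1), Frame("userB", 7), Frame("userA", 3)]
victim, hand = select_victim(frames, 0)   # first sweep clears all bits, then frames[0] is selected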
trivia: CSC had come out to install CP67 (precursor to VM370) at the
Univ. (3rd after CSC itself and MIT Lincoln Labs) and I mostly got to
play with it during my weekend time (univ. shut down the datacenter on
weekends and I had the place dedicated, although 48hrs w/o sleep made
monday classes hard). Initially I rewrote pathlengths to improve
OS/360 running in virtual machine. OS/360 test stream ran 322
seconds bare/real machine, initially 856secs in virtual machine
(534secs CP67 CPU). After a couple months I had CP67 CPU down to
113secs (from 534). I then start redoing other parts of CP67, page
replacement algorithm, thrashing controls, dynamic adaptive resource
management & scheduling, ordered arm seek queuing (replacing
FIFO), and multiple chained page-request channel programs optimizing
transfers/revolution (2301 paging drum from an 80/sec peak to a
270/sec peak).
After graduating and joining IBM, one of my hobbies was production enhanced operating systems for internal datacenters (the branch office online sales&marketing support HONE systems were an early and long time customer).
trivia2: From IBMJargon ... copy here (public mainframers group)
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
page replacement posts
https://www.garlic.com/~lynn/subtopic.html#clock
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual Storage Date: 30 Dec, 2024 Blog: Facebookre:
Note the 1st mainstream IBM document done in CMS SCRIPT was the 370
"red book" (i.e. 370 architecture, for distribution in red 3-ring
binders). A CMS command line option either generated the full "red
book" (with lots of architecture & implementation notes,
justifications, alternatives, etc) or just the "Principles of
Operation" subset. Reference to the science center wide-area network
(that morphs into the corporate internal network) by one of the
inventors of "GML" at the science center in 1969 (after "GML" was
invented, tag processing was added to CMS SCRIPT):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
With the decision to add virtual memory to all 370s, a joint distributed (network) development project was done by Endicott with the Science Center to add 370 virtual machine support (part of it was to also add CMS multi-level source update). The base system updates were my CP67L updates (part of my internal enhanced operating system distribution). Added to that were the CP67H updates, an option for (virtual storage) 370 virtual machines, and then the CP67I updates (a modified CP67 that ran on virtual storage 370). At Cambridge, CP67I was in regular operation (a year before the 1st engineering virtual storage 370 machine was operational; in fact CP67I was the test case for that engineering machine) in a CP67H 370 virtual machine, running in a CP67L 360/67 virtual machine, running on a real 360/67 (the extra virtual machine layer was a countermeasure to unannounced 370 virtual storage leaking, because the Cambridge CP67L system also had professors, staff, and students from Boston/Cambridge area educational institutions). Later three San Jose engineers added 3330 & 2305 device support to CP67I for CP67SJ, which was still the standard internal system until well after VM370 was available.
Also, the decision to add virtual memory to all 370s led to the decision to do the VM370 product, but in the morph of CP67->VM370 lots of features were simplified or dropped (including multiprocessor support). In 1974 I start adding features back into a VM370R2-base, including kernel re-org for multiprocessor but not the actual multiprocessor support ... for my internal CSC/VM distribution. In 1975, I move to a VM370R3-base and also add the actual multiprocessor support, initially for the US branch office sales&marketing support HONE complex (all US HONE datacenters had been consolidated in Palo Alto; trivia: when FACEBOOK 1st moves into Silicon Valley, it was into a new bldg built next door to the former US HONE datacenter). The initial consolidated US HONE added single-system image, load-balancing and fall-over for the largest IBM single-system-image, shared-DASD complex. Then with multiprocessor support, HONE was able to add a 2nd CPU to each system. After the Cal. earthquake, a 2nd US HONE datacenter complex was deployed in Dallas and then a 3rd in Boulder (and HONE clones were sprouting up all over the world, as well as VMIC online systems appearing in regions and larger branches).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Future System, Liquid Cooling, Clone 370s Date: 30 Dec, 2024 Blog: Facebooka reverse scenario: the IBM 3081 had such a large number of circuits that packaging it into a reasonable volume required TCMs, which then required liquid cooling
... aka Amdahl left IBM after ACS/360 was killed and before FS ... which was completely different from 370 and was going to completely replace it (internal politics was killing off 370 projects during FS; the claim is that the lack of new 370s during the period gave the clone 370 makers, including Amdahl, their market foothold). When FS imploded, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the q&d 3033 and 3081 efforts in parallel.
The initial aggregate MIPS of the 2-CPU 3081D was less than the single processor Amdahl ... they doubled the size of the 3081 processor caches for the 2-CPU 3081K to make the aggregate MIPS about the same as the single processor Amdahl ... although IBM docs were that MVS 2-CPU support(/overhead) only got 1.2-1.5 times the throughput of a single processor (aka MVS on a 3081K getting only about .6-.75 times the throughput of MVS on an Amdahl single processor).
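Quick arithmetic on that claim (my numbers, just restating it):

# if the 2-CPU 3081K's aggregate MIPS roughly equals a single-processor
# Amdahl, each 3081K CPU is roughly half an Amdahl; MVS getting only
# 1.2-1.5 times one CPU out of two means roughly 60-75% of the Amdahl
for mp_factor in (1.2, 1.5):
    print(mp_factor, mp_factor / 2.0)   # fraction of the Amdahl single-processor MVS throughput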
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual Storage Date: 30 Dec, 2024 Blog: Facebookre:
Virtual memory for all 370s, because MVT couldn't get enough work in progress in 1mbyte real memory to keep a 165 busy and justified. Then the 165 group was complaining that if they had to retrofit the full 370 virtual memory architecture, the virtual storage announce would have to slip six months ... eventually it was decided to retrench to the 165 subset, and all the other models (that had already implemented the full architecture) had to drop back to the 165-subset ... and all software that implemented use of the full architecture had to be redone for the 165-subset (the retrofit of virtual memory for the 155 & 165 became the 155-II and 165-II).
In the mid-70s I was pontificating that systems were getting faster, faster than disks were getting faster. In the early 80s, I wrote a tome that since 360 announce, the relative system throughput of disks had declined by an order of magnitude (systems got 40-50 times faster while disks only got 3-5 times faster). A GPD/disk executive took exception and directed the division performance group to refute my claim. After a couple weeks they came back and essentially said I had slightly understated the issue. The analysis was then redone for a SHARE talk on configuring DASD for improved throughput (SHARE 63, 16Aug1984, B874).
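The "order of magnitude" claim in numbers:

# systems 40-50x faster while disks only 3-5x faster -> relative disk
# throughput falls by roughly a factor of ten
for sys_x, disk_x in ((40, 5), (50, 3)):
    print(sys_x / disk_x)    # 8.0 ... ~16.7, i.e. roughly an order of magnitude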
Currently, cache-miss/memory latency, when measured in count of processor cycles, is similar to 60s disk latency when measured in count of 60s processor cycles (memory is the new disk). Somewhat analogous to 60s multi-tasking, where a process can execute while other tasks are waiting for disk I/O, hardware is able to execute instructions while other instructions are waiting on memory ... aka out-of-order execution, branch prediction, speculative execution, multithreading (simulating multiprocessor on a single processor), etc.
Trivia: shortly after joining IBM, I got sucked into helping a project
to multithread a 370/195. The 195 had out-of-order execution but no
branch prediction or speculative execution ... so conditional branches
drained the pipeline and most codes only ran at half the 195's rated
throughput. Multithreading, simulating two processors each running at
half throughput, could keep the 370/195 at full throughput ... modulo
the MVT (360/65MP and later MVS) multiprocessor throughput only
getting 1.2-1.5 times a single processor (not twice). However with the
decision to add virtual memory to all 370s, it was decided it would be
too difficult to retrofit virtual memory to the 370/195 and new 195
work was aborted. Multithreading, from the discussion of IBM killing
Amdahl's ACS/360:
https://people.computing.clemson.edu/~mark/acs_end.html
Sidebar: Multithreading
In summer 1968, Ed Sussenguth investigated making the ACS/360 into a
multithreaded design by adding a second instruction counter and a
second set of registers to the simulator. Instructions were tagged
with an additional "red/blue" bit to designate the instruction stream
and register set; and, as was expected, the utilization of the
functional units increased since more independent instructions were
available.
IBM patents and disclosures on multithreading include:
US Patent 3,728,692, J.W. Fennel, Jr., "Instruction selection in a
two-program counter instruction unit," filed August 1971, and issued
April 1973.
US Patent 3,771,138, J.O. Celtruda, et al., "Apparatus and method for
serializing instructions from two independent instruction streams,"
filed August 1971, and issued November 1973. [Note that John Earle is
one of the inventors listed on the '138.]
"Multiple instruction stream uniprocessor," IBM Technical Disclosure
Bulletin, January 1976, 2pp. [for S/370]
... snip ...
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track posts
https://www.garlic.com/~lynn/submain.html#dasd
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
a couple posts mentioning SHARE B874 and also multithreading
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Innovation Date: 31 Dec, 2024 Blog: FacebookMay2008 at Berkeley (a year after he disappeared on a sailing trip), there was a gathering to celebrate Jim Gray. Part of that celebration involved acknowledging Jim Gray as the father of (modern) financial dataprocessing (including enabling electronic payment transactions). Jim's formalizing of DBMS ACID properties and transaction semantics provided the basis that was crucial in allowing financial auditors to move from requiring paper ledgers to trusting computer operations (gone 404, but lives on at the wayback machine)
I had worked with Jim Gray and Vera Watson on the original sql/relational DBMS ("System/R", developed on VM370/CMS 370/145) in the late 70s and very early 80s at San Jose Research, before he leaves for Tandem in fall of 1980. Did technology transfer (under the "radar" while the company was preoccupied with the IMS-followon "EAGLE") to Endicott for SQL/DS. Then after "EAGLE" implodes, there was a request for how fast System/R could be ported to MVS, eventually released as DB2 (originally for decision support only).
original sql/relational System/R
https://www.garlic.com/~lynn/submain.html#systemr
Some of the MIT CTSS/7094 people went to 545 tech sq for Project MAC on the 5th flr to do MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines (added virtual memory to a 360/40 and did the virtual machine CP40/CMS; when the 360/67, standard with virtual memory, became available, CP40/CMS morphs into CP67/CMS; later, after the decision to add virtual memory to all 370s, CP67/CMS morphs into VM370/CMS). Also did the CP67-based science center wide-area network that morphs into the corporate internal network, larger than the arpanet/internet from the beginning until sometime mid/late 80s (when the internal network was forced to convert to SNA/VTAM).
Science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
CTSS RUNOFF was redone for CMS as "SCRIPT", and after GML was invented at the science center in 1969, GML tag processing was added to SCRIPT (which later morphs into SGML, HTML, XML, etc). CSC ported APL\360 to CP67/CMS (eliminating the APL\360 timesharing support), memory management was redone for the demand-page environment (and workspace sizes increased from 16kbytes to large virtual memory), and an API was added for system services like file I/O ... for CMS\APL, in total enabling lots of real-world applications. After the 23Jun1969 unbundling announcement, CP67/CMS HONE systems were created for online branch office services, deploying lots of CMS\APL-based sales&marketing support applications (later moving to VM370/CMS and APL\CMS, with HONE systems sprouting up all over the world).
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE and APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dataprocessing Innovation Date: 31 Dec, 2024 Blog: Facebook
re:
trivia: some early VS1 PCO effort was part of attempts to block/kill off VM370/CMS, claiming much better performance from a PCO "benchmark" simulation ... which the CMS group had to counter with real benchmarks. When PCO finally had real running code, it turned out PCO performance was in no way related to the simulation numbers. Also, before the PCO announcement, somebody pointed out that "PCO" (personal computing option) was the same as a political party in France ... and it was quickly renamed VS/PC.
after FS implodes
http://www.jfsowa.com/computer/memo125.htm
there is a mad rush to get stuff back into the 370 product pipelines,
including kicking off the quick&dirty 3033&3081 in parallel. I'm
conned into helping Endicott with the ECPS microcode assist for
138/148 (also used for 4300) ... initially doing analysis to find the
6kbytes of most-executed kernel 370 instruction paths for redoing in
microcode with a 10:1 speedup; initial analysis in this archived post
(6kbytes accounting for 79.55% of kernel execution)
https://www.garlic.com/~lynn/94.html#21
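The flavor of that analysis (rank the kernel paths by execution time
per byte, then take them in order until the 6kbyte microcode budget is
used up) can be sketched roughly as below; the path names, sizes and
percentages here are made up purely for illustration, the real
measurements are in the archived post above.

# hedged sketch of hot-path selection under a fixed microcode budget
BUDGET = 6 * 1024  # bytes of microcode space available

# (path name, 370 code size in bytes, % of kernel execution) -- hypothetical
paths = [
    ("dispatch",      700, 20.0),
    ("page-fault",    900, 15.5),
    ("ccw-translate", 1400, 14.0),
    ("free-storage",  600, 12.0),
    ("vio-interrupt", 1100, 10.0),
    ("privop-sim",    1300,  7.0),
    ("spool",         2000,  3.0),
]

# best return per byte of microcode: sort by exec% / size
paths.sort(key=lambda p: p[2] / p[1], reverse=True)

used, covered, chosen = 0, 0.0, []
for name, size, pct in paths:
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += pct

print(f"chosen paths: {chosen}")
print(f"{used} bytes of 370 code, {covered:.1f}% of kernel execution")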
We then showed VS1/VSE running faster under VM370 than on the real machine ... and Endicott tried to get corporate to allow VM370 to be preinstalled on every machine shipped ... but corporate wouldn't let them (in part because the head of POK had recently convinced corporate to kill the VM370 product, close the development group and transfer everybody to POK for MVS/XA; Endicott manages to save the VM370 product for the mid-range, but has to recreate a development group from scratch).
I was also asked to help with a 16-cpu 370 multiprocessor (and we rope the 3033 processor engineers into working on it in their spare time, a lot more interesting than remapping 168-3 logic to 20% faster chips), which everybody thought was great until somebody tells the head of POK that it could be decades before the POK favorite-son operating system (MVS) had (effective) 16-processor support (MVS documentation at the time said 2-CPU support was only 1.2-1.5 times the throughput of a single processor ... aka the MVS multiprocessor overhead would scale non-linearly as processors increased ... and POK doesn't ship a 16-cpu machine until after the turn of the century). Note: in the original morph of CP67->VM370, lots of features were simplified or dropped, including multiprocessor support. I then put multiprocessor support back into my (internal) VM370R3-based CSC/VM, originally for the branch office, online sales&marketing support HONE complexes, so they could add a 2nd processor to each system, with 2-CPU getting twice the throughput of 1-CPU (with the help of some cache affinity work).
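A rough way to see the non-linear scaling argument (the 1.2-1.5x 2-CPU
figure is from the MVS documentation mentioned above; the power-law
extrapolation is purely an assumed model for illustration, not how
anyone measured it):

import math

# assumed model: fit throughput(n) = n ** alpha so that throughput(2)
# matches the documented 2-CPU factor, then extrapolate to more CPUs
def throughput(n_cpus, two_cpu_factor):
    alpha = math.log2(two_cpu_factor)   # throughput(2) == two_cpu_factor
    return n_cpus ** alpha

for f in (1.2, 1.5):
    print(f"2-CPU factor {f:.1f}: "
          f"4 CPUs -> {throughput(4, f):.2f}x, "
          f"16 CPUs -> {throughput(16, f):.2f}x")

# versus the VM370R3-based CSC/VM 2-CPU result mentioned above: ~2x one CPU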
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
360&370 microcode posts
https://www.garlic.com/~lynn/submain.html#mcode
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
--
virtualization experience starting Jan1968, online at home since Mar1970