List of Archived Posts

2023 Newsgroup Postings (08/07 - 10/06)

3270
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
HASP, JES, MVT, 370 Virtual Memory, VS2
Boyd and OODA-loop
HASP, JES, MVT, 370 Virtual Memory, VS2
HASP, JES, MVT, 370 Virtual Memory, VS2
IBM Storage photo album
Tymshare
DASD, Channel and I/O long winded trivia
Tymshare
Tymshare
Maneuver Warfare as a Tradition. A Blast from the Past
Copyright Software
Copyright Software
Copyright Software
Maneuver Warfare as a Tradition. A Blast from the Past
A U.N. Plan to Stop Corporate Tax Abuse
Copyright Software
Copyright Software
Copyright Software
Copyright Software
Copyright Software
EBCDIC "Commputer Goof"
EBCDIC "Commputer Goof"
Some IBM/PC History
Punch Cards
Copyright Software
Copyright Software
Apple Versus IBM
3081 TCMs
3081 TCMs
Copyright Software
IBM 360/67
Russian Democracy
Next-Gen Autopilot Puts A Robot At The Controls
IBM 360/67
Boyd OODA-loop
IBM 360/67
Boyd OODA-loop
Systems Network Architecture
Systems Network Architecture
IBM 360/65 & 360/67 Multiprocessors
IBM 360/65 & 360/67 Multiprocessors
IBM 360/65 & 360/67 Multiprocessors
Boyd OODA at Linkedin
IBM 360/67
VM370/CMS Shared Segments
VM370/CMS Shared Segments
VM370/CMS Shared Segments
VM370/CMS Shared Segments
VM370/CMS Shared Segments
VM370/CMS Shared Segments
VM370/CMS Shared Segments
IBM Jargon Dictionary
New Poll Shows 81% of Californians Support SB 362
Architecture, was ISA
USENET, the OG social network, rises again like a text-only phoenix
801/RISC and Mid-range
Since 9/11, US Has Spent $21 Trillion on Militarism at Home and Abroad
Early Internet
IBM Jargon
Early Internet
Computing Career
PDP-6 Architecture, was ISA
HASP, JES, MVT, 370 Virtual Memory, VS2
Wonking Out: Is the Fiscal Picture Getting Better or Worse? Yes
The IBM System/360 Revolution
The IBM System/360 Revolution
The IBM System/360 Revolution
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
microcomputers, minicomputers, mainframes, supercomputers
Storage Management
Saving mainframe (EBCDIC) files
Storage Management
memory speeds, Solving the Floating-Point Conundrum
The Pentagon Gets More Money
Relational RDBMS
CP/67, VM/370, VM/SP, VM/XA
CP/67, VM/370, VM/SP, VM/XA
CP/67, VM/370, VM/SP, VM/XA
CP/67, VM/370, VM/SP, VM/XA
CP/67, VM/370, VM/SP, VM/XA
IBM DASD 3380
lotsa money and data sizes, Solving the Floating-Point Conundrum
CP/67, VM/370, VM/SP, VM/XA
A new Supreme Court case could trigger a second Great Depression
Fracking Fallout: Is America's Drinking Water Safe?
My Gun Has A Plane
Mainframe Tapes
Mainframe Tapes
CP/67, VM/370, VM/SP, VM/XA
Mobbed Up
CP/67, VM/370, VM/SP, VM/XA
3090 & 3092
What Will the US M1A1 Abrams Tanks Bring to the Ukrainian Battlefield?
A new Supreme Court case could trigger a second Great Depression
DataTree, UniTree, Mesa Archival
DataTree, UniTree, Mesa Archival
John Boyd and IBM Wild Ducks
DataTree, UniTree, Mesa Archival
Internet Host Table, 4-Feb-88

3270

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3270
Date: 07 Aug, 2023
Blog: Facebook
When 3274/3278 (controller & terminal) came out .... it was much worse than 3272/3277 and a letter was written to the 3278 product admin. complaining. He eventually responded that 3278 wasn't targeted for interactive computing ... but for "data entry" (aka electronic keypunch). This was during the period of studies of interactive computing showing quarter second response improved productivity. The 3272/3277 had .086sec hardware response .... which with .164sec system response ... gives quarter second. For the 3278, they moved a lot of electronics back into the 3274 (reducing 3278 manufacturing cost) ... also driving up the coax protocol chatter ... and hardware response was .3-.5sec (based on data stream and coax chatter) ... precluding quarter second response. Trivia: MVS/TSO would never notice, since it was rare for MVS/TSO to have even one second system response (my internally distributed systems tended to have .11sec system interactive response).
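
a little illustrative arithmetic (python sketch; the terminal hardware and system response figures are the ones cited above):

# illustrative: perceived response = terminal hardware response + system response
QUARTER_SEC = 0.25
terminals = {"3272/3277": 0.086, "3274/3278 (best)": 0.3, "3274/3278 (worst)": 0.5}
systems = {"internal VM370": 0.11, "typical MVS/TSO": 1.0}
for term, hw in terminals.items():
    for name, sw in systems.items():
        total = hw + sw
        verdict = "meets" if total <= QUARTER_SEC else "misses"
        print(f"{term} + {name}: {total:.3f}sec ({verdict} quarter second)")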

Later with the IBM/PC, a 3277 emulation card had 3-4 times the upload/download throughput of a 3278 emulation card (because of the significant difference in protocol chatter latency & overhead).

The 3277 also had enough electronics that it was possible to wire a large Tektronix graphics screen into the side of the terminal ("3277GA") ... sort of an inexpensive 2250.

some recent posts mentioning 3272/3277 interactive computing
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 07 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia

My wife did a stint in POK responsible for loosely-coupled architecture (didn't remain long, in part because of repeated battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation). She also failed to get some throughput improvements for trouter/3088.

peer-coupled shared data (loosely-coupled) architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

trivia: SJR had 4341 (up to eight) cluster support using (non-SNA) 3088 ... with cluster coordination operations taking sub-second elapsed time. To release it, the communication group said that it had to be done using VTAM/SNA ... that implementation resulted in cluster coordination operations increasing to upwards of 30sec elapsed time.

posts mentioning communication group fiercely fighting off client/server and distributed computing (in part trying to preserve their dumb terminal paradigm)
https://www.garlic.com/~lynn/subnetwork.html#terminal

some past posts mentioning trouter/3088:
https://www.garlic.com/~lynn/2015e.html#47 GRS Control Unit ( Was IBM mainframe operations in the 80s)
https://www.garlic.com/~lynn/2006j.html#31 virtual memory
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 08 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#1 DASD, Channel and I/O long winded trivia

z196 had a "Peak I/O" benchmark getting 2M IOPS using 104 FICON (running over 104 FCS) ... about the same time an FCS was announced for the E5-2600 server blade (commonly used in cloud megadatacenters that have half million or more E5-2600 blades) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON).

At the time, a max configured z196 was $30M and 50BIPS (industry benchmark, # of iterations compared to 158-3 assumed to be one MIPS) or $600,000/BIPS. By comparison, IBM had a base list price of $1815 for an E5-2600 blade and the (same) benchmark was 500BIPS (ten times max configured z196) or $3.63/BIPS.

Note large cloud megadatacenters have been claiming for a couple decades that they assemble their own blade servers for 1/3rd the cost of brand name servers, or $1.21/BIPS. Then, about the same time that there was press about server chip and component vendors shipping at least half their product directly to cloud megadatacenters, IBM sells off its "server" business.

A large cloud operation will have at least a dozen (or scores of) megadatacenters around the world, each with at least half million blade servers (millions of processor cores) ... a megadatacenter totaling something like 250 million BIPS, or the equivalent of five million max configured z196 systems.
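
the price/performance arithmetic above, worked through (python sketch; all figures from the text):

# z196 vs E5-2600 blade price/performance (figures from the text)
z196_price, z196_bips = 30_000_000, 50      # max configured z196
blade_price, blade_bips = 1815, 500         # IBM base list price, E5-2600 blade

print(z196_price / z196_bips)               # $600,000/BIPS
print(blade_price / blade_bips)             # $3.63/BIPS
print(blade_price / 3 / blade_bips)         # ~$1.21/BIPS (assemble-your-own at 1/3rd cost)

mdc_bips = 500_000 * blade_bips             # half million blades per megadatacenter
print(mdc_bips)                             # 250 million BIPS
print(mdc_bips / z196_bips)                 # ~5 million max configured z196 equivalents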

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

fcs &/or ficon posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 08 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#1 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#2 DASD, Channel and I/O long winded trivia

... repost .... I had also written a test for channel/controller speed. VM370 had a 3330/3350/2305 page format that had a "dummy short block" between 4k page blocks. The standard channel program had seek followed by search record, read/write 4k ... possibly chained to search, read/write 4k (trying to maximize the number of 4k transfers in a single rotation). For records on the same cylinder, but different track, in the same rotation, had to add a seek track. The channel and/or controller time to process the embedded seek could exceed the rotation time for the dummy block ... causing an additional revolution. The test would format a cylinder with the maximum possible dummy block size (between page records) and then start reducing to the minimum 50byte size ... checking to see if an additional rotation was required. 158 (also 303x channel director and 3081) had the slowest channel program processing. I also got several customers with non-IBM processors, channels, and disk controllers to run the tests ... so had combinations of IBM and non-IBM 370s with IBM and non-IBM disk controllers.
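
rough sketch of the test logic (python; format_and_time is a hypothetical stand-in for formatting the cylinder and timing the real channel program on real hardware):

REV_TIME = 1.0 / 60.0    # one 3330 rotation (60 revs/sec)

def find_min_dummy_block(format_and_time, max_size, min_size=50, step=10):
    # format with the largest dummy blocks, then shrink toward the 50-byte
    # minimum, watching for the embedded seek to cost an extra revolution
    baseline = format_and_time(max_size)
    size = max_size
    while size - step >= min_size:
        size -= step
        if format_and_time(size) > baseline + REV_TIME / 2:
            return size + step     # previous size was the smallest that kept up
    return min_size                # channel/controller kept up even at minimum size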

posts mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some past posts mentioning (disk) "dummy blocks"
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2011e.html#75 I'd forgotten what a 2305 looked like
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2010.html#49 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2002b.html#17 index searching
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"

--
virtualization experience starting Jan1968, online at home since Mar1970

HASP, JES, MVT, 370 Virtual Memory, VS2

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HASP, JES, MVT, 370 Virtual Memory, VS2
Date: 08 Aug, 2023
Blog: Facebook
... in another life, my wife was in gburg jes group (reporting to crabby) until she was con'ed into going to POK to be responsible for loosely-coupled architecture (and doing peer-coupled shared data architecture).

periodically reposted: about a decade ago, I was asked to track down the decision to make all 370s virtual memory .... basically MVT storage management was so bad that regions had to be specified four times larger than used ... as a result, a typical 1mbyte 370/165 would only run four regions concurrently ... insufficient to keep the system busy and justified ... mapping MVT to a 16mbyte virtual address space (for VS2/SVS) would allow the number of concurrent regions to be increased by a factor of four (with little or no paging) ... quite similar to running MVT in a CP67 16mbyte virtual machine. pieces of email exchange (in archived post)
https://www.garlic.com/~lynn/2011d.html#73

... the post also mentions that at the univ, I crafted terminal support and an editor (with CMS edit syntax) into HASP to get a CRJE that I thot was better than IBM CRJE and TSO.

I would drop by Ludlow doing the SVS prototype on a 360/67, offshift. The biggest piece of code was EXCP ... almost exactly the same problem as CP67 ... in fact Ludlow borrowed CCWTRANS from CP67 and crafted it into EXCP (aka applications called EXCP with channel programs ... which now had virtual addresses ... EXCP had to make a copy of the channel programs, substituting real addresses for virtual).
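
much-simplified sketch of what that CCWTRANS-style processing involves (python; names are illustrative, and the real code also handled command/data chaining, TICs, page fixing, etc.):

def translate_channel_program(ccws, virt_to_real, pin_page):
    # applications hand EXCP channel programs containing *virtual* addresses;
    # the channel only works with real addresses, so build a "shadow" copy
    # with each CCW's data address translated (and its page pinned for the I/O)
    shadow = []
    for op, virt_addr, count, flags in ccws:
        pin_page(virt_addr)                  # page must stay resident during I/O
        shadow.append((op, virt_to_real(virt_addr), count, flags))
    return shadow                            # the copy given to the channel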

HASP, ASP, JES, NJE/NJI, etc posts
https://www.garlic.com/~lynn/submain.html#hasp
loosely-coupled and peer-coupled shared data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

some recent post refs 370 virtual memory
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

Boyd and OODA-loop

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boyd and OODA-loop
Date: 09 Aug, 2023
Blog: Linkedin
Boyd and OODA-loop
https://www.linkedin.com/posts/clintpope_ooda-mindset-mentalhealth-activity-7094897636498948096-S5xE/

... in briefings Boyd would talk about constantly observing from every possible facet (as countermeasures to numerous kinds of biases) ... also references to observation, orientation, decisions, and actions are constantly occurring asynchronous operations (not strictly serialized operations).

... when I was first introduced to Boyd, I felt a natural affinity from the way I programmed computers. There is an anecdote from after the turn of the century about (Microsoft) Gates complaining to Intel about the transition to multi-core chips (from increasingly faster single cores) because it was too hard to write programs where multiple things went on independently in asynchronously operating cores.

Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
(loosely-coupled/cluster) peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
(loosely-coupled/cluster-scale-up) HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

posts mentioning the Gates/Intel anecdote
https://www.garlic.com/~lynn/2023d.html#49 Computer Speed Gains Erased By Modern Software
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#51 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017e.html#52 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016c.html#60 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2016c.html#56 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2014m.html#118 By the time we get to 'O' in OODA
https://www.garlic.com/~lynn/2014d.html#85 Parallel programming may not be so daunting
https://www.garlic.com/~lynn/2013.html#48 New HD
https://www.garlic.com/~lynn/2012j.html#44 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012e.html#15 Why do people say "the soda loop is often depicted as a simple loop"?
https://www.garlic.com/~lynn/2008f.html#42 Panic in Multicore Land
https://www.garlic.com/~lynn/2007m.html#2 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies

--
virtualization experience starting Jan1968, online at home since Mar1970

HASP, JES, MVT, 370 Virtual Memory, VS2

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HASP, JES, MVT, 370 Virtual Memory, VS2
Date: 09 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2

The 23Jun1969 unbundling announcement included starting to charge for (application) software (IBM made the case that kernel software was still free). Then in the early/mid-70s, there was the Future System program, completely different and intended to completely replace 370 (internal politics was killing off 370 efforts, and the lack of new 370 stuff is credited with giving the clone 370 makers their market foothold). When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x & 3081 efforts in parallel. some more FS details
http://www.jfsowa.com/computer/memo125.htm

The rise of the clone 370 makers also contributes to changing the decision about not charging for kernel software; first incremental/new features were charged for, transitioning to all kernel software charged for by the early 80s. Then start the OCO-wars (object code only).

Note TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
had started offering their CMS-based online computer conferencing free to SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
in Aug1976, archives here
http://vm.marist.edu/~vmshare

where some of the OCO-war discussions can be found. Lots of this also discussed at the monthly user group "BAYBUNCH" meetings hosted by (Stanford) SLAC.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning vmshare & OCO-wars
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017.html#59 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015d.html#59 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#35 BBC News - Microsoft fixes '19-year-old' bug with emergency patch
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#45 the nonsuckage of source, was MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013m.html#55 'Free Unix!': The world-changing proclamation made 30 years ago today
https://www.garlic.com/~lynn/2013l.html#66 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2012j.html#31 How smart do you need to be to be really good with Assembler?
https://www.garlic.com/~lynn/2012j.html#30 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#20 Operating System, what is it?
https://www.garlic.com/~lynn/2011o.html#33 Data Areas?
https://www.garlic.com/~lynn/2007u.html#8 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007u.html#6 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007k.html#15 Data Areas Manuals to be dropped
https://www.garlic.com/~lynn/2007f.html#67 The Perfect Computer - 36 bits?

--
virtualization experience starting Jan1968, online at home since Mar1970

HASP, JES, MVT, 370 Virtual Memory, VS2

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: HASP, JES, MVT, 370 Virtual Memory, VS2
Date: 09 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2

I took a two credit hr intro to fortran/computers and at the end of the semester was hired to redo 1401 MPIO for the 360/30. The univ had been sold a 360/67 for tss/360 to replace the 709/1401. Temporarily, pending the 360/67, the 1401 was replaced with a 360/30 ... which had 1401 emulation and could continue to run 1401 MPIO ... I guess I was part of getting 360 experience. The univ. shutdown the datacenter on weekends and they let me have the whole place dedicated (48hrs w/o sleep, making monday classes a little hard). They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks I had a 2000 card assembler program (an assembler option generated either 1) stand alone with BPS loader or 2) under OS/360 with system services macros).

I quickly learned when I came in sat. morning to first disassemble and clean the 2540 reader/punch, clean the tape drives, and clean the printer. Sometimes when I came in sat. morning, production had finished early and everything was powered off. Sometimes the 360/30 wouldn't complete power-on, and with lots of trial and error ... learned to put all the control units into CE mode, power on the 360/30, individually power on the control units, and return the control units to normal. Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as a 360/65 with os/360).

Student fortran jobs took under a second on the 709 (IBSYS tape->tape); initially on the 360/65, they ran more than a minute. I install HASP and the time is cut in half. I start modifying SYSGEN STAGE2 so I could run it in the production job stream ... then reorder things to place datasets and PDS members to optimize DASD seeks and multi-track searches ... cutting the time another 2/3rds to 12.9sec. Sometimes heavy PTF activity would start affecting the careful placement, and as elapsed time crept towards 20secs, I would have to redo the SYSGEN (to restore placement). Student jobs never got better than the 709 until I install Univ. of Waterloo WATFOR.

Mid-70s, I would pontificate that CKD was a 60s technology trade-off, offloading functions (like multi-track search) to abundant I/O resources because of scarce memory resources ... but that by the mid-70s the trade-off was starting to flip, and by the early 80s I was also pontificating that between the late 60s and the early 80s, relative system DASD throughput had declined by an order of magnitude (i.e. disks got 3-5 times faster while system processing got 40-50 times faster). Some disk (GPD) division executive took exception and assigned the division performance group to refute my claims. However, after a few weeks they came back and said that I had understated the situation. They eventually respun this as a SHARE presentation about optimizing DASD configuration for improving system throughput, 16Aug1984, SHARE 63, B874.
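
the order-of-magnitude claim as arithmetic (python sketch using midpoints of the figures above):

disk_speedup = 4       # disks got 3-5 times faster, late 60s to early 80s
system_speedup = 45    # system processing got 40-50 times faster
print(disk_speedup / system_speedup)   # ~0.09 ... an order of magnitude relative decline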

This also showed up in some competition with IMS (when I was working with Jim Gray and Vera Watson) and System/R (the original SQL/relational implementation). IMS claimed that System/R doubled disk space requirements and significantly increased I/O (for indexes). The counter was that IMS required significant human resources to maintain its data structures. However, by the early 80s: 1) disk space price was significantly dropping, 2) real memory had significantly increased, allowing index caching that reduced physical I/O, and 3) IMS required more admin resources, which were getting scarcer and more expensive. trivia: we managed to do tech transfer ("under the radar" while the company was preoccupied with the next new DBMS, "EAGLE") to Endicott for SQL/DS. Then when EAGLE implodes, there was a request for how fast System/R could be migrated to MVS ... which is eventually announced as DB2 (originally for decision support *only*).

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

A recent observation is that (cache miss) latency to memory, when measured in count of processor cycles, is comparable to 60s disk I/O latency when measured in count of 60s processor cycles (real memory is the new disk).

trivia: late 70s, I was brought into a national grocery chain datacenter that had a large VS2 loosely-coupled multi-system operation ... that was experiencing severe throughput problems (and had already had most of the usual IBM experts brought through). They had a classroom with tables completely covered with stacks of activity monitoring paper (from all the systems).

After about 30 mins, I started to notice that manually summing the activity of a specific (shared) DASD across all the systems showed it peaking at 7/sec. I asked what it was. They said it was the 3330 that had the large PDS library (3cyl PDS directory) of all the store & controller applications. A little manual calculation showed the avg PDS member lookup was taking an avg of two multi-track searches, one for 19/60sec (.317) and one for 9.5/60sec (.158), or .475sec, plus the seek/load of the actual member. Basically it was limited to an aggregate of slightly less than two application loads/sec for all systems and all stores in the country.

Note the multi-track searches not only locked out all other activity to the drive ... but also made the controller busy, locking out activity to all other drives on the same controller (for all systems in the complex). So we split the PDS into smaller pieces across multiple drives and created a duplicated set of non-shared private drives for each system.
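
the member-lookup arithmetic worked through (python sketch; figures from the text):

REV = 1.0 / 60.0                  # 3330 spins at 60 revs/sec
full_cyl_search = 19 * REV        # search full 19-track cylinder: ~.317sec
half_cyl_search = 9.5 * REV       # avg search of the final cylinder: ~.158sec
lookup = full_cyl_search + half_cyl_search
print(lookup)                     # ~.475sec per member lookup (plus seek/load)
print(1 / lookup)                 # ~2.1 ... slightly less than two loads/sec aggregate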

DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

some recent posts mentioning 709, 1401 mpio, student fortran 12.9sec, watfor
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

some recent posts mentioning share 63, b874
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Storage photo album

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Storage photo album
Date: 10 Aug, 2023
Blog: Facebook
re:
https://www.ibm.com/ibm/history/exhibits/storage/storage_photo.html

I seem to remember the storage division (GPD/Adstar) web pages had a lot more ... but that was before IBM got rid of it.

Note: Late 80s, a disk senior engineer got a talk scheduled at an annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing mainframe datacenters to more distributed computing friendly platforms. They had come up with a number of solutions that would address the problem, but they were constantly being vetoed by the communication group. The issue was the communication group had a stranglehold on mainframes with their corporate strategic ownership of everything that crossed the datacenter walls ... and were fiercely fighting off client/server and distributed computing. One of the Adstar executives had a partial work-around, investing in distributed computing startups that would use IBM disks (he would periodically ask us to drop by his investments to see if we could offer any help).

Just a couple years later, IBM has one of the largest losses in the history of US companies and was being reorg'ed into the 13 "baby blues" in preparation for breaking up the company ... old references:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, many of these contracts would be between different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. Before we get started, a new CEO is brought in and reverses the breakup ... but not long later, the storage division is gone anyway.

getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
communication group fighting to preserve dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

2001 A History of IBM "firsts" in storage technology (gone 404, but still lives on at the wayback machine; doesn't have the images, but the URL-associated text is there)
https://web.archive.org/web/20010809021812/www.storage.ibm.com/hdd/firsts/index.htm

2003 wayback redirected
https://web.archive.org/web/20030404234346/http://www.storage.ibm.com/hddredirect.html?/firsts/index.htm
IBM Storage Technology has merged with Hitachi Storage to become Hitachi Global Storage Technologies. To visit us and find out more
http://www.hgst.com

... snip ...

2004 & then no more
https://web.archive.org/web/20040130073217/http://www.storage.ibm.com:80/hddredirect.html?/firsts/index.htm

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare
Date: 10 Aug, 2023
Blog: Facebook
I would drop by TYMSHARE periodically and/or see them at the monthly BAYBUNCH meetings at SLAC
https://en.wikipedia.org/wiki/Tymshare
they then made their CMS-based online computer conferencing system available to the (mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
for free starting in Aug1976 ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on the internal network and internal systems (the biggest problem was IBM lawyers concerned that internal employees would be contaminated with information about customers).

On one visit they demo'ed the ADVENTURE game. Somebody had found it on the Stanford SAIL PDP10 system and ported it to VM/CMS. I got a copy and made it available internally (would send source to anybody that could show they got all the points). Shortly, versions with more points appeared, as well as a port to PLI.
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

trivia: they told a story about a TYMSHARE executive finding out that people were playing games on their systems ... and directing that TYMSHARE was for business use and all games had to be removed. He quickly changed his mind when told that game playing had increased to 30% of TYMSHARE revenue.

Most IBM internal systems had "For Business Purposes Only" on the 3270 VM370 login screen; however, IBM San Jose Research had "For Management Approved Uses Only". It played a role when corporate audit said all games had to be removed and we refused.

website topic drift: (Stanford) SLAC (CERN sister institution) had first website in US (on VM370 system)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

MD bought Tymshare ... as part of the buy-out, I was brought into Tymshare to evaluate GNOSIS for the spin-off to Key Logic ... also Tymshare asked me if I could find anybody in IBM that would make Engelbart an offer.

Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

... also If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

commercial, virtual machine-based timesharing posts
https://www.garlic.com/~lynn/submain.html#timeshare

some posts mentioning Tymshare, gnosis, & engelbart
https://www.garlic.com/~lynn/2018f.html#77 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2015g.html#43 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014d.html#44 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2013d.html#55 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2011c.html#2 Other early NSFNET backbone
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010q.html#63 VMSHARE Archives
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2008s.html#3 New machine code
https://www.garlic.com/~lynn/2008g.html#23 Doug Engelbart's "Mother of All Demos"
https://www.garlic.com/~lynn/2005s.html#12 Flat Query
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/aadsm17.htm#31 Payment system and security conference

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 13 Aug, 2023
Blog: Linkedin
Linkedin
https://www.linkedin.com/pulse/dasd-channel-io-long-winded-trivia-lynn-wheeler
also
https://www.garlic.com/~lynn/2023e.html#3 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#2 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023e.html#1 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#117 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia

As undergraduate in the 60s, the univ hired me responsible for os/360 (univ. had been sold a 360/67 for tss/360 to replace 709/1401, but tss/360 never came to fruition, so it ran as a 360/65). Univ. shutdown the datacenter on weekends and I had the place dedicated, although 48hrs w/o sleep made monday classes hard. Student fortran jobs ran under a second on the 709, but initially ran over a minute on the 360. I install HASP, cutting the time in half. I then redo SYSGEN so it can be run in the job stream, also reordering statements to carefully place datasets and PDS members optimizing arm seek and multi-track search, cutting another 2/3rds to 12.9sec. Never got better than the 709 until I installed (univ waterloo) WATFOR.

Cambridge Science Center came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs; precursor to vm370) ... and I mostly got to play with it on weekends. Over a few months, I rewrote a lot of code ... the OS/360 test ran 322secs stand alone, but in a virtual machine ran 856secs ... CP67 CPU 534secs. I got that down to 113secs (a reduction of 435secs). Then I did a new page replacement algorithm and dynamic adaptive resource management (scheduling) for CMS interactive use, improving throughput and number of users and cutting interactive response. CP67's original I/O was FIFO order and paging was a single 4k transfer per I/O. I implemented ordered arm seek (increasing 2314 throughput) and for paging would chain multiple 4k requests to maximize transfers per revolution (for both 2314 disk and 2301 drum). The 2301 drum was approx. 75 4k transfers/sec; the changes could get it up to nearly channel speed, around 270/sec. Archived post with part of a SHARE presentation on some of the work:
https://www.garlic.com/~lynn/94.html#18
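
rough arithmetic behind the 2301 drum numbers (python sketch; the transfer rate and revolution time are assumed approximations, not from the original post):

PAGE = 4096
XFER_RATE = 1.2e6             # bytes/sec, approx. 2301 transfer rate (assumed)
REV = 0.0172                  # sec per revolution (assumed)
page_xfer = PAGE / XFER_RATE  # ~3.4ms to move one 4k page

# single 4k transfer per I/O: avg half-revolution latency before each page
print(1 / (REV / 2 + page_xfer))   # ~80/sec ... roughly the 75/sec cited
# chained requests ordered to fill whole revolutions: back-to-back transfers
print(1 / page_xfer)               # ~290/sec ... near channel speed (~270 cited)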

some posts on undergraduate work
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare
Date: 13 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#9 Tymshare

Other trivia: TYMNET
https://en.wikipedia.org/wiki/Tymnet

MD acquired TYMNET as part of TYMSHARE purchase ... and then fairly early sold off to BT
https://en.wikipedia.org/wiki/Tymshare#Tymshare_sold_to_McDonnell_Douglas

Then MD "merges" with Boeing (joke MD bought Boeing with Boeing's own money, turning it into financial engineering company)
https://mattstoller.substack.com/p/the-coming-boeing-bailout
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution

The 100yr, 2016 Boeing "century" publication had an article that the "merger" with M/D nearly took down Boeing and might yet still.

... older history, as undergraduate in 60s, I was hired fulltime into small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize investment, including offering services to non-Boeing entities). I thot Renton datacenter possibly largest in world, couple hundred million in IBM 360s, 360/65s arriving faster than could be installed, boxes constantly staged in hallways around machine room. Disaster plan had Renton being replicated up at new 747 plant in Everett (Mt. Rainier heats up and the resulting mud slide takes out Renton). Lots of politics between Renton director and CFO ... who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff).

Posts mentioning working for Boeing CFO and later MD "merges" with Boeing
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX

--
virtualization experience starting Jan1968, online at home since Mar1970

Tymshare

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Tymshare
Date: 13 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023e.html#11 Tymshare

Tymshare Beginnings
https://en.wikipedia.org/wiki/Tymshare#Beginnings
Tymshare initially focussed on the SDS 940 platform, initially running at University of California Berkeley. They received their own leased 940 in mid-1966, running the Berkeley Timesharing System, which had limited time-sharing capability. IBM Stretch programmer Ann Hardy rewrote the time-sharing system to service 32 simultaneous users. By 1969 the company had three locations, 100 staff, and five SDS 940s.[6]

...
It soon became apparent that the SDS 940 could not keep up with the rapid growth of the network. In 1972, Joseph Rinde joined the Tymnet group and began porting the Supervisor code to the 32-bit Interdata 7/32, as the 8/32 was not yet ready. In 1973, the 8/32 became available, but the performance was disappointing, and a crash-effort was made to develop a machine that could run Rinde's Supervisor.

... snip ...

... trivia & topic drift; the univ had 709/1401 and IBM had sold them a 360/67 for tss/360 to replace the 709/1401. Within a year of taking intro to fortran/computers and the 360/67 arriving, the univ. hires me fulltime responsible for os/360 (tss/360 never came to production and the 360/67 was used as a 360/65). Shutdown datacenter on weekends and I had it dedicated (but 48hrs w/o sleep could make monday class hard). I did a lot of os/360 optimization. Student fortran had run under a second on the 709, but was over a minute on the 360/65. I installed HASP and cut the time in half, then optimized disk layout (for arm seek and multi-track search), cutting another 2/3rds to 12.9sec ... never got better until I install WATFOR.

posts mentioning HASP, ASP, JES, NJE/NJI
https://www.garlic.com/~lynn/submain.html#hasp

Then CP67 was installed at the univ (3rd installation after CSC itself and MIT lincoln labs) and I mostly got to play with it in weekend dedicated time. The OS/360 benchmark ran 322secs standalone, 856secs in a virtual machine. After a few months I had cut CP67 overhead CPU from 534secs to 113secs (a reduction of 435secs). There was a TSS/360 IBMer still around, and one weekend he did a simulated 4-user, interactive, fortran edit, compile, and execute benchmark ... and I did the same with CP67/CMS for 35 users on the same hardware ... with better interactive response and throughput. Archived post with part of the SHARE presentation on the OS/360 and CP67 improvement work
https://www.garlic.com/~lynn/94.html#18

Then I did new page replacement and dynamic adaptive resource management (scheduling). Originally CP67 did FIFO I/O queuing and a single 4k page transfer per I/O. I redid it with ordered seek queuing (increasing 2314 disk throughput) and chained multiple page transfers ordered to maximize transfers per revolution (increasing 2314 disk and 2301 drum throughput; the 2301 had been about 75 page transfers/sec, the change allowed it to hit nearly channel transfer speed at 270/sec).
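
sketch of ordered seek queuing replacing FIFO (python; a minimal elevator-style pick, not the actual CP67 code):

def next_request(queue, arm_cyl, direction):
    # queue: pending cylinder numbers; direction: +1 or -1
    # keep sweeping the current direction, reversing only at the end,
    # so total arm travel (the dominant 2314 cost) shrinks vs arrival order
    ahead = [c for c in queue if (c - arm_cyl) * direction >= 0]
    if not ahead:
        direction = -direction
        ahead = queue
    pick = min(ahead, key=lambda c: abs(c - arm_cyl))
    return pick, direction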

CP67 had arrived with 1052 and 2741 support that did dynamic terminal type identification (capable of switching the port scanner). The univ had some number of TTY/ascii terminals ... so I added TTY/ascii support integrated with the dynamic type identification (adding ascii port scanner support; trivia: the MES to add the ascii port scanner arrived in a Heathkit box). I then wanted to do a single phone number for all terminal types, but the official IBM controller had port line speed hard wired (could change the port terminal-type scanner but not the speed), so it didn't work.

That starts a univ. project to do a clone controller: build a mainframe channel interface for an Interdata/3 programmed to simulate the IBM controller ... with the addition that it could do dynamic line speed. This is then upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for port scanners. Interdata (and then Perkin/Elmer) used it, selling clone controllers to IBM mainframe customers (along the way, upgrading to faster Interdatas) ... four of us get written up for (some part of) the IBM mainframe clone controller business.

360 plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

I was then hired into the Boeing CFO office (mentioned previously). When I graduate, I join the IBM Cambridge Science Center. Note in the 60s, two commercial spinoffs of CSC were IDC and NCSS (mentioned in the Tymshare wiki page). CSC had the same size 360/67 (768kbyte memory) as the univ, and I migrate all my CP67 enhancements to the CSC CP67 (including global LRU replacement). The IBM Grenoble Science Center had a 1mbyte 360/67 and modified CP67 to implement a working set dispatcher and "local LRU" replacement (found in the academic literature of the period). Both CSC and Grenoble had similar workloads, but my CP67 with 80 users (and 104 available 4k pages) had better response and throughput than the Grenoble CP67 with 35 users (and 155 available 4k pages). When virtual memory support was added to all IBM 370s, CP67 morphed into VM370 ... and TYMSHARE starts offering VM370 online services.

After transferring to SJR, I did some work with Jim Gray and Vera Watson on the original SQL/relational RDBMS. Jim then left IBM for Tandem. Later Jim contacts me and asks if I can help a Tandem co-worker get his Stanford Phd ... involving global LRU page replacement; the "local LRU" forces from the 60s were lobbying Stanford to not grant a Phd involving global LRU (Jim knew I had a detailed Cambridge/Grenoble local/global comparison for CP67).
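
minimal sketch of the global LRU approximation ("clock") idea (python; illustrative, not the CP67 implementation). One reference bit per real page frame, with the hand sweeping all frames regardless of owning user; "local LRU" would instead restrict the sweep to a single user's frames:

def clock_select(frames, hand):
    # frames: list of dicts with a "referenced" flag; returns (victim, new hand)
    while True:
        frame = frames[hand]
        if frame["referenced"]:
            frame["referenced"] = False      # recently used: give a second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)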

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial (virtual machine) online services
https://www.garlic.com/~lynn/submain.html#online
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
page replacement and working set posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

some posts mentioning CP67 work for ordered seek queuing and chaining multiple 4k page requests in single I/O
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2021e.html#37 Drums
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017e.html#4 TSS/8, was A Whirlwind History of the Computer
https://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2016g.html#29 Computer hard drives have shrunk like crazy over the last 60 years -- here's a look back
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base

--
virtualization experience starting Jan1968, online at home since Mar1970

Maneuver Warfare as a Tradition. A Blast from the Past

From: Lynn Wheeler <lynn@garlic.com>
Subject: Maneuver Warfare as a Tradition. A Blast from the Past
Date: 13 Aug, 2023
Blog: Facebook
Maneuver Warfare as a Tradition. A Blast from the Past
https://tacticalnotebook.substack.com/p/maneuver-warfare-as-a-tradition

In briefings, Boyd would also mention that Guderian instructed Verbal Orders Only for the blitzkrieg ... stressing that officers on the spot were encouraged to make decisions without having to worry about Monday morning quarterbacks questioning what should have been done.

one of my old posts here
https://slightlyeastofnew.com/2019/11/18/creating-agile-leaders/

Impact Of Technology On Military Manpower Requirements (Dec1980)
https://books.google.com/books?id=wKY3AAAAIAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v
top of pg. 104:
# Verbal orders only, convey only general intentions, delegate authority to lowest possible level and give subordinates broad latitude to devise their own means to achieve commander's intent. Subordinates restrict communications to upper echelons to general difficulties and progress, Result: clear, high speed, low volume communications,

... snip ...

post also at Linkedin
https://www.linkedin.com/posts/lynnwheeler_maneuver-warfare-as-a-tradition-activity-7096921373977022464-XLku/

Boyd posts & URLs
https://www.garlic.com/~lynn/subboyd.html

past posts mentioning Verbal Orders Only
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2020.html#12 Boyd: The Fighter Pilot Who Loathed Lean?
https://www.garlic.com/~lynn/2017i.html#5 Mission Command: The Who, What, Where, When and Why An Anthology
https://www.garlic.com/~lynn/2017i.html#2 Mission Command: The Who, What, Where, When and Why An Anthology
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2016g.html#13 Rogue sysadmins the target of Microsoft's new 'Shielded VM' security
https://www.garlic.com/~lynn/2016d.html#28 Manazir: Networked Systems Are The Future Of 5th-Generation Warfare, Training
https://www.garlic.com/~lynn/2016d.html#18 What Would Be Your Ultimate Computer?
https://www.garlic.com/~lynn/2015d.html#19 Where to Flatten the Officer Corps
https://www.garlic.com/~lynn/2015.html#80 Here's how a retired submarine captain would save IBM
https://www.garlic.com/~lynn/2014f.html#46 The Pentagon Wars
https://www.garlic.com/~lynn/2014.html#16 Command Culture
https://www.garlic.com/~lynn/2013k.html#48 John Boyd's Art of War
https://www.garlic.com/~lynn/2013e.html#81 How Criticizing in Private Undermines Your Team - Harvard Business Review
https://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers
https://www.garlic.com/~lynn/2012k.html#7 Is there a connection between your strategic and tactical assertions?
https://www.garlic.com/~lynn/2012i.html#50 Is there a connection between your strategic and tactical assertions?
https://www.garlic.com/~lynn/2012h.html#63 Is this Boyd's fundamental postulate, 'to improve our capacity for independent action'?
https://www.garlic.com/~lynn/2012g.html#84 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012f.html#2 Did they apply Boyd's concepts?
https://www.garlic.com/~lynn/2012c.html#51 How would you succinctly desribe maneuver warfare?
https://www.garlic.com/~lynn/2012b.html#26 Strategy subsumes culture
https://www.garlic.com/~lynn/2012.html#45 You may ask yourself, well, how did I get here?
https://www.garlic.com/~lynn/2011l.html#52 An elusive command philosophy and a different command culture
https://www.garlic.com/~lynn/2011k.html#3 Preparing for Boyd II
https://www.garlic.com/~lynn/2011j.html#7 Innovation and iconoclasm
https://www.garlic.com/~lynn/2010i.html#68 Favourite computer history books?
https://www.garlic.com/~lynn/2010e.html#43 Boyd's Briefings
https://www.garlic.com/~lynn/2009j.html#34 Mission Control & Air Cooperation
https://www.garlic.com/~lynn/2009e.html#73 Most 'leaders' do not 'lead' and the majority of 'managers' do not 'manage'. Why is this?
https://www.garlic.com/~lynn/2008o.html#69 Blinkenlights
https://www.garlic.com/~lynn/2008h.html#63 how can a hierarchical mindset really ficilitate inclusive and empowered organization
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
https://www.garlic.com/~lynn/2008h.html#8a Using Military Philosophy to Drive High Value Sales
https://www.garlic.com/~lynn/2007c.html#25 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2007b.html#37 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2006q.html#41 was change headers: The Fate of VM - was: Re: Baby MVS???
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2004q.html#86 Organizations with two or more Managers
https://www.garlic.com/~lynn/2004k.html#24 Timeless Classics of Software Engineering
https://www.garlic.com/~lynn/2003p.html#27 The BASIC Variations
https://www.garlic.com/~lynn/2003h.html#51 employee motivation & executive compensation
https://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
https://www.garlic.com/~lynn/2002d.html#38 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2001.html#29 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/99.html#120 atomic History
https://www.garlic.com/~lynn/aadsm28.htm#10 Why Security Modelling doesn't work -- the OODA-loop of today's battle

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 14 Aug, 2023
Blog: Facebook
... result of legal action: IBM's 23Jun1969 "unbundling" announcement ... starting to charge for software ... and adding "copyright" banners as comments in headers of source files. They were able to make the case that kernel software could still be free. In the first half of the 70s, IBM had the "Future System" project ... a new computer generation that was completely different and was going to completely replace 360/370.

During FS, internal politics was killing off 370 efforts ... the lack of new 370s during FS is credited with giving 370 clone makers (Amdahl, etc) their market foothold. When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x&3081. Also, with the rise of 370 clone makers, the decision was made to start charging for kernel software (existing kernel software was still free, but incremental new kernel add-ons would be charged for; after a few years this transitioned to charging for all kernel software).

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Note: after joining IBM, I was allowed to wander around IBM and customer datacenters. The director of one of the largest customer financial datacenters liked me to stop by and talk technology. At some point the local IBM branch manager horribly offended the customer and, in retaliation, the customer ordered an Amdahl system (a lone Amdahl in a vast football field of IBM systems). Up until then Amdahl had been primarily selling into the technical/scientific/university market ... and this would be the first clone 370 in a true blue commercial account. Then I was asked to go live on site for 6-12 months (apparently to obfuscate why the customer was ordering an Amdahl machine). I talked it over with the customer and decided to decline the offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO, and if I didn't, I could forget having a career, promotions, raises.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Posts mentioning commercial customer ordering Amdahl
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#82 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#60 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2022e.html#14 IBM "Fast-Track" Bureaucrats
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#21 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#95 IBM Salary
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#61 IBM Starting Salary
https://www.garlic.com/~lynn/2021e.html#66 Amdahl
https://www.garlic.com/~lynn/2021e.html#15 IBM Internal Network
https://www.garlic.com/~lynn/2021d.html#85 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#66 IBM CEO Story
https://www.garlic.com/~lynn/2021c.html#37 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#39 IBM Tech
https://www.garlic.com/~lynn/2021.html#8 IBM CEOs
https://www.garlic.com/~lynn/2019e.html#138 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019e.html#29 IBM History
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2018f.html#68 IBM Suits
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2018d.html#6 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2018c.html#27 Software Delivery on Tape to be Discontinued
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017d.html#49 IBM Career
https://www.garlic.com/~lynn/2017c.html#92 An OODA-loop is a far-from-equilibrium, non-linear system with feedback
https://www.garlic.com/~lynn/2016h.html#86 Computer/IBM Career
https://www.garlic.com/~lynn/2016e.html#95 IBM History
https://www.garlic.com/~lynn/2016.html#41 1976 vs. 2016?
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 14 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software

trivia/topic drift: a decade ago I was asked to track down the decision to make all 370s virtual memory (found the staff member for the executive that made the decision). Basically, OS/MVT storage management was so bad that region sizes had to be specified four times larger than actually used ... so a typical 1mbyte 370/165 could only run four regions concurrently, insufficient to keep the system busy and justified. Going to a 16mbyte virtual address space could increase the number of concurrently running regions by a factor of four with little or no paging. Initially, OS/VS2 SVS was little different than running MVT in a 16mbyte CP67 virtual machine.
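
A worked version of that arithmetic, as a minimal C sketch (the 256kbyte region size is an illustrative assumption, not a figure from the original decision; OS residency is ignored for simplicity):

#include <stdio.h>

int main(void)
{
    const int real_kb   = 1024;          /* 370/165 with 1mbyte real storage */
    const int region_kb = 256;           /* hypothetical MVT region, specified ~4x actual use */
    const int used_kb   = region_kb / 4; /* what the region actually touches */

    /* real storage: worst-case region sizes waste 3/4 of each region */
    printf("MVT regions in 1MB real:       %d\n", real_kb / region_kb);  /* 4 */

    /* 16mbyte virtual address space: a region only occupies real storage
     * for the pages it touches, so ~4x as many run with little or no paging */
    printf("regions at actual working set: %d\n", real_kb / used_kb);    /* 16 */
    return 0;
}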

However, the 165 people were then complaining that if they had to implement the full 370 virtual memory architecture ... the announce would slip six months. Eventually it was stripped back to a subset to maintain the schedule ... and all the other models (and software) that had already implemented/used the full architecture had to retrench to the 165 subset.

trivia: I would drop by to see Ludlow, who was doing the SVS prototype offshift on a 360/67. The biggest piece of code handled EXCP channel programs; aka, application libraries built the I/O channel programs and passed the address to SVC0/EXCP. Channel programs had to have real addresses ... which meant that a copy of the passed channel programs was made, substituting real addresses for virtual. This was the same problem that CP67 had (running virtual machines) ... and Ludlow borrowed the CP67 CCWTRANS for crafting into EXCP.
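
The shadow-copy idea in a minimal C sketch (illustrative only, not the actual CP67 CCWTRANS; the ccw_t layout is simplified and virt_to_real() is a hypothetical page-table lookup that would also pin the page):

#include <stdint.h>
#include <stddef.h>

typedef struct ccw {            /* simplified S/360-style channel command word */
    uint8_t  opcode;
    uint32_t data_addr;         /* 24-bit data address in the real format */
    uint8_t  flags;
    uint16_t count;
} ccw_t;

#define CCW_CHAIN 0x40          /* command chaining: another CCW follows */

extern uint32_t virt_to_real(uint32_t vaddr);  /* assumed: translate & pin */

/* Copy a virtual-address channel program into a shadow copy with real
 * addresses, which is what actually gets handed to the channel. */
size_t translate_channel_program(const ccw_t *vprog, ccw_t *shadow, size_t max)
{
    size_t n = 0;
    for (;;) {
        shadow[n] = vprog[n];
        shadow[n].data_addr = virt_to_real(vprog[n].data_addr);
        /* the real code also had to handle data crossing page boundaries
         * (splitting one CCW into several) and TIC/transfer-in-channel */
        int chained = (vprog[n].flags & CCW_CHAIN) != 0;
        if (++n >= max || !chained)
            break;
    }
    return n;
}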

archived post with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73

recent posts mentioning Ludlow & SVS prototype
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 14 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software

more FS background
http://www.jfsowa.com/computer/memo125.htm

one of my hobbies after joining IBM was enhanced production (360&370) operating systems for internal datacenters (circa 1975, somehow leaked to AT&T longlines, but that's another story), and the FS people would periodically drop by to talk about my work ... I would ridicule what they were doing (which wasn't exactly career enhancing). One of the final nails in the FS coffin was a study by the Houston Science Center which showed that if 370/195 software was redone for FS, running on a machine made out of the fastest available hardware technology, it would have the throughput of a 370/145 (about a 30-times slowdown).

ref to ACS/360 shutdown ... terminated because executives were afraid it would advance the state-of-the-art too fast and they would lose control of the market. Amdahl leaves shortly after the ACS/360 shutdown (the page lists some ACS/360 features that show up more than 20yrs later in the 90s with ES/9000).
https://people.cs.clemson.edu/~mark/acs_end.html

Amdahl gave a talk in a large MIT auditorium shortly after starting his company, and some of us from the Cambridge Science Center attended. Somebody asked him about the business case he used with investors. He said that there was so much customer-developed 360 software that even if IBM were to totally walk away from 360(/370), it would keep him in business until the end of the century. This sort of sounded like he was aware of FS ... but he has repeatedly claimed he had no knowledge of FS.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

recent posts mentioning Sowa's FS memo and end of ACS/360
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#117 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#48 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022h.html#33 computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022h.html#2 360/91
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#22 3081 TCMs
https://www.garlic.com/~lynn/2022f.html#109 IBM Downfall
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022b.html#99 CDC6000
https://www.garlic.com/~lynn/2022b.html#88 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#77 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#74 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#31 370/195

--
virtualization experience starting Jan1968, online at home since Mar1970

Maneuver Warfare as a Tradition. A Blast from the Past

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Maneuver Warfare as a Tradition. A Blast from the Past
Date: 14 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#13 Maneuver Warfare as a Tradition. A Blast from the Past

well ... long-winded "internet" followup ... would have required spreading across multiple comments ... so I instead redid it as one large article
https://www.linkedin.com/pulse/maneuver-warfare-tradition-blast-from-past-lynn-wheeler/

Maneuver Warfare as a Tradition. A Blast from the Past
https://tacticalnotebook.substack.com/p/maneuver-warfare-as-a-tradition

In briefings, Boyd would also mention that Guderian instructed "Verbal Orders Only" for the blitzkrieg ... stressing that officers on the spot were encouraged to make decisions without having to worry about Monday morning quarterbacks questioning what should have been done.

one of my old posts here
https://slightlyeastofnew.com/2019/11/18/creating-agile-leaders/

Impact Of Technology On Military Manpower Requirements (Dec1980)
https://books.google.com/books?id=wKY3AAAAIAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v
top of pg. 104:
# Verbal orders only, convey only general intentions, delegate authority to lowest possible level and give subordinates broad latitude to devise their own means to achieve commander's intent. Subordinates restrict communications to upper echelons to general difficulties and progress, Result: clear, high speed, low volume communications,

... snip ...

post redone as article with long-winded internet followup and topic drift (to some comments)
https://www.linkedin.com/posts/lynnwheeler_maneuver-warfare-as-a-tradition-activity-7096921373977022464-XLku/

coworker at cambridge science center and san jose research
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

and It's Cool to Be Clever: The Story of Edson C. Hendricks, the Genius Who Invented the Design for the Internet
https://www.amazon.com/Its-Cool-Be-Clever-Hendricks/dp/1897435630/

Ed tried to get IBM to support internet & failed, SJMN article (behind paywall but mostly free at wayback)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
additional correspondence with IBM executives (Ed passed Aug2020, his website at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Ed was also responsible for the world-wide internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). At the time of the ARPANET transition from IMPs/HOSTs (to TCP/IP), it had approx. 255 hosts while the internal network was closing in on 1000 hosts. One of our difficulties was with governments over the corporate requirement that all links be encrypted ... especially when links crossed national boundaries. The technology was also used for the corporate-sponsored univ. "BITNET" (included EARN in Europe).
https://en.wikipedia.org/wiki/BITNET

Ed left IBM about the same time I was introduced to John Boyd; I also had the HSDT project (T1 and faster computer links) and hired Ed to help, and was supposed to get $20M from the director of NSF to interconnect the NSF Supercomputer centers. Then congress cuts the budget, some other things happen, and finally an RFP is released (in part based on what we already had running) ... Preliminary announce (28Mar1986):
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing, a precursor to social media, inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

The last product worked on at IBM was HA/CMP. It originally started out as HA/6000 for the NYTimes to migrate their newspaper system (ATEX) from (DEC) VAXCluster to RS/6000. I renamed it HA/CMP when doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Informix, Ingres, Sybase). Early Jan1992, meeting with Oracle CEO Ellison on cluster scale-up, planning 16-way by mid-92 and 128-system by ye-92. By end of Jan1992, cluster scale-up was transferred (for announce as IBM Supercomputer) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Some time later I'm brought in as consultant to a small client/server startup. Two of the former Oracle people (in the Ellison meeting) that we were working with on commercial HA/CMP are there, responsible for something called "commerce server", and want to do payment transactions on the server; the startup had also invented this technology they called "SSL" that they wanted to use for payment transactions, sometimes now called "electronic commerce". I had responsibility for everything between webservers and the financial network. I would claim that it took 3-10 times the original effort to turn a well designed & developed application into a service. Postel (Internet/IETF RFC Editor)
https://en.wikipedia.org/wiki/Jon_Postel

would sponsor my talk on "Why Internet Isn't Business Critical Dataprocessing" based on work (software, procedures, documents) I had to do for "electronic commerce".

post from last year, intertwining Boyd and IBM
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

some posts mention "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017i.html#18 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

A U.N. Plan to Stop Corporate Tax Abuse

From: Lynn Wheeler <lynn@garlic.com>
Subject: A U.N. Plan to Stop Corporate Tax Abuse
Date: 15 Aug, 2023
Blog: Facebook
A U.N. Plan to Stop Corporate Tax Abuse. Big business wants you to think that reforming corporate taxes is a boring and complicated subject, but it's actually simple and exciting.
https://theintercept.com/2023/08/12/tax-abuse-international-corporations/
The evil goal in this situation is various forms of international tax abuse, an umbrella term that covers both tax evasion and tax avoidance. In theory, these two things are different. Tax evasion is carried out largely by various terrible outlaw oligarchs and is against the law. By contrast, tax avoidance is conducted by the world's most prestigious corporations and is totally legal, because the tax avoiders write the tax laws.

... snip ...

... note: a 2010 CBO report found that 2003-2009, tax collections were cut $6T and spending increased $6T, for a $12T gap compared to the fiscal responsibility budget (spending could not exceed tax revenue, which had been on its way to eliminating all federal debt) ... the first time taxes were cut to not pay for two wars (also a sort of confluence: the Federal Reserve and Too Big To Fail needed huge federal debt, special interests wanted huge tax cuts, and the Military-Industrial Complex wanted a huge spending increase and perpetual wars)

Tax havens could cost countries $4.7 trillion over the next decade, advocacy group warns. The U.K. continues to lead the so-called "axis of tax avoidance," which drains an estimated $151 billion from global coffers through corporate profit-shifting, a new report found.
https://www.icij.org/investigations/paradise-papers/tax-havens-could-cost-countries-4-7-trillion-over-the-next-decade-advocacy-group-warns/

OECD 'disappointed' over 'surprising' UN global tax report. The UN chief pushed for a bigger say in the international tax agenda and said the group of wealthy countries had ignored the needs of developing nations.
https://www.icij.org/investigations/paradise-papers/oecd-surprised-and-disappointed-over-un-global-tax-plan/

tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
related, WMD posts:
https://www.garlic.com/~lynn/submisc.html#wmds

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 16 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software

mid 90s, there was some organization that did computerized analysis of the language used in all US patents (available online) ... it found that 30% of "computer" patents were filed in different categories using very ambiguous descriptions ... "submarine patents" just waiting for somebody they could sue for violation.

... and the litigation in the 60s resulting in IBM's "unbundling" announcement and starting to charge for software ... there was also a requirement that IBM show that the price charged was profitable ... the developed business practice was to forecast based on "low", "middle", and "high" price ... where the number sold times the price had to cover original development plus ongoing support/maintenance.

I got caught in this after FS imploded, in the mad rush to get stuff back into the 370 product pipelines, when the decision was made to start charging for kernel software (not just application software). After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... and I continued to work on 360&370 stuff all during the FS period (including periodically ridiculing what FS was doing). A bunch of my stuff (that I had been doing for internal datacenters) was selected to be the guinea pig for charged-for, add-on kernel software, and I had to spend time with lawyers and business people on kernel software charging practices.

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 16 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software

I abstracted a bunch of (internet standards) IETF RFC files and put them up on my website (following the previous copyright guidelines, which included a statement that information from RFCs can be used if the specific RFC copyright statement is included). That was changed around the turn of the century, and I was contacted by an RFC author wanting to sue under the new conditions. I had to hire a copyright lawyer to explain to IETF that I had exactly followed the previous rules and had done nothing with RFCs that were under the new rules. Also suggested to IETF that they make more prominent how things changed after the turn of the century.

some past posts mentioning the IETF RFC copyright change
https://www.garlic.com/~lynn/2018b.html#50 Nostalgia
https://www.garlic.com/~lynn/2015.html#95 56kbit modems
https://www.garlic.com/~lynn/2014e.html#47 TCP/IP Might Have Been Secure From the Start If Not For the NSA
https://www.garlic.com/~lynn/2010g.html#56 Reverse or inverse ARP from windows/linux - no way (!?!?)
https://www.garlic.com/~lynn/2010g.html#12 Reverse or inverse ARP from windows/linux - no way (!?!?)

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

similar but different problem ... I had scanned my physical copy of the SHARE LSRAD report and wanted to provide it to BITSAVER. However, congress had extended the copyright period shortly (earlier in 1979?) before LSRAD was published (and so the copyright was still active) ... and so I needed to find somebody in SHARE that would approve it being put up on BITSAVER ... which turned out to be a difficult task. SHARE directory on BITSAVER:
https://bitsavers.org/pdf/ibm/share/
The_LSRAD_Report_Dec79.pdf 2011-11-30 11:57 147M

past posts mentioning LSRAD
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#128 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2015f.html#82 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#82 Vintage IBM Manuals
https://www.garlic.com/~lynn/2013e.html#52 32760?
https://www.garlic.com/~lynn/2012p.html#58 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012o.html#36 Regarding Time Sharing
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012f.html#58 Making the Mainframe more Accessible - What is Your Vision?
https://www.garlic.com/~lynn/2011p.html#146 IBM Manuals
https://www.garlic.com/~lynn/2011p.html#22 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#15 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#14 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#70 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#89 Make the mainframe work environment fun and intuitive
https://www.garlic.com/~lynn/2011.html#88 digitize old hardcopy manuals
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup
https://www.garlic.com/~lynn/2007d.html#40 old tapes
https://www.garlic.com/~lynn/2006d.html#38 Fw: Tax chooses dead language - Austalia
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 16 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software

more about IETF/RFC copyright, previous RFC contents:
Copyright (C) The Internet Society (2000). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.


... snip ...

the new use of contents of RFC:
5377 I Advice to the Trustees of the IETF Trust on Rights to Be Granted in IETF Documents, Halpern J., 2008/11/10 (8pp) (.txt=17843) (See Also 5378) (Refs 3935, 4071, 4371) (Ref'ed By 5744, 5745)

... snip ...

current documents carry the following
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document

... snip ...

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 16 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software

i432 trivia: FS was heavily microcoded ... in part responsible for the 30-times slowdown. After the FS implosion, I did define 370 microcode for a multiprocessor feature that could queue executable tasks, with processor microcode pulling work off the queue for execution (making the number of processors semi-transparent; I found something similar in i432 a few years later).
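
The queued-dispatch idea in a minimal C sketch (illustrative only, nothing like the actual 370 microcode; C11 atomics stand in for compare-and-swap):

#include <stdatomic.h>
#include <stddef.h>

typedef struct task {
    struct task *next;
    void (*run)(void *);
    void *arg;
} task_t;

static _Atomic(task_t *) run_queue;   /* shared queue, LIFO for brevity */

/* any processor can queue an executable task ... */
void dispatch_push(task_t *t)
{
    task_t *head = atomic_load(&run_queue);
    do {
        t->next = head;
    } while (!atomic_compare_exchange_weak(&run_queue, &head, t));
}

/* ... and any processor can pull the next one off, so software above
 * doesn't have to care how many processors there are */
task_t *dispatch_pull(void)
{
    task_t *head = atomic_load(&run_queue);
    while (head && !atomic_compare_exchange_weak(&run_queue, &head, head->next))
        ;                             /* head is reloaded on each failure */
    return head;                      /* NULL: nothing runnable, go idle */
}

/* note: a production lock-free pop needs ABA protection, e.g. a version
 * counter updated together with the pointer (one documented use of 370's
 * compare-double-and-swap) */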

Note: at an Asilomar SIGOPS meeting, there was a presentation by i432 people about putting extremely complex functions in silicon ... and having to frequently cut new silicon every time fixes were necessary.

posts mentioning lots of microcode for the 370/125 5-processor machine
https://www.garlic.com/~lynn/submain.html#bounce
the 370/125 multiprocessor effort would also do 138/148 ECPS
https://www.garlic.com/~lynn/94.html#21
SMP, multiprocessor, tightly-coupled and/or Compare-And-Swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

posts mentioning the i432 presentation at SIGOPS
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#61 Typesetting
https://www.garlic.com/~lynn/2016d.html#63 PL/I advertising
https://www.garlic.com/~lynn/2011c.html#7 RISCversus CISC
https://www.garlic.com/~lynn/2010h.html#40 Faster image rotation
https://www.garlic.com/~lynn/2010g.html#45 IA64
https://www.garlic.com/~lynn/2010g.html#1 IA64
https://www.garlic.com/~lynn/2009d.html#52 Lack of bit field instructions in x86 instruction set because of patents ?
https://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?

Also, after the decision to add virtual memory to all 370s, there was a new VM370 product group (morphing CP67 to VM370); some people split off from the Cambridge Science Center (4th flr, 545 tech sq; trivia: multics was on the 5th flr) and took over the IBM Boston Programming Center on the 3rd flr. When they outgrew the 3rd flr, they moved out to the vacant SBC bldg at Burlington Mall (off rt128). When FS imploded, the head of (IBM high-end) POK managed to convince corporate to kill the VM370 product, shutdown the development group, and transfer all the people to POK for MVS/XA. They weren't planning on telling the people until the very last minute, to minimize the numbers that might escape. The information leaked early and several managed to escape into the Boston area (joke: the head of POK was a major contributor to the infant DEC VAX/VMS project). There was a hunt for the leak source; fortunately for me, nobody gave up the source. Eventually Endicott (mid-range, competing with VAX) managed to save the VM370 product mission, but had to reconstitute a development group from scratch.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning killing VM370 development out in Burlington Mall
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2011g.html#8 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2009b.html#67 IBM tried to kill VM?
https://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd

Other IBM trivia:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 16 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software

A bunch of patents that we did working in the financial industry for secure financial (and other) operations. Was asked to work with a boutique patent law firm. We were at nearly 50 drafts when the law firm told the employer that it would be over a hundred patent applications. The executives looked at how much it would cost to file both US and international, and directed that all claims be packaged as nine patents instead. At some point the patent office came back and said they were getting tired of humongous patents (the filing fee didn't even cover the cost of reading the claims) and to repackage into at least 25-30 patents.
https://www.garlic.com/~lynn/aadssummary.htm

We had assigned rights to the work ... and patents were being drafted and applied for in our name after we had left the company. Some related details
https://www.garlic.com/~lynn/x959.html
and posts
https://www.garlic.com/~lynn/subpubkey.html#x959

some recent posts mentioning patent summary
https://www.garlic.com/~lynn/2023c.html#17 GlobalFoundries sues IBM for flogging 'chip secrets to Intel, Rapidus'
https://www.garlic.com/~lynn/2022b.html#103 AADS Chip Strawman
https://www.garlic.com/~lynn/2021g.html#74 Electronic Signature
https://www.garlic.com/~lynn/2021d.html#87 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#19 IBM's innovation: Topping the US patent list for 28 years running

--
virtualization experience starting Jan1968, online at home since Mar1970

EBCDIC "Commputer Goof"

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: EBCDIC "Commputer Goof"
Date: 19 Aug, 2023
Blog: Facebook
EBCDIC was one of the greatest computer goofs of all time. IBM was planning on the 360 being an ASCII machine ... but the ASCII unit record gear wasn't going to be ready ... so they had to (supposedly, temporarily) reuse the old BCD gear. IBM's "father" of ASCII (gone 404 but still lives on at the wayback machine):
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
The culprit was T. Vincent Learson. The only thing for his defense is that he had no idea of what he had done. It was when he was an IBM Vice President, prior to tenure as Chairman of the Board, those lofty positions where you believe that, if you order it done, it actually will be done. I've mentioned this fiasco elsewhere. Here are some direct extracts

... snip ...
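
A minimal C illustration of how the BCD-derived layout still bites (the code points are standard EBCDIC/ASCII; naive_isupper() is a hypothetical helper for the sketch):

#include <stdio.h>
#include <ctype.h>

/* Classic portability trap: in ASCII, 'A'..'Z' is one contiguous run
 * (0x41-0x5A); in EBCDIC the alphabet sits in three runs inherited from
 * BCD zones (0xC1-0xC9, 0xD1-0xD9, 0xE2-0xE9), so this range test
 * accepts 15 non-letter codes there. */
static int naive_isupper(int c) { return c >= 'A' && c <= 'Z'; }

int main(void)
{
    int mismatches = 0;
    for (int c = 0; c < 256; c++)
        if (naive_isupper(c) != (isupper(c) != 0))
            mismatches++;
    /* 0 when compiled on an ASCII system; nonzero on an EBCDIC one */
    printf("naive vs ctype isupper mismatches: %d\n", mismatches);
    return 0;
}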

other
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

later, when Learson was chairman, he did try to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy (but failed)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning ASCII and Bemer
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021d.html#92 EBCDIC Trivia
https://www.garlic.com/~lynn/2020.html#7 IBM timesharing terminal--offline preparation?

--
virtualization experience starting Jan1968, online at home since Mar1970

EBCDIC "Commputer Goof"

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: EBCDIC "Commputer Goof"
Date: 19 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"

the thread also drifted into data encoding and storage capacity

The original 3380 had the equivalent of 20 track spacings between each data track; that was then cut in half to double the number of tracks (& cylinders) for 3380E, and the spacing was cut again to triple the number of tracks (& cylinders) for 3380K ... still the same 3mbyte/sec channels. Other trivia: the (IBM) father of RISC asked me to help with his "wide-head" idea ... a head spanning 18 closely spaced tracks ... the surface formatted with 16 data tracks plus servo tracks ... the "wide-head" would follow the two servo tracks on either side of the 16 data tracks, transferring data at 3mbytes/sec from each track, 48mbytes/sec aggregate. Problem was IBM mainframe I/O wouldn't support 48mbyte/sec I/O ... any more than they would support 48mbyte/sec RAID I/O (although various supercomputers supported HIPPI, sponsored by LANL and driven by disk array technology).
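
The capacity and bandwidth arithmetic, as a minimal worked sketch (the relative-pitch numbers just restate the paragraph above; they aren't actual 3380 geometry specs):

#include <stdio.h>

int main(void)
{
    const double base_pitch = 20.0;   /* relative track spacing, original 3380 */

    printf("3380E tracks/cyls: %.0fx base\n", base_pitch / (base_pitch / 2)); /* 2x */
    printf("3380K tracks/cyls: %.0fx base\n", base_pitch / (base_pitch / 3)); /* 3x */

    /* wide head: 16 data tracks read in parallel at 3 mbytes/sec each */
    printf("wide-head aggregate: %d mbytes/sec\n", 16 * 3);  /* 48, vs 3 per channel */
    return 0;
}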

801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801

recent posts mentioning wide-head, 16 data tracks, 48mbytes/sec
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2018d.html#17 3390 teardown
https://www.garlic.com/~lynn/2018d.html#12 3390 teardown

other trivia: when I transferred to San Jose Research, I got to wander around IBM and customer datacenters in silicon valley. At the time, bldg14 (disk engineering) and bldg15 (disk product test), across the street, were running prescheduled, stand-alone, around-the-clock mainframe testing. They said that they had tried MVS, but it had a 15min mean-time-between-failure (in that environment) requiring re-ipl. I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity.

getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

bldg15 then got an (engineering) 3033 (#3 or #4, 1st outside POK) for disk I/O product test. Since testing took only a percent or two of processing, we scrounged up a 3830 controller and a string of 3330s and put up a private online service. At the time, they were running an air-bearing simulation program (part of thin-film, floating head design), getting a couple turn-arounds/month on the SJR 370/195. We set things up so it could run on the bldg15 3033 ... where it could get several turn-arounds/day (even tho the 3033 was less than half the processing of the 195). Thin-film, floating heads (closer to the surface, getting higher recording density and closer track spacing) initially shipped with the FBA 3370 ... and then later with the 3380.
https://www.computerhistory.org/storageengine/thin-film-heads-introduced-for-large-disks/

I was also sucked into doing some work on System/R, the original SQL/relational implementation. The IMS group down in STL was criticizing System/R because it doubled the disk space (for indexes) and increased the disk I/Os (for processing indexes) compared to IMS. The counter was that the indexes significantly decreased the manual admin activity. Later in the 80s, the cost of disk space significantly dropped and system memory increased (used for caching indexes, reducing the disk I/Os) ... flipping the trade-off with manual admin activity and enabling wide-spread RDBMS.
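
A toy cost model of that trade-off (illustrative numbers only; the block counts and 4-level index are assumptions for the sketch, not System/R or IMS measurements):

#include <stdio.h>

int main(void)
{
    const long data_blocks  = 100000;       /* hypothetical table size on disk */
    const long index_blocks = data_blocks;  /* "doubled the disk space" */
    const int  index_levels = 4;            /* hypothetical index depth */

    /* 70s view: each index level costs a disk I/O, plus one for the record */
    printf("lookup I/Os, uncached index: %d\n", index_levels + 1);

    /* 80s view: cheap memory caches the index, leaving one data I/O */
    printf("lookup I/Os, cached index:   %d\n", 1);

    /* the disk-space criticism: data+index vs data alone */
    printf("space ratio: %.1fx\n",
           (double)(data_blocks + index_blocks) / data_blocks);
    return 0;
}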

posts mentioning system/r
https://www.garlic.com/~lynn/submain.html#systemr

recent posts mentioning air-bearing simulation
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#9 3880 DASD Controller
https://www.garlic.com/~lynn/2022d.html#11 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#74 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022.html#64 370/195
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#97 This chemist is reimagining the discovery of materials using AI and automation
https://www.garlic.com/~lynn/2021f.html#40 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#28 IBM Cottle Plant Site
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380

--
virtualization experience starting Jan1968, online at home since Mar1970

Some IBM/PC History

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Some IBM/PC History
Date: 19 Aug, 2023
Blog: Facebook
before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP/67 (precursor to IBM's VM370) at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

... some of the MIT 7094/CTSS people went to the 5th flr and did MULTICS. Others went to the IBM Science Center on the 4th flr and did CP40/CMS (on a 360/40 with hardware mods for virtual memory). CP/40 morphs into CP/67 when 360/67s (standard with virtual memory) become available. When IBM decided to make virtual memory available on all 370s, some of the people split off from the science center and take over the IBM Boston Programming Center on the 3rd flr ... later, when they outgrow the 3rd flr, they move to the vacant SBC bldg at Burlington mall (off rt128).

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

early in ACORN (IBM/PC), Boca said that they weren't interested in software ... and an ad-hoc IBM group of some 20-30 people was formed in silicon valley to do software ... they would touch base with Boca every month to make sure nothing had changed. Then one month, Boca changes its mind and says if you want to do ACORN software, you have to move to Boca ... and the whole effort imploded

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

Trivia: I had done a dynamic adaptive resource manager (the "wheeler" scheduler) for CP/67 as an undergraduate in the 60s ... which was eventually included in VM370. Old email: Boca OS2 asked the Endicott VM370 group how to do scheduling; Endicott forwarded the request to the IBM Kingston VM370 group, which then forwarded it to me.

Date: Fri, 4 Dec 87 15:58:10 est
From: wheeler
Subject: os2 dispatching

fyi ... somebody in boca sent a message to endicott asking about how to do dispatch/scheduling (i.e. how does vm handle it) because os2 has several deficiencies that need fixing. VM Endicott forwarded it to VM Kingston and VM IBM Kingston forwarded it to me. I still haven't seen a description of OS2 yet so don't yet know about how to go about solving any problems.


... snip ... top of post, old email index

Date: Fri, 4 Dec 87 15:58:10 est
From: wheeler
To: somebody at bcrvmpc1 (i.e. internal vm network node in boca)
Subject: os2 dispatching

I've sent you a couple things that I wrote recently that relate to the subject of scheduling, dispatching, system management, etc. If you are interested in more detailed description of the VM stuff, I can send you some descriptions of things that I've done to enhance/fix what went into the base VM system ... i.e. what is there now, what its limitations are, and what further additions should be added.


... snip ... top of post, old email index

some history of PC market
https://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/3
https://arstechnica.com/features/2005/12/total-share/4
https://arstechnica.com/features/2005/12/total-share/5

The IBM communication group was fiercely fighting off client/server and distributed computing and had heavily performance-kneecapped the PS2 microchannel cards. The IBM AWD workstation division had done their own (PC/AT bus) 4mbit token-ring card for the PC/RT. However, AWD was told that for the RS/6000 and microchannel, they couldn't do their own cards but had to use PS2 cards. It turns out the IBM PS2 16mbit token-ring (microchannel) card had lower card throughput than the PC/RT 4mbit token-ring card (aka an RS/6000 16mbit T/R server would have lower throughput than a PC/RT 4mbit T/R server). The joke was that an RS/6000 forced to use PS2 microchannel cards could have the same performance as a PS2/486 for lots of things. Also, a $69 10mbit Ethernet card had higher throughput than the $800 PS2 microchannel 16mbit T/R card. One of the AWD work-arounds was the RS/6000 m730, which had a VMEbus in order to deploy high performance workstation cards. There was a joke that Boca was losing $5 on every PS2 sold, but they were planning on making it up with volume.

801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801

Also in late 80s, a senior disk engineer got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance. However, he opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division had come up with several solutions, but they were constantly being vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls). Not only were disk sales tanking, but the whole IBM computer market, and a couple years later IBM has one of the largest losses in the history of US companies ... and IBM was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM, but get a call from the bowels of Armonk (corp. hdqtrs) asking if we could help with the IBM breakup. Business units were leveraging supplier contracts in other units via MOUs. After the breakup, many of the supplier contracts would be with other companies and the associated MOUs would have to be cataloged and turned into their own contracts. Before we get started, the IBM board brings in a new CEO that reverses the breakup.

communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Some years later after leaving IBM, I was doing some consulting work for Steve Chen, who at the time was Sequent CTO (before IBM bought Sequent and shut it down). The Sequent people would claim that they had done nearly all the work for NT multiprocessor scale-up.

--
virtualization experience starting Jan1968, online at home since Mar1970

Punch Cards

From: Lynn Wheeler <lynn@garlic.com>
Subject: Punch Cards
Date: 20 Aug, 2023
Blog: Facebook
univ. registration had tables all around the gym for classes, where students filled in (manila) sense-mark cards. Cards were collected and fed into a 519(?) to read the sense marks and punch holes. The punched cards were then read with a 2540, selecting the middle stacker. If there was some problem with card information, a blank "card" (with colored stripe) was "punched" into the middle stacker (behind the card with the problem) ... numerous full card trays of student registration cards. Afterwards, scan the card trays for color-striped cards

past posts mentioning 2540 middle stacker
https://www.garlic.com/~lynn/2022h.html#30 Byte
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2017e.html#101 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2016c.html#100 IBM's 96 column punch card (was System/3)?
https://www.garlic.com/~lynn/2012i.html#12 IEBPTPCH questions
https://www.garlic.com/~lynn/2011k.html#8 Last card reader?
https://www.garlic.com/~lynn/2010i.html#73 History: Mark-sense cards vs. plain keypunching?
https://www.garlic.com/~lynn/2010h.html#64 Reproducing Punch (513/514)--consecutive numbering, mark sense reading
https://www.garlic.com/~lynn/2008k.html#47 IBM 029 keypunch -- 0-8-2 overpunch -- what hex code results?
https://www.garlic.com/~lynn/2007q.html#71 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007n.html#59 IBM System/360 DOS still going strong as Z/VSE
https://www.garlic.com/~lynn/2001b.html#20 HELP
https://www.garlic.com/~lynn/98.html#53 punch card editing, take 2

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 21 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#23 Copyright Software

CP67/CMS release came standard with full source distribution. The original CP67 source update applied only a single update file before assembly/compile. The update commands were delete, replace, and insert, referencing sequence numbers of the existing file ... and any newly added statements had to have their sequence numbers manually added. As an undergraduate I was making so many major changes to CP67/CMS source that I wrote a program to add sequence numbers to added statements before applying the changes. Later at the science center, we created an "update exec" that would incrementally apply multiple source updates.
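
rough sketch (python, illustrative only; the "./ D/R/I" card syntax here is modeled on the later CMS UPDATE conventions, not necessarily the exact 60s CP67 format) of the style of sequence-numbered update described above:

# source: list of (seqno, text) card images. update cards:
#   ./ D s1 [s2]   delete cards s1..s2
#   ./ R s1 [s2]   replace cards s1..s2 with the cards that follow
#   ./ I s1        insert the cards that follow after card s1
# in the original CP67 scheme, inserted cards had to carry their own
# sequence numbers (hence the preprocessor, and later the "$" option,
# to fill them in automatically)

def apply_update(source, update_cards):
    out = list(source)

    def index_of(seq):
        for i, (s, _) in enumerate(out):
            if s == seq:
                return i
        raise ValueError(f"sequence number {seq} not found")

    i = 0
    while i < len(update_cards):
        card = update_cards[i]
        if not card.startswith("./"):
            i += 1
            continue
        parts = card.split()
        op, s1 = parts[1].upper(), int(parts[2])
        s2 = int(parts[3]) if len(parts) > 3 else s1
        new, j = [], i + 1
        while j < len(update_cards) and not update_cards[j].startswith("./"):
            new.append((None, update_cards[j]))  # None: not yet resequenced
            j += 1
        if op == "D":
            del out[index_of(s1):index_of(s2) + 1]
        elif op == "R":
            out[index_of(s1):index_of(s2) + 1] = new
        elif op == "I":
            k = index_of(s1) + 1
            out[k:k] = new
        i = j
    return out

# e.g. apply_update([(10, "A"), (20, "B"), (30, "C")],
#                   ["./ R 20", "B changed", "./ I 30", "D added"])
# -> [(10,'A'), (None,'B changed'), (30,'C'), (None,'D added')]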

Later, VM370 development moved the "exec" incremental update process into the update program itself, and the editors supported generating source changes as incremental updates (instead of the original file being replaced with the changed file). VM370 added monthly fix/update tapes ("PLC"), which were incremental source update files to the base release source. For a new release, all fix/updates were integrated into the base source (and the files resequenced). To aid in migrating local source updates to a new release, I wrote a program that did a diff between the latest previous release (plus all updates) and the new resequenced release source, generating a source update file that turned the previous release source into the latest release source (somewhat simplifying integrating local updates into the latest release). Once the local updates had been integrated with the generated "new release" update, it could then be used to convert the local source update files from the previous release sequence numbers to the new release resequenced numbers.
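
the release-to-release migration step is essentially a diff whose output is expressed as update cards; a minimal sketch using python's difflib (the real program worked on sequence-numbered card images; this is just the shape of the idea):

import difflib

def diff_as_update(old_lines, new_lines):
    # derive an update deck that turns the previous release source (with
    # all updates applied) into the new resequenced release source
    cards = []
    sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "delete":
            cards.append(f"./ D {i1 + 1} {i2}")      # 1-based old-line numbers
        elif tag == "replace":
            cards.append(f"./ R {i1 + 1} {i2}")
            cards.extend(new_lines[j1:j2])
        elif tag == "insert":
            cards.append(f"./ I {i1}")               # after old line i1 (0 = top)
            cards.extend(new_lines[j1:j2])
    return cards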

In the mid-80s, I had a large archive of CP67/CMS (from 60s & 70s) and VM370 (from 70s) files on tapes. Melinda was doing her history and asked if I had a copy of the original CP67 incremental update execs.
https://www.leeandmelindavarian.com/Melinda#VMHist

I was able to pull them off tape and email them to her (SJR, which subsequently moved up the hill to Almaden, had done the IBM email gateway to BITNET). old email:

Date: 09/06/85 14:45:21
From: wheeler
To: melinda

re: $; update only required that the source file have sequence numbers in order to apply the ./ D-R-I parameters. Update didn't require that any inserted lines also have appropriate sequence numbers. By convention (prior to cascading updating), it was usual that the resulting source have all valid sequence numbers (in many cases inforced by a assembler ISEQ statement). Frequently it was the policy that an update deck be applied "permenantly" to an assembler deck ... to become a new "base" ... using a combination of the original source deck and the update deck sequence nos. In some sense this could be considered "cascading update" procedure ... however the interval between update application spanned several weeks/months rather than seconds or milliseconds.


... snip ... top of post, old email index

Date: Fri, 6 Sep 1985 16:47:51 EDT
From: Melinda
To: wheeler

Lynn, thank you for your note (via Pat) giving your recollections of the origin of the multi-level update facility. I am really enjoying piecing all this together.

Ever since I got your note, I've been puzzling over what you meant by the "'$' stuff". I've finally concluded that the CMS-360 UPDATE must have required sequence numbers in the sequence number columns of the card images being inserted into a file. The preprocessor you mentioned must have filled in sequence numbers with appropriate increments, as UPDATE does today when there is a dollar sign in the ./ I or ./ R statements.

Is that correct?

Thanks again for your help, Melinda

P.S.: The paper I'm working on is to be presented at SEAS later this month. You won't be getting a VMSHARE/PCSHARE tape from me until I get back from SEAS at the end of the month.


... snip ... top of post, old email index

Date: Sun, 8 Sep 1985 14:10:41 EDT
From: Melinda
To: wheeler

Lynn, I was truly touched by your having spent part of your Saturday morning loading up those CP-67 EXECs for me. It was extraordinarily thoughtful of you and has helped me answer almost all of my questions about the CP-67 implementation.

I have been working my way through the EXECs and believe that I have them all deciphered now. I was somewhat surprised to see how much of the function was already in place by the Summer of 1970. In particular, I hadn't expected to find that the update logs were being put at the beginning of the textfiles. That has always seemed to me to be one of the most ingenious aspects of the entire scheme, so I wouldn't have been surprised if it hadn't been thought of right away. One thing I can't determine from reading the EXECs is whether the loader was including those update logs in the loadmaps. Do you recall?

Of the function that we now associate with the CTL option of UPDATE, the only substantial piece I see no sign of in those EXECs is the use of auxfiles. Even in the UPAX EXEC from late January, 1971, it is clear that all of the updates listed in the control files were expected to be simple updates, rather than auxfiles. I know, however, that auxfiles were fully implemented by VM/370 Release 1. I have a First Edition of the "VM/370 Command Language User's Guide" (November, 1972) that describes them. The control file syntax at that point was

updlevel upid AUX

Do you have any memories of the origin of auxfiles? Thank you again, Melinda


... snip ... top of post, old email index

... trivia: the timing was fortunate, since a few weeks later Almaden had an operations problem where random tapes were being mounted as scratch and I "lost" a dozen tapes (including the triple-redundant tape copies of my 60s/70s archive; learned a lesson there)

BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

some past posts mentioning the event
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2014e.html#35 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014e.html#28 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009.html#8 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2006w.html#48 vmshare
https://www.garlic.com/~lynn/2006w.html#42 vmshare

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 21 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#23 Copyright Software
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software

Within a year of taking a 2 credit hr intro to fortran/computers, the univ. hired me fulltime responsible for os/360 (univ had been sold a 360/67 for tss/360 to replace 709/1401, but tss/360 wasn't ready for production ... so the univ. ran it as a 360/65 with os/360 for both administration and academics). Student fortran ran under a second on the 709; initially on os/360 it ran over a minute. I install HASP and it cuts the time in half. I then redo a lot of SYSGEN to run in the production job stream and order datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. It never gets better than the 709 until I install WATFOR.

Some people from the science center bring CP67 out to the univ (3rd install, after the science center itself and MIT Lincoln Labs) that I mostly get to play with in my dedicated weekend window (monday morning classes a little hard after 48hrs w/o sleep). After a few months ... mostly rewriting CP67 pathlengths for running OS/360 in a virtual machine ... I have a SHARE presentation ... part of it in this old archived post:
https://www.garlic.com/~lynn/94.html#18

OS/360 test ran 322secs on the bare machine; initially under CP/67, 856secs (CP67 534secs CPU). After a few months, I have it reduced to 435secs (CP/67 113secs CPU, a reduction of 534-113=421secs CPU). I then do dynamic adaptive resource management ("wheeler" scheduler) and a new page replacement algorithm. Original CP67 did FIFO DASD I/O and page transfers were a single 4k transfer per I/O. I implement ordered seek queuing (increases disk throughput and graceful degradation as load increases) and chaining of multiple page transfers, optimized for transfers/revolution (all queued for the same disk arm). For the 2301 drum, it increases throughput from approx. 75/sec to a peak of 270/sec.
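
rough sketch of the ordered seek idea (python, illustrative only): keep the queue sorted by cylinder and service requests in the current direction of arm travel (elevator style), rather than in FIFO arrival order:

# the elevator discipline is what gives the graceful degradation: the
# longer the queue, the shorter the average arm travel between serviced
# requests -- instead of FIFO's linear blow-up
import bisect

class OrderedSeekQueue:
    def __init__(self):
        self.cyls = []          # sorted list of queued cylinder numbers
        self.direction = +1     # current direction of arm travel

    def add(self, cyl):
        bisect.insort(self.cyls, cyl)

    def next(self, arm):
        # pick the next queued cylinder in the current direction of travel,
        # reversing direction when nothing remains that way
        if not self.cyls:
            return None
        i = bisect.bisect_left(self.cyls, arm)
        if self.direction > 0:
            if i == len(self.cyls):       # nothing at/ahead of the arm: reverse
                self.direction = -1
                return self.next(arm)
            return self.cyls.pop(i)
        else:
            if i == 0:                    # nothing behind the arm: reverse
                self.direction = +1
                return self.next(arm)
            return self.cyls.pop(i - 1)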

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dynamic adaptive resource management/scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock

posts mentioning work as undergraduate on os/360 and cp/67 in the 60s
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#10 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#89 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2021e.html#19 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2019e.html#19 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018c.html#86 OS/360
https://www.garlic.com/~lynn/2017h.html#49 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2015h.html#21 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014m.html#134 A System 360 question
https://www.garlic.com/~lynn/2014f.html#76 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2013o.html#54 Curiosity: TCB mapping macro name - why IKJTCB?
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base
https://www.garlic.com/~lynn/2013.html#24 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012e.html#98 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2012.html#36 Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011o.html#34 Data Areas?
https://www.garlic.com/~lynn/2011k.html#17 Last card reader?
https://www.garlic.com/~lynn/2010n.html#66 PL/1 as first language
https://www.garlic.com/~lynn/2010l.html#61 Mainframe Slang terms
https://www.garlic.com/~lynn/2008.html#33 JCL parms
https://www.garlic.com/~lynn/2006.html#15 S/360
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?

--
virtualization experience starting Jan1968, online at home since Mar1970

Apple Versus IBM

From: Lynn Wheeler <lynn@garlic.com>
Subject: Apple Versus IBM
Date: 22 Aug, 2023
Blog: Facebook
Early 70s, Learson tries to block the bureaucrats, careerists, and MBAs destroying the Watson legacy; he fails
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

a decade later, I'm blamed for online computer conferencing where the IBM downward slide is discussed (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me) ... it takes another decade; IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but get a call from the bowels of Armonk (corp. hdqtrs) asking if we could help with the IBM breakup. Business units were leveraging supplier contracts in other units via MOUs. After the breakup, many of the supplier contracts would be with other companies and the associated MOUs would have to be cataloged and turned into their own contracts. Before we get started, the IBM board brings in a new CEO that reverses the breakup.

Note late 80s, a senior disk engineer gets a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing to more distributed-computing-friendly platforms. They had come up with a number of solutions that were constantly being vetoed by the communication group. The issue was the communication group had a stranglehold on datacenters with their corporate strategic ownership of everything that crossed the datacenter walls, and were fiercely fighting off client/server and distributed computing. The datacenter stranglehold wasn't just affecting disks, and a couple years later, IBM has its huge loss.

... example: AWD (workstation division) had done their own (AT bus) 4mbit token-ring card for the PC/RT. However, for the (microchannel) RS/6000, they were told they couldn't do their own cards, but had to use the PS2 microchannel cards (that had been heavily performance-kneecapped by the communication group). It turns out the microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit T/R card (an RS/6000 16mbit T/R server had lower throughput than a PC/RT 4mbit T/R server). Joke that for many things an RS/6000 wouldn't have any better throughput than a PS2/486. The AWD countermeasure was the RS6000m730 with a VMEbus, where they could get workstation-performance cards. Aggravating the situation, $69 10mbit Ethernet cards had higher throughput than the $800 PS2 microchannel 16mbit T/R cards (as well as the PC/RT 4mbit T/R cards)

overall, microchannel could be considered a possible countermeasure to clone IBM/PCs ... but the communication group severely performance-kneecapping microchannel cards (fiercely fighting off client/server and distributed computing, trying to preserve its dumb terminal paradigm) undid everything. The disk division was trying to play in distributed computing ... but was constantly being blocked by the communication group. As a partial countermeasure, a disk division executive was investing in distributed computing startups (that would use IBM disks). He would periodically ask us to visit his investments to see how they were doing and if we could provide any help.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
posts mentioning communication group stranglehold
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent posts mentioning disk division executive investing in distributed computing startups
https://www.garlic.com/~lynn/2023e.html#8 IBM Storage photo album
https://www.garlic.com/~lynn/2023d.html#112 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023c.html#16 IBM Downfall
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#70 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#54 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2021j.html#113 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#79 IBM Downturn
https://www.garlic.com/~lynn/2019e.html#27 PC Market

--
virtualization experience starting Jan1968, online at home since Mar1970

3081 TCMs

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081 TCMs
Date: 22 Aug, 2023
Blog: Facebook
Talks about the FS effort (internal politics during FS was killing off 370 projects); when FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x (3033 started out as 168 logic remapped to 20% faster chips) and 3081 (an enormous number of circuits versus performance for any other computer of the era) in parallel. Some conjecture that TCMs were required in order to package the enormous number of circuits in reasonable physical volume.
http://www.jfsowa.com/computer/memo125.htm

A single Amdahl processor was about the same (hardware) performance as the aggregate of the two-processor 3081 ... but with significantly higher MVS throughput (MVS documentation claiming a two-processor system had 1.2-1.5 times the throughput of a single processor ... in large part MP software overhead).
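
spelling out the arithmetic (illustrative; take one 3081 processor as 1.0 unit of hardware, the single Amdahl as 2.0, and the 1.2-1.5 MP factor from the MVS documentation):

# illustrative arithmetic for the Amdahl-vs-3081 comparison above
amdahl = 2.0                     # single Amdahl ~= two-processor 3081 aggregate
for mp_factor in (1.2, 1.5):     # MVS two-processor throughput multiplier
    mvs_3081 = mp_factor * 1.0   # MVS throughput on the two-processor 3081
    print(f"MP factor {mp_factor}: Amdahl advantage {amdahl / mvs_3081:.2f}x")
# => the single-processor Amdahl delivers roughly 1.33x-1.67x the MVS
#    throughput of the two-processor 3081 (no MP software overhead)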

After FS implodes, I got roped into helping on a 16-processor 370, and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way machine until after the turn of the century). Then some of us were invited to never visit POK again, and the 3033 processor engineers were directed heads-down on 3033 (once 3033 was out the door, they start on trout/3090).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

recent posts mentioning MVS documentation that two processor had 1.2-1.5 times single processor throughput (in large part MVS mp software overhead)
https://www.garlic.com/~lynn/2023d.html#12 Ingenious librarians
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#52 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022h.html#49 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022e.html#55 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021i.html#78 IBM ACP/TPF
https://www.garlic.com/~lynn/2021h.html#66 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#45 POK 370 Multiprocessor Machines
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#73 I/O processors, What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

3081 TCMs

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 3081 TCMs
Date: 23 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#31 3081 TCMs

... should have guessed it as soon as I saw that the 3092 (service processor) required 3370 FBA. The 3092 started out as a highly modified version of vm370 release 6 on a 4331, with all service panels done in CMS IOS3270. It then morphs into a pair of 4361s (each with a 3370 FBA, running vm370).

Note that real CKD DASD hasn't been manufactured for decades, all being simulated (for MVS) on industry-standard fixed-block disks. Disks requiring physical fixed blocks for error correction can be seen going back to the 3380, where the records/track formulas required rounding record size up to the 3380 "cell size".
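
flavor of the records/track calculation (python sketch; the 32-byte cell is real 3380, but TRACK_CELLS and OVERHEAD_CELLS below are stand-in values, not the exact constants from the 3380 reference summary):

# each record's data is rounded UP to a whole number of 3380 "cells"
# (32 bytes), plus a fixed per-record overhead, also in cells
from math import ceil

CELL = 32              # 3380 fixed-block cell size (bytes)
TRACK_CELLS = 1873     # stand-in track capacity, in cells
OVERHEAD_CELLS = 48    # stand-in per-record (no-key) overhead, in cells

def records_per_track(data_len):
    per_record = ceil(data_len / CELL) + OVERHEAD_CELLS
    return TRACK_CELLS // per_record

print(records_per_track(4096))   # -> 10 with these stand-ins (4k blocks/track)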

Circa 1980, I offered the MVS group 3370 FBA support. I was told that even if I provided fully integrated and tested support ... I had to show $26M incremental "profit" (something like $200M in additional sales) to cover documentation and training .... however, since IBM was already selling every disk it could make ... it would just translate into the same amount of FBA $$$ (not additional sales) ... AND I wasn't allowed to use life-time savings in the business case justification.

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

service processor trivia: IBM FE had a "scoping" incremental diagnostic process ... starting with the lowest level components. Starting with the 3081 TCMs, it was no longer possible to directly scope components ... so a service processor was invented with a huge number of probes into the TCMs. This was a "UC" processor with 3310 disk ... however a whole monitor had to be developed, with device drivers, interrupt handlers, its own error recovery, storage management, etc. About the same time the 3033 processor engineers moved over to trout/3090, the 3090 service processor group was formed and I got to know the manager of the group fairly well. Instead of inventing a whole new operating system for the 3092 ... he decided to scaffold it off VM370/CMS.

Note, late 70s and early 80s, I was blamed for online computer conferencing (precursor to modern social media) on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (and the 3092 manager was an active participant) ... folklore is that when the corporate executive committee was told, 5of6 wanted to fire me (and the 3092 group got a new manager).

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
other folklore
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Sometime later, the 3092 group contacted me about including a problem diagnostic/determination app that I had written in REXX (DUMPRX ... in use by nearly every internal datacenter) in the 3092 (note: by then, any knowledge that I had been involved with the 3092 back to its inception had evaporated).

Date: 31 October 1986, 16:32:58 EST
To: wheeler
From: ????
Re: 3090/3092 Processor Controll and plea for help

The reason I'm sending this note to you is due to your reputation of never throwing anything away that was once useful (besides the fact that you wrote a lot of CP code and (bless you) DUMPRX.

I've discussed this with my management and they agreed it would be okay to fill you in on what the 3090 PC is so I can intelligently ask for your assistance.

The 3092 (3090 PC) is basically a 4331 running CP SEPP REL 6 PLC29 with quite a few local mods. Since CP is so old it's difficult, if not impossible to get any support from VM development or the change team.

What I'm looking for is a version of the CP FREE/FRET trap that we could apply or rework so it would apply to our 3090 PC. I was hoping you might have the code or know where I could get it from (source hopefully).

The following is an extract from some notes sent to me from our local CP development team trying to debug the problem. Any help you can provide would be greatly appreciated.


... snip ... top of post, old email index

Date: 23 December 1986, 10:38:21 EST
To: wheeler
From: ????
Re: DUMPRX

Lynn, do you remember some notes or calls about putting DUMPRX into an IBM product? Well .....

From the last time I asked you for help you know I work in the 3090/3092 development/support group. We use DUMPRX exclusively for looking at testfloor and field problems (VM and CP dumps). What I pushed for back aways and what I am pushing for now is to include DUMPRX as part of our released code for the 3092 Processor Controller.

I think the only things I need are your approval and the source for RXDMPS.

I'm not sure if I want to go with or without XEDIT support since we do not have the new XEDIT.

In any case, we (3090/3092 development) would assume full responsibility for DUMPRX as we release it. Any changes/enhancements would be communicated back to you.

If you have any questions or concerns please give me a call. I'll be on vacation from 12/24 through 01/04.


... snip ... top of post, old email index

dumprx trivia: I never could figure out why DUMPRX wasn't released to customers. I eventually did get permission to give talks on how I implemented it at customer user group meetings ... and afterwards did start to see some similar implementations in some customer shops.

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

some archived posts mentioning TCMs, 3092 & dumprx
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019.html#76 How many years ago?
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2013c.html#25 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012m.html#0 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia

--
virtualization experience starting Jan1968, online at home since Mar1970

Copyright Software

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Copyright Software
Date: 24 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#14 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#16 Copyright Software
https://www.garlic.com/~lynn/2023e.html#19 Copyright Software
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2023e.html#21 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#23 Copyright Software
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software

"Charged For" "Dynamic Adaptive Resource Management"

short answer: I had crammed a whole lot of code into the charged-for "resource manager" ... available with VM370R3PLC9. Then the Release 4 "resource manager" had about 90% of the code moved to the base (free) release (w/o changing the price). Then for R5, the development group picks up responsibility for the "resource manager" (previously I was both development and customer support), merges it with some other code, and changes the name.

longer answer: In the morph from CP67->VM370 they dropped and/or simplified a lot of stuff (including eliminating lots of my stuff as well as multiprocessor support). Beginning of 1974, I start migrating my enhanced production operating system for internal datacenters from CP67 to VM370. My automated benchmarking system (synthetic workload; vary number of users, execution characteristics, and configuration to match real live systems) was 1st moved to VM370 (including the "autolog" command); however, VM370 would regularly crash w/o finishing the benchmark scenarios. The next thing I had to do was migrate the CP67 kernel serialization mechanism to VM370 (eliminating the constant crashes), allowing benchmark tests to complete. By the end of 1974, I had a production VM370R2PLC9-base CSC/VM distribution for internal datacenters. Then the decision was made to release my "resource manager" to customers as a charged-for add-on ... in which I included a large amount of other code.

Before "resource manager" customer distribution for VM370r3.9 base, I did a series of 2000 automated benchmarks that took three months elapsed time, used to validate dynamic adaptive resource management across a wide variety of workloads and configurations. Also done at CSC was a CMS\APL (converted to APL\CMS) analytical system model (made available on the world-wide online sales&marketing support HONE system as Performance Predictor where branch people could enter customers' workload&configuration and ask what happens when workload &/or configuration is changed). The final 2000 benchmarks was done in conjunction with the Performance Predictor ... before starting, the Performance Predictor would predict the benchmark, then the benchmark results would be compared with actual results (helping validate both the dynamic adaptive resource algorithms as well as the Performance Predictor). The first 1000 benchmarks were manually selecting for uniform distribution across observed customer&internal datacenter operations. The second 1000 benchmark characteristics were selected by Performance Predictor ... searching for possibly anomalous situations.

Part of the bunch of stuff included in the "resource manager" was kernel reorganization needed by multiprocessor support, but not the multiprocessor support itself. I then did multiprocessor support for CSC/VM, initially VM370R3-based, for the consolidated US online sales&marketing support HONE system up in Palo Alto, which had a maxed-out loosely-coupled 8-plex (cluster) with a large shared disk farm ... allowing them to add a 2nd processor to each system (trivia: when Facebook 1st moves to silicon valley, it is into a new bldg built next door to the former HONE datacenter).

Then it was decided to ship multiprocessor support in customer VM370R4 ... which required the majority of the code in the "resource manager" ... and at the time, IBM guidelines were that (kernel) direct hardware support had to be free. The resolution was to move 90% of the code from the "resource manager" into the free base source (w/o changing the price of the "resource manager"). Then for VM370R5, the "resource manager" was merged with some other code and the official VM370 group takes over responsibility for support&development.

... additional details in misc other comments in this thread.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource manager ("wheeler" scheduler)
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some recent performance predictor posts:
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/67

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/67
Date: 24 Aug, 2023
Blog: Facebook
several univ. were sold 360/67s for tss/360 ... which never really came to production fruition. Stanford and Univ. of Michigan wrote their own virtual memory operating systems for it ... most others just used it as a 360/65 with OS/360. Boeing Huntsville modified MVT13 with virtual memory support (but no paging) just to address the horrible MVT storage management problem, especially apparent for the long-running 2250 CAD applications that their pair of 360/67s were used for.

Some of the MIT CTSS/7094 people went to the 5th flr and did Multics. Others went to the 4th flr to the IBM Cambridge Science Center and did virtual machines (first CP/40 for a 360/40 with virtual memory hardware mods, which morphs into CP/67 when the 360/67, standard with virtual memory, became available), the internal network (technology also used for the corporate-sponsored univ BITNET), and lots of performance and online work. CTSS RUNOFF was redone for CMS as SCRIPT. Then in 1969, GML was invented at CSC and GML tag processing was added to SCRIPT (a decade later, GML morphs into ISO standard SGML, and after another decade morphs into HTML at CERN). Trivia: the 1st webserver in the US was on (Stanford) SLAC's VM370 system (CP67 having morphed into VM370)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML history posts
https://www.garlic.com/~lynn/submain.html#sgml

I had taken a 2 credit hr intro to fortran/computers and at the end of the semester was hired to re-implement 1401 MPIO on a 360/30. The univ had (also) been sold a 360/67 to replace the 709/1401 ... and temporarily got a 360/30 to replace the 1401. The 360/30 had 1401 emulation so it could continue to run MPIO directly; I guess I was part of gaining 360 experience. I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ. shutdown the datacenter over the weekend and I got to have the whole place to myself (although 48hrs w/o sleep made monday classes hard), and within a few weeks I had a 2000 card assembler program.

Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (and continued to have my weekend dedicated time). Student fortran jobs used to take under a second on the 709, but took over a minute with OS/360. I installed HASP and cut the time in half. I then redid STAGE2 SYSGEN so it could be run in the production jobstream, and organized the statements for careful placement of datasets and PDS members for optimized disk arm seek and (PDS directory) multi-track search, cutting student fortran jobs by another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I installed Univ. of Waterloo WATFOR.

MVT storage management trivia: a decade ago, I was asked to track down the decision to make virtual memory standard on all 370s. I found somebody that was staff to the executive making the decision. MVT storage management was so bad that regions were being specified four times larger than used. As a result, a typical 1mbyte 370/165 could only run four concurrent regions, insufficient to keep the system busy and justified. Mapping MVT to a 16mbyte virtual address space would allow increasing the number of concurrent regions by a factor of four times with little or no paging (very similar to running MVT in a CP/67 16mbyte virtual machine). Archived post with pieces of the exchanged email
https://www.garlic.com/~lynn/2011d.html#73
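
the arithmetic behind that justification (illustrative):

# illustrative arithmetic from the virtual-memory decision described above
real_mb = 1                    # typical 370/165 real storage (mbytes)
region_specified_mb = 0.25     # region sizes were specified ~4x larger...
region_used_mb = region_specified_mb / 4   # ...than the storage actually used
print(real_mb / region_specified_mb)       # 4 concurrent regions in real storage
# in a 16mbyte virtual address space, only the *used* pages need real
# storage, so roughly:
print(real_mb / region_used_mb)            # ~16 concurrent regions, little/no paging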

Note: before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Also disaster plans to replicate Renton up at the new 747 plant in Everett (Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter).

... while responsible for OS/360 at the univ., CSC came out to install CP67 (3rd install, after CSC itself and MIT Lincoln Labs) ... and I mostly got to play with it on weekends ... starting out rewriting lots of code improving OS/360 running in a virtual machine. The test jobstream ran 322 seconds on the real machine; initially under CP/67, 856secs (CP67 534secs CPU). After a few months, I have it reduced to 435secs (CP/67 113secs CPU, a reduction of 534-113=421secs CPU). Archived post with part of the SHARE presentation on the initial pathlength work:
https://www.garlic.com/~lynn/94.html#18

I then do dynamic adaptive resource management ("wheeler" scheduler) and a new page replacement algorithm. Also, original CP67 did FIFO DASD I/O and page transfers were a single 4k transfer per I/O. I implement ordered seek queuing (increases disk throughput and graceful degradation as load increases) and chaining of multiple page transfers, optimized for transfers/revolution (all queued for the same disk arm). For the 2301 drum, it increases throughput from approx. 75/sec to a peak of 270/sec.

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

While at Boeing, they enlarge the CFO's 360/30 machine room (used for payroll) up at Boeing Field to add a 360/67 for me to play with when I'm not doing other stuff. When I graduate, I join CSC (instead of staying at Boeing) and one of my hobbies was enhanced production operating systems for internal datacenters. The IBM Grenoble Science Center had a 1mbyte 360/67 (155 pageable pages after fixed storage requirements) and modified CP67 to implement the "working set dispatcher" that had been written up in 60s academic literature, running 35 concurrent users. CSC had a 768k 360/67 (104 pageable pages after fixed storage requirements) and I was running 80 users doing similar workloads with better interactive response and throughput (compared to Grenoble's 35 users, with Grenoble having 50% more available memory).

some recent 709, 1401, mpio, 360/30, fortran, watfor & boeing cfo posts
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

some posts mentioning working set dispatcher, grenoble, and cambridge
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2021j.html#19 Windows 11 is now available
https://www.garlic.com/~lynn/2021j.html#18 Windows 11 is now available
https://www.garlic.com/~lynn/2017k.html#35 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015c.html#66 Messing Up the System/360
https://www.garlic.com/~lynn/2014m.html#138 How hyper threading works? (Intel)
https://www.garlic.com/~lynn/2013l.html#25 Teletypewriter Model 33
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011l.html#6 segments and sharing, was 68000 assembly language programming

some posts mentioning 360/67, tss/360, stanford, Univ. Michigan, MTS
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2017d.html#75 Mainframe operating systems?
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016c.html#6 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015f.html#62 3705
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth

--
virtualization experience starting Jan1968, online at home since Mar1970

Russian Democracy

From: Lynn Wheeler <lynn@garlic.com>
Subject: Russian Democracy
Date: 24 Aug, 2023
Blog: Facebook
Looking back on Russia's war with Ukraine: What U.S. and EU should have done
https://www.washingtontimes.com/news/2023/aug/21/looking-back-on-russias-war-with-ukraine-what-us-a/
... has a list of years when former soviet bloc countries joined NATO and the EU

... note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which could *ONLY* be used for the purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). Then, from the law of unintended consequences, the invaders were told to bypass ammo dumps looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

then there is "Was Harvard responsible for the rise of Putin" ... after the fall of the Soviet Union, those sent over to teach capitalism were more intent on looting the country (and the Russians needed a Russian to oppose US looting). John Helmer: Convicted Fraudster Jonathan Hay, Harvard's Man Who Wrecked Russia, Resurfaces in Ukraine
http://www.nakedcapitalism.com/2015/02/convicted-fraudster-jonathan-hay-harvards-man-who-wrecked-russia-resurfaces-in-ukraine.html
If you are unfamiliar with this fiasco, which was also the true proximate cause of Larry Summers' ouster from Harvard, you must read an extraordinary expose, How Harvard Lost Russia, from Institutional Investor. I am told copies of this article were stuffed in every Harvard faculty member's inbox the day Summers got a vote of no confidence and resigned shortly thereafter.

... snip ...

How Harvard lost Russia; The best and brightest of America's premier university came to Moscow in the 1990s to teach Russians how to be capitalists. This is the inside story of how their efforts led to scandal and disgrace (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20130211131020/http://www.institutionalinvestor.com/Article/1020662/How-Harvard-lost-Russia.html
Mostly, they hurt Russia and its hopes of establishing a lasting framework for a stable Western-style capitalism, as Summers himself acknowledged when he testified under oath in the U.S. lawsuit in Cambridge in 2002. "The project was of enormous value," said Summers, who by then had been installed as the president of Harvard. "Its cessation was damaging to Russian economic reform and to the U.S.-Russian relationship."

... snip ...

I was tangentially involved; I was asked to help with part of the program for making Russia a democratic country, involving deploying 5000 brick&mortar banks across Russia at $1m/bank. Before the mechanics were worked out, the whole democratic effort imploded.

... US style capitalist kleptocracy has a long history ... even predating the banana republics

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

Archived posts mentioning 5,000 banks for the Russian Democratic program
https://www.garlic.com/~lynn/2022f.html#76 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022c.html#37 The Lost Opportunity to Set Post-Soviet Russia on a Stable Course
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
https://www.garlic.com/~lynn/2018c.html#50 Anatomy of Failure: Why America Loses Every War It Starts
https://www.garlic.com/~lynn/2018b.html#60 Revealed - the capitalist network that runs the world
https://www.garlic.com/~lynn/2017j.html#35 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017i.html#69 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017f.html#69 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017b.html#83 Sleepwalking Into a Nuclear Arms Race with Russia
https://www.garlic.com/~lynn/2017.html#7 Malicious Cyber Activity
https://www.garlic.com/~lynn/2016h.html#62 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015f.html#44 No, the F-35 Can't Fight at Long Range, Either

--
virtualization experience starting Jan1968, online at home since Mar1970

Next-Gen Autopilot Puts A Robot At The Controls

From: Lynn Wheeler <lynn@garlic.com>
Subject: Next-Gen Autopilot Puts A Robot At The Controls
Date: 25 Aug, 2023
Blog: Facebook
Next-Gen Autopilot Puts A Robot At The Controls
https://hackaday.com/2023/08/25/next-gen-autopilot-puts-a-robot-at-the-controls/

Non/relaxed-stability airframes are more efficient but require faster-than-human responses ... so they are fly-by-wire computer operation ... the pilot may still have a stick that provides "intention" to the computer, but the computer then figures out control of the flt surfaces
https://en.wikipedia.org/wiki/Fly-by-wire

When 747s were brand new, autopilot landing was touching down at the same exact spot on the SEATAC runway and the touchdown area was starting to severely crack. Folklore is that they then started to randomly slightly vary the glide slope approach landing signal ... to spread the (auto-pilot) touchdowns over a larger area.

A more recent story: in Afghanistan, Air Force drones were having more accidents/crashes (being operated by pilots back in the US) compared to (nearly identical) Army drones that had autopilot landing and were locally operated by non-pilots.
http://www.theregister.co.uk/2009/04/29/young_usaf_predator_pilot_officer_slam/

posts mentioning fly-by-wire
https://www.garlic.com/~lynn/2023.html#60 Boyd & IBM "Wild Duck" Discussion
https://www.garlic.com/~lynn/2021i.html#84 IBM Downturn
https://www.garlic.com/~lynn/2021e.html#5 The Plane Paradox: More Automation Should Mean More Training
https://www.garlic.com/~lynn/2021d.html#80 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021c.html#10 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#43 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#12 Boyd: The Fighter Pilot Who Loathed Lean?
https://www.garlic.com/~lynn/2019d.html#39 The Roots of Boeing's 737 Max Crisis: A Regulator Relaxes Its Oversight
https://www.garlic.com/~lynn/2019d.html#38 Trump's Message to U.S. Intelligence Officials: Be Loyal or Leave
https://www.garlic.com/~lynn/2019.html#69 Digital Planes
https://www.garlic.com/~lynn/2018f.html#80 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018e.html#84 Top CEOs' compensation increased 17.6 percent in 2017
https://www.garlic.com/~lynn/2018e.html#56 NATO is a Goldmine for the US/Military Industrial Complex
https://www.garlic.com/~lynn/2018e.html#50 OT: Trump
https://www.garlic.com/~lynn/2018c.html#63 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2017h.html#55 Pareto efficiency
https://www.garlic.com/~lynn/2017h.html#54 Pareto efficiency
https://www.garlic.com/~lynn/2017c.html#79 An OODA-loop is a far-from-equilibrium, non-linear system with feedback
https://www.garlic.com/~lynn/2017c.html#51 F-35 Replacement: F-45 Mustang II Fighter -- Simple & Lightweight
https://www.garlic.com/~lynn/2016b.html#88 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#6 For those who like to regress to their youth? :-)
https://www.garlic.com/~lynn/2015f.html#42 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#20 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2014j.html#8 Super Cane's Computers run Windows
https://www.garlic.com/~lynn/2014g.html#22 Has the last fighter pilot been born?
https://www.garlic.com/~lynn/2014f.html#90 A Drone Could Be the Ultimate Dogfighter
https://www.garlic.com/~lynn/2014d.html#3 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2013n.html#15 Boyd Blasphemy: Justifying the F-35
https://www.garlic.com/~lynn/2013m.html#101 Boyd Blasphemy: Justifying the F-35
https://www.garlic.com/~lynn/2011n.html#20 UAV vis-a-vis F35
https://www.garlic.com/~lynn/2011l.html#63 UAV vis-a-vis F35
https://www.garlic.com/~lynn/2003d.html#6 Surprising discovery

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/67

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/67
Date: 26 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
recent posts mentioning 360/67 & (Univ. Michigan) MTS
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#48 MTS & IBM 360/67

late 70s, home office with a portable microfiche viewer and CDI miniterm (later replaced with an IBM 3101, glass teletype) ... the plant site had a microfiche printer; I could route output to it and get overnight turnaround

late 70s home office

Not long after leaving IBM, I was brought in as a consultant to a small client/server startup. Two former Oracle people (whom we had worked with on IBM's HA/CMP commercial cluster scale-up, before cluster scale-up was transferred for announce as the IBM supercomputer, we were told we couldn't work on anything with more than four processors, and we decided to leave IBM) were there, responsible for something called the "commerce server", and wanted to do payment transactions on the server. The startup had also invented something they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks. Some of the other consultants were also spending time up the road at GOOGLE, scaling up their internet infrastructure (including modifying internet boundary routers to load-balance traffic to the increasing numbers of backend servers).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

trivia: Postel (Internet standards/rfc editor),
https://en.wikipedia.org/wiki/Jon_Postel
sponsored my talk on "Why Internet Isn't Business Critical Dataprocessing" based on the software & procedures I had to do for electronic commerce.

periodically mentioned; 1st webserver in the US was on (Stanford) SLAC VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning internet business critical dataprocessing
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019e.html#87 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?

--
virtualization experience starting Jan1968, online at home since Mar1970

Boyd OODA-loop

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Boyd OODA-loop
Date: 26 Aug, 2023
Blog: Linkedin
OODA-loop
https://en.wikipedia.org/wiki/OODA_loop

I was introduced to Boyd in the early 80s and used to sponsor his briefings (physical "foils" and overhead projector). He made references to all OODA parts continually running concurrently & asynchronously, and to observing from every possible facet (a countermeasure to various kinds of biases). I once suggested Boyd could plausibly have taken 1846 Halleck
https://www.amazon.com/Elements-Instruction-Fortification-Embracing-ebook/dp/B004TPMN16/
loc5019-20:
A rapid coup d'oeil prompt decision, active movements, are as indispensable as sound judgment; for the general must see, and decide, and act, all in the same instant.

... snip ...

changing "see" to "observe" and added "orientation" for experience, learning, adapting (some sort of "orientation" might have been implied by "sound judgement") ... however Chet said he has seen no evidence that Boyd read Halleck. Related, in early 90s, one of the "fathers of AI" (Peter Bogh Andersen) claimed AI was being done wrong because it lacked "context".

Boyd would also talk about Guderian's "verbal orders only" for the blitzkrieg; Impact Of Technology On Military Manpower Requirements (Dec1980) ... (Spinney's paper embedded starting pg55)
https://books.google.com/books?id=wKY3AAAAIAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v
top of pg104 (Spinney pg49):
# Verbal orders only, convey only general intentions, delegate authority to lowest possible level and give subordinates broad latitude to devise their own means to achieve commander's intent. Subordinates restrict communications to upper echelons to general difficulties and progress, Result: clear, high speed, low volume communications,

... snip ...

Boyd spent some time on E/M theory & design of aircraft (and using OODA to battle the status quo and entrenched bureaucracies)
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
and "fingerspitszengefuhl" ... this references "mental map" (analogous to "context")
https://en.wikipedia.org/wiki/Fingerspitzengef%C3%BChl

... he frequently referred to "intuition", in many cases because there are no words to describe it ... from the HBS article "How Toyota Turns Workers Into Problem Solvers"
http://hbswk.hbs.edu/item/how-toyota-turns-workers-into-problem-solvers
To paraphrase one of our contacts, he said, "It's not that we don't want to tell you what TPS is, it's that we can't. We don't have adequate words for it. But, we can show you what TPS is."

We've observed that Toyota, its best suppliers, and other companies that have learned well from Toyota can confidently distribute a tremendous amount of responsibility to the people who actually do the work, from the most senior, experienced member of the organization to the most junior. This is accomplished because of the tremendous emphasis on teaching everyone how to be a skillful problem solver.


... snip ...

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

posts mentioning "OODA", "fingerspitszengefuhl", & "TPS"
https://www.garlic.com/~lynn/2023c.html#54 US Auto Industry
https://www.garlic.com/~lynn/2023.html#60 Boyd & IBM "Wild Duck" Discussion
https://www.garlic.com/~lynn/2022h.html#92 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2021h.html#26 Whatever Happened to Six Sigma?
https://www.garlic.com/~lynn/2019e.html#7 ISO9000, Six Sigma
https://www.garlic.com/~lynn/2019c.html#30 Coup D'Oeil: Strategic Intuition in Army Planning
https://www.garlic.com/~lynn/2019c.html#20 The Book of Five Rings
https://www.garlic.com/~lynn/2019.html#55 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018d.html#8 How to become an 'elastic thinker' and problem solver
https://www.garlic.com/~lynn/2017k.html#24 The Ultimate Guide to the OODA-Loop
https://www.garlic.com/~lynn/2017i.html#32 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017g.html#100 Why CEO pay structures harm companies
https://www.garlic.com/~lynn/2017g.html#93 The U.S. Military Believes People Have a Sixth Sense
https://www.garlic.com/~lynn/2014h.html#25 How Comp-Sci went from passing fad to must have major

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/67

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/67
Date: 26 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67

Stanford did Orvyl/Wylbur for their 360/67 ... Orvyl was the virtual memory operating system; the Wylbur editor was later ported to MVS and still survives in some places
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
http://www.stanford.edu/dept/its/support/wylorv/

Univ Michigan did MTS for their 360/67
https://en.wikipedia.org/wiki/Michigan_Terminal_System
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/
https://web.archive.org/web/20230206194103/http://archive.michigan-terminal-system.org/myths
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat01.html
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat02.html
https://web.archive.org/web/20050212183905/www.itd.umich.edu/~doc/Digest/0596/feat03.html

late 60s, my wife was in the UMich CICE (computer information and control engineering) graduate school; one weekend she was in the lab working with a PDP when a demonstration came through outside the bldg. She went over and pressed her nose to the window watching the (SDS) demonstration pass through the area ... after they were gone, she noticed that every window she could see was broken except the one she was standing at.
https://cse.engin.umich.edu/about/history/
http://um2017.org/History_of_Engineering_1940-1970.html

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Boyd OODA-loop

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Boyd OODA-loop
Date: 27 Aug, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023e.html#38 Boyd OODA-loop

bureaucratic battles trivia: Boyd would claim that they spent 18 months collecting written authorization for every detail Spinney testified about in the congressional hearing ... but SECDEF still tried to have them jailed. SECDEF then tried to have Boyd banned from the Pentagon for life (but Boyd still had congressional coverage). Gone behind a paywall, but mostly free at the wayback machine (not all pages captured, have to select individual pages at bottom; 3&8 missing)
https://web.archive.org/web/20070320170523/http://www.time.com/time/magazine/article/0,9171,953733,00.html
also https://content.time.com/time/magazine/article/0,9171,953733,00.html

He then talked about helping with a trimmed-down F16 (tigershark/F20) ... 1/3rd the cost, simpler, sufficient for 80-90% of the mission profiles in the world (potentially up to 10 times the aggregate plane hrs in the air, between more planes for the dollar and more flt-hrs per maint-hr). Realizing that the USAF would never accept it, they pitched it to other countries. Then the MIC F16 forces lobbied congress to offer "directed appropriation" USAID (can only be spent on F16s) to all potential F20 customers. The countries would tell them that they could either spend their own money on F20s or get F16s "for free" (a congressional gift to the military-industrial complex that doesn't show up in the Pentagon budget).

Don't know if I was tainted by the Boyd association or not. Summer 2002, an IC-ARDA (since renamed IARPA) unclassified BAA was about to close (it basically said that none of the tools the agency had did the job) and we got a call asking us to submit a response (before the close). We got the response in and then had a couple of meetings showing we could "do the job". Then nothing, total silence. We found out later that the IC-ARDA person who called us had been re-assigned (somewhere on the Dulles access road), and then the Success of Failure articles started appearing:
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

A similar USAID scenario shows up later for the IRAQ2 invasion; claims that MIC wanted the war so badly that corporate reps were telling former Soviet bloc countries that if they voted in the UN for the IRAQ2 invasion, they would get NATO membership and "directed appropriation" USAID (could only be spent on new US military arms & equipment).

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
Success of Failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
Military-Industrial(-Congressional) Complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Weapons of Mass Destruction posts
https://www.garlic.com/~lynn/submisc.html#wmds

some posts mentioning F20/tigershark and USAID
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2021k.html#94 Finland picks F-35
https://www.garlic.com/~lynn/2021k.html#93 F20/Tigershark & Directed Appropriations
https://www.garlic.com/~lynn/2021h.html#108 Tigershark: When What Might Have Been Became What Never Was
https://www.garlic.com/~lynn/2021e.html#46 SitRep: Is the F-35 officially a failure? Cost overruns, other issues prompt Air Force to look for "clean sheet" fighter
https://www.garlic.com/~lynn/2021d.html#80 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021c.html#8 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2019d.html#91 Why F-5s Beat Out F-16s For The Navy's Latest Commercial Aggressor Contract
https://www.garlic.com/~lynn/2018.html#11 This is the plane that almost beat out the legendary F-16
https://www.garlic.com/~lynn/2017j.html#73 A-10
https://www.garlic.com/~lynn/2017c.html#51 F-35 Replacement: F-45 Mustang II Fighter -- Simple & Lightweight
https://www.garlic.com/~lynn/2016g.html#57 Boyd F15, F16, F20
https://www.garlic.com/~lynn/2016b.html#50 A National Infrastructure Program Is a Smart Idea We Won't Do Because We Are Dysfunctional
https://www.garlic.com/~lynn/2015.html#54 How do we take political considerations into account in the OODA-Loop?
https://www.garlic.com/~lynn/2013i.html#78 Has the US Lost Its Grand Strategic Mind?

--
virtualization experience starting Jan1968, online at home since Mar1970

Systems Network Architecture

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Systems Network Architecture
Date: 27 Aug, 2023
Blog: Facebook
Starting in the early 80s, I had the HSDT project with mainframe T1 and faster computer links, while SNA was stuck at 56kbits/sec ... until the late-80s 3737 hack that spoofed VTAM as a local CTCA, immediately ACKing RUs (before they were actually transmitted). It had a boatload of memory and M68K processors to simulate a host VTAM ... but was still limited to about 2mbits/sec aggregate over short-haul terrestrial T1 (1.5mbits/sec full-duplex, 3mbits/sec aggregate) ... aka VTAM pacing wasn't designed for faster links, even low-latency terrestrial. HSDT had even been doing full-speed aggregate full-duplex T1 over geo-sync satellites.
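
As a back-of-envelope illustration of the pacing problem (the window size, RTTs, and line rate pairing here are invented assumptions, not measurements from the 3737 work): with a fixed pacing window, at most one window can be outstanding per round trip, so throughput is roughly window/RTT ... and a 3737-style box that ACKs locally shrinks the host's effective RTT to the channel latency:

  /* pacing-window arithmetic; all numbers are illustrative assumptions */
  #include <stdio.h>

  /* one pacing window outstanding per RTT, capped at line rate */
  static double mbits(double window_bytes, double rtt_s, double line_bps) {
      double paced_bps = window_bytes * 8.0 / rtt_s;
      return (paced_bps < line_bps ? paced_bps : line_bps) / 1e6;
  }

  int main(void) {
      double window = 4096.0;    /* assumed small fixed pacing window */
      double t1 = 1.544e6;       /* T1 line rate, bits/sec */

      printf("terrestrial 60ms RTT: %.2f mbit/sec\n", mbits(window, 0.060, t1));
      printf("geo-sync ~560ms RTT:  %.2f mbit/sec\n", mbits(window, 0.560, t1));
      printf("2ms local early ACK:  %.2f mbit/sec\n", mbits(window, 0.002, t1));
      return 0;
  }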

some old 3737 email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

For a while I reported to the same executive as the original APPN (aka AWP164) architect; I was working on TCP/IP and periodically telling the APPN person that he should come work on real networking ... that the SNA organization would never appreciate what he was doing. When it came time to announce APPN, the SNA organization "non-concurred". Escalation eventually resulted in the APPN announcement letter being carefully rewritten to not imply any relationship between APPN and SNA.

The communication group was fiercely fighting off client/server, distributed computing, and the release of mainframe TCP/IP. Apparently some customers managed to get the announcement of TCP/IP turned around. The communication group then changed tactics and said that since they had corporate ownership of everything that crossed datacenter walls, mainframe TCP/IP had to be released through them. What shipped got an aggregate of 44kbytes/sec while using nearly a whole 3090 CPU. I then did the changes for RFC1044 support, and in some tuning tests at Cray Research between a Cray and a 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed). In the 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he initially demo'ed was TCP significantly faster than LU6.2. He was then told that everybody "knows" LU6.2 is significantly faster than a "correct" TCP implementation, and they would only be paying for a "correct" implementation.

My wife got dragged into being co-author of the response to a highly secure gov. agency request, where she included a 3-tier networking architecture. We were then out making customer executive presentations on TCP/IP, 3-tier architecture, Ethernet, routers, etc. ... and taking arrows in the back (with fabricated misinformation) from the SNA, SAA, and token-ring forces. A decade earlier she had been con'ed into going to POK to be responsible for "loosely-coupled" architecture; she didn't remain long, in part because of ongoing battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation. Before that she was co-author of AWP39, "peer-to-peer" networking (since SNA had co-opted "system network architecture" but wasn't a "network", they had to explicitly qualify with "peer-to-peer").

Also late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance. However, he opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing datacenters to more distributed-computing-friendly platforms. They had come up with a number of solutions that were constantly vetoed by the communication group (with their corporate ownership of everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). The communication group datacenter stranglehold wasn't just disks but the whole mainframe, and a couple of years later IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company. reference gone 404 ... lives on at wayback
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM, but we got a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, many of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. Before we got started, a new CEO was brought in and reversed the breakup.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
3-tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
loosely-coupled, peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning both AWP39 and AWP164
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#99 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010e.html#5 What is a Server?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?
https://www.garlic.com/~lynn/2009q.html#83 Small Server Mob Advantage
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2009i.html#26 Why are z/OS people reluctant to use z/OS UNIX?
https://www.garlic.com/~lynn/2009e.html#56 When did "client server" become part of the language?
https://www.garlic.com/~lynn/2008d.html#71 Interesting ibm about the myths of the Mainframe
https://www.garlic.com/~lynn/2007r.html#10 IBM System/3 & 3277-1
https://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
https://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
https://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
https://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
https://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
https://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
https://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

Systems Network Architecture

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Systems Network Architecture
Date: 27 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture

Note: early in HSDT, I was working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cut the budget, some other things happened, and eventually an RFP was released (in part based on what we already had running). Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid. Being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

We had been asked to do the HA/6000 project for the NYTimes, to move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. We started working with the four primary RDBMS vendors that had both VAXCluster and open system support (Informix, Ingres, Oracle, Sybase) ... and I did an API with some VAXCluster semantics to ease the port to HA/6000. Mainframe DB2 wasn't portable, and Toronto was just starting on "Shelby" for OS2 (it would also be named DB2, but it would be a long time before it had cluster support).
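
For flavor, a minimal sketch of VAXCluster-style lock semantics (the six VMS DLM lock modes and their standard compatibility matrix; the post doesn't show the actual HA/6000 API, so this is a generic illustration):

  /* classic VMS-DLM-style lock modes and compatibility check;
     illustrative only, not the HA/6000 API */
  #include <stdbool.h>
  #include <stdio.h>

  typedef enum { NL, CR, CW, PR, PW, EX } lkmode;

  /* may 'want' be granted while 'held' is outstanding on the resource? */
  static const bool compat[6][6] = {
      /* held:   NL    CR    CW    PR    PW    EX     (want is the row) */
      /* NL */ { true, true, true, true, true, true  },
      /* CR */ { true, true, true, true, true, false },
      /* CW */ { true, true, true, false,false,false },
      /* PR */ { true, true, false,true, false,false },
      /* PW */ { true, true, false,false,false,false },
      /* EX */ { true, false,false,false,false,false },
  };

  int main(void) {
      printf("CR while PW held: %s\n", compat[CR][PW] ? "grant" : "queue");
      printf("EX while PR held: %s\n", compat[EX][PR] ? "grant" : "queue");
      return 0;
  }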

spent some time w/Epstein, founder of Sybase (Sybase would later license its server to Microsoft for SQL Server).
https://en.wikipedia.org/wiki/Sybase
https://learnsql.com/blog/history-ms-sql-server/

I then renamed HA/6000 to HA/CMP when I started work on technical/scientific cluster scale-up with national labs and commercial cluster scale-up with the RDBMS vendors (16-way mid92, 128-way ye92). Early Jan1992, had a cluster scale-up meeting with Oracle CEO Ellison. Then end Jan1992, cluster scale-up was transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Contributing: the mainframe DB2 group was complaining that if we were allowed to go ahead, we would be at least 5yrs ahead of them.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

When I first transferred to SJR, I did some work with Jim Gray and Vera Watson on the original SQL/relational implementation, "System/R". Later I was involved in transferring the technology ("under the radar") to Endicott for SQL/DS. Then when "EAGLE" imploded, there was a request for how fast System/R could be ported to MVS ... which was eventually released as DB2 (originally for decision support only).

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/65 & 360/67 Multiprocessors

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/65 & 360/67 Multiprocessors
Date: 28 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#39 IBM 360/67

A 360/67 "simplex" was pretty much a 360/65 with virtual memory. Then it starts to diverge, 360/67 duplex (two processor) had channel controller and multi-ported memory ... processors could access all channels, channel and processors could be doing memory transfers concurrently. See 360/67 simplex/duplex functional at bitsaver:
http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/

The original announce was for up to four-way systems ... so the control register fields have specifications for four-processor systems. I believe all built systems were 2-way except for one 3-way. That 3-way had additional features; for example, rather than the control registers just showing the "channel controller" switch settings, it was possible to change configuration settings by changing values in control registers via software.

The 65MP would sort of simulate shared multiprocessor channels by having dual channel-attached controllers connected (to a channel from each processor) at the same channel/controller/device addresses.

MVT/MP had a single kernel "spin-lock" ... only one processor could be executing kernel code at a time (which tended to seriously limit the throughput of a two-processor system). Charlie was doing fine-grain kernel locking for CP67 at the science center when he invented compare-and-swap (CAS chosen for Charlie's initials). We had meetings in POK with the owners of the 370 architecture to try and get CAS added to 370. They said the POK favorite-son operating system (MVT) people claimed that 360 "test-and-set" was sufficient (because of their single kernel spin-lock). We were told that to get CAS justified for 370, we had to come up with uses other than multiprocessor kernel locking ... thus was born its use by multi-threaded applications for various kinds of serialization operations (w/o needing kernel lock calls), whether running on single or multiple processor systems.
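
A minimal sketch of that non-kernel serialization use, with C11 atomics standing in for the 370 compare-and-swap instruction (illustration only, not IBM code): threads do the fetch/compute/conditional-store retry loop, with no kernel lock calls:

  /* lock-free shared counter via compare-and-swap retry loop
     (requires C11 atomics and C11 threads support) */
  #include <stdatomic.h>
  #include <stdio.h>
  #include <threads.h>

  atomic_long counter;   /* shared state, zero-initialized */

  int worker(void *arg) {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          long old = atomic_load(&counter);
          /* fetch, compute, conditionally store; on failure 'old'
             is refreshed with the current value and we retry */
          while (!atomic_compare_exchange_weak(&counter, &old, old + 1))
              ;
      }
      return 0;
  }

  int main(void) {
      thrd_t t[4];
      for (int i = 0; i < 4; i++) thrd_create(&t[i], worker, NULL);
      for (int i = 0; i < 4; i++) thrd_join(t[i], NULL);
      printf("counter = %ld (expect 400000)\n", atomic_load(&counter));
      return 0;
  }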

trivia: a decade ago I was asked if I could track down the decision to add virtual memory to all 370s; I found a staff member who had reported to the executive that made the decision. Basically, MVT storage management was so bad that regions had to be specified four times larger than normally used ... as a result, a 1mbyte 370/165 would only be running four regions concurrently, insufficient to keep the system busy and justified. Mapping MVT into a 16mbyte virtual memory would allow increasing the number of concurrently running regions by a factor of four (aka VS2) with little or no paging (since each region actually touched only about a quarter of what it specified, four times as many regions still fit in real storage) ... very similar to running MVT in a CP67 16mbyte virtual machine. Old archived email with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

It mentions stopping by to see Ludlow, who was doing the initial VS2 implementation on a 360/67. The biggest piece of code was similar to what is done for virtual machines: creating a copy of channel programs (which had virtual addresses specified), substituting real addresses for the virtual addresses. In fact, Ludlow borrowed CP67 CCWTRANS for crafting into EXCP/SVC0.
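
A much-simplified sketch of that channel-program copying (real CCW layout, TIC/data chaining, page-boundary splits, and page pinning are all elided; the page table and names are invented): copy each CCW, substitute the real address the virtual page maps to, and start the channel on the shadow copy:

  /* toy CCWTRANS-style shadow channel program; illustration only */
  #include <stdint.h>
  #include <stdio.h>

  typedef struct {          /* much-simplified channel command word */
      uint8_t  op;          /* command code */
      uint32_t addr;        /* data address: virtual in the guest copy */
      uint16_t count;       /* byte count */
  } ccw_t;

  /* toy page table: virtual page number -> real page number, 4KB pages */
  static const uint32_t pagetab[4] = { 7, 2, 9, 5 };

  static uint32_t v2r(uint32_t vaddr) {
      return (pagetab[vaddr >> 12] << 12) | (vaddr & 0xfff);
  }

  int main(void) {
      ccw_t vprog[2] = { { 0x02, 0x1000, 512 },   /* read into vpage 1 */
                         { 0x01, 0x3200, 256 } }; /* write from vpage 3 */
      ccw_t shadow[2];

      for (int i = 0; i < 2; i++) {
          shadow[i] = vprog[i];                 /* copy op/count/flags */
          shadow[i].addr = v2r(vprog[i].addr);  /* substitute real addr */
      }
      /* the channel would be started on 'shadow', never on 'vprog' */
      for (int i = 0; i < 2; i++)
          printf("ccw %d: virt %05x -> real %05x\n", i,
                 (unsigned)vprog[i].addr, (unsigned)shadow[i].addr);
      return 0;
  }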

note: after the decision was made to add virtual memory to all 370s, the decision was made to do 370 virtual machines (VM370), and lots of stuff was dropped and/or greatly simplified (including multiprocessor support) ... extended comment (in the public Internet Old Farts Club, on software copyright) about the morph of CP67->VM370; I would spend some amount of 1974 redoing VM370 with dropped CP67 features
https://www.garlic.com/~lynn/2023e.html#33 Copyright software

science center (including having done virtual machines) posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SMP, multiprocessor, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

recent posts mentioning decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/65 & 360/67 Multiprocessors

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/65 & 360/67 Multiprocessors
Date: 28 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors

some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS; others went to the IBM science center on the 4th flr and did the virtual machine CP/40 for a 360/40 that had virtual memory hardware added, which morphs into CP/67 when the 360/67 became available with virtual memory standard. CTSS RUNOFF was rewritten for CMS as "SCRIPT" (when GML was invented at the science center in 1969, GML tag processing was added to SCRIPT; after a decade GML morphs into ISO standard SGML, and after another decade morphs into HTML at CERN). A co-worker was responsible for the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s), technology also used for the corporate-sponsored univ "BITNET".
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language
https://en.wikipedia.org/wiki/SGMLguid

coworker responsible for internal network
https://en.wikipedia.org/wiki/Edson_Hendricks
some background on Ed failing to convince IBM to use TCP/IP and the internet; SJMN news article gone 404, but lives on at the wayback machine
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
some more from his web pages (at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

univ "bitnet"
https://en.wikipedia.org/wiki/BITNET

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML/SGML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

early use of the internal network was a distributed development project with Endicott to add 370/145 virtual memory support to CP67 ... CP67 was modified to run with the 370 virtual memory architecture: CP67L running on the real 360/67 and providing 360 & 360/67 virtual machines, CP67H running in a 360/67 virtual machine providing both 360 & 370 virtual machines, and CP67I running in 370 virtual machines (the extra layer of indirection was because profs, staff, and students from boston/cambridge univs were using the science center CP67/CMS, and we wanted to prevent the unannounced 370 virtual memory from leaking). This was in regular use a year before the 1st engineering machine (370/145) with virtual memory was operational (and was used for testing the machine). Then some engineers from San Jose came out and added 3330&2305 device support for CP67SJ, which was deployed widely internally, even after VM370 was operational.

Along the way, the 370/165 engineers started complaining that if they had to retrofit the full 370 virtual memory architecture to the 165, it would slip the virtual memory announce by six months. Eventually the decision was made to drop back to the 165 subset (software that implemented support for the dropped features had to be redone, and most of the other models, having already implemented the full architecture, had to be reworked to the 165 subset).

as mentioned up thread, I then spent some amount of 1974 upgrading VM370 to the CP67 level (in the morph of CP67->VM370, lots of function/features were dropped and/or greatly simplified).
https://www.garlic.com/~lynn/2023e.html#33 Copyright software

some posts mentioning cp67l, cp67h, cp67i, cp67sj:
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/65 & 360/67 Multiprocessors

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/65 & 360/67 Multiprocessors
Date: 28 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors

trivia: my wife was in the gburg JES group (and one of the catchers for ASP/JES3) ... she was then con'ed into going to POK to be responsible for loosely-coupled architecture (where she did the peer-coupled shared data architecture). She didn't remain long, because of 1) repeated battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX, except for IMS hot-standby).

My wife has a story about asking Vern Watts (father of the IMS DBMS) who he was going to ask for permission to do IMS hot-standby. He said "nobody"; he would just tell them when it was all done.

(mainframe) peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata

some recent posts mentioning loosely-coupled, jes3, vern watts
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2022h.html#1 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#74 IBM/PC
https://www.garlic.com/~lynn/2022d.html#51 IBM Spooling
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022b.html#11 Seattle Dataprocessing
https://www.garlic.com/~lynn/2021k.html#114 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening
https://www.garlic.com/~lynn/2021b.html#72 IMS Stories
https://www.garlic.com/~lynn/2021.html#55 IBM Quota

--
virtualization experience starting Jan1968, online at home since Mar1970

Boyd OODA at Linkedin

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boyd OODA at Linkedin
Date: 29 Aug, 2023
Blog: Facebook
Boyd OODA at Linkedin
https://www.linkedin.com/feed/hashtag/?keywords=ooda
and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

OODA-loop
https://en.wikipedia.org/wiki/OODA_loop

I was introduced to Boyd in the early 80s and used to sponsor his briefings (physical "foils" and overhead projector). He made references to all OODA parts continually running concurrently & asynchronously, and to observing from every possible facet (a countermeasure to various kinds of biases). I once suggested Boyd could plausibly have taken 1846 Halleck
https://www.amazon.com/Elements-Instruction-Fortification-Embracing-ebook/dp/B004TPMN16/
loc5019-20:
A rapid coup d'oeil prompt decision, active movements, are as indispensable as sound judgment; for the general must see, and decide, and act, all in the same instant.

... snip ...

changing "see" to "observe" and added "orientation" for experience, learning, adapting (some sort of "orientation" might have been implied by "sound judgement") ... however Chet said he has seen no evidence that Boyd read Halleck. Related, in early 90s, one of the "fathers of AI" (Peter Bogh Andersen) claimed AI was being done wrong because it lacked "context".

Boyd also talked about Guderian's "verbal orders only" for the blitzkrieg; Impact Of Technology On Military Manpower Requirements (Dec1980) ... (Spinney's paper embedded starting pg55)
https://books.google.com/books?id=wKY3AAAAIAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v
top of pg104 (Spinney pg49):
# Verbal orders only, convey only general intentions, delegate authority to lowest possible level and give subordinates broad latitude to devise their own means to achieve commander's intent. Subordinates restrict communications to upper echelons to general difficulties and progress, Result: clear, high speed, low volume communications,

... snip ...

Using OODA to battle the status quo and entrenched bureaucracies ... Boyd had a story that they spent 18 months making sure there was written authorization covering all the details for Spinney's testimony in the congressional hearing. However, SECDEF 1st wanted to have them behind bars; then, blaming Boyd for the article, he wanted Boyd banned from the Pentagon and transferred to Alaska. Boyd had congressional coverage and the SECDEF directive was shortly rescinded. Gone behind a paywall, but mostly free at the wayback machine
https://web.archive.org/web/20070320170523/http://www.time.com/time/magazine/article/0,9171,953733,00.html
also https://content.time.com/time/magazine/article/0,9171,953733,00.html ... may have to click on individual page numbers (instead of just "Next>>"; not all pages captured).
Supposedly SECDEF directed new classification "NO-SPIN" ... unclassified but not to be shared with Spinney.

Confusion and disorder
https://slightlyeastofnew.com/2023/08/29/confusion-and-disorder/

Boyd spent some time on E/M theory & design of aircraft
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory

He talked about helping with a trimmed-down F16 (tigershark/F20) ... 1/3rd the cost, simpler, sufficient for possibly 80-90% of the mission profiles in the world (potentially up to 10 times the aggregate plane hrs in the air, between more planes for the dollar and more flt-hrs per maint-hr). Realizing that the USAF would never accept it, they pitched it to other countries. Then the MIC F16 forces lobbied congress to offer "directed appropriation" USAID (can only be spent on F16s) to all potential F20 customers. The countries would tell them that they could either spend their own money on F20s or get F16s "for free" (a congressional gift to the military-industrial complex that doesn't show up in the Pentagon budget).

... also "fingerspitszengefuhl" ... this references "mental map" (analogous to "context")
https://en.wikipedia.org/wiki/Fingerspitzengef%C3%BChl

from New Conception of War, John Boyd, The U.S. Marines, And Maneuver Warfare
https://www.usmcu.edu/Portals/218/ANewConceptionOfWar.pdf
"Fingerspitzengefuhl and the Glue", pg104;
German military tradition had a label for the key enabler of this style of war: fingerspitzengefuhl, which literally meant "finger-tip feeling." This was an intuitive ability to look at a given situation, immediately grasp the essentials, and rapidly act.

... pg105:
Throughout Patterns of Conflict," Boyd hammered on the need for warriors to use fingerspitzengefuhl to be "adaptable and unpredictable . . . because the moment you start becoming rigid or non-adaptable and predictable, you know the game's over."

... snip ...

... "intuition", in many cases because no words to describe ... from HBS article; "How Toyota Turns Workers Into Problem Solvers"
http://hbswk.hbs.edu/item/how-toyota-turns-workers-into-problem-solvers
To paraphrase one of our contacts, he said, "It's not that we don't want to tell you what TPS is, it's that we can't. We don't have adequate words for it. But, we can show you what TPS is."

We've observed that Toyota, its best suppliers, and other companies that have learned well from Toyota can confidently distribute a tremendous amount of responsibility to the people who actually do the work, from the most senior, experienced member of the organization to the most junior. This is accomplished because of the tremendous emphasis on teaching everyone how to be a skillful problem solver.


... snip ...

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

recent posts specifically mentioning F20/tigershark
https://www.garlic.com/~lynn/2023e.html#40 Boyd OODA-loop
https://www.garlic.com/~lynn/2023b.html#71 That 80s Feeling: How to Get Serious About Bank Reform This Time and Why We Won't
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2021k.html#94 Finland picks F-35
https://www.garlic.com/~lynn/2021k.html#93 F20/Tigershark & Directed Appropriations
https://www.garlic.com/~lynn/2021h.html#108 Tigershark: When What Might Have Been Became What Never Was
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021e.html#46 SitRep: Is the F-35 officially a failure? Cost overruns, other issues prompt Air Force to look for "clean sheet" fighter
https://www.garlic.com/~lynn/2021d.html#80 OSS in China: Prelude to Cold War
https://www.garlic.com/~lynn/2021c.html#8 Air Force thinking of a new F-16ish fighter

other recent posts mentioning TPS:
https://www.garlic.com/~lynn/2023e.html#38 Boyd OODA-loop
https://www.garlic.com/~lynn/2023c.html#54 US Auto Industry
https://www.garlic.com/~lynn/2023.html#60 Boyd & IBM "Wild Duck" Discussion
https://www.garlic.com/~lynn/2022h.html#92 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#51 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#109 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#85 Destruction Of The Middle Class
https://www.garlic.com/~lynn/2022.html#117 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2021k.html#26 Twelve O'clock High at IBM Training
https://www.garlic.com/~lynn/2021h.html#26 Whatever Happened to Six Sigma?
https://www.garlic.com/~lynn/2020.html#12 Boyd: The Fighter Pilot Who Loathed Lean?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/67

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/67
Date: 29 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#39 IBM 360/67

some of the MIT CTSS/7094 people went to 5th flr to do multics, others went to ibm science center on 4th (and did virtual machine CP67). Some friendly rivalry between 4th & 5th flrs. One of MULTICS installations was USAFDC.
https://www.multicians.org/sites.html
https://www.multicians.org/mga.html#AFDSC
https://www.multicians.org/site-afdsc.html

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP/67 (precursor to IBM's VM370) at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

In spring 1979, some from USAFDC wanted to come by to talk about getting 20 4341 VM370 systems. When they finally came by six months later, the planned order had grown to 210 4341 VM370 systems. Earlier, in Jan1979, I had been con'ed into doing a 6600 benchmark on an internal engineering 4341 (before the product shipped to customers) for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning afdc & 210 4341s:
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2018e.html#92 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017j.html#93 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2015f.html#85 Miniskirts and mainframes
https://www.garlic.com/~lynn/2012i.html#44 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2012e.html#45 Word Length
https://www.garlic.com/~lynn/2006k.html#32 PDP-1

some posts mentioning cp/67, cp/m, npg
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023d.html#62 Online Before The Cloud
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022c.html#100 IBM Bookmaster, GML, SGML, HTML
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#29 Unix work-alike
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021i.html#101 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021h.html#81 Why the IBM PC Used an Intel 8088
https://www.garlic.com/~lynn/2021f.html#76 IBM OS/2
https://www.garlic.com/~lynn/2021c.html#89 Silicon Valley
https://www.garlic.com/~lynn/2021c.html#57 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#69 OS/2

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 30 Aug, 2023
Blog: Facebook
trivia: I had done a paged-mapped filesystem for CP67/CMS ... allowing shared segments to be CMS loadmods loaded directly from the filesystem ... and then in 1974, it was part of a large amount of CP and CMS stuff I moved to VM370. A very small subset of the shared segment support was then picked up for VM370 Release 3 as DCSS (before that, shared segments had been restricted to the CP "IPL" command, requiring the shared segment to be defined in DMKSNT and savesys/loadsys).

Note that CMS used compilers/assemblers that all generated OS/360 TXT & RLD adcons ... which all had to be resolved before GENMOD (and savesys) ... effectively fixing the address where they could be loaded. Note: TSS/360 (TSS/370) supported "real" relative adcons ... allowing shared segments to be loaded into a virtual address space at any virtual address (aka the same shared segment could be loaded at different virtual addresses simultaneously). Having all shared segments that might be used concurrently by CMS users required system-wide unique virtual addresses (CMS had a hack where multiple different shared segments of the same data/program were created at different virtual addresses).

My hack was adding code to programs to simulate the TSS/360 relative adcons (enabling the single, same exact shared image to be concurrently shared at different virtual addresses); I had done that for editors, various exec processors, and several other commonly used CMS programs.
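
A minimal sketch of the relative-adcon idea (layout and names invented, not the actual CMS or TSS code): the shared image stores offsets from its own base rather than absolute addresses, and running code adds whatever base address it sees, so one read-only image is correct at any virtual address:

  /* base + offset instead of absolute adcon; illustration only */
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  typedef struct {
      uint32_t text_off;    /* offset of shared text from module base */
      char     text[32];    /* stand-in for shared code/data */
  } module_t;

  int main(void) {
      module_t image;
      image.text_off = offsetof(module_t, text);
      strcpy(image.text, "one shared read-only image");

      module_t second = image;     /* simulate mapping at another address */
      module_t *map_a = &image, *map_b = &second;

      /* an absolute adcon would be right for only one mapping;
         base + offset is right for every mapping */
      printf("%s @ %p\n", (char *)map_a + map_a->text_off, (void *)map_a);
      printf("%s @ %p\n", (char *)map_b + map_b->text_off, (void *)map_b);
      return 0;
  }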

page mapped CMS filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
posts discussing supporting (tss/360 like) relative adcons
https://www.garlic.com/~lynn/submain.html#adcon

some old, archived email with the author of fulist/browse/ios3270 about shared segment support
https://www.garlic.com/~lynn/2001f.html#email781010
https://www.garlic.com/~lynn/2010e.html#email790316
https://www.garlic.com/~lynn/2005t.html#email791012

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 30 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments

Note: In the morph of CP67->VM370, a lot of feature/function/performance was dropped (after the decision was made to add virtual memory to all 370s, the decision was made to do the VM370 product).

MVT storage management trivia: a decade ago, I was asked to track down the decision to make virtual memory standard on all 370s. I found somebody that had been staff to the executive making the decision. MVT storage management was so bad that regions were being specified four times larger than used. As a result, a typical 1mbyte 370/165 could only run four concurrent regions, insufficient to keep the system busy and justified. Mapping MVT to a 16mbyte virtual address space would allow increasing the number of concurrent regions by a factor of four with little or no paging (very similar to running MVT in a CP/67 16mbyte virtual machine). Archived post with pieces of the exchanged email:
https://www.garlic.com/~lynn/2011d.html#73
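
Rough arithmetic to make the above concrete (my illustrative numbers; only the 4x factors are from the exchange): regions specified four times larger than actually used means a 1mbyte 370/165 fits only about four regions (4 x 256kbytes specified, only ~64kbytes each actually touched). Laying MVT out in a 16mbyte virtual address space allows specifying 16 such regions (16 x 256kbytes = 4mbytes virtual), while the pages actually touched (16 x ~64kbytes, ~1mbyte) still roughly fit in the 1mbyte real memory ... i.e. four times the concurrent regions with little or no paging.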

I then spent 1974 migrating lots of CP67 to VM370 Release 2. I had an automated benchmarking system (which the original autolog command was done for) that could specify workload characteristics (number of users, kind of execution) and configuration for a synthetic job stream ... it was the 1st thing moved to VM370, to be able to compare before and after. However, VM370 was constantly crashing ... so the next change was to add the CP67 kernel serialization, to keep VM370 from crashing during benchmarks. Some old email about getting VM370 up to CP67 level for my CSC/VM system (one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters).
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Various stuff shipped for customers as part of Release3, Release4, and my charged-for Resource Manager add-on.

dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging support posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
smp, multiprocessor, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 30 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments

old archived post with the original analysis of vm370 for selecting code to copy to microcode
https://www.garlic.com/~lynn/94.html#21

at the same time, the 370/125 group had con'ed me into working on a 125 5-processor multiprocessor. the 115/125 machines had a 9-position memory bus for microprocessors ... the 115 had all the microprocessors the same, about 800 native kips, implementing the controller and 370 microcode. the 125 was the same except for a 50% faster microprocessor used for the 370: about 1.2mips native, resulting in about 120kips 370. For the 125, I would do all the ECPS microcode plus some more, for doing things like queuing work for execution ... that available/idle 370 engines could pull off for execution (closer to 370/xa SIE with several additional enhancements) ... also a queued interface for the disk controller to pull work off (based on arm position and rotation). Endicott was afraid it would overlap 370/148 throughput and, in the escalation, I had to argue both sides ... but the 125 5-way multiprocessor was canceled.
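
A conceptual sketch of the queued-work idea (C with pthreads purely as a stand-in; the actual design was native microcode and the queue details here are my illustrative assumptions): units of work go on a shared queue and any available/idle 370 engine pulls the next one off, rather than work being dispatched to a specific processor. The disk controller interface was the same pattern, except the controller pulled queued requests in arm-position/rotation order instead of FIFO.

  #include <pthread.h>
  #include <stdio.h>

  #define NWORK    8
  #define NENGINES 5   /* the 5-processor 125 configuration */

  static int queue[NWORK];
  static int head = 0, tail = NWORK;   /* pre-filled with NWORK items */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *engine(void *arg) {
      long id = (long)arg;
      for (;;) {
          pthread_mutex_lock(&lock);
          if (head == tail) { pthread_mutex_unlock(&lock); return NULL; }
          int work = queue[head++];    /* pull the next unit of work */
          pthread_mutex_unlock(&lock);
          printf("engine %ld runs work unit %d\n", id, work);
      }
  }

  int main(void) {
      pthread_t e[NENGINES];
      for (int i = 0; i < NWORK; i++) queue[i] = i;  /* queue the work */
      for (long i = 0; i < NENGINES; i++)
          pthread_create(&e[i], NULL, engine, (void *)i);
      for (int i = 0; i < NENGINES; i++)
          pthread_join(e[i], NULL);
      return 0;
  }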

then Endicott wanted to pre-install vm370 on every 138/148 shipped (an early software analogy to PR/SM-LPAR) ... but the head of POK was in the process of getting corporate to kill the vm370 product (shutdown the development group and move everybody to POK for MVS/XA) and corporate blocked pre-installing vm370. Endicott eventually managed to acquire the vm370 product mission, but had to recreate a development group from scratch.

vm370 shutdown trivia: they weren't going to tell the vm370 group until just before the shutdown/move ... to minimize the numbers that might escape into the boston area. The information managed to leak early and several managed to escape (at the time, DEC VMS was in its infancy and the joke was that the head of POK was a contributor to VMS). There then was a search for the source of the leak; fortunately for me, nobody gave up the source.

posts mentioning some of the 125 multiprocessor design
https://www.garlic.com/~lynn/submain.html#bounce
SMP, multiprocessor, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
360 & 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 30 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments

Early 80s, I got approval to do presentations on how ECPS was done at the monthly BAYBUNCH user meetings (hosted at Stanford SLAC). Afterwards, several people from Amdahl cornered me to hear more ... they told me about Amdahl MACROCODE (370-like instructions running in microcode mode, far easier than the native microcode used in the high-end machines of both Amdahl and IBM). They said it was originally developed to respond to the myriad of trivial 3033 microcode changes required for MVS to run. At the time, they were using it to develop "hypervisor" (a subset of VM370 function all in microcode). Note IBM wasn't able to respond with PR/SM&LPAR on 3090 until almost a decade later.

For the 3081, customers weren't converting to MVS/XA as planned. Amdahl was meeting with some success because they were able to run both MVS and MVS/XA concurrently on the same machine using the hypervisor. Some of the former NEPC people had done the VMTOOL for MVS/XA development&test, never intended for customers ... but the Amdahl competition (w/microcode hypervisor) prompted releasing VMTOOL as "VM/MA" (migration aid) and then "VM/SF" (system facility). Then POK proposed a few-hundred-person group to upgrade VMTOOL to the feature/function/performance of VM/370 (for VM/XA).

trivia: to do virtual machines in XA-mode required microcode ... while they were at it, they did some changes for performance ... made available as SIE with VM/MA & VM/SF. Problem was that the 3081 didn't have the microcode space ... so the SIE microcode had to be paged in&out (making it really slow to enter/exit virtual machine mode).

After FS imploded there was a mad rush to get stuff back in the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. I also got roped into helping on a project to do a 16-way 370 multiprocessor ... and we got the 3033 processor engineers to work on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips) ... which everybody thought was great until somebody told the head of POK that it could be decades before POK's favorite-son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way machine until after the turn of the century) ... and some of us were invited to never visit POK again. The 3033 processor engineers would have me sneak back into POK anyway ... and once 3033 was out the door, they started on trout/3090. I would also have email with them about trout ... including '81 email going on about why 3081 SIE had poor performance ... but it would be high performance for 3090.

various old email that mentions "SIE"
https://www.garlic.com/~lynn/2006j.html#email810630
https://www.garlic.com/~lynn/2007d.html#email820916
https://www.garlic.com/~lynn/2003j.html#email831118
https://www.garlic.com/~lynn/2007c.html#email860121

360 & 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled, compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few posts mentioning macrocode, hypervisor, ecps, vmtool
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013n.html#46 'Free Unix!': The world-changing proclamation made30yearsagotoday

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 30 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments

fyi ... Donofrio ref in one of my yr-old comments: The Man That Helped Change IBM
https://smallbiztrends.com/2022/08/the-man-that-helped-change-ibm.html
This week I celebrated my 700th episode of The Small Business Radio Show with Nicholas (Nick) Donofrio who began his career in 1964 at IBM. Ironically, I started at IBM in 1981 for the first 9 years of my career. Nick lasted a lot longer and remained there for 44 years. His leadership positions included division president for advanced workshops, general manager of the large-scale computing division, and executive vice president of innovation and technology. He has a new book about his career at IBM called "If Nothing Changes, Nothing Changes."

... snip ...

... I was in an all-hands Austin meeting where it was said that Austin had told the IBM CEO that it was doing the RS/6000 project for NYTimes, to move their newspaper system (ATEX) off VAXCluster ... but there would be dire consequences for anybody that let it leak that it wasn't being done.

One day Nick stopped in Austin and all the local executives were out of town. My wife put together hand-drawn charts and estimates for doing the NYTimes project for Nick ... and he approved it. Possibly contributed to offending so many people in Austin that it was suggested we do the project in San Jose.

also in my IBM Wild Ducks (linkedin) post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

It started out as HA/6000 (for NYTimes), but I renamed it HA/CMP when I started doing scientific/technical cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors ... aka 16-way mid-92, 128-way ye-92.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

end of jan92, cluster scale-up was transferred for announce as the IBM supercomputer and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

hacmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

When I first transferred to San Jose Research, I did some work with Jim Gray and Vera Watson on the original SQL/relational RDBMS, "System/R" ... and was involved in the tech transfer (under the "radar", while IBM was preoccupied with "EAGLE", the next-generation DBMS) to Endicott for SQL/DS. Then later, when EAGLE imploded, there was a request for how fast System/R could be ported to MVS ... which is eventually announced as DB2, originally for decision support *ONLY*.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

I also got to wander around lots of IBM and customer datacenters in the area, including disk engineering (bldg14) and disk product test (bldg15) across the street. Bldg14 was running stand-alone, 7x24, pre-scheduled mainframe testing. They said they had recently tried MVS, but it had 15min between crashes (in that environment). I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing ... greatly improving productivity.

Bldg15 then got a very early (#3 or #4) engineering 3033 and, since testing only took a percent or two, we scrounged a 3830 controller and string of 3330s for our own private online service (including running 3270 coax under the street to my office).

They then got one of the 1st engineering 4341s and I got con'ed into doing a (6600) benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Five production (clustered) 4341s were much cheaper than a 3033, had higher throughput, and used much less space, power, and cooling. The 4341s with 3370s didn't require a datacenter, and companies were ordering hundreds at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Inside IBM, departmental conference rooms became in short supply, having been taken over for departmental computing rooms.

I then wrote an (internal only) research report about the work for the disk division and happened to mention the MVS MTBF, bringing down the wrath of the MVS organization on my head (informally, I was told they tried to have me separated from the company).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning bldg15 engineering 3033&4341
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#48 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021d.html#57 IBM 370
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#76 4341 Benchmarks
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 01 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments

After I joined IBM, one of my hobbies was enhanced production operating systems, and one of my long-time customers was the world-wide sales&marketing support online HONE system. SE training used to include being part of a large SE group at a customer shop; after the 23jun1969 unbundling announcement and starting to charge for SE services, they couldn't figure out how not to charge for trainee SEs at the customer shop. HONE started out giving branch offices online access to practice with IBM guest operating systems in CP67 virtual machines. Cambridge then ported apl\360 to CMS for CMS\APL, supporting virtual-address-sized workspaces (apl storage management redone for the demand-page environment) and a system services API ... enabling real-world applications. HONE CMS\APL-based sales&marketing applications started appearing, eventually coming to dominate all activity (squeezing out the guest operating system practice by trainee SEs).

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling

Palo Alto Science Center then did the morph of CMS\APL->APL\CMS (for VM370/CMS) as well as the 370/145 APL microcode assist (APL programs on a 145 with mcode-assist could run as fast as on a 168 ... nearly ten times speedup). PASC also did PALM and the 5100. (trivia: The person responsible for the 145 APL microcode assist also worked with SLAC on what was the super-optimized FORTRANQ, available internally, then FORTRAN-HX)
https://en.wikipedia.org/wiki/IBM_5100

Mid-70s, the US HONE datacenters were consolidated across the back parking lot from the Palo Alto Science Center, with VM370 168s. HONE 1st deployed on CP67 and then on VM370; the APL shared system (around 100k) was accessible via the IPL command (the increasing HONE APL applications were both cpu&memory hungry; the 145 microcode helped with CPU, but the 145 didn't have enough memory). I provided the page-mapped filesystem with shared segment support, so shared APL could be invoked directly via CMS command (not requiring CP "IPL"). The HONE online environment, "SEQUOIA", was a large APL application (for mostly computer-illiterate branch people) and PASC helped HONE merge it into the HONE APL module ... resulting in 400k+ of shared segments. Some of the more CPU-hungry APL applications were then being redone in FORTRAN ... and being able to invoke APL as a CMS application (rather than IPL) allowed invoking the FORTRAN program from APL and then returning to APL. HONE enhanced eight VM370 single-processor systems to "single-system-image", shared DASD, loosely-coupled with load-balancing and fall-over (largest in the world). I then got multiprocessor support into VM370R3-based CSC/VM (initially for HONE), allowing them to add a 2nd processor to each system (16-processor single-system-image, really largest in the world). Trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

Late 1980, when Jim Gray is leaving SJR & System/R for TANDEM, he tries to get me to take over his DBMS consulting with the IMS group.

Epstein had left the UCB relational Ingres for Britton-Lee as Chief Scientist ... which was doing a fast DBMS disk controller (offloading much of DBMS processing to the controller). One of their biggest customers was a large government agency that was also a long-time CP67 (and then VM370) customer. When Epstein left for Teradata, Britton-Lee was trying to hire Epstein's replacement from the System/R group, and the person that agreed to go tried to strong-arm me into going with him. Note Epstein later left Teradata to found Sybase.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

IMS story: my wife was in the gburg JES group and one of the catchers for ASP/JES3; then co-author of JESUS (JES Unified System), all the features in the respective systems that their customers couldn't live without (for various reasons, it never came to fruition). She was then con'ed into going to POK to be responsible for loosely-coupled architecture (where she did peer-coupled shared data architecture). She didn't remain long because of 1) sporadic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX & Parallel SYSPLEX), except for IMS hot-standby.

She has a story about asking Vern Watts who he would ask for permission to do hot-standby; he replied nobody, he would just do it and tell them when he was all done.

peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370/CMS Shared Segments

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370/CMS Shared Segments
Date: 01 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments

other 1401 emulation trivia: I took a 2-credit-hr intro to fortran/computers; at the end of the semester, I was hired to rewrite 1401 MPIO for the 360/30. The univ had a 709/1401 (709 tape->tape & 1401 unit record front-end, card->tape & tape->printer/punch). The univ was sold a 360/67 for tss/360 to replace the 709/1401, and got a 360/30 replacing the 1401 pending availability of the 360/67 ... I guess for getting 360 experience; the 360/30 had 1401 emulation and could have continued running 1401 MPIO ... so my effort was to get more 360 experience. The univ shutdown the datacenter on weekends ... and I would have the whole place dedicated (although 48hrs w/o sleep made monday classes difficult). I was given a bunch of hardware & software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error handlers, storage management, etc ... and within a few weeks I had a 2000-card assembler (MPIO) program. Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for os/360 (tss/360 wasn't production quality, so it was run as a 360/65). A couple of recent posts about 360:
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#39 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#45 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#47 IBM 360/67
and recent SNA/VTAM posts
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023e.html#42 Systems Network Architecture

Student Fortran that ran under a second on the 709 ran over a minute on OS/360. I installed HASP, which cut it in half. I then started redoing STAGE2 SYSGEN, allowing it to run in the production jobstream and reordering statements to place datasets and PDS members for optimized arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student Fortran didn't beat the 709 until I installed Univ. of Waterloo WATFOR.

Then the science center came out to install CP/67 (3rd installation after cambridge itself and MIT Lincoln Labs) ... and I mostly played with it in my dedicated weekend window. I started rewriting a lot of CP67 pathlengths for running guest OS/360. The OS/360 test stream ran 322secs on the bare machine and originally 856secs under CP67 (CP/67 CPU 534secs). After a few months, I had CP/67 overhead CPU down to 113secs (from 534secs), part of a SHARE presentation in archived post:
https://www.garlic.com/~lynn/94.html#18

Before I graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Boeing Renton datacenter was possibly the largest in the world: a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarged it to install a 360/67 for me to play with when I wasn't doing other stuff). There was also a disaster plan to replicate Renton up at the new 747 plant in Everett (for when Mt. Rainier heats up and the resulting mud slide takes out Renton).

also in
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Note: in the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. He tells the story of being very vocal that the electronics across the trail wouldn't work; so, possibly as punishment, he was put in command of "spook base" (about the same time I was at Boeing). Boyd's biography has "spook base" as a $2.5B "windfall" for IBM (ten times Renton). Spook base ref (gone 404, still at wayback):
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White
ref mentioning John Boyd and IBM "Wild Ducks"
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Jargon Dictionary

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Jargon Dictionary
Date: 02 Sep, 2023
Blog: Facebook
Copy of ibmjarg.pdf also here
https://comlay.net/ibmjarg.pdf

6670 was the ibm copier3 with a computer link. SJR deployed them to departmental areas, with colored paper in the alternate paper drawer for printing the (output) separator page ... since the page was mostly blank, the driver was modified to select a random quotation for printing on the separator page (sketch of the idea below) ... selected from three files, one of which was Mike's IBM Jargon. I then did a version for the EMACS RMAIL zippy yow quotation signature. The last (re: 1981 datamation) line is from an earlier version:
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...
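
For flavor, a minimal sketch of the separator-page quotation idea (C stand-in with hypothetical file names; the actual change was inside the 6670 driver, and none of this is that code). One pass of reservoir sampling picks a uniformly random quote (here: line) from a randomly chosen file:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  int main(void) {
      /* hypothetical names; one of the three was Mike's IBM Jargon */
      const char *files[3] = { "jargon.txt", "quotes1.txt", "quotes2.txt" };
      char line[256], pick[256] = "";
      int n = 0;

      srand((unsigned)time(NULL));
      FILE *f = fopen(files[rand() % 3], "r");
      if (!f) return 1;
      /* reservoir sampling: keep line i with probability 1/i, so every
         line in the file is equally likely to be the final pick */
      while (fgets(line, sizeof line, f))
          if (rand() % ++n == 0)
              strcpy(pick, line);
      fclose(f);
      fputs(pick, stdout);   /* would go on the mostly-blank separator page */
      return 0;
  }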

Late 70s and early 80s, I was blamed for online computer conferencing on the internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s). It really took off spring 1981, after distributing a trip report of a visit to Jim Gray at Tandem; only possibly 300 active participants, but claims of upwards of 25,000 reading. Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me.
MIP envy - n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.

... snip ...

The above is a slight misinterpretation; I was on a business trip and came back to find a copy of "MIP Envy" that Jim had left for me when he departed for Tandem
https://www.garlic.com/~lynn/2007d.html#email800920
and
http://jimgray.azurewebsites.net/papers/mipenvy.pdf

some quotes from Tandem Memo summary of the executive summary ... also Learson trying to prevent the bureaucrats and careerists from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer communication posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning ibmjarg.pdf and/or mipenvy.pdf files
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#16 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#12 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2022h.html#124 Corporate Computer Conferencing
https://www.garlic.com/~lynn/2022h.html#56 Tandem Memos
https://www.garlic.com/~lynn/2022f.html#84 Demolition of Iconic IBM Country Club Complex "Imminent"
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#85 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#64 IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#3 IBM Games
https://www.garlic.com/~lynn/2022d.html#98 Datamation Archive
https://www.garlic.com/~lynn/2022d.html#70 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#53 Another IBM Down Fall thread
https://www.garlic.com/~lynn/2022d.html#37 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#25 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#64 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#47 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022c.html#13 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#124 System Response
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#90 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#78 Channel I/O
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing

--
virtualization experience starting Jan1968, online at home since Mar1970

New Poll Shows 81% of Californians Support SB 362

From: Lynn Wheeler <lynn@garlic.com>
Subject: New Poll Shows 81% of Californians Support SB 362
Date: 02 Sep, 2023
Blog: Linkedin
New Poll Shows 81% of Californians Support SB 362
https://www.tomkemp.ai/blog/2023/8/27/new-poll-shows-81-of-californians-support-sb-362

Late last century, I was brought into cal. state to help word-smith some legislation. At the time, they were working on electronic signature, data breach notification, and opt-in privacy sharing legislation. Some of the participants had done in-depth public surveys and the #1 issue was "identity theft" (usually from some breach) resulting in fraudulent financial transactions. The issue was that normally entities take breach countermeasures in self-protection; however, in this case the institutions weren't at risk (it was the public) and little or nothing was being done. It was hoped the publicity from breach notifications would motivate institutions to take countermeasures. Since then there have been a number of federal bills, about evenly divided between 1) bills similar to the cal. legislation and 2) bills that require conditions that would effectively eliminate/nullify the requirement for notification.

"Opt-in privacy sharing" would require institutions to have written approval from individuals to share personal information. Along the way a (federal preemption) provision was added to GLBA for "opt-out" privacy sharing (institutions could share personal information unless there was record of individual objecting). A couple years later at a national, annual privacy conference in Wash DC, there was a panel of FTC commissioners. Somebody in the audience got up and claimed he worked for a call-center technology company used by many financial institutions. He said that those answering 1-800 opt-out calls had no mechanism to make an "opt-out" record ... and wanted to know if the FTC was going to do anything about it. They ignored him.

data breach notification posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
identity theft posts
https://www.garlic.com/~lynn/submisc.html#identity.theft

--
virtualization experience starting Jan1968, online at home since Mar1970

Architecture, was ISA

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architecture, was ISA
Newsgroups: alt.folklore.computers
Date: Sat, 02 Sep 2023 16:12:56 -1000
John Levine <johnl@taugh.com> writes:
By the way, on Lynn's web site it says he's been posting like crazy on Facebook in the past month, but I haven't checked.

well, lots of (facebook) 360, 360/30, 360/65, 360/mp, 360/67 (& couple other things) ... some repeated in different groups
https://www.garlic.com/~lynn/2023e.html

--
virtualization experience starting Jan1968, online at home since Mar1970

USENET, the OG social network, rises again like a text-only phoenix

From: Lynn Wheeler <lynn@garlic.com>
Subject: USENET, the OG social network, rises again like a text-only phoenix
Date: 02 Sep, 2023
Blog: Facebook
USENET, the OG social network, rises again like a text-only phoenix
https://www.theregister.com/2023/08/30/usenet_revival/

used to do lots of comp.arch and a.f.c. ... hardly at all anymore ... although an a.f.c. post about an hr ago.

predating usenet ... TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their (vm370) CMS-based online computer conferencing system, "free" to (mainframe user group) SHARE in Aug1976 as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
after M/D bought TYMSHARE in 84, vmshare was moved to a different platform.

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for internal network&systems ... biggest problem was lawyers concerned about internal employees being directly exposed to (unfiltered) customer information.

Late 70s and early 80s, I was blamed for online computer conferencing on the internal network; it really took off after I distributed a trip report of a visit to Jim Gray in spring of 1981; only about 300 were directly participating, but claims upwards of 25,000 were reading. Folklore is that when the corporate executive committee was told, 5of6 wanted to fire me.

trivia: FSD (the primary funding for RP3) asked my wife to audit RP3 ... afterwards they cut the funding.

Something more akin to usenet occurred with BITNET
https://en.wikipedia.org/wiki/BITNET
with mailing lists
https://en.wikipedia.org/wiki/LISTSERV

After leaving IBM ... I did a number of drivers for pagesat (satellite) full usenet feed, along with Boardwatch magazine article, in return for a free feed.
http://www.art.net/lile/pagesat/netnews.html

computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

a few posts mentioning pagesat & vmshare
https://www.garlic.com/~lynn/2021i.html#99 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2021i.html#95 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2018e.html#55 usenet history, was 1958 Crisis in education
https://www.garlic.com/~lynn/2018e.html#51 usenet history, was 1958 Crisis in education

other posts mentioning pagesat
https://www.garlic.com/~lynn/2022e.html#40 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#7 USENET still around
https://www.garlic.com/~lynn/2022.html#11 Home Computers
https://www.garlic.com/~lynn/2017h.html#118 AOL
https://www.garlic.com/~lynn/2017h.html#110 private thread drift--Re: Demolishing the Tile Turtle
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2017b.html#21 Pre-internet email and usenet (was Re: How to choose the best news server for this newsgroup in 40tude Dialog?)
https://www.garlic.com/~lynn/2016g.html#59 The Forgotten World of BBS Door Games - Slideshow from PCMag.com
https://www.garlic.com/~lynn/2015h.html#109 25 Years: How the Web began
https://www.garlic.com/~lynn/2015d.html#57 email security re: hotmail.com
https://www.garlic.com/~lynn/2013l.html#26 Anyone here run UUCP?
https://www.garlic.com/~lynn/2012b.html#92 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2010g.html#82 [OT] What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?
https://www.garlic.com/~lynn/2010g.html#70 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2009l.html#21 Disksize history question
https://www.garlic.com/~lynn/2009j.html#19 Another one bites the dust
https://www.garlic.com/~lynn/2007g.html#77 Memory Mapped Vs I/O Mapped Vs others
https://www.garlic.com/~lynn/2006m.html#11 An Out-of-the-Main Activity
https://www.garlic.com/~lynn/2005l.html#20 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
https://www.garlic.com/~lynn/2001h.html#66 UUCP email
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS

--
virtualization experience starting Jan1968, online at home since Mar1970

801/RISC and Mid-range

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 801/RISC and Mid-range
Date: 03 Sep, 2023
Blog: Facebook
In 1980, there was an effort to make all (microprogrammed) mid-range systems (4361/4381, as/400, etc) and controllers 801/risc ... for various reasons they all floundered and reverted to custom CISC (some engineers leaving for other vendors doing RISC). ROMP (801/RISC) was going to be the DISPLAYWRITER follow-on; when that was canceled, they decided to retarget to the UNIX workstation market (PC/RT) and got the company that had done the AT&T UNIX port for IBM/PC (PC/IX) to do one for ROMP (AIX). Follow-on was RIOS & RS/6000. Then the Somerset/AIM (apple, ibm, motorola) single-chip, 64-bit 801/RISC follow-on was used for both AIX and the AS/400 follow-on (and Apple; folklore is that because AIM wasn't doing power-efficient chips for battery-powered use, Apple then moved to Intel).

801/risc, iliad, romp, pc/rt, rios, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

trivia: In 1980, STL (now SVL) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing back to the STL datacenter. I got con'ed into doing channel-extender support with 3270 channel-attached controllers at the offsite bldg, so their 3270 human factors were the same as back in STL. Then the hardware vendor tried to get IBM to release my support, but there was a group in POK playing with some serial stuff that got it vetoed (afraid that if it was in the market, it would make it more difficult to get their stuff released).

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

In 1988, the IBM branch office asked if I could help LLNL (national lab) standardize some serial stuff that they were playing with, which quickly becomes the fibre-channel standard (including some stuff I had done in 1980): FCS, 1gbit full-duplex, 200mbyte/sec aggregate. Then POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). Then POK becomes involved with FCS and defines a heavy-weight protocol that radically reduces the throughput, released as "FICON". The most recent public benchmark I've found is the z196 "PEAK I/O" that got 2M IOPS using 104 FICON running over 104 FCS. Note for a max-configured z196, the industry standard benchmark (number of iterations compared to a 370/158 assumed to be 1MIPS) was 50BIPS. About the same time, there was a FCS announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON).
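
The per-link arithmetic (my division, using just the numbers above): 2M IOPS across 104 FICON is roughly 19K IOPS per FICON (each running over a FCS), versus over 1M IOPS claimed for a single native FCS ... i.e. one native FCS claiming roughly fifty times the throughput of a FICON-over-FCS link, and two of them more than all 104 FICON combined.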

FICON (& FCS) posts
https://www.garlic.com/~lynn/submisc.html#ficon

Also, an E5-2600 blade (common component in cloud megadatacenters provisioned with half a million or more blades) running the same industry standard CPU benchmark was 500BIPS, each ten times a max-configured z196 (a similar processing gap between a max-configured IBM "mainframe" and a blade continues).

Other IOPS trivia: the z196 "PEAK I/O" benchmark was with "CKD DASD", which hasn't been manufactured for decades, all being simulated on industry standard fixed-block disks.

note: the 4331/4341 midrange follow-ons, 4361/4381, were originally going to be 801/risc microprocessors ... but that floundered and they became custom cisc. In JAN1979, I got con'ed into doing a benchmark on an engineering 4341 for a national lab that was looking at getting 70 for a compute farm, sort of the leading edge of the coming cluster supercomputing (and cloud megadatacenter) tsunami. Then in the 80s, large corporations were ordering hundreds of vm/4341s at a time for distributing out in departmental areas, sort of the leading edge of the coming distributed computing tsunami (inside IBM, departmental conference rooms were in short supply when so many were taken over for vm/4341 rooms). Also, small clusters of 4or5 vm/4341s were much less expensive than a 3033, with much higher aggregate throughput and much lower power, cooling and floor space.

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Since 9/11, US Has Spent $21 Trillion on Militarism at Home and Abroad

From: Lynn Wheeler <lynn@garlic.com>
Subject: Since 9/11, US Has Spent $21 Trillion on Militarism at Home and Abroad
Date: 04 Sep, 2023
Blog: Facebook
just to 1sep2021:

Since 9/11, US Has Spent $21 Trillion on Militarism at Home and Abroad. "Our $21 trillion investment in militarism has cost far more than dollars."
https://www.commondreams.org/news/2021/09/01/911-us-has-spent-21-trillion-militarism-home-and-abroad
The National Priorities Project (NPP), an initiative of the Institute for Policy Studies, estimates that of the $21 trillion the U.S. invested in "foreign and domestic militarization" in the aftermath of September 11, 2001, $16 trillion went to the military, $3 trillion to veterans' programs, $949 billion to DHS, and $732 billion to federal law enforcement.

... snip ...

2002, congress lets the fiscal responsibility act lapse (spending can't exceed tax revenue, on its way to eliminating all federal debt). 2010, CBO had a report that 2003-2009, taxes were reduced by $6T and spending increased by $6T, for a $12T gap (compared to a fiscally responsible budget); the first time taxes were cut to not pay for two wars. Sort of a confluence of the Too-Big-To-Fail wanting huge federal debt, special interests wanting a huge tax cut, and the military-industrial complex wanting a huge spending increase (it wasn't just the enormous spending increase but also the enormous tax reduction).

recent Boyd's OODA-loop and military spending ...
https://www.garlic.com/~lynn/2023e.html#5 Boyd and OODA-loop
https://www.garlic.com/~lynn/2023e.html#38 Boyd OODA-loop
https://www.garlic.com/~lynn/2023e.html#40 Boyd OODA-loop
https://www.garlic.com/~lynn/2023e.html#46 Boyd OODA at Linkedin

Few recent posts mentioning Eisenhower's military-industrial-congressional complex (graft and corruption) warning
https://www.garlic.com/~lynn/2023b.html#63 Congress Has Been Captured by the Arms Industry
https://www.garlic.com/~lynn/2023b.html#37 The Architects of the Iraq War: Where Are They Now? They're all doing great, thanks for asking
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021d.html#3 How Ike Led
https://www.garlic.com/~lynn/2019c.html#60 America's Monopoly Crisis Hits the Military
https://www.garlic.com/~lynn/2019c.html#52 The Drone Iran Shot Down Was a $220M Surveillance Monster

Boyd & IBM at linkedin
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Spinney's website
http://chuckspinney.blogspot.com/
Pentagon Labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
Chet's website
https://slightlyeastofnew.com/
mentions Patterns of Conflict (Boyd's 1st briefing I sponsored at IBM)
https://slightlyeastofnew.com/2023/08/17/where-are-we-going-and-a-stab-at-a-fix/

trivia: both Greenspan and Stockman (Reagan's budget director) claim credit for revamping Social Security in the 80s, supposedly to fully take care of the future baby-boomer retirements. However, they both claimed that it was actually to disguise the increase in military spending (w/o raising the income tax rate): 1) borrowing the additional inflow to the SS trust fund for increased military spending and 2) starting to tax SS benefits, using it for increased military spending.

posts mentioning Greenspan, Stockman and (baby boomer) SS
https://www.garlic.com/~lynn/2017k.html#51 Taxing Social Security Benefits
https://www.garlic.com/~lynn/2017b.html#43 when to get out???
https://www.garlic.com/~lynn/2017.html#11 Attack SS Entitlements
https://www.garlic.com/~lynn/2016h.html#91 Your Social Security cuts are already on the way
https://www.garlic.com/~lynn/2016h.html#63 GOP introduces plan to massively cut Social Security

After the turn of the century, there wasn't even an effort to disguise the military spending, while at the same time cutting taxes. Recently I've been binge-rewatching "Boston Legal" ... surprised by all the references to how the two wars were fabricated, along with the military graft and corruption.

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

fiscal responsibility act
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
too-big-to-fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven, tax lobbying posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Early Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Early Internet
Date: 06 Sep, 2023
Blog: Facebook
We were working with the NSF Director and were supposed to get $20M to interconnect the NSF Supercomputer centers. Then congress cut the budget, some other things happened, and eventually an RFP was released. From the Preliminary Announcement (28Mar1986):
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

Mosaic Launches an Internet Revolution (@NCSA)
https://new.nsf.gov/news/mosaic-launches-internet-revolution

some came out west from NCSA to do a Mosaic startup; the name changed to NETSCAPE when NCSA complained about use of "MOSAIC" ... who did they get "NETSCAPE" from?

Internal politics prevented us from bidding. The NSF Director tried to help, writing IBM a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other agencies, about how slow IBM was with the HSDT project (it just made the internal politics worse).

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Jargon

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Jargon
Date: 06 Sep, 2023
Blog: Facebook
IBM Jargon:
https://comlay.net/ibmjarg.pdf

when ibm jargon was young and "Tandem Memos" was new ...
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

this talks about it more; it starts out with Learson trying (& failing) to block the bureaucrats/careerists (and MBAs) from destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
MIP envy - n. The term, coined by Jim Gray in 1980, that began the Tandem Memos (q.v.). MIP envy is the coveting of other's facilities - not just the CPU power available to them, but also the languages, editors, debuggers, mail systems and networks. MIP envy is a term every programmer will understand, being another expression of the proverb The grass is always greener on the other side of the fence.

... snip ...

I was on a business trip and found a copy when I got back (after Jim had left for Tandem)
https://www.garlic.com/~lynn/2007d.html#email800920
and
http://jimgray.azurewebsites.net/papers/mipenvy.pdf

Late 70s & early 80s, I was blamed for online computer conferencing on the internal network. "Tandem Memos" took off after I distributed a trip report about visit to Jim spring of 1981 (less than 300 were really active, but claims upwards of 25,000 were reading). Folklore is after corporate executive committee was told, 5of6 wanted to fire me.

Note it somewhat evolved after TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their (vm370) CMS-based online computer conferencing system, "free" to (mainframe user group) SHARE in Aug1976 as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
after M/D bought TYMSHARE in 84, vmshare was moved to a different platform.

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for internal network&systems ... biggest problem was lawyers concerned about internal employees being directly exposed to (unfiltered) customer information.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

tymshare, ann hardy, vmshare posts
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Early Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Early Internet
Date: 07 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#61 Early Internet
last year
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2022g.html#18 Early Internet

A co-worker at the science center was responsible for the internal network, larger than the arpanet/internet from just about the beginning until sometime mid/late 80s.
https://en.wikipedia.org/wiki/Edson_Hendricks
we transferred from science center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
to San Jose Research in the late 70s. Edson failed to get IBM to move the internal network to tcp/ip; gone behind a paywall, but at wayback
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
some more from his web pages (also at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

At the great cutover from host&IMPs to internetworking on 1jan1983, there were approx. 100 IMPs and 255 hosts ... while the internal network was rapidly approaching 1000 nodes ... post with world-wide corporate locations that added one or more systems in 1983:
https://www.garlic.com/~lynn/2006k.html#8

one of the big internal network issues was that encryption was required for links ... lots of hassles with governments, especially when links crossed national boundaries (circa 1985, a major link-encryptor vendor claimed the internal network had at least half of all link encryptors in the world). Started HSDT in the early 80s with T1 and faster computer links (both terrestrial and satellite). I hated what I had to pay for T1 link encryptors, and faster ones were difficult to find.

... technology also used for corporate sponsored univ. BITNET (also larger than arpanet/internet for a time)
https://en.wikipedia.org/wiki/BITNET

late 70s and early 80s, I was blamed for online computer conferencing on the internal network ... it really took off spring 1981 when I distributed a trip report about a visit to Jim Gray at Tandem (less than 300 participated, but claims upwards of 25,000 were reading). Folklore is that after the corporate executive committee was told, 5of6 wanted to fire me.

Note somewhat evolved after TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their (vm370) CMS-based online computer conferencing system, "free" to (mainframe user group) SHARE in Aug1976 as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
after M/D bought TYMSHARE in 84, vmshare was moved to a different platform.

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for internal network&systems ... biggest problem was lawyers concerned about internal employees being directly exposed to (unfiltered) customer information. TYMSHARE also told the story that an executive, learning that customers were playing games, directed that TYMSHARE was for business and all games had to be removed. He changed his mind after being told that game playing had grown to 30% of revenue.

science center post
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

posts referencing NSF OASC 28Mar1986 Preliminary Announce
https://www.garlic.com/~lynn/2002k.html#12
and 3Apr1986 NSF (HSDT) letter to IBM
https://www.garlic.com/~lynn/2023e.html#42 Systems Network Architecture
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023d.html#3 IBM Supercomputer
https://www.garlic.com/~lynn/2023c.html#70 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#69 NSFNET (Old Farts)
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023b.html#100 5G Hype Cycle
https://www.garlic.com/~lynn/2023b.html#65 HURD
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022g.html#43 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2022f.html#108 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#90 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022e.html#22 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday
https://www.garlic.com/~lynn/2022d.html#104 IBM'S MISSED OPPORTUNITY WITH THE INTERNET
https://www.garlic.com/~lynn/2022d.html#79 ROMP
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#57 ASCI White
https://www.garlic.com/~lynn/2022c.html#52 IBM Personal Computing
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#126 Google Cloud
https://www.garlic.com/~lynn/2022b.html#121 Lack of Unix in 70s/80s hacker culture?
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2022b.html#81 SUSE Reviving Usenet
https://www.garlic.com/~lynn/2022b.html#79 Channel I/O
https://www.garlic.com/~lynn/2022b.html#65 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#32 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#66 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#62 File Backup
https://www.garlic.com/~lynn/2022.html#0 Internet
https://www.garlic.com/~lynn/2021k.html#130 NSFNET
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#83 Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place

1983 1000th node: [image: 1000th node globe]

--
virtualization experience starting Jan1968, online at home since Mar1970

Computing Career

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computing Career
Date: 09 Sep, 2023
Blog: Facebook
really long-winded reply ... at the end of the semester taking a 2 credit-hr intro to fortran/computers, I was hired to re-implement 1401 MPIO on a 360/30 (nearly 60yrs ago). The univ. had been sold a 360/67 for tss/360 to replace a 709/1401 (709 tape->tape; 1401 unit record front end, card->tape & tape->printer/punch). Temporarily the 1401 was replaced with a 360/30 to get 360 experience (the 360/30 had 1401 emulation and could continue to run 1401 MPIO; I guess my job was part of gaining 360 experience). The univ. shut down the datacenter on weekends and I would have the place dedicated to myself (although 48hrs w/o sleep made monday classes hard). I was given a bunch of software&hardware manuals and got to design, implement, and test my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Within a few weeks, I had a 2000-card MPIO assembler program.

Then, within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for OS/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/OS360). Trivia: student fortran ran under a second on the 709. Initially on 360/65 OS/360, it ran over a minute. Installing HASP cut that time in half. I then started revamping STAGE2 SYSGEN, being able to run it in the production job stream (instead of the starter system) and reordering statements to carefully place datasets and PDS members to optimize arm seek and PDS directory multi-track search, cutting the time another 2/3rds to 12.9secs. Sometimes heavy PTF activity destroying the careful ordering would increase the time to 20+secs, at which point I would be forced to do an intermediate sysgen to get the performance back. Never got better than the 709 until I installed Univ. of Waterloo WATFOR.

Then some people came out from the science center to install CP67/CMS (precursor to VM370/CMS) ... the 3rd installation after Cambridge itself and MIT Lincoln Labs ... I mostly got to play with it in my dedicated weekend time, and started rewriting large amounts of CP67 to improve running OS/360 in a virtual machine. The OS/360 test jobstream ran 322 secs on the bare machine and initially 856secs in a virtual machine (CP67 534secs CPU). After a few months, I had it reduced to 435secs (CP67 113secs CPU, a reduction of 534-113=421secs CPU). Part of the SHARE presentation on the pathlength work (in this archive post):
https://www.garlic.com/~lynn/94.html#18
Some CSC (& virtual machine) history
http://www.bitsavers.org/pdf/ibm/cambridgeScientificCenter/320-2022_CSC_Introduction_to_CP-67_CMS_196809.pdf
https://www.leeandmelindavarian.com/Melinda#VMHist

I then did dynamic adaptive resource management ("wheeler" scheduler) and a new page replacement algorithm. Original CP67 did FIFO DASD I/O, and page transfers were single 4k transfers per I/O. I implemented ordered seek queuing (increased disk throughput and graceful degradation as load increases) and chaining of multiple page transfers, optimized for maximum transfers/revolution (all queued for the same disk arm position). For the 2301 drum, this increased throughput from approx. 75/sec to a peak of 270/sec. The univ. got some TTY/ASCII terminals and I added ASCII terminal support to CP67 (when the ASCII terminal port scanner arrived for the IBM terminal control unit, it came in a box labeled "HEATHKIT"). I then wanted a single dial-up number ("hunt group") for all terminal types; I could dynamically switch the port scanner terminal type for each line, but IBM had taken a short-cut and hardwired the line speed (so it didn't quite work). The univ. started a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate the IBM terminal control unit, with the addition that it could do dynamic line speed (later enhanced to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces; Interdata and later Perkin-Elmer sold it as an IBM clone controller).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
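
Purely as illustration (not the actual CP67 code; all names and the C-SCAN wrap-around detail are my assumptions), a minimal C sketch of the ordered seek queuing idea mentioned above: pending requests are kept sorted by cylinder and serviced in a continuing sweep of the arm, so added load lengthens the sweep instead of thrashing the arm the way FIFO ordering can.

#include <stdio.h>
#include <stdlib.h>

struct req { int cyl; struct req *next; };
static struct req *queue;   /* pending requests, kept sorted by cylinder */

/* insert in cylinder order (plain FIFO would just append); sketch only,
   no malloc error handling */
static void enqueue(int cyl)
{
    struct req **pp = &queue;
    struct req *r = malloc(sizeof *r);
    r->cyl = cyl;
    while (*pp && (*pp)->cyl < cyl)
        pp = &(*pp)->next;
    r->next = *pp;
    *pp = r;
}

/* pick the next request at/after the current arm position, continuing
   the sweep; wrap to the lowest cylinder at the end (elevator scan) */
static struct req *next_request(int arm)
{
    struct req **pp = &queue;
    while (*pp && (*pp)->cyl < arm)
        pp = &(*pp)->next;
    if (!*pp)
        pp = &queue;            /* end of sweep: wrap around */
    struct req *r = *pp;
    if (r)
        *pp = r->next;
    return r;
}

int main(void)
{
    struct req *r;
    int arm = 50;
    enqueue(10); enqueue(90); enqueue(55); enqueue(60);
    while ((r = next_request(arm)) != NULL) {   /* services 55, 60, 90, 10 */
        printf("service cyl %d\n", r->cyl);
        arm = r->cyl;
        free(r);
    }
    return 0;
}

The same ordering idea is behind the drum page chaining: requests queued for the same arm/rotational position get chained into a single channel program, so multiple 4k pages transfer per revolution instead of one per I/O.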

360 plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

The univ. got a 2250 graphics display; I modified the Lincoln Labs 2250 fortran library to interface to the CMS editor for a full-screen editor.
https://en.wikipedia.org/wiki/IBM_2250
I also added 2741 & TTY terminal support to HASP (/MVT18), with an editor that implemented the CMS EDIT syntax (a CRJE that I thought was better than TSO).
https://en.wikipedia.org/wiki/Houston_Automatic_Spooling_Priority

Then, before I graduated, I was hired fulltime into a small group in the Boeing CFO's office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was possibly the largest in the world, a couple hundred million in 360 systems, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarged the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). There was also a disaster plan to replicate the Renton datacenter up at the new 747 plant in Everett (in case Mt. Rainier heats up and the resulting mud slide takes out Renton). 747#3 was flying the skies of Seattle getting FAA flight certification.

In the early 80s, I was introduced to John Boyd and would sponsor his briefings. One of his stories was about being very vocal that the electronics across the trail wouldn't work; possibly as punishment, he was put in command of "spook base" (about the same time I was at Boeing). His biography claims "spook base" was a $2.5B "windfall" for IBM (ten times Renton). A couple "spook base" refs:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

Some IBM&Boyd reference:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
also references Learson trying to block the bureaucrats, careerists, and MBAs destroying the Watson legacy (but failing) ... and 20yrs later, IBM had one of the largest losses in US company history and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/

A decade ago, I was asked to track down the decision to add virtual memory support to all 370s. I found a member of the staff for the executive that made the decision. Basically, MVT storage management was so bad that regions normally had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 could only run four concurrent regions, insufficient to keep the processor busy (& justified). VS2 started out running MVT in 16mbyte virtual memory, allowing the number of concurrently running regions to be increased by a factor of four with little or no paging (sixteen over-specified regions have aggregate working sets that still fit in the same 1mbyte of real storage; very similar to running MVT in a CP67 16mbyte virtual machine). Pieces of the email exchange (including wandering into other subjects):
https://www.garlic.com/~lynn/2011d.html#73

ibm posts for z/vm 50th
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

boyd posts and web urls
https://www.garlic.com/~lynn/subboyd.html
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hasp, asp, jes2, jes3, ngi/nge posts
https://www.garlic.com/~lynn/submain.html#hasp
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning 709, 1401, mpio, os/360, and boeing cfo
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#11 360 Powerup
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

PDP-6 Architecture, was ISA

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDP-6 Architecture, was ISA
Newsgroups: alt.folklore.computers
Date: Sat, 09 Sep 2023 13:21:00 -1000
John Levine <johnl@taugh.com> writes:
S/360 and the PDP-6 were both announced in 1964, so they obviously both had been designed some time before, as far as I know without either having knowledge of the other. The first 360s were shipped in 1965, so DEC probably shipped a PDP-6 before IBM shipped a 360, but I would say they were simultaneous, and the -6 definitely was not popular, even though it was well loved by its friends.

one of the co-workers (at the science center) told the story that in the gov/IBM trial, BUNCH members testified that by 1959 all had realized (because of software costs) that a compatible architecture was needed across the product lines ... but IBM was the only one with executives that managed to enforce compatibility (which also required an architecture description that all could follow).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

account of the end of ACS/360 (ACS started out incompatible, but Amdahl managed to carry the day for 360 compatibility); executives shut it down because they were afraid it would advance the state-of-the-art too fast and they would lose control of the market
https://people.cs.clemson.edu/~mark/acs_end.html
Amdahl leaves IBM shortly after.

Early 70s, IBM had the "Future System" project (as a countermeasure to clone mainframe I/O controllers), completely different from 370 and going to completely replace it. Internal politics were shutting down 370 efforts (the claim is that the lack of new 370s during the FS period gave the 370 clone makers their market foothold ... aka the failed countermeasure to clone controllers enabled the rise of clone systems).

Amdahl gave a talk in a large MIT auditorium shortly after forming his company. Somebody in the audience asked him what justification for his company he had used with investors. He said that there was enough customer 360 software that even if IBM were to completely walk away from 360, it would be sufficient to keep him in business through the end of the century (sort of implying he knew about "FS", but in later years he claimed he knew nothing about FS).

When FS finally imploded, there was a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 3033&3081 projects in parallel
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

trivia: a decade ago I was asked (in newsgroups) if I could track down the IBM decision to add virtual memory to all 370s, and found the staff member to the executive making the decision. Basically, (OS/360) MVT storage management was so bad that regions frequently had to be specified four times larger than used, resulting in a typical 1mbyte 370/165 only being able to run four regions concurrently, insufficient to keep the processor busy and justified. Mapping MVT to 16mbyte virtual memory allowed increasing the number of concurrently running regions by a factor of four with little or no paging (initially MVT->VS2 was little different than running MVT in a CP67 16mbyte virtual machine). Pieces of that email exchange in this archived (11mar2011 afc) post
https://www.garlic.com/~lynn/2011d.html#73

trivia2: the 360 (& then 370) architecture manual was moved to CMS SCRIPT (redone from CTSS RUNOFF); a command line option either generated the principles-of-operation subset or the full architecture manual (with engineering notes, justifications, alternative implementations for different models, etc).
http://www.bitsavers.org/pdf/ibm/360/princOps/
http://www.bitsavers.org/pdf/ibm/370/princOps/

trivia3: the 370/165 engineers complained that if they had to do the full 370 virtual memory architecture, it would delay the virtual memory announce by six months. Eventually the decision was to just do the 165 subset, and the other models (and software) that had already done the full architecture had to drop back to the 165 subset.

other discussion in this (linkedin) post, starting with Learson attempting to block the bureaucrats, careerists, and MBAs destroying the Watson legacy (but failing) ... two decades later, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

lots of recent posts mentioning 370 virtual memory
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#118 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#113 VM370
https://www.garlic.com/~lynn/2023d.html#110 APL
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#100 IBM 3083
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#62 Online Before The Cloud
https://www.garlic.com/~lynn/2023d.html#61 mainframe bus wars, How much space did the 68000 registers take up?
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#17 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#3 IBM Supercomputer
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#105 IBM 360/40 and CP/40
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#7 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#44 IBM 370
https://www.garlic.com/~lynn/2023b.html#41 Sunset IBM JES3
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023b.html#24 IBM HASP (& 2780 terminal)
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#50 370 Virtual Memory Decision
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#47 370/125 and MVCL instruction
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#34 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2023.html#20 IBM Change
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive

--
virtualization experience starting Jan1968, online at home since Mar1970

HASP, JES, MVT, 370 Virtual Memory, VS2

From: Lynn Wheeler <lynn@garlic.com>
Subject: HASP, JES, MVT, 370 Virtual Memory, VS2
Date: 10 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2

My wife was also co-author of the JESUS (JES Unified System) specification ... all the features of JES2&JES3 that the respective users couldn't live w/o ... for various reasons it never came to fruition.

HASP/ASP, JES2/JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp

MVS trivia (MVS song sung at SHARE HASP sing-along)
http://www.mxg.com/thebuttonman/boney.asp
from above:
Words to follow along with... (glossary at bottom)

If it IPL's then JES won't start,
And if it gets up then it falls apart,
MVS is breaking my heart,
Maybe things will get a little better in the morning,
Maybe things will get a little better.

The system is crashing, I'm having a fit,
and DSS doesn't help a bit,
the shovel came with the debugging kit,
Maybe things will get a little better in the morning,
Maybe things will get a little better.

Work Your Fingers to the Bone and what do you get?
Boney Fingers, Boney Fingers!


from glossary
$4K - MVS was the first operating system for which the IBM Salesman got a $4000 bonus if he/she could convince their customer to install VS 2.2 circa 1975. IBM was really pissed off that this fact became known thru this

... snip ...

this was also about the time that CERN presented a study comparing MVS/TSO and VM370/CMS ... copies were freely available at SHARE (outside IBM) ... inside IBM, copies were stamped "IBM Confidential - Restricted" (available on a need-to-know basis only, aka limiting the IBMers that saw it because it conflicted with the internal party line ... giving rise to the joke that inside IBM, employees were like mushrooms, kept in the dark and fed ...).

Slightly before, Learson was trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy ... two decades later, IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

Note, not long after, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their (vm370) CMS-based online computer conferencing system, "free" to (mainframe user group) SHARE in Aug1976 as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
after M/D bought TYMSHARE in '84, VMSHARE was moved to a different platform.

I had cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for the internal network&systems ... the biggest problem was lawyers concerned about internal employees being directly exposed to (unfiltered) customer information. TYMSHARE also told the story of an executive who, learning that customers were playing games, directed that TYMSHARE was for business and all games had to be removed. He changed his mind after being told that game playing had grown to 30% of revenue.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning CERN MVS/TSO - VM370/CMS bake-off
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2010q.html#34 VMSHARE Archives

other posts mentioning MVS boney fingers
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#34 Vintage Computing
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2021.html#25 IBM Acronyms
https://www.garlic.com/~lynn/2019b.html#92 MVS Boney Fingers
https://www.garlic.com/~lynn/2014f.html#56 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2009q.html#14 Electric Light Orchestra IBM song, in 1981?

--
virtualization experience starting Jan1968, online at home since Mar1970

Wonking Out: Is the Fiscal Picture Getting Better or Worse? Yes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Wonking Out: Is the Fiscal Picture Getting Better or Worse? Yes.
Date: 11 Sep, 2023
Blog: Facebook
Wonking Out: Is the Fiscal Picture Getting Better or Worse? Yes.
https://www.nytimes.com/2023/09/08/opinion/taxes-deficit-inflation-gdp.html

... note ... Jan1999 I was asked to help try and prevent the coming economic mess (we failed) ... including improving the integrity of securitized mortgage supporting documents. They then found that they could pay the rating agencies for triple-A ratings (when the rating agencies knew they weren't worth triple-A, from Oct2008 congressional hearings), enabling no-document liar loans; securitized, paid for triple-A, and able to sell over $27T into the bond market 2001-2008. Then they found that they could design securitized mortgages to fail and take out CDS gambling bets. AIG was holding the largest amount of the bets and was negotiating to pay off at 50 cents on the dollar when SECTREAS stepped in (2008), had them sign a document that they couldn't sue those making the bets, and take TARP funds to pay off the bets at face value. The largest recipient of TARP funds was AIG, and the largest recipient of face-value payoffs was the firm formerly headed by SECTREAS (the joke was that so many GS people had been hired into the dept. of treasury that it was the company's branch office in Washington DC).

2002, congress let the fiscal responsibility act lapse (spending couldn't exceed tax revenue, on its way to eliminating all Federal debt). A CBO 2010 report found that, 2003-2009, tax revenue was cut $6T and spending increased $6T, for a $12T gap (compared to a fiscally responsible budget); the first time taxes were cut while paying for two wars. Sort of a confluence of the Federal Reserve and Too-Big-To-Fail wanting huge federal debt, special interests wanting huge tax cuts, and the military-industrial(-congressional) complex wanting huge spending increases. Since then annual deficits have been periodically reduced (mostly some reduction in spending), but tax revenue has yet to be restored.

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Fiscal Responsibility Act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
ZIRP funds
https://www.garlic.com/~lynn/submisc.html#zirp
triple-A rated toxic CDOs
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

A few recent posts mentioning AIG, SECTREAS, and TARP
https://www.garlic.com/~lynn/2023c.html#65 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023b.html#60 Free Money Turned Brains to Mush, Now Some Banks Fail
https://www.garlic.com/~lynn/2023b.html#36 Silicon Valley Bank collapses after failing to raise capital
https://www.garlic.com/~lynn/2022f.html#77 Closing Down the Billionaire Factory
https://www.garlic.com/~lynn/2022e.html#106 Price Wars
https://www.garlic.com/~lynn/2022b.html#116 Modern Capitalism Is Weirder Than You Think It also no longer works as advertised
https://www.garlic.com/~lynn/2022b.html#96 Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US
https://www.garlic.com/~lynn/2022b.html#24 Did Ben Bernanke Implement QE before the 2008 Financial Crisis?
https://www.garlic.com/~lynn/2021k.html#68 The System
https://www.garlic.com/~lynn/2021i.html#65 Matt Stoller: #OccupyWallStreet Is a Church of Dissent, Not a Protest
https://www.garlic.com/~lynn/2021f.html#13 Elizabeth Warren hammers JPMorgan Chase CEO Jamie Dimon on pandemic overdraft fees
https://www.garlic.com/~lynn/2021e.html#85 How capitalism is reshaping cities
https://www.garlic.com/~lynn/2021e.html#41 The Whistleblower Trying to Stop the Next Financial Crisis
https://www.garlic.com/~lynn/2019e.html#154 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019e.html#147 The Ongoing Effort to Write Wall Street Out of the 2008 Financial Crisis
https://www.garlic.com/~lynn/2019e.html#45 Corporations Are People
https://www.garlic.com/~lynn/2019d.html#64 How the Supreme Court Is Rebranding Corruption
https://www.garlic.com/~lynn/2019c.html#39 Deutsche Bank To Launch EU50 Billion "Bad Bank" Housing Billions In Toxic Derivatives
https://www.garlic.com/~lynn/2019b.html#83 Firefighting: The Financial Crisis and Its Lessons
https://www.garlic.com/~lynn/2019b.html#9 England: South Sea Bubble - The Sharp Mind of John Blunt
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#15 TARP Funds and Noncompliant
https://www.garlic.com/~lynn/2019.html#9 Balanced Federal Budget

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM System/360 Revolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM System/360 Revolution
Newsgroups: alt.folklore.computers
Date: Wed, 13 Sep 2023 08:35:33 -1000
recent post in facebook groups:

The IBM System/360 Revolution
https://www.youtube.com/watch?v=8c0_Lzb1CJw
[Recorded April 7, 2004]

Computer pioneers and National Medal of Technology awardees Erich Bloch, Fred Brooks, Jr. and Bob Evans with current IBM technology chief Nick Donofrio discuss the extraordinary IBM System/360 project.

IBM launched the System/360 on April 7, 1964. Many consider it the biggest business gamble of all time. At the height of IBM's success, Thomas J. Watson, Jr. bet the company's future on a new compatible family of computer systems that would help revolutionize modern organizations. This lecture presents a behind-the-scenes view of the tough decisions made by some of the people who made them, and discusses how the System/360 helped transform the government, science and commercial landscape.


Computer History Museum
https://www.youtube.com/@ComputerHistory
https://computerhistory.org/

old (archived) 24 Mar 2004 bit.listserv post of the event
https://www.garlic.com/~lynn/2004d.html#15 "360 revolution" at computer history museuam (x-post)

linkedin year-old post about the IBM "downfall", starting with Learson trying (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson Legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
a couple decades later, IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.

some other recent posts mentioning Donofrio
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2023.html#110 If Nothing Changes, Nothing Changes
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM System/360 Revolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM System/360 Revolution
Date: 13 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#68 The IBM System/360 Revolution

Amdahl quote about the design of 360:
He said they identified all the companies that had the resources to pay them lots of money. Then they went to these companies and asked them what products/programs they would pay big money for. They designed these programs and identified common services they all relied upon and designed those services. Then they designed an OS that would well support those services and programs. Then they designed a customer support system. Then they designed the hardware to support the OS, services, programs, and customer support.

... snip ...

ACS was not going to be compatible ... Amdahl then prevailed with a 360-compatible ACS ... then executives shut it down because they were afraid it would advance the state of the art too fast and they would lose control of the market; also, some ACS/360 features show up more than 20yrs later with ES/9000. Amdahl leaves IBM shortly after
https://people.cs.clemson.edu/~mark/acs_end.html

Early 70s, IBM had the "Future System" project (claimed to be a countermeasure to clone controllers), completely different from 360&370 and going to completely replace them; internal politics were shutting down 370 activities (the claim is that the lack of new 370s was instrumental in clone 370s gaining market share ... aka the failed countermeasure to clone controllers helped give rise to clone 370 systems). When FS imploded there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Shortly after forming his clone mainframe company, Amdahl gave a presentation in a large MIT auditorium and several of us from the science center went over. Somebody in the audience asked him what justification he had used with investors for his clone mainframe company. He said that customers had spent so much money developing 360/370 software that even if IBM were to completely walk away from 360/370, it would be enough to keep him in business until the end of the century (sort of implying he knew about IBM's Future System, but in later years he claimed he knew nothing about FS).

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Account starting with Learson attempting (& failing) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy ... a couple decades later IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

I also tell the story that Amdahl had been selling into the technical/scientific/univ market, but had not yet been able to break into the IBM "true blue" commercial market, when a large IBM commercial customer ordered an Amdahl system (a lone Amdahl system in a vast sea of IBM systems) ... supposedly in retribution for something the IBM branch manager did. I was asked to go onsite for 6-12 months, apparently to obfuscate the reason for the order, and declined. I was then told I would never have a career, promotions, and/or raises at IBM.

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

also: The 360 Revolution
https://www.vm.ibm.com/history/360rev.pdf

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM System/360 Revolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: The IBM System/360 Revolution
Newsgroups: alt.folklore.computers
Date: Wed, 13 Sep 2023 15:56:51 -1000
re:
https://www.garlic.com/~lynn/2023e.html#68 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#69 The IBM System/360 Revolution
and
https://www.garlic.com/~lynn/2023e.html#65 PDP-6 Architecture, was ISA

last year posts related to 50th anniv of VM/370 (after decision to add virtual memory to all 370s)
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-7-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50-part-8-lynn-wheeler/

I've mentioned various times being asked to track down the decision to add virtual memory to all 370s. Found the staff member to the executive making the decision. Basically, MVT storage management was so bad that region sizes frequently had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the processor busy and justified. Moving MVT to 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four with little or no paging. Archived 2011 afc post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

The science center had a joint project with Endicott to enhance CP67's virtual machine support to include the 370 virtual memory architecture ... and then to modify CP67 itself to run with 370 virtual memory (instead of 360/67 virtual memory). My CP67L ran on the real 360/67; CP67H ran in a virtual 360/67 (supporting 370 virtual machines); CP67I ran in a virtual 370 (the extra layer, not running CP67H on the real hardware, was because the Cambridge system also had professors, staff, and students using it, and we wanted the extra security layer preventing the unannounced 370 virtual memory from leaking). This was in regular use a year before the first 370 hardware (an engineering 370/145) was operational with virtual memory support (and CP67I was used for testing the real hardware). Then three people from San Jose added 3330 and 2305 device support for "CP67SJ" (sometimes called cp370) ... which was in production use on internal 370 machines long before (and after) VM/370 was operational.

In the morph from CP67->VM370, lots of CP67 features were dropped and/or simplified. I then spent parts of 1974 adding CP67 features (initially) back into VM/370 for CSC/VM (aka after joining IBM, one of my hobbies was production operating systems for internal datacenters). I would sometimes needle the MULTICS people on the 5th flr that at one point I was supporting more CSC/VM production internal systems than the total number of MULTICS systems that ever existed (aka some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS, others went to the science center on the 4th flr and did the internal network, virtual machines, invented GML, etc).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

some recent posts mentioning CP67L, CP67H, CP67I, CP67SJ
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 14 Sep, 2023
Blog: Facebook
long-winded recent posts talking about CP67 (precursor to VM370) ... and then having to migrate a lot of CP67 features to VM370 (and a number of other details over the years):
https://www.garlic.com/~lynn/2023e.html#48 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#49 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments

After transferring to San Jose Research in the late 70s, I got to wander around lots of datacenters in silicon valley (both IBM and non-IBM), including disk engineering and disk product test across the street. They had been running stand-alone, pre-scheduled, 7x24 mainframe testing ... and mentioned that they had recently tried to do testing under MVS, but MVS had a 15min mean-time-between-failure (in that environment), requiring manual re-ipl. I offered to rewrite the I/O supervisor so it was bullet-proof and never failed, so they could do any amount of concurrent, on-demand testing, greatly improving productivity. Disk product test (bldg15) would get early engineering processors (#3 or #4 engineering 3033; testing only took a percent or two of CPU, so we scrounged a 3830 controller and a 3330 string and put up our own private online service).

Then came an engineering 4341, and in Jan1979 I was con'ed into doing a benchmark on the 4341 for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Also found that a small cluster of 4341s was much cheaper and much higher throughput than a 3033, with much smaller environmentals: power, cooling, and footprint. Then in the 80s, large corporations were ordering hundreds of vm4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed, departmental computing tsunami; inside IBM, departmental conference rooms were becoming scarce because so many were being converted to vm4341 rooms).

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

Note, 4300s sold into the same midrange market as DEC VAX machines and in similar numbers (except for the large-corporation orders of hundreds at a time). In the mid-80s, IBM was expecting the 4300 follow-ons (4361 & 4381) to show a similar explosion in sales ... but this archived post of DEC VAX sales (year, model, US/non-US) shows that by the mid-80s, the midrange market was moving to PC & workstation servers:
https://www.garlic.com/~lynn/2002f.html#0

In the late 80s, I was to do HA/6000, originally for NYTimes to convert their newspaper system (ATEX) from VAXCluster to RS/6000. I changed the name to HA/CMP when I started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with major RDBMS vendors (Oracle, Sybase, Informix, Ingres ... which had VAXCluster support in the same source base as UNIX; I did some APIs that implemented VAXCluster semantics to simplify the port). Early Jan1992, IBM (Hester, some others) had a meeting with Oracle CEO Ellison and told them HA/CMP would have 16-CPU clusters by mid-year and 128-CPU clusters by ye-1992. Then by the end of Jan1992, cluster scale-up was transferred for announce as IBM supercomputing (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we left IBM a few months later). Mainframe DB2 may have contributed, complaining that if we were allowed to proceed, it would be years ahead of them.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Around the turn of the century, the PC processor chip vendors were moving to micro-op execution implementations, where instructions are hardware-translated into a series of micro-ops for execution (largely negating the performance difference between CISC and RISC). Implementations of cluster supercomputers and large cloud megadatacenters (each megadatacenter having half a million or more "blade" servers) were starting to look more and more similar (some cluster supercomputers were identical to the technology in cloud megadatacenters).

trivia: in 1980, I got con'ed into doing channel-extender for STL (since renamed SVL), which was moving 300 people from the IMS group to an offsite bldg with dataprocessing service back to STL ... so they could put channel-attached 3270 controllers at the offsite bldg. Then in 1988, the IBM branch office asked me to help LLNL (national lab) get some serial stuff they were playing with standardized. It quickly became the fibre channel standard ("FCS", including some stuff I had done in 1980): 1gbit, full-duplex, 200mbyte/sec aggregate. Then POK announced some of their serial stuff with ES/9000 as ESCON (when it was already obsolete, 17mbytes/sec). Then some POK engineers became involved with FCS and defined a heavy-weight protocol that significantly reduces throughput, eventually released as FICON. The most recent public benchmark I can find is z196 disk "peak I/O" that got 2M IOPS using 104 FICON (running over 104 FCS). About the same time, an FCS was announced for E5-2600 blades (commonly used in megadatacenters) claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). As an aside, IBM DASD benchmarks were supposedly CKD, something that hasn't been manufactured for decades, being simulated on industry fixed-block disks. Also, a max-configured z196 was benchmarked at 50BIPS (industry standard benchmark that counted the number of iterations compared to a 370/158 assumed to be 1MIPS) and went for $30M ($600,000/BIPS). By comparison, an E5-2600 blade was 500BIPS (same industry standard benchmark, ten times z196) and IBM had a base list price of $1815 ($3.63/BIPS).
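
A quick back-of-the-envelope check of the price/performance arithmetic above (using only the list prices and BIPS figures quoted in the text, nothing else):

#include <stdio.h>

int main(void)
{
    double z196_price = 30e6,  z196_bips = 50;   /* max-configured z196 */
    double e5_price   = 1815,  e5_bips   = 500;  /* E5-2600 blade       */

    printf("z196:    $%.0f/BIPS\n", z196_price / z196_bips);  /* $600000/BIPS */
    printf("E5-2600: $%.2f/BIPS\n", e5_price / e5_bips);      /* $3.63/BIPS   */
    printf("ratio:   %.0f:1\n",
           (z196_price / z196_bips) / (e5_price / e5_bips));  /* ~165289:1 */
    return 0;
}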

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 14 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers

Note the oft-repeated FS saga: Early 70s, IBM started the Future System project, totally different from 370 and going to completely replace it (internal politics were killing off 370 efforts, and the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). Then when FS implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

memo125 talks about the 3081 being warmed-over FS technology that resulted in an enormous increase in the number of circuits compared to performance (much more than any other technology of the period). Some conjecture that the enormous increase in circuits is what gave rise to TCMs (to package the enormous number of circuits in a reasonable physical volume) and the requirement for water cooling. Note the 3081 originally was going to be multiprocessor-only (no single processor). There were applications running on a single 3081D processor that had lower throughput than on a 3033. The cache size was doubled for the 3081K, bringing that throughput slightly better than the 3033. At the time, the single-processor Amdahl had about the same MIPS as the aggregate of the two-processor 3081K and much higher MVS throughput; MVS documentation talked about a two-processor MVS having about 1.2-1.5 times the throughput of a single processor (because of significant MVS multiprocessor overhead).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, multiprocessor, tightly-coupled and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 14 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers

Cache-aligned kernel storage allocation ... I think it became more critical with the four-processor 3084 (don't want the end of one kernel storage allocation sharing the same cache line as the start of some other kernel storage allocation), where instead of contending with one other processor, there was contention with three other processors ... significantly increasing cache processing overhead. Both MVS and VM370 went to cache-line-sensitive storage allocation (claiming something like 3-5% improvement in kernel processing throughput) ... it would also help the two-processor 3081, but not as significantly. It didn't help with multiple processors contending for the same value in the same cache line ... but helped with different processors contending for different locations (not just locks) in the same cache line.
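
A minimal sketch of the idea in modern C11 (purely illustrative, not the MVS/VM370 code; the 64-byte line size is an assumption, 308x-era line sizes differed): round every allocation up to a whole number of cache lines and align it on a line boundary, so the end of one kernel allocation never shares a line with the start of the next (what is now called false sharing).

#include <stdio.h>
#include <stdlib.h>

#define CACHE_LINE 64   /* assumed line size for this sketch */

/* round the size up to a multiple of the line size and line-align the
   storage, so no two allocations ever share a cache line */
static void *kalloc_cache_aligned(size_t size)
{
    size_t rounded = (size + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    return aligned_alloc(CACHE_LINE, rounded);   /* C11 */
}

int main(void)
{
    char *a = kalloc_cache_aligned(24);
    char *b = kalloc_cache_aligned(24);
    printf("a=%p b=%p\n", (void *)a, (void *)b);  /* line-aligned, disjoint lines */
    free(a); free(b);
    return 0;
}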

Charlie had come up with the compare-and-swap instruction when he was doing fine-grain kernel multiprocessor locking for CP67 (precursor to VM370) at the science center (the name compare-and-swap was chosen because "CAS" are Charlie's initials). Then there were several meetings in POK with the 370 architecture owners, trying to get it added to the 370 architecture. Their response was that the POK favorite-son operating system (MVT, before becoming VS2) said that the 360 "test-and-set" was sufficient for multiprocessing (in part because MVT effectively had a single global kernel spin-lock, one reason that MVT/VS2 had so little multiprocessor throughput increase). Eventually we came up with the use by multi-threaded applications (like large DBMS&RDBMS, whether running single processor or multiprocessor) ... some examples are still shown in the principles of operation (possible to perform numerous kinds of operations with a single, non-interruptable instruction, w/o requiring being disabled for interrupts and/or explicit locks).
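
For illustration, a minimal sketch of the compare-and-swap pattern just described, in C11 atomics rather than 370 assembler (so an analogy, not the principles-of-operation examples themselves): fetch the current value, compute the update, and compare-and-swap; if another processor changed the value in between, the swap fails and the update is retried ... an atomic read-modify-write without disabling for interrupts or taking an explicit lock.

#include <stdio.h>
#include <stdatomic.h>

static _Atomic long counter;   /* shared between threads/processors */

/* lock-free add: retry the compare-and-swap until no other processor
   has changed the value between the fetch and the swap */
static void counter_add(long delta)
{
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;   /* CAS failed: 'old' now holds the fresh value, retry */
}

int main(void)
{
    counter_add(5);
    counter_add(-2);
    printf("%ld\n", atomic_load(&counter));   /* 3 */
    return 0;
}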

smp, multiprocessor, tightly-coupled and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some past posts mentioning (3084) cache-line sensitivity
https://www.garlic.com/~lynn/2011k.html#84 'smttter IBMdroids
https://www.garlic.com/~lynn/2011b.html#68 vm/370 3081
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2003j.html#42 Flash 10208

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 15 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers

I had been con'ed into helping Endicott with the 138/148 ECPS microcode assist, and then in the early 80s got permission to give presentations on the implementation at the monthly user group BAYBUNCH meetings (hosted at Stanford SLAC). Old archived post with the original early analysis for ECPS
https://www.garlic.com/~lynn/94.html#21
posts mentioning 360/370 microcode
https://www.garlic.com/~lynn/submain.html#360mcode

After the meetings, the Amdahl people would grill me for additional information. They said that they were in the process of implementing HYPERVISOR with MACROCODE. They had originally developed MACROCODE in the late 70s to respond to the numerous trivial IBM 3033 changes required for MVS to run. Normal high-end microcode was horizontal and was difficult and time-consuming to program. MACROCODE was a 370-like instruction set that ran in microcode mode, making it trivial to respond to the IBM 3033 changes. Amdahl was then using it to do a VM370 virtual machine subset all in microcode.

POK was finding that customers weren't converting to MVS/XA as planned ... while Amdahl was having success with HYPERVISOR, being able to run both MVS and MVS/XA concurrently on the same machine. Note in the wake of the FS implode and the mad rush with the quick&dirty 3033&3081, the head of POK also convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (or supposedly MVS/XA wouldn't ship on time). They weren't going to tell the people until the very last minute, to minimize the number that might escape. The information leaked early and several escaped into the Boston area (this was in the infancy of DEC VAX/VMS development, and the joke was that the head of POK was a major contributor to VMS). There was also a hunt for the source of the leak; fortunately for me, nobody gave up the leaker. Endicott eventually saved the VM370 product responsibility, but had to recreate a development group from scratch ... and wanted to ship VM370 pre-installed on every 138/148 (but POK did manage to block that).

Some of the VM370 transplants did create the VMTOOL, a greatly simplified virtual machine used during MVS/XA development. With the success of the Amdahl HYPERVISOR in running MVS & MVS/XA concurrently for MVS/XA conversion, it was decided to ship VMTOOL, 1st as VM/MA (migration aid) and then VM/SF (system facility). Then there was a corporate battle: POK wanted to form a few-hundred-person group to upgrade VMMA/VMSF to the feature, function, and performance of VM/370 (for VM/XA), while a sysprog in Rochester had added full 370/XA support to VM/370, which Endicott wanted to ship instead (POK won).

Note IBM didn't ship (virtual machine) PR/SM & LPAR until 1988 for the 3090 (almost a decade after Amdahl's HYPERVISOR).

some posts mentioning MACROCODE, HYPERVISOR, ECPS, PR/SM, LPAR, VMTOOL
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 15 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers

When I first joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and one of my long-time customers was HONE, evolving into the online world-wide sales&marketing support system. In the mid-70s, the US HONE datacenters were consolidated in silicon valley (trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US consolidated HONE datacenter). The VM/370 system was upgraded to what I believed was the largest single-system-image, loosely-coupled (cluster) complex in the world, with load-balancing and fall-over ... and then multiprocessor support was added to their VM370 release 3 system to double the number of processors to 16 (at the time, POK was doing its best to eradicate VM370 ... and so when similar support finally shipped this century, I would joke about IBM not releasing software before its time).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

posts mentioning joke about "from annals of release no software before its time"
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021j.html#108 168 Loosely-Coupled Configuration
https://www.garlic.com/~lynn/2019d.html#106 IBM HONE
https://www.garlic.com/~lynn/2018c.html#95 Tandem Memos
https://www.garlic.com/~lynn/2017d.html#45 FW: What are mainframes
https://www.garlic.com/~lynn/2017d.html#42 What are mainframes
https://www.garlic.com/~lynn/2017c.html#63 The ICL 2900
https://www.garlic.com/~lynn/2015h.html#2 More "ageing mainframe" (bad) press
https://www.garlic.com/~lynn/2014i.html#21 IBM to sell Apples
https://www.garlic.com/~lynn/2014h.html#33 Can Ginni really lead the company to the next great product line?
https://www.garlic.com/~lynn/2014h.html#21 Is end of mainframe near?
https://www.garlic.com/~lynn/2014h.html#16 Emulating z CPs was: Demonstrating Moore's law
https://www.garlic.com/~lynn/2014g.html#103 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2013o.html#80 "Death of the mainframe"
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#35 Reports: IBM may sell x86 server business to Lenovo
https://www.garlic.com/~lynn/2012o.html#29 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012o.html#11 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#9 How do you feel about the fact that today India has more IBM employees than any of the other countries in the world including the USA.?
https://www.garlic.com/~lynn/2012l.html#56 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012l.html#41 The IBM zEnterprise EC12 announcment
https://www.garlic.com/~lynn/2012l.html#39 The IBM zEnterprise EC12 announcment
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012j.html#2 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012f.html#0 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012e.html#27 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2012d.html#64 Layer 8: NASA unplugs last mainframe
https://www.garlic.com/~lynn/2012.html#82 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#77 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#44 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
https://www.garlic.com/~lynn/2011h.html#32 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011c.html#88 Hillgang -- VM Performance
https://www.garlic.com/~lynn/2011.html#10 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010q.html#35 VMSHARE Archives
https://www.garlic.com/~lynn/2010o.html#64 They always think we don't understand
https://www.garlic.com/~lynn/2010o.html#26 Global Sourcing with Cloud Computing and Virtualization
https://www.garlic.com/~lynn/2010o.html#11 The Scariest Company in Tech
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010g.html#81 What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?
https://www.garlic.com/~lynn/2010g.html#48 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2010g.html#4 Handling multicore CPUs; what the competition is thinking
https://www.garlic.com/~lynn/2009r.html#4 70 Years of ATM Innovation

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 15 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers

My wife was in the gburg JES group and one of the catchers for ASP/JES3, and then one of the co-authors of the JESUS (JES Unified System) specification (all the features of the two systems that the respective customers couldn't live without; for various reasons, it never came to fruition). She was then con'ed into going to POK, responsible for loosely-coupled architecture, where she did Peer-Coupled Shared Data architecture. She didn't remain long because of 1) periodic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX), except for IMS hot-standby. She has a story about asking Vern Watts who he was going to ask for permission to do IMS hot-standby. He said nobody, he would just do it, and then tell them when it was all done.

HASP/ASP, JES2/JES3, NJI/NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata

In the late 80s, we had a project that started out as HA/6000, for the NYTimes to convert their newspaper system (ATEX) off VAXCluster to RS/6000. Nick Donofrio ... mentioned in this recent post
https://www.garlic.com/~lynn/2023e.html#68 The IBM System/360 Revolution

stops by and my wife presents him with five hand-drawn charts; he approves the project and funding. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with major RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXcluster support in the same source base with unix support. I do some APIs that implement VAXcluster semantics to simplify the port. Early Jan1992, there is a meeting with Oracle CEO Ellison and some of his people on cluster scale-up, where (IBM) Hester presents that we would have 16-CPU by mid-92 and 128-CPU by ye-92. However, by end of Jan1992, cluster scale-up is transferred for announce as IBM supercomputer (technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later). Contributing was (mainframe) DB2 complaining that if we were allowed to proceed, it would be years ahead of them.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

a few other posts mentioning Nick Donofrio
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2023.html#110 If Nothing Changes, Nothing Changes
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2004d.html#15 "360 revolution" at computer history museuam (x-post)

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 15 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#76 microcomputers, minicomputers, mainframes, supercomputers

(interdata minicomputer) from recent long winded comment about mainframe activity as undergraduate in the 60s:
https://www.garlic.com/~lynn/2023e.html#64 Computing Career

Univ. gets some TTY/ASCII terminals and I add ASCII terminal support to CP67 (when the ASCII terminal port scanner arrives for the IBM terminal control unit, it is in a box labeled "HEATHKIT"). I then want to have a single dial-up number ("hunt group") for all terminal types; I can dynamically switch the port scanner terminal type for each line, but IBM had taken a short-cut and hardwired the port line speed (so it doesn't quite work). Univ. starts a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate the IBM terminal control unit, with the addition that it can do dynamic line speed (later enhanced to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces). Four of us get written up as responsible for (some part of) the clone controller business. Interdata, and later Perkin-Elmer, sell it as an IBM clone controller.
https://en.wikipedia.org/wiki/Interdata
Interdata, Inc., was a computer company, founded in 1966 by a former Electronic Associates engineer, Daniel Sinnott, and was based in Oceanport, New Jersey. The company produced a line of 16- and 32-bit minicomputers that were loosely based on the IBM 360 instruction set architecture but at a cheaper price.[2] In 1974, it produced one of the first 32-bit minicomputers,[3] the Interdata 7/32. The company then used the parallel processing approach, where multiple tasks were performed at the same time, making real-time computing a reality.[4]

... snip ...

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 16 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#76 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#77 microcomputers, minicomputers, mainframes, supercomputers

Trivia: about the same time I was asked to help LLNL with the serial standardization that becomes FCS, I was also asked (by Gustavson at Stanford SLAC) to participate in the standardization meetings for SCI ... SCI was used for several things: serial for NUMA memory bus (used by Convex, SGI, Sequent, Data General, etc), I/O, etc. The following doesn't mention IBM ... but I was a regular at the monthly meetings at SLAC
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
some of which feeds into infiniband
https://en.wikipedia.org/wiki/InfiniBand

The RS/6000 engineer that had done SLA (sort of ESCON with lots of incompatible fixes/enhancements) wanted to start on an 800mbit/sec version; instead we con him into joining the FCS standards committee.

Note that the precursor to FCS and SCI was the HIPPI standardization of the 100mbyte/sec parallel/copper Cray channel, championed by LANL. Then HP & others were working on serial HIPPI somewhat concurrently with the FCS activity
https://ieeexplore.ieee.org/document/186715

from long ago and far away


FIBER E-MAIL FCS industry discussions as seen on Fiber Reflector
P1394 E-MAIL Reflector of IEEE P1394 industry reflector managed by SUN
HIPPI E-MAIL HIPPI industry discussions as seen on HIPPI Reflector
X3T9 E-MAIL Common X3T9 E-Mail forum for IPI, HIPPI, FCS, SCSI, and FDDI
FC-IP E-MAIL FCS Internet Protocol (IP)
HIPPI FORUM X3T9.3 High Performance Parallel Interface (HIPPI) - formerly called HSC.
FIBER FORUM Fiber FORUM - especially as it relates to ANSI X3T9.3
MIFP TERS3820 EMIF Presentation and Proposal for FCS
FABRIC FORUM FABRIC portion of FCS
FC-SB E-MAIL FC-SB (Fibre Channel Single Byte Command Set)
IPI E-MAIL IPI industry discussions as seen on IPI Reflector
FC-0 FORUM FC-0 portion of FCS
FC-FA E-MAIL Fibre Channel Fabric reflector from Network Systems
X3T9 FORUM ANSI X3T9 forum: general information regarding .2, .3, and .5 standards.
HIPPI E-MAIL91 1991 part of HIPPI E-MAIL
HIPPI E-MAIL90 1990 part of HIPPI E-MAIL
FIBER E-MAIL91 1991 part of FIBER E_MAIL
FIBER E-MAIL90 1990 part of FIBER E_MAIL
FC_STATE TERS3820 Latest FC-PH V2.2 State Machine
SSA2_1 TERS3820 SSA-SCSI Description
SSA0_5 TERS3820 SSA-PH Description
FC2ADF1F TERS3820 Alias Address Identifier Partitioning,
FC2ADP1F TERS3820 Address Identifier Partitioning, Revision 5,
IPI-2 FORUM X3T9.3 Intelligent Peripheral Interface - Device Specific
FC0_11 TERS3820 Version 1.1 of the FC-0 portion of FC-PH. LB editorial comments included
FCIPREQT TERS3820 IBM requirements on FC-IP proposal
FCPH2_2 FORUM FC-PH Version 2.2 Comments
FIBER MINUTES X3T9.3 Fiber Channel Standard (FCS) Meeting Minutes.
FC2IFL1D TERS3820 Proposed Intra-Fabric Communication Protocol, Revision 3
FC2SVCDB TERS3820 Fiber Channel Services Addition to FC-FG Document, Rev. 4
PH_22_RS TERS3820 Responses to IBM comments on FC-PH, 2.2
PH22 TERS3820 ES/Research Consolidated Comments on FC-PH, 2.2
WELCOME FORUM Welcome to DFCI!!!
IPI FORUM General IPI forum covering IPI-3,-2,-1,-0
FC-4 FORUM FC-4 portion of FCS
FCREQREP TERS3820 Proposed definitions for FC-4 terms
ISOCH TERS3820 Isochronous Paper from Frank Koperda
NMSERVRQ TERS3820 ABS Name Server requirements -- 3/92 presentation
FC4COMON TERS3820 Common FC-4 and IPI-3 mappings -- 3/92 presentation
X3T9_3 MINUTES X3T9.3 Plenary Meeting Minutes
SI_PH FORUM Serial Interface - Physical
FC0_1 TERS3820 Level 1.0 of the FC-0 chapters of the FC-PH. File has been tersed.
SI_IPI2 FORUM Serial Interface - IPI-2 Mapping
SI_SCSI FORUM Serial Interface - SCSI Mapping
FC0_94 TERS3820 Revision level 0.94 of FC-0. The list file has been TERSED.
FC-2 FORUM FC-2 portion of FCS
IPI-3 FORUM X3T9.3 Intelligent Peripheral Interface - Device Generic
FCSTATE TERS3820 FCS State Machine Document from Ken Ocheltree
FC0_93 TERS3820 FC-0 version 0.93, TERSED to save space
FC0CONN TERS3820 FC-0 Connector Section of FC-PH
FCSPERF FORUM Fiber Channel Standard Performance Discussions
FC0_92 TERS3820 FC-0, Ver 0.92, Tersed to save space
LPORT TERS3820 Overview of low cost Fibrechannel loop port characteristics
FCPH21 TERS3820 Comments on FC-PH Version 2.1
FCPHCOM TERS3820 Comments by Bryan Cook on FC-PH vers 2.1
SWITCH SWG SWITCH Forum to monitor development of an IBM FCS Switch (access is control
TEST FORUM Test FORUM for testing/playing
IPIHIPPI TERS3820 HIPPI/IPI-3 Mapping Document
FC-1 FORUM FC-1 portion of FCS
FC1R1_6 TERS3820 FC-1 rev. 1.6
SWITCHM2 TERS3820 Minutes of 2nd Switch Meeting on 29-30Jan91 Hawthorne, NY
HIPPI MINUTES X3T9.3 HIPPI Working Group Minutes
FC1R1_5 TERS3820 FC-1 vers 1.5
PRIMSIG TERS3820 ANSI Primitive Signal presentation - 12/4/90
BBSW TERS3820 Broadband Tree Switch description from Zurich Research
FCSMODS TERS3820 FC-2 Recommendations from Frank Koperda
FC2_15 TERS3820 FC-2 vers 1.5
FC-3 FORUM FC-3 portion of FCS
FC1R1_4 TERS3820 FC-1 vers 1.4
SUNDEV TERS3820 Sunway device level sequences
ADDRPROP TERS3820 FCS Address Segmentation and Assignment Proposal



... snip ...

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some posts also mentioning SCI, sequent, numa-q
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#29 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022.html#118 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2019d.html#81 Where do byte orders come from, Nova vs PDP-11
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017c.html#49 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016e.html#45 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#70 Microprocessor Optimization Primer
https://www.garlic.com/~lynn/2016b.html#74 Fibre Channel is still alive and kicking
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014d.html#18 IBM ACS
https://www.garlic.com/~lynn/2014.html#71 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#50 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2009s.html#59 Problem with XP scheduler?
https://www.garlic.com/~lynn/2009o.html#29 Justice Department probing allegations of abuse by IBM in mainframe computer market
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006y.html#38 Wanted: info on old Unisys boxen
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 16 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#76 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#77 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers

Note after leaving IBM Research (& System/R, the original sql/relational) for Tandem, Jim Gray did a study of system failures ... he found that hardware reliability was increasing by one to two orders of magnitude ... and the majority of system outages had shifted to software, human, and environmental causes (earthquake, flooding, power, etc) ... part of the presentation (copied on an internal IBM copy machine ... you can see where every copy was embossed with the copy machine id, something introduced after an internal IBM document leaked to the industry press).
https://www.garlic.com/~lynn/grayft84.pdf
also
https://jimgray.azurewebsites.net/papers/TandemTR86.2_FaultToleranceInTandemComputerSystems.pdf

when I was out doing marketing for our HA/CMP project, I coined the terms disaster survivability and geographic survivability (to differentiate from disaster/recovery). I was then asked to write a section for the corporate "continuous availability" strategy document ... but it got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't meet the requirements.

For (commercial) cluster scale-up we also needed logical locking (DLM) and disk access for all members of the cluster. As a result we needed the inverse of "Reserve/Release" for loosely-coupled operation ... for a cluster member that appeared to not be responding, the cluster needed a "fencing" operation in the fabric/switches to block that member from doing disk I/O (covering the case where a member removed from the cluster hadn't actually completely failed, but was temporarily suspended and later wakes up to try and complete a "stale" operation), as sketched below. We got that for HIPPI/IPI3 switches and then needed it for FCS switches.
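
A minimal sketch of the ordering involved (all names here are hypothetical stand-ins; a real implementation also negotiates quorum/membership first): fence at the switch before any recovery, so a member that merely looks dead can't wake up later and complete a stale write.

#include <stdio.h>

typedef int member_id;

/* stand-in for asking the fabric/switch to block this member's disk I/O */
static int fence_member_at_switch(member_id m)
{
    printf("fencing member %d at the switch\n", m);
    return 0;
}

/* stand-in for DLM lock replay and disk takeover */
static void replay_locks_and_recover(member_id m)
{
    printf("replaying locks, taking over member %d's disks\n", m);
}

void handle_suspected_failure(member_id m)
{
    /* fence FIRST: even if m is only suspended and wakes up later,
     * its "stale" disk writes are now blocked at the switch */
    if (fence_member_at_switch(m) != 0)
        return;   /* can't fence, so not yet safe to recover */

    /* only with the fence in place is lock replay / takeover safe */
    replay_locks_and_recover(m);
}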

Before that we had worked with Hursley using their 9333 ... originally 80mbit/sec, full-duplex serial copper that packetized SCSI commands for SCSI disks. I then wanted to have 9333 evolve into partial-speed, interoperable FCS ... but (we had left IBM and) instead it evolves into the non-interoperable SSA:
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
https://www.ibm.com/support/pages/overview-ibm-ssa-raid-cluster-adapter

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability posts
https://www.garlic.com/~lynn/submain.html#available

a couple posts mentioning "fencing" versus "reserve(/release)"
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2011c.html#77 IBM and the Computer Revolution
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?

recent distributed lock manager posts
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#63 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#62 IBM DB2
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019e.html#11 To Anne & Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2018d.html#69 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017b.html#82 The ICL 2900
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2014.html#73 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#44 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013m.html#87 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2013m.html#86 'Free Unix!': The world-changing proclamation made 30 yearsagotoday

a few recent posts mentioning 9333 and SSA
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience

--
virtualization experience starting Jan1968, online at home since Mar1970

microcomputers, minicomputers, mainframes, supercomputers

From: Lynn Wheeler <lynn@garlic.com>
Subject: microcomputers, minicomputers, mainframes, supercomputers
Date: 16 Sep, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#72 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#73 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#76 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#77 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#79 microcomputers, minicomputers, mainframes, supercomputers

straying a little; before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on IBM CP/67 (precursor to IBM's VM370) at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

as I've repeatedly posted, in Jan1979 (before first customer ship) I was con'ed into benchmarking vm/4341 for a national lab that was looking at getting 70 of them for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). Then in the early 80s, large corporations were ordering hundreds of vm/4341s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami ... and a combination of the cluster supercomputing tsunami and the distributed computing tsunami, along with the internet, feeds the cloud megadatacenter tsunami)

some posts mentioning cluster supercomputing and distributed computing tsunamis
https://www.garlic.com/~lynn/2022d.html#86 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021.html#76 4341 Benchmarks
https://www.garlic.com/~lynn/2018f.html#93 ACS360 and FS
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017j.html#93 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2017d.html#5 IBM's core business
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017b.html#36 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2014g.html#83 Costs of core

--
virtualization experience starting Jan1968, online at home since Mar1970

Storage Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: Storage Management
Date: 17 Sept, 2023
Blog: Facebook
In the late 70s, I did CMSBACK, initially for San Jose Research and the consolidated US HONE datacenters (up in Palo Alto; aka HONE was the world-wide online sales&marketing support system, started on CP67 in the late 60s; it quickly became required that all mainframe orders first be run through HONE applications, mostly written in APL), and then it spread to lots of other internal datacenters. It did incremental file backup and had an online user interface where users could select restores from all of their backed-up files. A decade later, PC & workstation clients/agents were added and it was released to customers as Workstation Datasave Facility (WDSF). Then GPD/ADstar took it over and renamed it ADSM ... subsequently renamed TSM and now IBM Storage Protect.
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager

old cmsback email (discussing adding file name regular expressions and then date/time ranges to the user search interface)
https://www.garlic.com/~lynn/2006t.html#email791025
in this archived post to (usenet) a.f.c & vmesa-l
https://www.garlic.com/~lynn/2006t.html#24 CMSBACK

hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup

--
virtualization experience starting Jan1968, online at home since Mar1970

Saving mainframe (EBCDIC) files

From: Lynn Wheeler <lynn@garlic.com>
Subject: Saving mainframe (EBCDIC) files
Date: 17 Sept, 2023
Blog: Facebook
The univ had a 709/1401 and I took a 2 credit hr intro to fortran/computers. Within a year, the 709/1401 had been replaced with a 360/67 (for tss/360), but it ran as a 360/65 and I was hired fulltime responsible for os/360 (the univ. shutdown the datacenter over the weekends and I would have the whole place dedicated, although 48hrs w/o sleep made monday classes hard). Then some from the science center came out to install cp67 (3rd install after CSC itself and MIT lincoln labs) and I would mostly play with it during my 48hr weekend window. Univ. had got some TTY/ASCII terminals (had to get an ascii port scanner for the telecommunication control unit, which arrived in a "HEATHKIT" box) and I added ASCII terminal support to CP67 (which had 1052&2741 support) ... using the standard IBM-supplied terminal translate tables.

CP67 also had support for dynamically switching the terminal-type port scanner, and I wanted to have a single dial-up number ("hunt group") for all terminal types; it didn't quite work since IBM had taken a short-cut and hard-wired the port line-speed. This kicked off a univ. project to implement our own clone controller: built a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition that it could do dynamic line speed. The first glitch was that it held the channel interface for too long an interval (holding the memory bus, locking out the location 80 timer update) and the machine would red-light. Next, we hadn't noticed that the IBM controller convention stored the arriving byte starting in the low-order bit position ... so ASCII "bytes" had their bit positions reversed (which was handled by the "official" IBM ASCII terminal translate tables) ... and we had to emulate the IBM controller by reversing the bit order of terminal "bytes". Later the Interdata/3 was upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin-Elmer) would sell the box as a clone controller, and four of us get written up for (some part of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata

posts mentioning clone controller work
https://www.garlic.com/~lynn/submain.html#360pcm

Later in the 80s, mainframe support had to differentiate between (IBM terminal controller convention) bit-reversed terminal (ASCII) bytes and non-bit-reversed ASCII bytes (like those from TCP/IP); a translate-table sketch follows below.
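
A minimal sketch of the bit reversal and the translate-table shape (names are mine; the real IBM terminal translate tables presumably folded the ASCII<->EBCDIC mapping into the same one-lookup-per-byte table):

#include <stdint.h>

/* reverse the bit order of one byte: the controller stored the arriving
 * ASCII byte low-order bit first, so bit i ends up in position 7-i */
static uint8_t reverse_bits(uint8_t b)
{
    uint8_t r = 0;

    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            r |= (uint8_t)(0x80u >> i);
    return r;
}

/* a 256-entry translate table handles a stream in one lookup per byte */
static uint8_t xlate[256];

static void build_xlate(void)
{
    for (int i = 0; i < 256; i++)
        xlate[i] = reverse_bits((uint8_t)i);
}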

TCP/IP trivia: the IBM communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm and install base) and trying to block the release of mainframe TCP/IP. When they lost (possibly because of some influential customers), they changed tactics, and since the communication group had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then did the enhancements for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed). Later in the 90s, the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. What he initially demo'ed had TCP throughput much higher than LU6.2. He was then told that everybody "knows" that a "proper" TCP/IP implementation is much slower than LU6.2, and that they would only be paying for a "proper" implementation.

posts mentioning communication group trying to preserve dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
posts mentioning RFC1044 work
https://www.garlic.com/~lynn/subnetwork.html#1044

ascii trivia: EBCDIC was one of the greatest computer goofs of all time. IBM was planning on the 360 being an ASCII machine ... but the ASCII unit record gear wasn't going to be ready ... so they had to (supposedly, temporarily) reuse the old BCD gear. From IBM's "father" of ASCII (gone 404, but still lives on at the wayback machine):
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
The culprit was T. Vincent Learson. The only thing for his defense is that he had no idea of what he had done. It was when he was an IBM Vice President, prior to tenure as Chairman of the Board, those lofty positions where you believe that, if you order it done, it actually will be done. I've mentioned this fiasco elsewhere.

... saved ...

other
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

non-mainframe backup: In the 80s, I had a dozen tapes in the (new IBM Research) Almaden datacenter tape library that had (multiple tape copy) backups of files from the 60s & early 70s ... and then they had an operational problem where random tapes were being mounted as "scratch" and I lost the 60s & early 70s archive. After that I started downloading lots of stuff to my PC for safe keeping. Email exchange with Melinda Varian asking if I had a copy of the original (CP67) CMS multi-level source update implementation. I managed to pull it off and email it ... only shortly before the tape(s) got overwritten.
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908
Melinda's VM history page
https://www.leeandmelindavarian.com/Melinda#VMHist

... other Learson trivia: he tried (& failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... two decades later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company; reference gone 404 ... lives on at wayback
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts mentioning EBCDIC "goof"
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021d.html#92 EBCDIC Trivia
https://www.garlic.com/~lynn/2020.html#7 IBM timesharing terminal--offline preparation?

Posts mentioning Almaden tape library and Melinda history
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018e.html#65 System recovered from Princeton/Melinda backup/archive tapes
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014e.html#28 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014d.html#19 Write Inhibit
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013h.html#9 IBM ad for Basic Operating System
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2012k.html#72 Any cool anecdotes IBM 40yrs of VM
https://www.garlic.com/~lynn/2012i.html#22 The Invention of Email
https://www.garlic.com/~lynn/2011g.html#29 Congratulations, where was my invite?
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK
https://www.garlic.com/~lynn/2011c.html#4 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2010l.html#0 Old EMAIL Index
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2006w.html#42 vmshare
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day

--
virtualization experience starting Jan1968, online at home since Mar1970

Storage Management

From: Lynn Wheeler <lynn@garlic.com>
Subject: Storage Management
Date: 18 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#81 Storage Management

Note GPD/Adstar (the IBM disk division) picking up WDSF for ADSM (PC & workstation clients/agents backing up to the mainframe) was just part of the countermeasures to the communication group. In the late 80s, a senior disk engineer got a talk scheduled at an internal, world-wide, annual communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a fall in disk sales, with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division was coming up with solutions that were constantly being vetoed by the communication group.

The issue was that the communication group had stranglehold on mainframe datacenters with their corporate strategic ownership of everything that crossed datacenter walls, and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm and install base). The GPD/Adstar executive that had picked up WDSF (for ADSM) was also investing in distributed computing startups that would use IBM disk (as another countermeasure) and would periodically ask us to drop in on his investment to see if we could lend a hand.

communication group fighting to preserve dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

However, by 1992 (the stranglehold wasn't just disks, but the whole mainframe business), IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company ... reference gone 404 ... but lives on at wayback
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

We had already left the company, but get a call from the bowels of Armonk corporate hdqtrs asking if we could help with the company breakup. Lots of business units were using supplier contracts in other units (via MOUs) ... which would be in different corporations after the breakup. All those MOUs would have to be cataloged and turned into their own contracts. Before we get started, the board brings in a new CEO who "reverses" the breakup (but it was too late for the disk division ... and it is no more).

Note part of it can be traced back two decades earlier, Learson was trying (but failed) to block the bureaucrats, careerists and MBAs from destroying the Watson legacy ... some details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
posts also mentioning that I got to play disk engineer in bldg14&15
https://www.garlic.com/~lynn/subtopic.html#disk
backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup

--
virtualization experience starting Jan1968, online at home since Mar1970

memory speeds, Solving the Floating-Point Conundrum

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: memory speeds, Solving the Floating-Point Conundrum
Newsgroups: comp.arch
Date: Wed, 20 Sep 2023 14:25:32 -1000
John Levine <johnl@taugh.com> writes:
The 801 was a little project at IBM Research to see how much they could strip down the hardware and still get good performance with a highly optimizing compiler. It was never intended to be a product but it worked so well that they later used it in channel controllers and it evolved into ROMP and POWER. Vax was always intended to be a flagship product since it was evident that the 16 bit PDP-11 was running out of gas and (much though its users wished otherwise) word addressed 36 bit machines were a dead end,

I would sometimes claim that John was attempting to go to the opposite extreme of the design of the (failed) Future System effort.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

The late-70s plan was to use 801/risc to replace the large number of different custom CISC microprocessors ... going to common programming (in place of large amounts of different programming software) ... a common Iliad chip to replace the custom CISC microprocessors in low & mid-range 370s ... 4361&4381 follow-ons to 4331&4341 (aka the 4341 ran about 1MIPS 370, but its CISC microprocessor averaged ten native instructions per simulated 370 instruction, i.e. the native engine was executing roughly 10MIPS to deliver that one 370 MIPS).

I helped with the white paper that showed VLSI technology had gotten to the point where most of the 370 instruction set could be directly implemented in circuits ... rather than simulated in programming at an avg. of ten native instructions per 370 instruction. For that and other reasons, the various 801/risc efforts of the period floundered (and some number of 801/risc engineers went to risc efforts at other vendors).

Iliad was a 16bit chip; the Los Gatos lab was doing "Blue Iliad", the 1st 32bit 801/risc ... a really large, hot chip that never reached production (Los Gatos had previously done JIB-prime, a really slow CISC microprocessor used in the 3880 disk controller through much of the 80s). Dec1980, one of them gave two weeks notice, but spent the last two weeks on the "Blue Iliad" chip ... before leaving for risc snake at HP.

801/RISC ROMP was going to be used for the displaywriter follow-on (all written in PL.8) ... when that got canceled, they decided to retarget it for the unix workstation market (and got the company that had done the AT&T unix port to the IBM/PC as PC/IX to do a port for ROMP). Some things needed to be added to ROMP, like a protection domain for an "open" unix operating system (not needed for the "closed" displaywriter).

For instance, any code could reload segment registers as easily as it could load addresses in general registers. This led to ROMP being called "40bit addressing" and RIOS "52bit" (even though both were 32bit). There were sixteen segment registers indexed by the top four bits of a 32bit address (with a 28bit segment displacement); ROMP had 12bit segment identifiers (28+12=40) and RIOS had 24bit segment identifiers (28+24=52) ... the description was still used long after the move to the unix model, where user code could no longer arbitrarily change segment values (aka the 40/52 designation was from when any code could arbitrarily change segment values).
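
A minimal sketch of that 40/52-bit arithmetic (the function and parameter names are mine, not ROMP/RIOS terminology):

#include <stdint.h>

#define SEG_SHIFT 28                 /* 28-bit segment displacement */
#define DISP_MASK 0x0FFFFFFFu

/* expand a 32-bit address: the top four bits index the sixteen segment
 * registers; the selected segment id (12 bits on ROMP, 24 on RIOS) is
 * concatenated with the 28-bit displacement: 28+12=40, 28+24=52 */
uint64_t expand_address(uint32_t addr, const uint32_t seg_regs[16],
                        unsigned id_bits)
{
    uint32_t id_mask = (1u << id_bits) - 1;        /* 0xFFF or 0xFFFFFF */
    uint64_t seg_id  = seg_regs[addr >> SEG_SHIFT] & id_mask;

    return (seg_id << SEG_SHIFT) | (addr & DISP_MASK);
}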

... and old email from long ago and far away

Date: 79/07/11 11:00:03
To: wheeler

i heard a funny story: seems the MIT LISP machine people proposed that IBM furnish them with an 801 to be the engine for their prototype. B.O. Evans considered their request, and turned them down.. offered them an 8100 instead! (I hope they told him properly what they thought of that)
... snip ... top of post, old email index

801/risc, iliad, romp, rios, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801

some specific posts mentioning "blue iliad"
https://www.garlic.com/~lynn/2022h.html#26 Inventing the Internet
https://www.garlic.com/~lynn/2022d.html#80 ROMP
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#65 Mainframe IPL
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#33 Cluster Systems
https://www.garlic.com/~lynn/2018e.html#54 Tachyum Prodigy: performance from architecture
https://www.garlic.com/~lynn/2017h.html#58 On second thoughts
https://www.garlic.com/~lynn/2014j.html#100 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014i.html#9 With hindsight, what would you have done?
https://www.garlic.com/~lynn/2013l.html#39 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013i.html#8 DEC Demise (was IBM commitment to academia)
https://www.garlic.com/~lynn/2013i.html#0 By Any Other Name
https://www.garlic.com/~lynn/2012l.html#82 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2011m.html#24 Supervisory Processors
https://www.garlic.com/~lynn/2011e.html#16 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2010n.html#82 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010l.html#42 IBM zEnterprise Announced
https://www.garlic.com/~lynn/2010f.html#83 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010f.html#82 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010f.html#78 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010e.html#3 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010c.html#54 Processes' memory
https://www.garlic.com/~lynn/2010c.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010c.html#20 Processes' memory
https://www.garlic.com/~lynn/2009m.html#63 What happened to computer architecture (and comp.arch?)
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2007l.html#53 Drums: Memory or Peripheral?
https://www.garlic.com/~lynn/2007h.html#17 MIPS and RISC
https://www.garlic.com/~lynn/2006u.html#31 To RISC or not to RISC
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks?
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004n.html#21 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004f.html#28 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003.html#3 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/98.html#25 Merced & compilers (was Re: Effect of speed ... )

--
virtualization experience starting Jan1968, online at home since Mar1970

The Pentagon Gets More Money

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Pentagon Gets More Money
Date: 22 Sept, 2023
Blog: Facebook
The Pentagon Gets More Money
https://bracingviews.com/2021/09/20/the-pentagon-gets-more-money/

goes along with the success of failure ... never a conclusive win, because that could cut off the flow of money ("perpetual wars")
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/
Boyd quote
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/
"Here too Boyd had a favorite line. He often said, 'It is not true the Pentagon has no strategy. It has a strategy, and once you understand what that strategy is, everything the Pentagon does makes sense. The strategy is, don't interrupt the money flow, add to it.'"

... snip ...

'Colossal Backdoor Bailout': Outrage as Pentagon Funnels Hundreds of Millions Meant for Covid Supplies to Private Defense Contractors. "If you can't get a Covid test or find an N95, it's because these contractors stole from the American people to make faster jets and fancy uniforms."
https://www.commondreams.org/news/2020/09/22/colossal-backdoor-bailout-outrage-pentagon-funnels-hundreds-millions-meant-covid

and Eisenhower's warning about the military-industrial(-congressional) complex (early drafts included "-congressional") ... "for-profit" arms merchants constantly looking for increasing amounts of money; Eisenhower's farewell address
https://en.wikipedia.org/wiki/Eisenhower%27s_farewell_address
Perhaps best known for advocating that the nation guard against the potential influence of the military-industrial complex, a term he is credited with coining

... snip ...

Pentagon's Budget Is So Bloated That It Needs an AI Program to Navigate It. Codenamed GAMECHANGER, an AI program helps the military make sense of its own "byzantine" and "tedious" bureaucracy.
https://theintercept.com/2023/09/20/pentagon-ai-budget-gamechanger/
The Bunker: Ship Overboard!
https://www.pogo.org/analysis/2023/09/the-bunker-ship-overboard
The Pentagon may be too busy to fix its half-century-old budget
https://www.defenseone.com/policy/2023/08/pentagon-may-be-too-busy-fix-its-half-century-old-budget-process-reform-group-says/389439/
What's in the Pentagon's budget? Here's what to know.
https://breakingdefense.com/2023/03/whats-in-the-pentagons-budget-heres-what-to-know-updating-tracker/
Getting the defense budget right: A (real) grand total, over $1.4
https://responsiblestatecraft.org/2023/05/07/getting-the-defense-budget-right-a-real-grand-total-over-1-4-trillion/
How a New Budget Loophole Could Send Pentagon Spending Soaring Even Higher
https://www.counterpunch.org/2023/06/22/how-a-new-budget-loophole-could-send-pentagon-spending-soaring-even-higher/

Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
"perpetual war" posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure

before Eisenhower, there was Smedley Butler:

Gangsters of Capitalism: Smedley Butler, the Marines, and the Making and Breaking of America's Empire
https://www.amazon.com/Gangsters-Capitalism-Smedley-Breaking-Americas-ebook/dp/B092T8KT1N/
Smedley Butler
https://en.wikipedia.org/wiki/Smedley_Butler

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some recent posts mentioning Smedley Butler
https://www.garlic.com/~lynn/2023.html#17 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022g.html#19 no, Socialism and Fascism Are Not the Same
https://www.garlic.com/~lynn/2022f.html#76 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#65 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022f.html#24 The Rachel Maddow Show 7/25/22
https://www.garlic.com/~lynn/2022e.html#38 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2022.html#51 Haiti, Smedley Butler, and the Rise of American Empire
https://www.garlic.com/~lynn/2022.html#9 Capitol rioters' tears, remorse don't spare them from jail
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021i.html#54 The Kill Chain
https://www.garlic.com/~lynn/2021i.html#37 9/11 and the Saudi Connection. Mounting evidence supports allegations that Saudi Arabia helped fund the 9/11 attacks
https://www.garlic.com/~lynn/2021i.html#33 Afghanistan's Corruption Was Made in America
https://www.garlic.com/~lynn/2021h.html#101 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#96 The War in Afghanistan Is What Happens When McKinsey Types Run Everything
https://www.garlic.com/~lynn/2021h.html#38 $10,000 Invested in Defense Stocks When Afghanistan War Began Now Worth Almost $100,000
https://www.garlic.com/~lynn/2021g.html#67 Does America Like Losing Wars?
https://www.garlic.com/~lynn/2021g.html#50 Who Authorized America's Wars? And Why They Never End
https://www.garlic.com/~lynn/2021g.html#22 What America Didn't Understand About Its Longest War
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021f.html#21 A People's Guide to the War Industry
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#32 Fascism

--
virtualization experience starting Jan1968, online at home since Mar1970

Relational RDBMS

From: Lynn Wheeler <lynn@garlic.com>
Subject: Relational RDBMS
Date: 23 Sept, 2023
Blog: Facebook
trivia: Mainline IBM put up so much resistance to "relational" that the first relational product shipped was for Multics in 1976
https://en.wikipedia.org/wiki/Multics_Relational_Data_Store

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines (initially CP/40 on a 360/40 with virtual memory hardware mods, morphing into CP/67 when the 360/67, with virtual memory standard, becomes available; precursor to VM370), the internal network technology (also used for the corporate-sponsored univ. BITNET), lots of online apps, and GML, invented in 1969, with GML tag processing added to CP67/CMS SCRIPT document formatting (a decade later it morphs into ISO standard SGML, and after another decade into HTML at CERN), plus lots of other stuff.

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

System/R (SQL/Relational) was originally done at IBM San Jose Research on a VM/370 370/145 ... and then tech transferred in the early 80s to Endicott for SQL/DS (released 1981)
https://en.wikipedia.org/wiki/IBM_SQL/DS

Then when the official new DBMS ("EAGLE") implodes, there is a request to port System/R to MVS, which is released in 1983 as DB2
https://en.wikipedia.org/wiki/IBM_Db2

IBM Toronto lab then did a "portable" C-language simplified RDBMS (originally "Shelby" for OS2) in the 90s.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
In the late 80s, the NYTimes had been told that there would be a HA/6000, enabling them to port their newspaper system (ATEX) from VAXCluster to RS/6000. Working with the major portable RDBMS vendors that had VAXcluster and Unix support in the same source base (Oracle, Informix, Ingres, Sybase), I do a cluster distributed lock manager with VAXCluster semantics to simplify the ports.
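
As an aside, the VAXCluster lock-mode semantics being preserved can be sketched compactly; the six lock modes and the compatibility matrix below are the VMS-style DLM ones, while the Python itself is purely my illustration, not any of the actual HA/CMP code:

  # VMS/VAXCluster-style DLM lock modes: NL (null), CR/CW (concurrent
  # read/write), PR/PW (protected read/write), EX (exclusive).
  COMPATIBLE = {            # modes that may be held simultaneously
      "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
      "CR": {"NL", "CR", "CW", "PR", "PW"},
      "CW": {"NL", "CR", "CW"},
      "PR": {"NL", "CR", "PR"},
      "PW": {"NL", "CR"},
      "EX": {"NL"},
  }

  def grant(requested, held):
      # grant only if the request is compatible with every mode already held
      return all(requested in COMPATIBLE[h] for h in held)

  print(grant("PR", ["CR", "PR"]))   # True: shared readers coexist
  print(grant("EX", ["PR"]))         # False: exclusive must wait for readers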

I then rename it HA/CMP when I start doing cluster technical/scientific scale-up with the national labs and commercial cluster scale-up with the major (portable) RDBMS vendors. There is an IBM executive meeting with the Oracle CEO early Jan1992, saying that there would be 16-processor HA/CMP by mid-92 and 128-system by YE-92. However, by the end of Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific only) and we are told we aren't allowed to work with anything that has more than four processors (we leave IBM a few months later). Part of it was probably complaints by mainframe DB2 that if we were allowed to proceed, we would be years ahead of them.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

posts mentioning DLM/distributed lock manager
https://www.garlic.com/~lynn/2023e.html#79 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#63 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#62 IBM DB2
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019e.html#11 To Anne & Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2018d.html#69 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018c.html#33 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2017b.html#82 The ICL 2900
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2014.html#73 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#44 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013m.html#87 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2013m.html#86 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2012d.html#28 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010n.html#82 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010l.html#14 Age
https://www.garlic.com/~lynn/2010k.html#54 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2010b.html#32 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009m.html#84 A Faster Way to the Cloud
https://www.garlic.com/~lynn/2009m.html#39 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009k.html#67 Disksize history question
https://www.garlic.com/~lynn/2009k.html#36 Ingres claims massive database performance boost
https://www.garlic.com/~lynn/2009h.html#26 Natural keys vs Aritficial Keys
https://www.garlic.com/~lynn/2009b.html#40 "Larrabee" GPU design question
https://www.garlic.com/~lynn/2009.html#3 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2008r.html#71 Curiousity: largest parallel sysplex around?
https://www.garlic.com/~lynn/2008k.html#63 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008i.html#18 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008d.html#70 Time to rewrite DBMS, says Ingres founder
https://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2007v.html#43 distributed lock manager
https://www.garlic.com/~lynn/2007v.html#42 Newbie question about db normalization theory: redundant keys OK?
https://www.garlic.com/~lynn/2007s.html#46 "Server" processors for numbercrunching?
https://www.garlic.com/~lynn/2007q.html#33 Google And IBM Take Aim At Shortage Of Distributed Computing Skills
https://www.garlic.com/~lynn/2007n.html#49 VLIW pre-history
https://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
https://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007i.html#61 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007c.html#42 Keep VM 24X7 365 days
https://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
https://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
https://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006j.html#20 virtual memory
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/aadsm28.htm#35 H2.1 Protocols Divide Naturally Into Two Parts
https://www.garlic.com/~lynn/aadsm27.htm#54 Security can only be message-based?
https://www.garlic.com/~lynn/aadsm26.htm#17 Changing the Mantra -- RFC 4732 on rethinking DOS
https://www.garlic.com/~lynn/aadsm21.htm#29 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm16.htm#22 Ousourced Trust (was Re: Difference between TCPA-Hardware and a smart card and something else before
https://www.garlic.com/~lynn/aadsmore.htm#time Certifiedtime.com

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
when the IBM US HONE datacenters were consolidated in silicon valley, their VM/370 systems were enhanced to "single system image", loosely-coupled, with load-balancing and fall-over across an eight-system complex (largest in the world). In the morph of CP67->VM370, lots of features were simplified and/or dropped, including multiprocessor support. Then for VM/370 Release 3 (originally for HONE), multiprocessor support was put back in ... so HONE could add a 2nd processor to each system ... then even larger, still largest in the world. This is the period when the head of POK convinced corporate to kill the vm370 product, shutdown the development group, and transfer all the people to POK for MVS/XA (supposedly otherwise MVS/XA wouldn't ship on time). Endicott eventually manages to save the VM370 product mission, but has to recreate a development group from scratch. 30yrs later, when VM finally ships similar "single system image" support for customers, I post jokes about IBM not releasing software before its time.

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Note: the 23jun1969 unbundling announcement started to charge for (application) software (IBM managed to make the case that kernel software was still free), SE services, maint., etc. Prior to that, SE training included a sort of apprenticeship as part of a large group at the customer site. After unbundling, they couldn't figure out how not to charge for trainee SE time at the customer site. Thus was born HONE: branch office online access to CP/67 datacenters for practicing with guest operating systems in virtual machines. The science center then ported APL\360 to CMS as CMS\APL, redoing lots of stuff to transition from 16kbyte swapped workspaces to large, virtual memory, demand-paged workspaces, and adding an API for system services (like file I/O), enabling lots of real-world applications. Then HONE started offering CMS\APL-based sales&marketing support applications, which came to dominate all HONE activity (and SE practice with guest operating systems dwindled away).

23Jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle

The science center was also doing lots of performance modeling and monitoring work, including extensive profiling of configuration and workload (eventually morphing into capacity planning). One such was an APL analytical system model, offered on HONE as the Performance Predictor, where branch people could enter a customer's configuration and workload information and ask "what-if" questions about changes to configuration and/or workload. A version of the Performance Predictor was also used internally inside HONE for load-balancing across the single-system-image complex.
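
The Performance Predictor's APL is long gone; purely as a flavor of that kind of "what-if" arithmetic, here is a toy closed-network mean-value-analysis model (my sketch; the service-demand numbers and function names are invented, not anything from HONE):

  # Toy "what-if" model: exact MVA for a closed queueing network, one entry
  # per service station (e.g. CPU, disk), demands in seconds per transaction.
  def mva(demands, n_users, think_time):
      queue = [0.0] * len(demands)           # avg queue seen at each station
      resp = thruput = 0.0
      for n in range(1, n_users + 1):        # add users one at a time
          waits = [d * (1 + q) for d, q in zip(demands, queue)]
          resp = sum(waits)                  # response time per transaction
          thruput = n / (think_time + resp)  # transactions per second
          queue = [thruput * w for w in waits]
      return resp, thruput

  # what-if: 40 users, 10s think time; then halve CPU demand (faster CPU)
  print(mva([0.10, 0.035], 40, 10.0))        # base: CPU 100ms, disk 35ms
  print(mva([0.05, 0.035], 40, 10.0))        # after the hypothetical upgrade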

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

After joining IBM, one of my hobbies was enhanced production operation systems for internal datacenters and HONE was one of my long time customers (1st CP/67 and then VM/370 ... after I had moved a lot of stuff from CP/67 to VM/370 originally for Release 2).

In the early 70s, IBM had the "Future System" effort, completely different from 370 and intended to completely replace it. Internal politics was killing off 370 efforts (the lack of new 370s during the period is credited with giving the clone 370 makers their market foothold). Then when FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x&308x efforts. The 3033 started off as the 168 logic remapped to 20% faster chips. For the 303x channel director, they took a 158 engine with just the integrated channel microcode (and no 370 microcode). A 3031 was two 158 engines, one with just the 370 microcode and a 2nd with just the integrated channel microcode (303x channel director). A 3032 was a 168 reworked to use the 303x channel director for external channels.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

Possibly the rise of clone 370s also resulted in the decision to start charging for kernel software, and a bunch of my enhancements were selected as the initial guinea pig (I had to spend time with planners & lawyers on kernel charging policies). I had previously done automated benchmarking (changing configuration and workload); when I first started the migration from CP67 to VM370, automated benchmarks were the 1st thing moved, and then, because VM370 would constantly crash, the next changes were a bunch of CP67 integrity features. For the first release, I ran 2000 benchmarks (taking 3 months elapsed time) with configuration and workload changes methodically selected to cover the range of real live environments. A version of the Performance Predictor was modified to predict the result of each benchmark and then compare the actual results with the prediction for the first 1000 benchmarks. Then for the 2nd 1000 benchmarks, the Performance Predictor would select the benchmarks, searching for possibly anomalous combinations.
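
Restated in modern terms, the validation loop looked something like the sketch below (run_benchmark and predict are hypothetical stand-ins; the real thing drove scripted users against a live system and an APL model):

  # Sketch of the benchmark-vs-model validation loop (stubs are hypothetical)
  import random

  def predict(cfg):                          # stand-in for the analytic model
      return cfg["users"] * cfg["cpu_demand"]

  def run_benchmark(cfg):                    # stand-in for a measured run
      return predict(cfg) * random.uniform(0.85, 1.20)

  def validate(configs, tolerance=0.10):
      # flag runs deviating from prediction by more than the tolerance;
      # these become candidates for the next, anomaly-hunting round
      return [cfg for cfg in configs
              if abs(run_benchmark(cfg) - predict(cfg)) / predict(cfg) > tolerance]

  grid = [{"users": u, "cpu_demand": d} for u in (10, 40, 80) for d in (0.05, 0.10)]
  print(len(validate(grid)), "of", len(grid), "runs deviated more than 10%")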

fairshare/wheeler scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare
automated benchmark posts
https://www.garlic.com/~lynn/submain.html#benchmark

trivia: It was initially available for Release3-PLC9 as the "Dynamically Adaptive Resource Manager" (or "wheeler scheduler"), but 90% of the initial code was just general cleanup ... including kernel rework needed for multiprocessor support (but not the actual multiprocessor support). Then for Release 4, they decided to release multiprocessor support ... but it was dependent on the redesign in my charged-for code (and the initial charging policy was that hardware support would still be free, and free software couldn't require charged-for software). They eventually decided to move 90% of the "wheeler scheduler" code into the "free" base ... so they could ship free multiprocessor support (and the Release 4 "wheeler scheduler" price stayed the same as Release 3's).

Besides "wheeler scheduler" as guinea pig for charged for kernel software, Endicott also con'ed me into help with ECPS for 138/148 ... archived post with initial analysis
https://www.garlic.com/~lynn/94.html#21

Early 80s, I get permission to give presentations on how ECPS was done at the monthly user group BAYBUNCH meetings hosted by Stanford SLAC. After presentations, the Amdahl people would quiz me for more information. They say they had done MACROCODE (370-like instructions running in microcode mode) initially to quickly respond to the myriad of 3033 microcode changes required to run MVS ... and were in the process of implementing HYPERVISOR.

Now, some of the VM370 group that had been transferred to POK for MVS/XA had done the VMTOOL, simple virtual machine support for MVS/XA development (never intended for customers). Then they found that customers weren't converting to MVS/XA as planned, while Amdahl was having some success being able to run MVS & MVS/XA concurrently with its HYPERVISOR. With the Amdahl success, they decide to release VMTOOL, 1st as VM/MA (migration aid) and then as VM/SF (system facility). An internal Rochester sysprog adds XA/370 support to VM/370, which Endicott looks at releasing. POK counters with a proposal for a couple-hundred-person group to upgrade VMMA/VMSF to the feature, function, and performance of VM/370 (POK wins).

some posts mentioning VM/MA, VM/SF, & "SIE"
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#82 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2019b.html#78 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA

Trivia: I took a two credit-hr intro to fortran/computers (the univ. had a 709/1401). Then within a year of taking the intro class, the univ. hires me fulltime responsible for os/360. The univ. had been sold a 360/67 for tss/360 to replace the 709/1401, but tss/360 never came to production fruition, so it ran as a 360/65 w/os360. The univ. shutdown the datacenter on weekends and I would have the whole place dedicated to myself (although monday classes were hard after 48hrs w/o sleep). Student fortran jobs ran under a sec. on the 709 (tape->tape), but initially ran well over a minute on the 360/65. I install HASP and it cuts the time in half. Then I start redoing STAGE2 SYSGEN so I could run it in the production jobstream and also re-org statements to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds, to 12.9secs. It never gets better than the 709 until I install Univ. of Waterloo WATFOR.
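
For flavor, the arm-seek half of that re-org can be illustrated with the classic "organ-pipe" placement (hottest data in the middle cylinders so expected arm travel is smallest); a sketch with made-up dataset names and reference rates, not the actual SYSGEN ordering:

  # Organ-pipe placement sketch: hottest datasets end up center-most in the
  # cylinder ordering, minimizing expected seek distance (illustrative only).
  from collections import deque

  def organ_pipe(datasets):                  # datasets: (name, refs/sec)
      layout = deque()
      for i, ds in enumerate(sorted(datasets, key=lambda d: -d[1])):
          if i % 2 == 0:
              layout.appendleft(ds)          # alternate sides, hottest first
          else:
              layout.append(ds)
      return list(layout)                    # left-to-right cylinder order

  print(organ_pipe([("SYS1.LINKLIB", 50), ("SYS1.SVCLIB", 30),
                    ("SYS1.PROCLIB", 10), ("PAYROLL", 5), ("SCRATCH", 1)]))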

Then the science center comes out to install CP67 (3rd install after the science center itself and MIT Lincoln Labs) ... and I mostly get to play with it on weekends, rewriting loads of CP67 code. Six months later, the science center is having a CP67/CMS class at the Beverly Hills Hilton. I arrive Sunday and get asked if I could teach the CP67 class (the people that were going to teach it had given notice the Friday before, leaving to form NCSS). Archived post w/part of a SHARE presentation on both the OS/360 and CP67 work, initially CP67 pathlength for running OS/360 in a virtual machine: the test jobstream ran 322 seconds on the real machine and initially 856secs under CP/67 (CP67 534secs CPU). After a few months, I have it reduced to 435secs (CP/67 113secs CPU, a reduction of 534-113=421secs CPU).
https://www.garlic.com/~lynn/94.html#18

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO's office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter is the largest in the world, something like a couple hundred million dollars in IBM 360s, with 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO (who only has a 360/30 up at Boeing Field for payroll, although they enlarge the machine room for a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join the science center instead of staying at Boeing.

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA

Early 80s, I also have the HSDT project (T1 and faster computer links), was working with the NSF director, and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and finally NSF releases the RFP (calling for full T1, in part based on what we already had running). 28Mar1986 preliminary announce:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing, precursor to social media, inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP/director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

The IBM communication group was fiercely fighting off client/server and distributed computing and trying to block release of mainframe TCP/IP support. Possibly because of influential customers, that got changed; the communication group then changed their strategy and said that since they had corporate strategic ownership of everything that crossed datacenter walls, it had to be released through them. What ships gets 44kbyte/sec aggregate throughput using nearly a whole 3090 processor. I then add support for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, get full sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed). Later, in the 90s, the communication group hires a silicon valley contractor to implement TCP/IP support directly in VTAM. He initially demos with TCP running much faster than LU6.2. He is then told that everybody knows a "proper" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.
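
A back-of-envelope version of the "500 times" figure (the 44kbyte/sec and ~1mbyte/sec channel numbers are from above; the MIPS ratings and CPU fractions are my assumptions, just to show the arithmetic):

  # bytes moved per instruction executed (MIPS figures assumed, not official)
  base_bytes = 44_000                # base VM TCP/IP aggregate, bytes/sec
  base_instr = 15_000_000            # assume ~15 MIPS 3090 CPU, nearly all consumed
  rfc_bytes  = 1_000_000             # assume ~1 mbyte/sec sustained 4341 channel
  rfc_instr  = 1_200_000 * 0.5       # assume 4341 ~1.2 MIPS, modest (~half) used

  improvement = (rfc_bytes / rfc_instr) / (base_bytes / base_instr)
  print("%.0fx" % improvement)       # on the order of 500x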

RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Mid-80s, the communication group presents a study to the corporate executive committee that customers wouldn't want T1 support before well into the 90s. They had looked at customer 37x5 "fat pipes" (multiple 56kbit links running in parallel, treated as a single logical link) and found installations dropped to zero by 6or7 56kbit links. What they didn't know (or didn't want analyzed) was that T1 telco tariffs at the time were about the same as 5or6 56kbit links. In a trivial survey, we found 200 customers that had just switched to full T1 (using non-IBM controllers).
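
The tariff arithmetic shows the blind spot (T1 = 1.544mbit/sec is standard; the 5or6-link tariff equivalence is from above):

  # fat-pipe crossover arithmetic
  t1_bps, link_bps = 1_544_000, 56_000
  print("T1 = %.0f x 56kbit links of capacity" % (t1_bps / link_bps))  # ~28x
  # but the T1 tariff was about the same as 5or6 56kbit links, so any shop
  # needing 6+ parallel links was already paying T1 money; they switched to
  # full T1 (non-IBM gear) and so vanished from a survey that only counted
  # 37x5 multi-link fat pipes.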

The communication group was also spreading misinformation internally that SNA/VTAM could be used for NSFNET. Somebody collects the executive misinformation emails and forwards them to us. Archived post with some, heavily clipped and redacted (to protect the guilty):
https://www.garlic.com/~lynn/2006w.html#email870109

Also mid-80s, I get con'ed into turning out a VTAM/NCP, implemented by a baby bell on IBM Series/1, as an IBM Type1 product, with enormously more feature, function, and performance (all resources owned by distributed simulated VTAMs out in the Series/1s, using cross-domain protocol to the "real" host VTAMs). Some of the baby bell's spring 1986 presentation to the COMMON user group:
https://www.garlic.com/~lynn/99.html#70
Part of my fall 1986 pitch to the SNA ARB in Raleigh in this archived post:
https://www.garlic.com/~lynn/99.html#67
I took the real live baby bell deployment with 64k terminals and ran it through the HONE 3275 configurator for comparison. The communication group repeatedly said the comparison wasn't valid, and every time I showed it was done with their own HONE 3275 configurator (they didn't want to be confused with facts). What the communication group does next to kill the project can only be described as truth being stranger than fiction.

post about co-worker (responsible for the IBM internal network and technology also used for the corporate sponsored univ BITNET) at the science center in the 70s and then we transfer to IBM San Jose Research in 1977
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
more info in this post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA

1980: STL (since renamed SVL) was bursting at the seams and moving 300 people from the IMS group to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote 3270" support but found the human factors totally unacceptable. I then get con'ed into channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg, with no discernible vm/370 response difference between offsite and inside STL.

Then there was an attempt to get it released to customers, but a group in POK playing with some serial stuff was afraid it would make it more difficult to get their stuff released, and got it vetoed. Then in 1988, the IBM branch office asks me to help LLNL (national lab) get some serial stuff standardized, which quickly becomes the fibre-channel standard, "FCS" (including some stuff I had done in 1980); initially 1gbit, full-duplex, 2gbit aggregate (200mbyte/sec). Then in 1990, the POK group gets their stuff released with ES/9000 as ESCON (17mbyte/sec), when it is already obsolete.

FICON trivia: some POK engineers eventually get involved in FCS and define a heavy-weight protocol that radically reduces the throughput, which eventually ships as FICON. The most recent public benchmark I can find is the z196 "Peak I/O" that gets 2M IOPS using 104 FICON (running over 104 FCS). About the same time, there is a new FCS announced for E5-2600 blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON).
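
Per-link, the published claims above work out as follows:

  # per-link IOPS from the benchmark claims above
  z196_iops, ficon_links = 2_000_000, 104
  fcs_per_link = 1_000_000                     # "over a million" IOPS claim
  ficon_per_link = z196_iops / ficon_links
  print("FICON: %.0f IOPS/link" % ficon_per_link)              # ~19,231
  print("new FCS vs FICON: %.0fx per link" % (fcs_per_link / ficon_per_link))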

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON support
https://www.garlic.com/~lynn/submisc.html#ficon

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

1980 STL/SVL vm/370 logo for offsite bldg
https://www.garlic.com/~lynn/vmhyper.jpg

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#90 CP/67, VM/370, VM/SP, VM/XA

The 3033 started out as the 168 logic remapped to 20% faster chips. For the channel director, they took a 158 engine w/the integrated channel microcode (and w/o the 370 microcode). A 3031 was two 158 engines, one with just the 370 microcode and a 2nd, for the channel director, with just the integrated channel microcode. Note: the 165 microcode averaged 2.1 processor cycles per 370 instruction; that was improved to an avg of 1.6 cycles per instruction for the 168, and to an avg of one cycle per instruction for the 3033.
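
Those cycles-per-instruction figures bound the generation-to-generation speedup; a simple calculation (arithmetic only, ignoring memory and I/O effects):

  # 370 instruction rate ~ clock_rate / cycles-per-instruction (CPI)
  cpi_168, cpi_3033 = 1.6, 1.0          # averages from the text above
  clock_ratio = 1.2                     # "20% faster chips"
  # upper bound on 3033 vs 168 throughput from clock and CPI alone (~1.9x)
  print("3033 vs 168, CPI+clock only: %.1fx" % (clock_ratio * cpi_168 / cpi_3033))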

After the FS project imploded (that was going to completely replace 370), there was mad rush to get stuff back into product pipelines ... including kicking off the quick&dirty 303x&308x efforts in parallel
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

After transferring to San Jose Research in 1977, I wandered around lots of silicon valley ... including disk engineering and product test across the street. They had been doing pre-scheduled, around-the-clock, stand-alone testing ... they claimed to have tried MVS, but it had a 15min mean-time-between-failure in that environment. I rewrite the I/O supervisor so it is bullet-proof and never fails ... so they can do any amount of on-demand concurrent testing. Bldg15 gets the 1st engineering 3033 (#3or#4) outside POK engineering, and we put up a private online service on it (since testing only takes a percent or two of the processor). The channel director was periodically hanging and had to be manually reset/re-impl'ed. We then find a hack: if I quickly hit all six channels with CLRCH, it would re-impl itself.

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

other 3033 trivia: after FS implodes, I get sucked into a 16-processor 370 effort ... that everybody thought was great ... and we suck the 3033 engineers into it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Then somebody tells the head of POK that it could be decades before the POK favorite-son operating system (MVS) has effective 16-way support (POK doesn't ship a 16-processor machine until after the turn of the century) ... and some of us get invited to never visit POK again (the 3033 processor engineers are told heads-down on 3033 only, although they do have me sneak back into POK periodically). After the 3033 is out the door, they start on trout/3090.

SMP, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DASD 3380

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD 3380
Date: 24 Sept, 2023
Blog: Facebook
I had been pontificating that CKD was a 60s technology trade-off between scarce memory resources and abundant I/O resources, and that by the mid-70s the trade-off was starting to invert. In the early 80s, I did a memo showing that between 360 announce and then, relative disk system throughput had declined by an order of magnitude (systems had gotten 40-50 times faster while disks had only gotten 3-5 times faster). A disk executive took exception and assigned the disk division performance group to refute my claims. After a couple weeks, they came back and said I had somewhat understated the problem. This was respun for a SHARE presentation about configuring disks for improving system throughput; 16Aug1984, SHARE 63, B874.
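
The order-of-magnitude claim is just the ratio of the two speedups (using midpoints of the ranges above):

  # relative disk system throughput, 360 announce -> early 80s
  cpu_speedup, disk_speedup = 45, 4      # midpoints of "40-50x" and "3-5x"
  print("disks kept up by a factor of %.2f" % (disk_speedup / cpu_speedup))
  # ~0.09: an order-of-magnitude decline in disk throughput relative to the
  # system's ability to generate I/O requests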

We had discussions in the SHARE performance groups about the 3330->3380 migration ... capacity grew several times more than throughput increased (divide accesses/sec by mbytes, for accesses per mbyte per second) ... in the 3330->3380 migration, you needed to hold the 3380 to 80% capacity to maintain the same accesses/mbyte/sec. Lots of SHARE discussion that the bean counters didn't understand the issue, and jokes about IBM selling 3880 microcode for "fast 3380s" at a higher price ... i.e., by limiting the number of cylinders that can be accessed.
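
The access-density bookkeeping behind that 80% figure can be written out directly (the growth ratios below are round illustrative numbers, not measured 3330/3380 specs):

  # access density = accesses/sec per mbyte; if capacity grows faster than
  # throughput, only part of the new drive can be filled at the old density
  def usable_fraction(throughput_growth, capacity_growth):
      return throughput_growth / capacity_growth

  # illustrative: 3x the capacity but only 2.4x the accesses/sec
  print("fill to %.0f%% of capacity" % (100 * usable_fraction(2.4, 3.0)))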

Note: original 3380 had 20 track spacings between each data track. That was cut in half for 3380E (and number of data cylinders doubled) and then cut again with number of data cylinders tripled for 3380K. Then IBM did offer (what we joked about at SHARE) a 3380J ... a 3380K limited to 1/3rd the number of cylinders (same as original 3380, but disk arm only has to move 1/3rd the distance).

some posts mentioning SHARE 63, B874
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance

--
virtualization experience starting Jan1968, online at home since Mar1970

lotsa money and data sizes, Solving the Floating-Point Conundrum

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: lotsa money and data sizes, Solving the Floating-Point Conundrum
Newsgroups: comp.arch
Date: Sun, 24 Sep 2023 15:52:47 -1000
Michael S <already5chosen@yahoo.com> writes:
IIRC, z10 was IBM's last "native CISC" design in the zArch series. Starting from z196 they crack load-op into 2 or more uOps, just like the majority of x86 cores do. It's hard to be sure, because the terminology used by IBM is so unique.

The other thing about z10->z196 was the claim that at least half of the per-processor throughput increase was the introduction of out-of-order execution, branch prediction, etc ... I also assumed it implied moving to micro-ops ...

past posts mentioning z10->z196 per processor improvement
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#12 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#64 Mainframes
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021i.html#2 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2019d.html#63 IBM 3330 & 3380
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018f.html#12 IBM mainframe today
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2016f.html#91 ABO Automatic Binary Optimizer
https://www.garlic.com/~lynn/2016f.html#36 z/OS Operating System size
https://www.garlic.com/~lynn/2016b.html#103 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015h.html#81 IBM Automatic (COBOL) Binary Optimizer Now Availabile
https://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015.html#43 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer
https://www.garlic.com/~lynn/2014h.html#68 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#67 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014c.html#96 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#62 Optimization, CPU time, and related issues
https://www.garlic.com/~lynn/2014.html#97 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2013m.html#35 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013l.html#70 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2013j.html#86 IBM unveils new "mainframe for the rest of us"
https://www.garlic.com/~lynn/2013g.html#93 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013b.html#6 mainframe "selling" points
https://www.garlic.com/~lynn/2012n.html#44 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2012n.html#14 System/360--50 years--the future?

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 24 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#90 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#91 CP/67, VM/370, VM/SP, VM/XA

other CP67 trivia: the CP67 delivered to the univ. had 1052 and 2741 terminal support, including being able to dynamically identify what kind of terminal was on a line and switch the IBM telecommunication controller's terminal-type port scanner for each line. The univ. had some number of TTY/ASCII terminals (the ascii/tty port scanner had previously arrived in a Heathkit box), so I added TTY/ASCII support to CP67, able to switch the port scanner terminal type among 1052, 2741, and TTY. I then wanted to have a single dial-in phone number (hunt group)
https://en.wikipedia.org/wiki/Line_hunting

I can dynamically switch the port scanner terminal type for each line, but IBM had taken a short-cut and hardwired the line speed (so it doesn't quite work). The univ. starts a project to implement a clone controller: build a channel interface board for an Interdata/3 programmed to emulate an IBM terminal control unit, with the addition that it can do dynamic line speed (later enhanced to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces; Interdata, and later Perkin-Elmer, sell it as an IBM clone controller) ... four of us get written up as responsible for (some part of) the clone controller business
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer
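
The dynamic-identification loop amounted to: for each incoming line, try each (terminal type, line speed) pair and keep the first that answers cleanly. A hypothetical sketch (class and method names invented; the real mechanism was reprogramming the controller's port scanner), showing why the hardwired line speed broke the single hunt group:

  # hypothetical sketch of dynamic terminal identification
  CANDIDATES = [("1052", 134.5), ("2741", 134.5), ("TTY", 110)]  # bits/sec

  def identify(line, speed_is_hardwired=False):
      for term_type, speed in CANDIDATES:
          line.set_scanner_type(term_type)   # CP67 could switch this per line
          if speed_is_hardwired and speed != line.wired_speed:
              continue                       # the IBM short-cut: speed can't change
          line.speed = speed
          if line.probe(term_type):          # e.g. write an idle/attn sequence
              return term_type, speed
      return None

  class Line:                                # dummy stand-in for a scanner port
      wired_speed = 134.5
      def set_scanner_type(self, t): pass
      def probe(self, t): return t == "TTY"  # pretend a TTY is dialed in

  print(identify(Line()))                           # ('TTY', 110): works
  print(identify(Line(), speed_is_hardwired=True))  # None: TTY at 110 never tried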

clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

other trivia: the 360 was originally supposed to be an ASCII machine ... but the ASCII unit record gear wasn't ready, so they were going to (temporarily) extend BCD (refs gone 404, but live on at the wayback machine)
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM

some posts mentioning Bob Bemer ASCII, clone controller and CP67 TTY terminal support:
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2017g.html#109 Online Terminals
https://www.garlic.com/~lynn/2016h.html#71 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016e.html#0 Is it a lost cause?
https://www.garlic.com/~lynn/2014g.html#24 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2011n.html#45 CRLF in Unix being translated on Mainframe to x'25'
https://www.garlic.com/~lynn/2011n.html#5 Any candidates for best acronyms?

--
virtualization experience starting Jan1968, online at home since Mar1970

A new Supreme Court case could trigger a second Great Depression

From: Lynn Wheeler <lynn@garlic.com>
Subject: A new Supreme Court case could trigger a second Great Depression
Date: 26 Sept, 2023
Blog: Facebook
A new Supreme Court case could trigger a second Great Depression. America's Trumpiest court handed down a shockingly dangerous decision. The Supreme Court is likely, but not certain, to fix it.
https://www.vox.com/scotus/2023/9/23/23864355/supreme-court-cfpb-unconstitutional-consumer-financial-fifth-circuit-great-depression

Jan1999, I was asked to help try and block the coming economic mess (we failed). I was given background on how some investment bankers had walked away "clean" from the S&L crisis, were then running Internet IPO mills (invest a few million, then IPO for a few billion; the companies needed to fail to leave the field clear for the next round), and were predicted to next get into "securitized mortgages".

One of my tasks was to improve the integrity of mortgage supporting documentation ... but they then found that they could pay the rating agencies for triple-A ratings on securitized mortgages (even when the rating agencies knew they weren't worth triple-A, from Oct2008 congressional testimony) and sell them into the bond market, being able to do $27T during 2001-2008. Triple-A ratings enable doing no-documentation "liar loans" ... the triple-A rating "trumps" documentation, and with no documents, there is no issue of documentation integrity.

From the law of unintended consequences: the largest fines in the economic mess were for the robo-signing operations, fabricating the (missing) documentation needed for foreclosures and other legal activity. In theory the fines were to go to a fund for aiding the victims of the mortgage scams ... but some of the institutions set up to administer the victim-aid funds were being run by some of the same people behind the economic mess (and little of the funds actually went to the victims).

Congress felt they had to show that they were doing something for the public/voters (obfuscating that many had participated in the economic mess) and passed the Dodd-Frank/CFPB legislation, but severely kneecapped. Lobbyists from the TBTF banks would provide draft CFPB legislation, congress would release copies to the press, and then the same lobbyists would supply the press with criticism of the drafts (a tactic for discrediting the legislation). Another tactic was making the wording so obtuse that it would be nearly impossible for regulatory agencies to come up with regulations to implement the legislation. Another was making the burden apply equally to all institutions, so small community banks were extremely overburdened and brought legal actions (further discrediting the process).

Banks' Lobbyists Help in Drafting Financial Bills
http://dealbook.nytimes.com/2013/05/23/banks-lobbyists-help-in-drafting-financial-bills/
How Wall Street Defanged Dodd-Frank; Battalions of regulatory lawyers burrowed deep in the federal bureaucracy to foil reform.
http://www.thenation.com/article/174113/how-wall-street-defanged-dodd-frank
Josh Rosner on How Dodd Frank Institutionalizes Too Big to Fail
http://www.nakedcapitalism.com/2013/05/josh-rosner-on-how-dodd-frank-institutionalizes-too-big-to-fail.html
Dodd-Frank/CFPB
https://en.wikipedia.org/wiki/Dodd-Frank_Wall_Street_Reform_and_Consumer_Protection_Act

economic mess" posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
(Triple-A rated) toxic CDOs posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
S&L crisis post
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
regulatory capture
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
ZIRP funds
https://www.garlic.com/~lynn/submisc.html#zirp
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

--
virtualization experience starting Jan1968, online at home since Mar1970

Fracking Fallout: Is America's Drinking Water Safe?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Fracking Fallout: Is America's Drinking Water Safe?
Date: 26 Sept, 2023
Blog: Facebook
Fracking Fallout: Is America's Drinking Water Safe?
https://www.nakedcapitalism.com/2023/09/fracking-fallout-is-americas-drinking-water-safe.html

Ohio Injection Wells Suspended Over 'Imminent Danger' to Drinking Water. Environmental groups have called for a suspension of all Class II wells injecting into the Ohio shale for over a decade, describing the shale as "holier than a Swiss cheese."
https://insideclimatenews.org/news/13092023/ohio-injection-wells-suspended-over-imminent-danger-to-drinking-water/
Ohio Environmentalists, Oil Companies Battle State Over Dumping of Fracking Wastewater. Advocates for both sides say public drinking water may be tainted by underground leaks of "produced water."
https://insideclimatenews.org/news/14052023/ohio-pennsylvania-fracking-wastewater/
Pincushion America: The irretrievable legacy of drilling everywhere on drinking water
https://resourceinsights.blogspot.com/2012/06/pincushion-america-irretrievable-legacy.html
Exemptions for fracking under United States federal law
https://en.wikipedia.org/wiki/Exemptions_for_fracking_under_United_States_federal_law

regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture

some past fracking posts
https://www.garlic.com/~lynn/2023c.html#83 $209bn a year is what fossil fuel firms owe in climate reparations
https://www.garlic.com/~lynn/2021g.html#12 The fracking boom is over. Where did all the jobs go?
https://www.garlic.com/~lynn/2021e.html#42 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2021c.html#26 Fighting to Go Home: Operation Desert Storm, 30 Years Later
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
https://www.garlic.com/~lynn/2019e.html#143 "Undeniable Evidence": Explosive Classified Docs Reveal Afghan War Mass Deception
https://www.garlic.com/~lynn/2019e.html#114 Post 9/11 wars have cost American taxpayers $6.4 trillion, study finds
https://www.garlic.com/~lynn/2019e.html#105 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#85 Just and Unjust Wars
https://www.garlic.com/~lynn/2019e.html#70 Since 2001 We Have Spent $32 Million Per Hour on War
https://www.garlic.com/~lynn/2019e.html#67 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#58 Homeland Security Dept. Affirms Threat of White Supremacy After Years of Prodding
https://www.garlic.com/~lynn/2019e.html#26 Radical Muslim
https://www.garlic.com/~lynn/2018b.html#65 Doubts about the HR departments that require knowledge of technology that does not exist
https://www.garlic.com/~lynn/2017b.html#5 Trump to sign cyber security order
https://www.garlic.com/~lynn/2016f.html#24 Frieden calculator
https://www.garlic.com/~lynn/2016b.html#40 Here's The New Study The Fracking Industry Doesn't Want You to See
https://www.garlic.com/~lynn/2016.html#43 Thanks Obama
https://www.garlic.com/~lynn/2015d.html#54 The Jeb Bush Adviser Who Should Scare You
https://www.garlic.com/~lynn/2014h.html#83 Wastewater well suspended after "frackquakes" rock Colorado

other posts mentioning big oil
https://www.garlic.com/~lynn/2023c.html#81 $209bn a year is what fossil fuel firms owe in climate reparations
https://www.garlic.com/~lynn/2023.html#35 Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's
https://www.garlic.com/~lynn/2022g.html#89 Five fundamental reasons for high oil volatility
https://www.garlic.com/~lynn/2022g.html#21 'Wildfire of disinformation': how Chevron exploits a news desert
https://www.garlic.com/~lynn/2022f.html#16 The audacious PR plot that seeded doubt about climate change
https://www.garlic.com/~lynn/2022e.html#69 India Will Not Lift Windfall Tax On Oil Firms Until Crude Drops By $40
https://www.garlic.com/~lynn/2022d.html#96 Goldman Sachs predicts $140 oil as gas prices spike near $5 a gallon
https://www.garlic.com/~lynn/2022c.html#117 Documentary Explores How Big Oil Stalled Climate Action for Decades
https://www.garlic.com/~lynn/2021i.html#28 Big oil's 'wokewashing' is the new climate science denialism
https://www.garlic.com/~lynn/2021g.html#72 It's Time to Call Out Big Oil for What It Really Is
https://www.garlic.com/~lynn/2021g.html#16 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021g.html#13 NYT Ignores Two-Year House Arrest of Lawyer Who Took on Big Oil
https://www.garlic.com/~lynn/2021g.html#3 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2018d.html#112 NASA chief says he changed mind about climate change because he 'read a lot'
https://www.garlic.com/~lynn/2014m.html#27 LEO
https://www.garlic.com/~lynn/2013e.html#43 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012e.html#30 Senators Who Voted Against Ending Big Oil Tax Breaks Received Millions From Big Oil
https://www.garlic.com/~lynn/2012d.html#61 Why Republicans Aren't Mentioning the Real Cause of Rising Prices at the Gas Pump
https://www.garlic.com/~lynn/2007s.html#67 Newsweek article--baby boomers and computers

--
virtualization experience starting Jan1968, online at home since Mar1970

My Gun Has A Plane

From: Lynn Wheeler <lynn@garlic.com>
Subject: My Gun Has A Plane
Date: 26 Sept, 2023
Blog: Facebook
Fairchild Republic A-10 Thunderbolt II ("My Gun Has A Plane")
https://en.wikipedia.org/wiki/Fairchild_Republic_A-10_Thunderbolt_II

Boyd redid the F15 design (originally a swing-wing follow-on to the F111) ... he eliminated the swing-wing and cut the weight nearly in half. He was then behind the YF16 & YF17 (which became the F16 & F18) and helped with the A10. A New Conception of War
https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/Books-by-topic/MCUP-Titles-A-Z/A-New-Conception-of-War/
PDF->kindle, loc1783-88:
Boyd's collaboration with associate Pierre Sprey on the development of the A-10 close air support (CAS) aircraft sparked his exploration of history. The project was Sprey's, with Sprey consulting Boyd on performance analysis, E-M Theory, and views on warfare in general. When designing the A-10, Sprey had to determine what aircraft features provided the firepower and loiter time required by ground forces, while also granting survivability against the enemy ground fire that would inevitably be directed against it. The German Wehrmacht had pioneered both the design and employment of dedicated CAS aircraft in World War II.

... snip ...

Another Boyd acolyte (Burton) was a graduate of the first USAF academy class and on the fast track to general when, he says, Boyd destroyed his career by challenging him to do what was right; he later wrote a book
https://www.amazon.com/Pentagon-Wars-Reformers-Challenge-Guard-ebook/dp/B00HXY969W/
HBO turned it into a movie
https://en.wikipedia.org/wiki/The_Pentagon_Wars
related NYT article: Corrupt from top to bottom
https://www.nytimes.com/1993/10/03/books/corrupt-from-top-to-bottom.html

decade earlier, Boyd told the story that they spent 18 months making sure that Spinney was covered in all details in congressional testimony (all details were "authorized"); however, SECDEF blamed Boyd for the article and wanted him banned from the Pentagon and transferred to Alaska. Boyd had congressional cover and the SECDEF directive was rescinded a week later. Gone behind a paywall, but mostly free at the wayback machine
https://web.archive.org/web/20070320170523/http://www.time.com/time/magazine/article/0,9171,953733,00.html
also
https://content.time.com/time/magazine/article/0,9171,953733,00.html

Burton would also say that he got the cost of the 30mm DU shell down from nearly $100 to $13.

... then Desert Storm: the Pentagon claimed precision bombing was 100 times better than WW2 and needed only 1/100th the bombs to do the same amount of damage. Note that Desert Storm was 43 days, and only the last 100 hrs was land war. The GAO Desert Storm air effectiveness study had A10s firing over a million 30mm DU shells (@$13/shell, $13M total) and 5000 Maverick missiles (@$144,000 each, $720M total). The DU shells were so effective that Iraqi crews were walking away from their tanks (as sitting ducks; later descriptions of fierce tank battles with coalition forces taking no damage don't mention whether the Iraqi tanks had anybody home). There was also a problem with the Mavericks that accounted for some number of friendly fire deaths (friendly fire deaths from precision bombing also occurred in the current wars).
http://www.gao.gov/products/NSIAD-97-134
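
The unit costs and round counts above multiply out as below (a quick sanity-check sketch of the figures quoted in this post, not numbers taken from the GAO report itself):

# back-of-the-envelope totals for the A-10 figures quoted above
# (counts and unit costs as quoted in this post, illustrative only)
du_shells     = 1_000_000     # 30mm DU rounds fired
du_cost       = 13            # dollars/round (after Burton's cost reduction)
mavericks     = 5_000         # Maverick missiles fired
maverick_cost = 144_000       # dollars/missile

print(f"DU total:       ${du_shells * du_cost:,}")         # $13,000,000
print(f"Maverick total: ${mavericks * maverick_cost:,}")   # $720,000,000
print(f"cost ratio:     {mavericks * maverick_cost / (du_shells * du_cost):.0f}x")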

Boyd was credited with the "Desert Storm" left-hook ... and there have been lots of explanations why the Abrams M1s weren't in position to cut off the retreating Republican Guard ... another could be that Boyd was using official M1 specs and didn't realize how tightly tethered they were to supply & maintenance.

Boyd posts and WEB URLs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

posts mentioning GAO air effectiveness study
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022d.html#15 Russia's most advanced tank in service was obliterated by Ukraine just days after it was deployed, according to reports
https://www.garlic.com/~lynn/2021k.html#91 This Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25%
https://www.garlic.com/~lynn/2021k.html#32 Twelve O'clock High at IBM Training
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021h.html#80 Warthog/A-10
https://www.garlic.com/~lynn/2021f.html#51 Martial Arts "OODA-loop"
https://www.garlic.com/~lynn/2021e.html#40 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2019e.html#83 Collins radio and Braniff Airways 1945
https://www.garlic.com/~lynn/2018e.html#57 NATO is a Goldmine for the US/Military Industrial Complex
https://www.garlic.com/~lynn/2018d.html#101 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018c.html#2 FY18 budget deal yields life-sustaining new wings for the A-10 Warthog
https://www.garlic.com/~lynn/2018b.html#108 The Iraq War continues to divide the U.S. public, 15 years after it began
https://www.garlic.com/~lynn/2018b.html#79 What the Gulf War Teaches About the Future of War
https://www.garlic.com/~lynn/2017j.html#73 A-10
https://www.garlic.com/~lynn/2016f.html#102 Chain of Title: How Three Ordinary Americans Uncovered Wall Street's Great Foreclosure Fraud
https://www.garlic.com/~lynn/2015f.html#42 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015b.html#59 A-10
https://www.garlic.com/~lynn/2015b.html#16 Keydriven bit permutations
https://www.garlic.com/~lynn/2014h.html#90 Friden Flexowriter equipment series
https://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014f.html#68 A-10 Attack Jets Rack Up Air-to-Air Kills in Louisiana War Game
https://www.garlic.com/~lynn/2014f.html#46 The Pentagon Wars
https://www.garlic.com/~lynn/2014d.html#2 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#83 11 Years to Catch Up with Seymour

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Tapes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Tapes
Date: 28 Sept, 2023
Blog: Facebook
I had a 60s & 70s archive on tapes in the tape library at the new IBM Almaden Research bldg ... when they had an operations problem where an operator was mounting random tapes as "scratch" ... which destroyed the following tapes of mine (some of the cambridge/cp67 files were from when I was an undergraduate in the 60s):
001018 10FILES, CP/67 SOURCE & SYSTEM
001381 8FILES, CAMBRIDGE WHEELER ARCHIVE
001642 8FILES, CMS SYSTEM & MY FILES
001720 8FILES, RESOURCE MANAGER 3.7, CMS, MISC
001822 10FILES, CP/67 SOURCE & SYSTEM
001954 1FILE, RESOURCE MANAGER 3.4 LISTINGS
002090 10FILES, CP/67 SOURCE & SYSTEM
002826 1FILE, CP2.0 SOURCE
004376 5FILES, CSC/VM 2.15 + LOCAL
004789 8FILES, CAMBRIDGE ARCHIVE

not long before the tape library problem, Melinda had asked if I had an archive of the original multi-level source update implementation ... I was able to pull it off the archive tapes and email a copy to Melinda ... and then, "poof", it was all gone. After that I started keeping my own private copies.

Melinda's history web site:
https://www.leeandmelindavarian.com/Melinda#VMHist
old email in archived posts
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

trivia: in the 60s, IBM rental charges were based on the "system meter" (which ran anytime any CPU and/or channel was doing something). The science center and some of the commercial online CP67 spin-offs had done a lot of work on offshift cost reduction for 7x24 availability: no humans, dark room, automated operator, allowing the system meter to stop when the system was idle. This included channel programs that allowed the channel to go to sleep, but immediately wake up when any characters arrived. more trivia: the complete system had to be idle continuously for 400ms to allow the "system meter" to stop ... and long after IBM had switched from rent to purchase, MVS still had a timer event that woke up every 400ms (guaranteeing the "system meter" would never stop).
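
A minimal sketch of the meter rule described above (my own illustration, not IBM code): the meter only stops after 400ms of continuous idle across all CPUs and channels, so any periodic event with a period of 400ms or less keeps it running forever:

# toy simulation of the 60s "system meter" rule (illustrative only)
IDLE_THRESHOLD_MS = 400

def metered_time(busy_events, horizon_ms):
    """busy_events: sorted ms timestamps when any CPU/channel does work.
    Returns ms the system meter keeps running over [0, horizon_ms]."""
    metered, last_busy = 0, None
    for t in busy_events + [horizon_ms]:
        if last_busy is not None:
            # the meter keeps running until 400ms of idle has elapsed
            metered += min(t - last_busy, IDLE_THRESHOLD_MS)
        last_busy = t
    return metered

# mostly idle system: meter stops between the rare busy events
print(metered_time([0, 10_000], 20_000))                   # 800
# MVS-style 400ms timer pop: the meter never sees 400ms of idle
print(metered_time(list(range(0, 20_000, 400)), 20_000))   # 20000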

Other trivia: I had done CMSBACK in the 70s, initially for SJR (before it moved up the hill to the new Almaden facility) and for the consolidated US HONE datacenters up in Palo Alto (HONE originated for branch office SEs to practice their operating system skills in CP67 virtual machines, but CMS\APL based sales & marketing applications came to dominate all activity, and HONE centers were propagated world-wide; also, when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former US consolidated HONE datacenters). CMSBACK then propagated to several other internal datacenters. A co-worker did a user interface that allowed users to search their backed-up files and request specific copies (requiring automated tape mount requests). Later in the 80s, PC & workstation agents/clients were done and it was released to customers as WDSF. Then GPD/Adstar picked it up and renamed it ADSM. Later, when the disk division was being eliminated, ADSM was transferred to Tivoli and renamed TSM.

backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

archived recent posts mentioning CMSBACK
https://www.garlic.com/~lynn/2023e.html#81 Storage Management
https://www.garlic.com/~lynn/2023c.html#104 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2022f.html#94 Foreign Language
https://www.garlic.com/~lynn/2022c.html#85 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022c.html#41 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022c.html#32 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022.html#65 CMSBACK
https://www.garlic.com/~lynn/2022.html#61 File Backup
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021j.html#22 IBM 3420 Tape
https://www.garlic.com/~lynn/2021h.html#100 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021h.html#88 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021g.html#89 Keeping old (IBM) stuff
https://www.garlic.com/~lynn/2021g.html#2 IBM ESCON Experience
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021.html#26 CMSBACK, ADSM, TSM

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Tapes

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Tapes
Date: 28 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes

I took a two credit hr intro to fortran/computers and at the end of the semester was hired to redo 1401 MPIO for the 360/30. The univ had been sold a 360/67 (for tss/360) to replace its 709/1401. Temporarily, pending the 360/67, the 1401 was replaced with a 360/30 ... which had 1401 emulation and could have continued to run 1401 MPIO ... I guess I was part of getting 360 experience. The univ shut down the datacenter on weekends and they let me have the whole place dedicated (48hrs w/o sleep, making monday classes a little hard). They gave me a bunch of hardware and software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks I had a 2000 card assembler program (an assembler option generated either 1) a stand-alone version with the BPS loader or 2) a version running under OS/360 with system services macros).
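
For context, MPIO was the unit-record front-end job: spool card decks to tape and print/punch output tapes. A minimal sketch of that copy loop (my illustration in Python, with in-memory files standing in for the reader, tape, and printer; the real thing was 2000 cards of 360 assembler with its own interrupt handlers and device drivers):

# toy sketch of the MPIO copy loops: card reader -> tape, tape -> printer
# (io.StringIO objects stand in for the devices; illustrative only)
import io

def cards_to_tape(reader, tape):
    # each card is an 80-byte record; pad/truncate to 80 columns
    for card in reader:
        tape.write(card.rstrip("\n").ljust(80)[:80] + "\n")

def tape_to_printer(tape, printer):
    # copy tape records to 1403 print lines
    for record in tape:
        printer.write(record.rstrip() + "\n")

reader  = io.StringIO("FIRST CARD\nSECOND CARD\n")
tape    = io.StringIO()
printer = io.StringIO()

cards_to_tape(reader, tape)
tape.seek(0)
tape_to_printer(tape, printer)
print(printer.getvalue())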

First thing I learned coming in on weekends was to clean all the tape drives (7trk 200bpi, 556bpi, 800bpi and 9trk 800bpi), disassemble the 2540 reader/punch and clean it, and clean the 1403. Sometimes when I arrived, production had finished early, the room was dark and everything had been powered off. Sporadically, the 360/30 wouldn't complete power-on ... and with some trial and error, I learned to put all the controllers in CE-mode, power on the 360/30, power on the individual controllers, and then take the controllers out of CE-mode. Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for OS/360 (TSS/360 never came to production fruition).

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services; I thought the Renton datacenter was possibly the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room and install a 360/67 for me to play with when I'm not doing other stuff).

some recent posts mentioning Univ. (709/1401, 360/30, 360/67, os/360), Boeing CFO and Renton datacenter
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 29 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#90 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA

... the 360/195 was a single-processor-only machine ... there were no 195 multiprocessor, tightly-coupled systems (although 195 MVT supported multiprogramming/multiple regions). I've periodically mentioned that not long after joining IBM (at the IBM Science Center), the 195 group asked me to help "hyper/multi-thread" the 370/195 (aka a 360/195 with a few of the new 370 instructions & instruction retry added). The end of ACS/360 (IBM executives were afraid it would advance the state of the art too fast and IBM would lose control of the market; Amdahl left IBM shortly after ACS/360 was killed)
https://people.cs.clemson.edu/~mark/acs_end.html

has reference to dual i-stream patents (aka "Sidebar: Multithreading"). The end of the article also lists ACS/360 features that show up more than 20yrs later with ES/9000. The 195 had a 64 instruction pipeline and supported out-of-order execution ... but no branch prediction or speculative execution ... as a result, conditional branches drained the pipeline and most codes ran at only half the 195's rated throughput. Going to hyperthreading with two/dual i-streams simulating a two processor machine, each i-stream running at half the processor rate, might keep the machine fully utilized ... ignoring the fact that 360/65MP MVT (and then VS2/MVS) was rarely able to run a multiprocessor at twice single-processor throughput because of the operating system's serialized multiprocessing locking; frequently only 1.2-1.5 times the throughput, NOT two times (see my previous reference about it possibly being decades before the POK favorite son operating system/MVS had effective 16-way support).
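
To make the throughput reasoning concrete, a back-of-the-envelope sketch using only the numbers in this post (half-rate single i-stream, 1.2-1.5 times MP factor):

# why dual i-stream looked attractive on a 195, and why MVT/MVS MP
# overhead undercut it (numbers from the post above; illustrative)
single_stream = 0.5    # branch-heavy code ran at ~half of 195 rated speed

# two i-streams, each at half rate, could keep the pipeline full:
hardware_aggregate = 2 * single_stream          # ~1.0 x rated speed

# but the OS saw a 2-processor machine, and MVT/MVS two-processor
# throughput was only 1.2-1.5 times one processor (locking overhead):
effective = [f * single_stream for f in (1.2, 1.5)]
print(hardware_aggregate)   # 1.0 x rated
print(effective)            # [0.6, 0.75] x rated -- well short of 1.0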

... note that after deciding to add virtual memory to all 370s, it was decided to stop any new work on the 195 ... since it would be too hard to retrofit virtual memory to the 195 (halting the 195 multithreading/dual i-stream work)

trivia: a decade ago, I was asked to try and track down the justification for adding virtual memory to all 370s. It turns out that (basically) MVT storage management was so bad that MVT (concurrently executing) regions had to be specified four times larger than typically used. As a result, a normal 1mbyte 370/165 would only have four regions ... not enough to keep the 165 processor sufficiently utilized to justify the machine. Going to 16mbyte virtual memory would allow increasing the number of regions by a factor of four with little or no paging. old archive post with pieces of the email x-change
https://www.garlic.com/~lynn/2011d.html#73
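
A sketch of the region arithmetic (using the numbers above: regions specified 4x larger than typically used, 1mbyte real memory):

# MVT region arithmetic from the post above (illustrative)
real_memory = 1024              # kbytes: 1mbyte 370/165
spec_size   = 256               # kbytes: region size as specified
used_size   = spec_size // 4    # ~64kbytes actually touched (4x padding)

regions_now  = real_memory // spec_size   # 4 regions fit in real memory
# with 16mbyte virtual memory, the region *specification* no longer
# consumes real memory; only the pages actually touched do:
regions_virt = real_memory // used_size   # 16 regions, little/no paging
print(regions_now, regions_virt)          # 4 16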

trivia2: ... after the other models had the full 370 virtual memory architecture implemented ... and software was starting to implement support ... the 165 group started complaining that the 370 virtual memory announce would have to slip six months if they had to implement the full 370 virtual memory architecture. After a lot of back & forth, it was decided to drop back to the 165 "subset" ... and the other models also had to drop back to the 165 subset (and the software was reworked for the 165 subset).

trivia3: more recently, documentation claimed that half of the per-processor throughput improvement from z10->z196 was due to the introduction of out-of-order execution, branch prediction, speculative execution, etc ... aka compensation for cache miss and memory latency (features that have been in other platforms for decades). There are claims that cache miss (memory) latency, when measured in count of processor cycles, is comparable to 1960s disk latency measured in count of 1960s processor cycles (i.e. memory is the new disk).

SMP, multiprocessing, tightly-coupled and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

a few posts mention 195 dual i-stream
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2016c.html#3 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013i.html#33 DRAM is the new Bulk Core
https://www.garlic.com/~lynn/2013c.html#67 relative speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2008c.html#92 CPU time differences for the same job
https://www.garlic.com/~lynn/2006s.html#21 Very slow booting and running and brain-dead OS's?

--
virtualization experience starting Jan1968, online at home since Mar1970

Mobbed Up

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mobbed Up
Date: 29 Sept, 2023
Blog: Facebook
remember, Trump went bankrupt so many times that they said no US bank would touch him ... his son then said it didn't matter because they could get all the money they needed from the Russians ... we don't need no stinkin' US banks, we have the Russians.

Eric Trump in 2014: 'We have all the funding we need out of Russia'
https://thehill.com/homenews/news/332270-eric-trump-in-2014-we-dont-rely-on-american-banks-we-have-all-the-funding-we
ERIC TRUMP REPORTEDLY BRAGGED ABOUT ACCESS TO $100 MILLION IN RUSSIAN MONEY. "We don't rely on American banks. We have all the funding we need out of Russia."
https://www.vanityfair.com/news/2017/05/eric-trump-russia-investment-golf-course
How Russian Money Helped Save Trump's Business. After his financial disasters two decades ago, no U.S. bank would touch him. Then foreign money began flowing in.
https://foreignpolicy.com/2018/12/21/how-russian-money-helped-save-trumps-business/
Trump's oldest son said a decade ago that a lot of the family's assets came from Russia
https://www.businessinsider.com/donald-trump-jr-said-money-pouring-in-from-russia-2018-2
Here are 18 reasons Trump could be a Russian asset
https://www.washingtonpost.com/opinions/here-are-18-reasons-why-trump-could-be-a-russian-asset/2019/01/13/45b1b250-174f-11e9-88fe-f9f77a3bcb6c_story.html

when I 1st moved from the west coast to Boston, I was told about local elections: in the rest of the country, mobbed up candidates would get fewer votes, but in Boston they could get more. Now it seems whole swaths of the country will vote for a con man and mobbed up candidate.

Donald Trump committed 'repeated' fraud by inflating real estate value, New York judge rules
https://www.ft.com/content/14be758b-b730-4981-807c-64a434978e37
Trump's business empire could collapse 'like falling dominoes' after ruling
https://www.theguardian.com/us-news/2023/sep/27/trump-new-york-real-estate-reaction-fraud
'Fire sale prices': Biographer predicts Trump 'may soon be personally bankrupt' and could see his assets 'liquidated'
https://www.rawstory.com/fire-sale-prices-biographer-predicts-trump-may-soon-be-personally-bankrupt-and-could-see-his-assets-liquidated/
Donald Trump liable for fraud and Trump Organization's business certification canceled, New York judge rules
https://www.cnn.com/2023/09/26/politics/trump-organization-business-fraud/index.html

... and

Trump's bid for Sydney casino 30 years ago rejected due to 'mafia connections'. Cabinet documents reveal police warned NSW government about approving a 1986-87 plan to build city's first casino in Darling Harbour
https://www.theguardian.com/us-news/2017/aug/16/trumps-bid-for-sydney-casino-30-years-ago-rejected-due-to-mafia-connections
Trump's 'Mafia Connections' Excluded Him From An Australian Casino Deal
https://www.huffingtonpost.com.au/2017/09/06/trumps-mafia-connections-excluded-him-from-an-australian-casino-deal_a_23199578/
Donald Trump's 'mafia connections' blocked his bid to open Sydney casino 30 years ago
https://www.cnbc.com/2017/08/16/trump-mafia-connections-blocked-bid-to-open-sydney-casino-30-years-ago.html
Trump's Alleged "Mafia Connections" Lost Him a Bid to Build Sydney's First Casino
https://www.newsweek.com/trumps-alleged-mafia-connections-sydney-casino-651352
Report: Queasy Aussies Killed Trump's Casino Bid Over "Mafia Connections". Government documents label Trump's Atlantic City operations "dangerous."
https://www.motherjones.com/politics/2017/08/report-queasy-aussies-killed-trumps-casino-bid-over-mafia-connections/

some related past posts
https://www.garlic.com/~lynn/2021h.html#21 A Trump bombshell quietly dropped last week. And it should shock us all
https://www.garlic.com/~lynn/2021g.html#59 Report: Prosecutors Have Obtained Damning Information Allegedly Implicating Trump in His Company's Crimes
https://www.garlic.com/~lynn/2021c.html#30 The Supreme Court Finally Lets the Light Shine on Trump
https://www.garlic.com/~lynn/2021.html#24 Trump Tells Georgia Official to Steal Election in Recorded Call
https://www.garlic.com/~lynn/2019e.html#72 CIA's top lawyer made 'criminal referral' on complaint about Trump Ukraine call
https://www.garlic.com/~lynn/2018f.html#56 Too Rich to Jail
https://www.garlic.com/~lynn/2018e.html#89 White-Collar Criminals Got Off Scot-Free After the 2008 Financial Crisis

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

CP/67, VM/370, VM/SP, VM/XA

From: Lynn Wheeler <lynn@garlic.com>
Subject: CP/67, VM/370, VM/SP, VM/XA
Date: 29 Sept, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#90 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#94 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA

The 3033 was the 165/168 group ... and the 308x was more like the 155/158 group. The 308x was really warmed-over FS technology with an enormous number of circuits (enough to build 16 168s); the ratio of circuits to performance was much higher than anything else of the period (the huge increase in the number of circuits has been claimed as the motivation for TCMs, cramming so many circuits into a smaller physical volume). The 3081D was two processors, each supposedly slightly faster than a 3033, but some benchmarks had each processor slower than a 3033. IBM then quickly doubled the cache size, claiming about 40% faster, for the 3081K (and benchmarks had each processor faster than a 3033). However, the single processor Amdahl had about the same MIP rate as claimed for the aggregate of the two processor 3081K ... and much higher throughput than MVS on a 3081K (at the time, MVS documentation was claiming two processor MVS throughput was 1.2-1.5 times a single processor, because of the significant MVS overhead for multiprocessor operation). A lot more detail about FS & 3081
http://www.jfsowa.com/computer/memo125.htm

FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled processor
https://www.garlic.com/~lynn/subtopic.html#smp

Note going into the early 80s, there was a program to move the huge number of different internal CISC microprocessors to common 801/RISC: the 4361&4381 follow-ons to the 4331&4341, the AS/400 follow-on to S36&S38, lots of controller chips, etc (motivation included having a single programming environment instead of a unique one for each different CISC microprocessor architecture). For various reasons all those programs floundered, and things reverted to (different) CISC microprocessors (and some RISC engineers left for RISC programs at other vendors). I contributed to an Endicott white paper that VLSI technology had advanced so far that it was nearly possible to implement the complete 370 architecture directly in silicon (rather than having a microprocessor that implemented 370 in microcode, which was averaging ten native instructions per 370 instruction, meaning the 801/RISC MIP rate would have to be ten times the target 370 MIP rate) ... so the 4381 was 370 in VLSI silicon.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc etc posts
https://www.garlic.com/~lynn/subtopic.html#801

... and for the fun of it, old email (in archived post) about 85/165/168/3033/trout (i.e. 3090) "all" the same machine
https://www.garlic.com/~lynn/2019c.html#email810423

also mentions that Endicott (4381) is moving from mid-range into high-end

trivia: then about the same time, IBM Boeblingen was doing a 3chip 370 with the performance of a 168 ... and somehow one of the German clone makers had come into possession of the detailed specs. An Amdahl visitor was shown the spec and immediately confiscated it, saying that (legally) it had to be immediately returned to IBM. He delivered it to me (in Silicon Valley) and I had to get it to IBM Boeblingen. I actually had a proposal to see how many Boeblingen 3chip 370 systems I could cram into a rack.

some IBM Boeblingen "ROMAN" 3chip 370 posts
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#41 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2021h.html#99 Why the IBM PC Used an Intel 8088
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2019c.html#48 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing

--
virtualization experience starting Jan1968, online at home since Mar1970

3090 & 3092

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3090 & 3092
Date: 30 Sept, 2023
Blog: Facebook
3092 service processor trivia: IBM FE had a "scoping" incremental diagnostic process ... starting with the lowest level components. Starting with the 3081 TCMs, it was no longer possible to directly scope components ... so the service processor was invented, with a huge number of probes into the TCMs. This was a "UC" processor with 3310 disk ... however a whole monitor had to be developed, with device drivers, interrupt handlers, its own error recovery, storage management, etc. About the same time, the 3033 processor engineers moved over to trout/3090 ... the 3090 service processor group was formed and I got to know the group's manager fairly well. Instead of inventing a whole new operating system for the 3092 ... he decided to scaffold it off VM370/CMS (with all the console screens done in CMS IOS3270).

Originally, the 3092 was going to be a 4331 ... but it was then upgraded to a pair of redundant 4361s. Some old email from the 3092 group about my problem analyzer written in REX(X):
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

3090 ... note the 3090 required two 3370 FBA DASD (for the 3092), even for MVS installations, even though MVS never had any FBA support.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
The IBM 3092 processor controller, which has two processors to monitor and control the processor complex and to provide maintenance support. An upgrade to the 3092 is required when upgrading to the 3090 model 400.

The IBM 3180 model 145 display station, which attaches to the 3092. A minimum of two display stations are required for the model 200 and three for the model 400 to provide the operator and service personnel with access to the processor complex.


... snip ...

further archived post in the same thread
https://www.garlic.com/~lynn/2010e.html#44
that talks about TCMs, service processors, and SIE instruction

a couple recent posts mentioning 3092, 3090, TCMs, service processors and SIE instruction
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022.html#98 Virtual Machine SIE instruction

--
virtualization experience starting Jan1968, online at home since Mar1970

What Will the US M1A1 Abrams Tanks Bring to the Ukrainian Battlefield?

From: Lynn Wheeler <lynn@garlic.com>
Subject: What Will the US M1A1 Abrams Tanks Bring to the Ukrainian Battlefield?
Date: 03 Oct, 2023
Blog: Facebook
What Will the US M1A1 Abrams Tanks Bring to the Ukrainian Battlefield?
https://www.armyrecognition.com/ukraine_-_russia_conflict_war_2022/what_will_the_us_m1a1_abrams_tanks_bring_to_the_ukrainian_battlefield_.html
The delivery of the M1A1 Abrams tanks to Ukraine underscores the US's commitment to supporting Ukraine during these challenging times. While the Abrams brings formidable capabilities to the table, its effectiveness will largely depend on how it's deployed and adapted to the unique challenges of the Ukrainian battlefield.

.... snip ...

Battle for Baqubah: Killing Our Way Out
https://www.amazon.com/Battle-Baqubah-1SG-Robert-Colella/dp/1469791064/
The Battle for Baqubah: Killing Our Way Out is a firsthand account-and sometimes a minute-by-minute tale-of a raw, in-your-face street fight with Al Qaeda militants over a fifteen-month span in the volatile Diyala Province of Iraq. This story is presented through the eyes of a first sergeant serving with B Company 1-12 Cavalry (Bonecrushers), 1st Cavalry Division, out of Fort Hood, Texas. The author takes the reader into the midst of the conflict in and around Baqubah-Iraq's "City of Death"-a campaign that lasted most of 2007. The author and his fellow Bonecrushers watched as the city went from sectarian fighting amongst the Shiite and Sunnis, to an all-out jihad against the undermanned and dangerously dispersed US forces within Baqubah and the outlying areas.

... snip ...

... claims that Abrams were so vulnerable to IEDs that they took to running the route 1st before taking an M1 out for a drive (the administration said that things were getting better, but this was worse than Fallujah)

loc5243-54:
I was overwhelmed at the amount of destruction that surrounded me. The sterile yard was about 150 meters wide by about 100 meters deep, and it was packed full of destroyed vehicles (words can't describe what I saw). Apache Company's blown-up and burned M88 was down there and barely recognizable. Several M1 tanks sat where they had been dragged in and dropped in place. Some still had the tow bar hooked up to them. They sat on their belly armor because their road wheels and track were blown off. They rested in the dirt crooked and in awkward positions, their heavy steel track rolled up and placed on top of their turrets, which housed their once-proud 120mm gun tubes that now sagged and pointed down to the ground. I saw row after row of Bradleys, some sitting as the tanks were and others that were not even recognizable. The Bradleys burned so violently from the stored ammunition and the 175 gallons of fuel that they melted to the point where the turret collapsed in on itself and came to rest on the bottom of the vehicle's armored floor. The only thing truly recognizable was the heavy steel fluted gun barrel of the 25mm that protruded out of the melted rubble like a flag pole without a flag. I saw other Bradleys and M1 Abrams main battle tanks, the pride of the 1st Cavalry Division-vehicles that, if back at Fort Hood, would be parked meticulously on line, tarps tied tight, gun barrels lined up, track line spotless, not so much as a drop of oil on the white cement. What I saw that day was row after row of mangled tan steel as if in a junkyard that belonged to Satan himself.

... snip ....

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

posts mentioning Baqubah and Abrams
https://www.garlic.com/~lynn/2021g.html#8 Donald Rumsfeld, The Controversial Architect Of The Iraq War, Has Died
https://www.garlic.com/~lynn/2018f.html#89 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018d.html#96 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018b.html#81 What the Gulf War Teaches About the Future of War
https://www.garlic.com/~lynn/2018.html#12 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2017j.html#73 A-10
https://www.garlic.com/~lynn/2017j.html#2 WW II cryptography
https://www.garlic.com/~lynn/2017c.html#42 Profitable Companies, No Taxes: Here's How They Did It
https://www.garlic.com/~lynn/2016f.html#102 Chain of Title: How Three Ordinary Americans Uncovered Wall Street's Great Foreclosure Fraud
https://www.garlic.com/~lynn/2016f.html#81 The baby boomers' monumental quagmire in Iraq
https://www.garlic.com/~lynn/2016.html#88 The Pentagon's Pricey Culture of Mediocrity
https://www.garlic.com/~lynn/2015h.html#33 The wars in Vietnam, Iraq, and Afghanistan were lost before they began, not on the battlefields
https://www.garlic.com/~lynn/2015g.html#78 New hard drive
https://www.garlic.com/~lynn/2015b.html#16 Keydriven bit permutations
https://www.garlic.com/~lynn/2014m.html#48 LEO
https://www.garlic.com/~lynn/2014h.html#36 The Designer Of The F-15 Explains Just How Stupid The F-35 Is
https://www.garlic.com/~lynn/2014g.html#68 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014b.html#38 Can America Win Wars
https://www.garlic.com/~lynn/2014.html#79 Army Modernization Is Melting Down
https://www.garlic.com/~lynn/2014.html#61 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#42 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013n.html#10 Why the Death of the Tank Is Greatly Exaggerated
https://www.garlic.com/~lynn/2013k.html#48 John Boyd's Art of War
https://www.garlic.com/~lynn/2013e.html#5 Lessons Learned from the Iraq War
https://www.garlic.com/~lynn/2012i.html#2 Interesting News Article

--
virtualization experience starting Jan1968, online at home since Mar1970

A new Supreme Court case could trigger a second Great Depression

From: Lynn Wheeler <lynn@garlic.com>
Subject: A new Supreme Court case could trigger a second Great Depression
Date: 03 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#95 A new Supreme Court case could trigger a second Great Depression

How a Supreme Court Case Could Upend the Consumer Financial Protection Bureau
https://time.com/6320149/supreme-court-consumer-financial-protection-bureau/
Existential Threat to CFPB Spotlights Massive Stakes of New Supreme Court Term
https://www.commondreams.org/news/cfpb-supreme-court
Consumer Agency Hated by Republicans Is in Fight of Its Life at Supreme Court
https://finance.yahoo.com/news/supreme-court-weighs-fate-consumer-090000494.html

some past posts mentioning consumer financial protection bureau
https://www.garlic.com/~lynn/2019.html#20 Trump CFPB Plans Obscene Change to Payday Lender Rule
https://www.garlic.com/~lynn/2017j.html#64 Wages and Productivity
https://www.garlic.com/~lynn/2017j.html#59 Wall Street Wants to Kill the Agency Protecting Americans From Financial Scams
https://www.garlic.com/~lynn/2017j.html#58 Wall Street Wants to Kill the Agency Protecting Americans From Financial Scams

economic mess" posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

DataTree, UniTree, Mesa Archival

From: Lynn Wheeler <lynn@garlic.com>
Subject: DataTree, UniTree, Mesa Archival
Date: 03 Oct, 2023
Blog: Facebook
Congress passed legislation encouraging gov. labs to spin off technology for commercial use, as part of making the US more competitive.

LANL had done a supercomputer filesystem using an MVS system as a sort of disk controller ... and spun it off as DataTree.

LLNL spun off their Cray filesystem ... as Unitree

DataTree and UniTree ref:
https://ieeexplore.ieee.org/document/113582

NCAR spun off their supercomputer archive system as "Mesa Archival". The NCAR system had a 4341 managing a bunch of disks and tapes, connected to the supercomputers and disk farm with NSC HYPERchannel. I was sort of the IBM expert on HYPERchannel ... and so would periodically get calls for help from the IBM branch office. The 4341 could receive filesystem r/w requests (over HYPERchannel) from the supercomputers. The 4341 would create DASD channel programs and download them into a HYPERchannel A515 (370 channel emulator; if necessary first retrieving data from tape and writing it to disk), returning the channel program identifier to the requesting supercomputer. The supercomputer would then contact the A515 to invoke the specific preloaded channel program (which would transfer data directly between DASD and supercomputer memory).
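
The three-party flow sketched as a toy simulation (my reconstruction of the prose above; the function names and structure are made up, not NCAR's code):

# toy simulation of the NCAR flow: 4341 file server builds and preloads
# channel programs; supercomputer invokes them by id (illustrative only)
programs = {}                        # channel programs held in the A515

def a515_download(ccws):             # A515: 370 channel emulator
    prog_id = len(programs) + 1
    programs[prog_id] = ccws
    return prog_id

def a515_execute(prog_id):           # invoked directly by the supercomputer;
    # data moves DASD <-> supercomputer memory, never through the 4341
    return [f"ran {ccw}" for ccw in programs.pop(prog_id)]

def fileserver_4341(filename):       # 4341: would first stage tape->disk
    ccws = [f"SEEK {filename}", f"READ {filename}"]   # if necessary
    return a515_download(ccws)       # id returned over HYPERchannel

prog_id = fileserver_4341("/archive/model-output")
print(a515_execute(prog_id))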

As part of the disk division countermeasures to the communication group, the GPD/Adstar VP had invested in Mesa Archival's port from 4341 to RS/6000 and use of IBM disks ... and he would periodically ask us to drop by "Mesa Archival" (and others) to provide any assistance. NCAR Mesa Archival reference
https://www.cbinsights.com/company/mesa-archival-systems
Who are the investors of Mesa Archival Systems?

Investors of Mesa Archival Systems include IBM Ventures and Hill Carman Ventures.


History: Late 80s, a senior disk engineer got a talk scheduled at an internal, world-wide, annual communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a fall in disk sales, with data fleeing datacenters to more distributed-computing-friendly platforms. The disk division was coming up with solutions that were constantly being vetoed by the communication group. The issue was that the communication group had a stranglehold on mainframe datacenters with their corporate strategic ownership of everything that crossed datacenter walls, and were fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm and install base). As a partial countermeasure to communication group opposition, GPD/Adstar was investing in distributed computing startups that would use IBM disks.

In conjunction with GPD/Adstar, we were also funding a port of UniTree to our HA/CMP product (high availability RS/6000).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

It originally started out as HA/6000 for the NYTimes to port their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP after I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, sybase, informix, ingres). Early Jan1992, we have a joint meeting with Oracle, where AWD VP Hester tells Ellison that IBM will have 16-system clusters by mid-92 and 128-system clusters by ye-92. Then end of Jan1992, cluster scale-up is transferred for announce as (technical/scientific *ONLY*) IBM supercomputer and we are told we can't work on anything with more than four processors (we leave IBM a few months later).

NSC HYPERchannel ref
https://en.wikipedia.org/wiki/HYPERchannel
https://en.wikipedia.org/wiki/Network_Systems_Corporation
related
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
posts mentioning communication group fiercely fighting off client/server and distributed computing (in part trying to preserve their dumb terminal paradigm)
https://www.garlic.com/~lynn/subnetwork.html#terminal
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts specifically mentioning "Mesa Archival"
https://www.garlic.com/~lynn/2023c.html#19 IBM Downfall
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021g.html#2 IBM ESCON Experience
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2019e.html#116 Next Generation Global Prediction System
https://www.garlic.com/~lynn/2018d.html#41 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2017k.html#63 SABRE after the 7090
https://www.garlic.com/~lynn/2017k.html#50 Can anyone remember "drum" storage?
https://www.garlic.com/~lynn/2017e.html#25 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017b.html#67 Zero-copy write on modern motherboards
https://www.garlic.com/~lynn/2017b.html#63 Zero-copy write on modern motherboards
https://www.garlic.com/~lynn/2015g.html#26 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015e.html#47 GRS Control Unit ( Was IBM mainframe operations in the 80s)
https://www.garlic.com/~lynn/2015c.html#68 30 yr old email
https://www.garlic.com/~lynn/2014b.html#15 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2012p.html#9 3270s & other stuff
https://www.garlic.com/~lynn/2012k.html#46 Slackware
https://www.garlic.com/~lynn/2012i.html#47 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2012e.html#27 NASA unplugs their last mainframe
https://www.garlic.com/~lynn/2011n.html#34 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2011b.html#58 Other early NSFNET backbone
https://www.garlic.com/~lynn/2010m.html#85 3270 Emulator Software
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2010d.html#69 LPARs: More or Less?
https://www.garlic.com/~lynn/2009s.html#42 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009k.html#58 Disksize history question
https://www.garlic.com/~lynn/2008p.html#51 Barbless
https://www.garlic.com/~lynn/2007j.html#47 IBM Unionization
https://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2005e.html#15 Device and channel
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget

--
virtualization experience starting Jan1968, online at home since Mar1970

DataTree, UniTree, Mesa Archival

From: Lynn Wheeler <lynn@garlic.com>
Subject: DataTree, UniTree, Mesa Archival
Date: 05 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival

I had done a lot of mainframe NSC HYPERchannel support in 1980 and it was used at some internal IBM sites. NSC then tried to get IBM to release my support to customers, but there was a group in POK working on some serial stuff, and they were afraid that if my support was released to customers, it would make it harder to justify releasing their stuff. NSC then recoded my design from scratch for customers.

Later the 3090 product administrator tracked me down about the NSC support. It turned out that 3090 channels were designed to have an aggregate of 3-5 reported "channel errors" across all customers over a period of a year ... but the standard industry reporting (which collects EREP reports from customers and summarizes the data) showed 20 (instead of 3-5). They tracked it down and the extra errors were all at NSC HYPERchannel installations. It turned out that if I got a HYPERchannel error, I simulated a channel check error, which drove the (MVS & VM) error handler through some code to retry the operation. After a little investigation, I determined that simulating IFCC (interface control check) resulted in the same retry ... and I got NSC to change their code from simulating CC to IFCC.
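
A toy illustration of the change (my assumption, beyond what the post says, is that IFCC simply didn't count against the channel-error tally; the post only states that both simulated errors produced the same retry):

# illustrative only; not the actual driver or EREP code
channel_error_count = 0           # what industry EREP summaries tally

def simulate_error(kind):
    # both simulated errors send the OS error handler down the retry path
    global channel_error_count
    if kind == "CC":              # channel check: counts toward the 3090
        channel_error_count += 1  # design target of 3-5 reported errors/year
    return "retry"

print(simulate_error("CC"), channel_error_count)    # retry 1
print(simulate_error("IFCC"), channel_error_count)  # retry 1  (same recovery,
                                                    # no channel-check tally)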

Later I was working with NSC (and others) on the HIPPI (LANL was behind standardizing the Cray channel) and FCS standards (in 1988, the local IBM branch office had asked if I could help LLNL standardize some serial stuff they were playing with, which quickly becomes FCS) ... including HIPPI and FCS non-blocking switches for interconnecting large clusters of supercomputers with large DASD farms (both CPU<->CPU and CPU<->disk). We were planning on also using FCS switches for large HA/CMP clusters interconnecting large disk farms (CPU<->CPU & CPU<->disk; I also needed a special FCS switch feature for handling certain types of cluster member failure modes). Note the POK people finally get their stuff released in 1990 with ES/9000 as ESCON (17mbytes/sec, when it is already obsolete; FCS was initially full-duplex, 1gbit/sec, 200mbyte/sec aggregate).

Note, part of the communication group fiercely fighting off client/server and distributed computing was blocking the release of mainframe TCP/IP. Then apparently some influential customers got that changed, and the communication group changed its tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then added RFC1044 support (for NSC boxes), and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
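
The rough shape of the "bytes moved per instruction" arithmetic (only the 44kbytes/sec figure and the ~500x claim are from the post; the MIPS and 4341 throughput numbers below are assumptions picked purely for illustration):

# illustrative sketch; MIPS/throughput figures are assumptions
base_rate = 44 * 1024        # bytes/sec, shipped mainframe TCP/IP
base_mips = 15.0             # assumed: ~one 3090 processor, nearly 100% busy
new_rate  = 1 * 1024 * 1024  # assumed: sustained 4341 channel rate, bytes/sec
new_mips  = 1.2 * 0.5        # assumed: modest (~half) of a ~1.2 MIPS 4341

base = base_rate / (base_mips * 1e6)   # bytes moved per instruction
new  = new_rate / (new_mips * 1e6)
print(round(new / base))               # ~580x on these assumptions,
                                       # the same order as the ~500x claim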

HIPPI
https://en.wikipedia.org/wiki/HIPPI
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

My wife had been con'ed into being co-author of the IBM response to a request by a gov. agency for super secure network operation in a large distributed/campus environment ... where she included 3-tier architecture and lots of NSC gear (in the middle layer).
https://en.wikipedia.org/wiki/Network_Systems_Corporation
We were then out making executive presentations to large IBM commercial customers ... and having to deal with a constant barrage of misinformation from the communication group.

Note the NSC EN641 had a high-speed backplane and could support 16 Ethernet ports, mainframe channels, FDDI LANs, etc. Also, an AWD Austin engineer had taken the early ESCON spec and tweaked it (including full-duplex and about 10% faster) as the "SLA" for RS/6000. However, it only interoperated with other RS/6000s. We con NSC into adding SLA support to their EN641 so the RS/6000 had a high performance interoperable interface. The AWD engineer then wanted to start on an 800mbit version of SLA ... and we convince him to join the FCS standards committee instead.

This post
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
reference
https://en.wikipedia.org/wiki/High_Performance_Computing_Act_of_1991
and we were participating in the NII "testbed" meetings at LLNL (before the company kneecapped HA/CMP and we decided to leave IBM).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

... aka the project started out as HA/6000 for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with major RDBMS vendors (Oracle, Sybase, Informix, Ingres).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc, somerset, aim posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group fighting off client/server & distributed computing
https://www.garlic.com/~lynn/subnetwork.html#terminal
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

John Boyd and IBM Wild Ducks

From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and IBM Wild Ducks
Date: 05 Oct, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#2 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#32 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#60 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022g.html#24 John Boyd and IBM Wild Ducks
and
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/

Psychologists have finally figured out why your toxic colleagues climb to the top at work
https://www.fastcompany.com/90479073/psychologists-have-finally-figured-out-why-your-toxic-colleagues-climb-to-the-top-at-work
The success of toxic people is so common that there's a phrase for it: the "toxic career model." It goes like this: A toxic employee schmoozes and charms and politicks, which results in high job performance reviews from superiors. (Peers, meanwhile, often know the ugly truth.) All success revolves around social skills. And because the same socializing that can foster strong, healthy work relationships can also be used to deceive others, toxic colleagues are able to use their social skills for their own gain.

... snip ...

... especially fostered by amoral sociopaths.

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

DataTree, UniTree, Mesa Archival

From: Lynn Wheeler <lynn@garlic.com>
Subject: DataTree, UniTree, Mesa Archival
Date: 05 Oct, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival

Design Of A Computer
https://archive.computerhistory.org/resources/text/CDC/cdc.6600.thornton.design_of_a_computer_the_control_data_6600.1970.102630394.pdf
Thornton and Cray did cdc6600
https://en.wikipedia.org/wiki/CDC_6600
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation.[9][10]

...
Generally considered to be the first successful supercomputer, it outperformed the industry's prior record holder, the IBM 7030 Stretch, by a factor of three.[11][12]

...
With performance of up to three megaFLOPS,[13][14]

...
the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.[15]

...
The 6600 began to take form, with Cray working alongside Jim Thornton, system architect and "hidden genius" of the 6600.

.... Cray leaves to do Cray Research and Thornton leaves to do Network Systems, whose HYPERchannel enabled interconnecting clusters of supercomputers as well as supercomputer clusters sharing large disk farms.

The national lab systems were sort of like a combination of SMS (system managed storage) and NAS (network attached storage) used for supercomputers.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Posts mentioning Thornton
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2015h.html#10 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape
https://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#3 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2011d.html#29 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2009c.html#12 Assembler Question
https://www.garlic.com/~lynn/2007t.html#73 Remembering the CDC 6600
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2003j.html#18 why doesn't processor reordering instructions affect most
https://www.garlic.com/~lynn/2002i.html#13 CDC6600 - just how powerful a machine was it?

--
virtualization experience starting Jan1968, online at home since Mar1970

Internet Host Table, 4-Feb-88

From: Lynn Wheeler <lynn@garlic.com>
Subject: Internet Host Table, 4-Feb-88
Date: 06 Oct, 2023
Blog: Facebook
Internet Host Table, 4-Feb-88
http://pdp-10.trailing-edge.com/bb-ev83b-bm/01/new-system/hosts.txt
https://www.facebook.com/groups/internetoldfarts/permalink/857413832572503
NET : 4.0.0.0 : SATNET :
NET : 6.0.0.0 : YPG-NET :
NET : 7.0.0.0 : EDN-TEMP :
NET : 8.0.0.0 : BBN-NET-TEMP :
NET : 10.0.0.0 : ARPANET :
NET : 12.0.0.0 : ATT :
NET : 13.0.0.0 : XEROX-NET :
NET : 14.0.0.0 : PDN :
NET : 15.0.0.0 : HP-INTERNET :
NET : 18.0.0.0 : MIT-TEMP :

... and old archived email about IBM getting 9-net after Interop-88
https://www.garlic.com/~lynn/2006j.html#email881216

Interop-88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

inventing the internet post
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

