From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Numeric Intensive Date: 25 Sept, 2024 Blog: Facebook

After the Future System implosion, I got sucked into helping with a 16-processor multiprocessor and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16-way support (at the time, MVS documentation said that MVS 2-processor support only had 1.2-1.5 times the throughput of a single processor). Then some of us were invited to never visit POK again and the 3033 processor engineers were told heads down on 3033 and stop being distracted. POK doesn't ship a 16-processor machine until after the turn of the century. Once the 3033 was out the door, the processor engineers start on trout/3090. Later the processor engineers had improved 3090 scalar floating point processing so it ran as fast as memory (and complained vector was purely marketing since it would be limited by memory throughput).
A decade after the 16-processor project, Nick Donofrio approved our HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing numeric/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres, which had RDBMS VAXCluster support in the same source base with their Unix support).
IBM had been marketing a fault tolerant system as S/88 and the S/88 product administrator started taking us around to their customers ... and also got me to write a section for the corporate strategic continuous availability document (the section got pulled when both Rochester/AS400 and POK/mainframe complained that they couldn't meet the requirements). Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester told Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye92 ... however by end of Jan1992, cluster scale-up had been transferred for announce as IBM Supercomputer and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Complaints from the other IBM groups likely contributed to the decision.
(benchmarks are number of program iterations compared to reference
platform, not actual instruction count)
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2016MIPS, 128-system: 16,128MIPS
trivia: in the latter half of the 90s, the i86 processor chip vendors
do a hardware layer that translates i86 instructions into RISC
micro-ops for execution.
1999: single IBM PowerPC 440 hits 1,000MIPS
1999: single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003: single Pentium4 processor 9.7BIPS (9,700MIPS)
2010: E5-2600 XEON server blade, two chip, 16 processor, aggregate
500BIPS (31BIPS/processor)
The 2010-era mainframe was the 80-processor z196 rated at 50BIPS aggregate
(625MIPS/processor), 1/10th the XEON server blade.
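The arithmetic behind these comparisons is simple enough to check (all numbers are the ones quoted above; the cluster figures assume near-linear scale-up across systems):

```python
# Relative-MIPS arithmetic from the benchmark figures above.
# All numbers are the ones quoted in the post, not measured values.

es9000_982 = {"processors": 8, "aggregate_mips": 408}
rs6000_990_single = 126  # MIPS for one RS6000/990

# per-processor rate of the 8-way ES/9000-982
print(es9000_982["aggregate_mips"] / es9000_982["processors"])  # 51.0 MIPS/processor

# cluster scale-up figures assume near-linear scaling across systems
print(16 * rs6000_990_single)   # 2016 MIPS for a 16-system cluster
print(128 * rs6000_990_single)  # 16128 MIPS for a 128-system cluster

# 2010: z196 (80 processors, 50 BIPS) vs E5-2600 blade (16 processors, 500 BIPS)
print(50_000 / 80)    # 625.0 MIPS/processor for z196
print(500_000 / 16)   # 31250.0 MIPS/processor (~31 BIPS) for the blade
print(50 / 500)       # 0.1 -- z196 aggregate is 1/10th of the blade
```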
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM (Empty) Suits Date: 25 Sept, 2024 Blog: Facebook

I drank the kool-aid when I graduated and joined IBM ... got white shirts and 3-piece suits. Even though I was in the science center, lots of customers liked me to stop by and shoot the breeze, including the manager of one of the largest, all-blue financial dataprocessing centers on the east coast. Then the branch manager did something that horribly offended the customer and the customer announced they were ordering an Amdahl system (it would be the only one in a large sea of IBM systems) in retribution.
I was asked to go sit onsite for 6-12 months to obfuscate why the customer was ordering an Amdahl system (this was back when Amdahl was only selling into the technical/university market and had yet to crack the commercial market) as part of the branch manager's cover-up. I talked to the customer and they said they would enjoy having me onsite, but that it wouldn't change their decision to order an Amdahl machine, so I declined IBM's offer. I was told that the branch manager was a good sailing buddy of the IBM CEO, and if I didn't do it, I could forget having an IBM career, promotions, raises. That was the last time I wore suits for IBM. Later I found customers commenting it was a refreshing change from the usual empty suits.
This was after (earlier) CEO Learson had tried&failed to block the
bureaucrats, careerists and MBAs from destroying Watsons'
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
which was accelerated during the failing Future System project
"Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
some more FS detail:
http://www.jfsowa.com/computer/memo125.htm
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some past post mentioning "suit kool-aid":
https://www.garlic.com/~lynn/2023g.html#42 IBM Koolaid
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2021j.html#93 IBM 3278
https://www.garlic.com/~lynn/2021i.html#81 IBM Downturn
https://www.garlic.com/~lynn/2021.html#82 Kinder/Gentler IBM
https://www.garlic.com/~lynn/2018f.html#68 IBM Suits
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2018d.html#6 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: comp.os.linux.misc, alt.folklore.computers Date: Thu, 26 Sep 2024 07:49:14 -1000

Lynn Wheeler <lynn@garlic.com> writes:
trivia: 1972, CEO Learson tried (& failed) to block the bureaucrats,
careerists, and MBAs from destroying Watsons' culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
it was greatly accelerated during the failing Future System effort,
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in
the shadow of defeat ... But because of the heavy investment of face by
the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time, during
F/S, outspoken criticism became politically dangerous," recalls a
former top executive
... snip ...
more FS info
http://www.jfsowa.com/computer/memo125.htm
then 1992, IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (take-off on
AT&T "baby bells" breakup a decade earlier) in preparation for breakup
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the company breakup. Before we get started, the board brings in the former president of Amex as CEO, who (somewhat) reverses the breakup ... but it was a difficult time saving a company that was on the verge of going under ... IBM somewhat barely surviving as a financial engineering company
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Emulating vintage computers Newsgroups: alt.folklore.computers Date: Thu, 26 Sep 2024 08:39:41 -1000

Lars Poulsen <lars@beagle-ears.com> writes:
a little over a decade ago, I was asked to track down the IBM decision to add virtual memory to all 370s; found the staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used ... as a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Mapping MVT to a 16mbyte virtual memory would allow concurrent regions to be increased by a factor of four (capped at 15 by the 4bit storage protect keys) with little or no paging (aka VS2/SVS), sort of like running MVT in a CP/67 16mbyte virtual machine.
Late 80s, got approval for the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing numeric/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres, which had RDBMS VAXCluster support in the same source base with their Unix support).
IBM had been marketing a fault tolerant system as S/88 and the S/88 product administrator started taking us around to their customers ... and also got me to write a section for the corporate continuous availability strategy document; the section got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't meet the requirements.
Early Jan1992, in a meeting with the Oracle CEO, AWD/Hester told Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92 ... however by end of Jan1992, cluster scale-up had been transferred for announce as IBM Supercomputer and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Complaints from the other IBM groups likely contributed to the decision.
(benchmarks are number of program iterations compared to reference
platform, not actual instruction count)
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2016MIPS, 128-system: 16,128MIPS
trivia: in the latter half of the 90s, the i86 processor chip vendors do
a hardware layer that translates i86 instructions into RISC micro-ops
for execution.
1999: single IBM PowerPC 440 hits 1,000MIPS
1999: single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003: single Pentium4 processor 9.7BIPS (9,700MIPS)
2010: E5-2600 XEON server blade, two chip, 16 processor, aggregate
500BIPS (31BIPS/processor)
The 2010-era mainframe was the 80-processor z196 rated at 50BIPS aggregate
(625MIPS/processor), 1/10th the XEON server blade.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM (Empty) Suits Date: 27 Sept, 2024 Blog: Facebook

re:
After transfer to SJR got to wander around silicon valley, would
periodically drop in on TYMSHARE and/or see lots of people at the
monthly meetings sponsored by Stanford SLAC. In aug1976, TYMSHARE
makes their CMS-based online computer conferencing (precursor to
social media) "free" to (ibm user group) SHARE as VMSHARE ... archives
here:
http://vm.marist.edu/~vmshare
I cut a deal with TYMSHARE to get monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on internal systems and the internal network. One of the biggest problems was IBM lawyers concerned that internal employees would be contaminated by direct exposure to unfiltered customer information.
recent posts mentioning TYMSHARE, VMSHARE, SLAC:
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#64 Online Computer Conferencing
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023e.html#6 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023d.html#62 Online Before The Cloud
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM (Empty) Suits Date: 27 Sept, 2024 Blog: Facebook

re:
... also wandering around datacenters in silicon valley, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running 7x24, stand-alone, pre-scheduled testing and had mentioned trying MVS, but it had a 15min MTBF (requiring manual re-ipl). I offer to rewrite the I/O supervisor to enable any amount of concurrent, on-demand testing (greatly improving productivity). The downside was they started blaming me anytime they had a problem, and I had to spend an increasing amount of time playing disk engineer and diagnosing their problems. I then write an (internal only) research report and happen to mention the MVS 15min MTBF, bringing the wrath of the POK MVS group down on my head.
1980, STL (since renamed SVL) was bursting at the seams and was moving 300 people from the IMS group to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg, with no perceptible difference in human factors between offsite and in STL. A side-effect was that 168-3 system throughput increased by 10-15%; the 3270 controllers had been spread across all channels with the DASD controllers, and channel-extender significantly reduced channel busy (for the same 3270 terminal traffic) compared to the (really slow) 3270 controllers directly on the channels (improving DASD I/O throughput) ... and STL considered using channel-extender for all 3270s.
Then there was an attempt to release my support, but there was a group in POK playing with some serial stuff, afraid that if mine was in the market, it would make it more difficult to release their stuff (and they got it vetoed).
Mid-80s, the father of 801/RISC wants me to help him get disk "wide-head" released. The original 3380 had 20-track spacing between each data track, which was cut in half, doubling tracks&cylinders; then it was cut again, tripling tracks&cylinders. Disk wide-head would transfer 16 closely placed data tracks in parallel ... however that required a 50mbyte/sec channel ... and mainframe channels were still 3mbytes/sec.
Then in 1988, the IBM branch office asks if I could help LLNL standardize some serial stuff they were working with, which quickly becomes the fibre-channel standard (FCS, 1gbit/sec, full-duplex, aggregate 200mbyte/sec, including some stuff I had done in 1980), and we do FCS for RS/6000. Then in the 1990s, POK gets their serial stuff released as ESCON, when it is already obsolete (17mbytes/sec). Then POK engineers become involved in FCS and define a heavy-weight protocol that significantly cuts the throughput ... which is eventually released as FICON. The most recent public benchmark I've found is the z196 "Peak I/O" benchmark getting 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Also IBM pubs recommended keeping SAPs (system assist processors that do actual I/O) to 70% CPU (which would be 1.5M IOPS). Also, with regard to mainframe CKD DASD, none have been made for decades, all being simulated on industry standard fixed-block disks.
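The FICON vs FCS comparison is just arithmetic on the quoted figures; a quick check (numbers from the paragraph above, the per-FCS figure being the claimed "over a million IOPS"):

```python
# Arithmetic check of the z196 "Peak I/O" figures quoted above.
z196_peak_iops = 2_000_000   # achieved via 104 FICON
ficon_count = 104
fcs_iops = 1_000_000         # a single FCS claimed *over* a million IOPS

# per-channel rate of the 104-FICON configuration
print(round(z196_peak_iops / ficon_count))  # 19231 IOPS per FICON

# since each FCS is *over* a million, two of them beat all 104 FICON
print(2 * fcs_iops >= z196_peak_iops)  # True
```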
One of the problems was the 3880 controller ... which people assumed would be like the 3830 controller but able to support 3mbyte/sec transfer ... which 3090 planned on. However, while the 3880 had special hardware for 3mbyte/sec transfer, it had a really slow processor for everything else ... which significantly drove up channel busy. When 3090 found out how bad it really was, they realized they had to significantly increase the number of (3mbyte/sec) channels to achieve target system throughput. The increase in the number of channels required an additional TCM (and the 3090 group semi-facetiously said they would bill the 3880 group for the increase in 3090 manufacturing costs). Marketing eventually respun the large increase in 3090 channel numbers as a great I/O machine (but it actually was to offset the 3880 increase in channel busy).
trivia: bldg15 got early engineering systems (for I/O testing) and received an early engineering 3033 (#3 or #4?). Since product testing was only taking a percent or two of CPU, we scrounge up a 3830 controller and string of 3330s, putting up our own private online service (including running 3270 coax under the street to my office in bldg28). Then, when the air-bearing simulation (originally for the 3370 FBA thin-film floating head design, but also used later for 3380 CKD heads) was getting multiple-week turn-around on the SJR 370/195 (even with high priority designation), we set it up on the bldg15 3033 and the air-bearing simulation was able to get multiple turn-arounds/day. Note that 3380 CKD was already transitioning to fixed-block (as can be seen in records/track calculations that had record size rounded up to fixed cell-size).
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM (Empty) Suits Date: 27 Sept, 2024 Blog: Facebook

re:
I was repeatedly being told I had no career, no promotions, no raises ... so when a head hunter asked me to interview for assistant to the president of a clone 370 maker (sort of a subsidiary of a company on the other side of the pacific), I thought why not. It was going along well until one of the staff broached the subject of 370/xa documents (I had a whole drawer full of the documents, registered ibm confidential, kept under double lock&key and subject to surprise audits by local security). In response I mentioned that I had recently submitted some text to upgrade ethics in the Business Conduct Guidelines (had to be read and signed once a year) ... that ended the interview. That wasn't the end of it; later I had a 3hr interview with an FBI agent, the gov. suing the foreign parent company for industrial espionage (and I was on the building visitor log). I told the agent I wondered if somebody in plant site security might have leaked the names of individuals who had registered ibm confidential documents.
Somebody in Endicott cons me into helping them with 138/148 ECPS
microcode (also used for the 4331/4341 follow-on), finding the 6kbytes
of vm370 kernel code that was the highest executed, to be moved to
microcode (running ten times faster) ... archived usenet/afc post with
the analysis; the 6kbytes represented 79.55% of kernel execution
https://www.garlic.com/~lynn/94.html#21
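The selection amounted to a greedy cut over a kernel execution profile: take the highest-executed paths until the microcode budget is filled. A minimal sketch, with made-up routine names and numbers (the real figures are in the archived post above):

```python
# Hypothetical sketch of the ECPS-style analysis: sort kernel paths by
# measured execution percentage, then take the highest-use paths until
# the 6kbyte microcode budget is filled. Routine names and numbers below
# are illustrative only, not the actual measurements.

profile = [
    # (routine, code bytes, percent of kernel execution) -- made up
    ("DISPATCH",  1200, 22.0),
    ("FREE/FRET", 1000, 20.0),
    ("UNTRANS",    900, 15.5),
    ("QUEUEIO",    800, 12.0),
    ("PAGELOCK",  1100, 10.0),
    ("MISC",      6000, 20.5),  # big, diffuse remainder -- won't fit
]

BUDGET = 6 * 1024  # 6kbytes of available microcode space

# greedy: highest execution percentage first, skip anything that overflows
chosen, used, covered = [], 0, 0.0
for name, size, pct in sorted(profile, key=lambda r: r[2], reverse=True):
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += pct

print(chosen)                   # the paths worth moving to microcode
print(used, round(covered, 2))  # bytes used, percent of execution covered
```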
then he cons me into running around the world presenting the ECPS business case to local planners and forecasters. I'm told WT forecasters can get fired for bad forecasts, because they effectively are firm orders to the manufacturing plants for delivery to countries, while US region forecasters get promoted for forecasting whatever corporate tells them is strategic (the plants have to "eat" bad US forecasts; as a result, plants will regularly redo US forecasts)
much later, my wife had been con'ed into co-authoring a response to a gov. agency RFI for a campus-like, super-secure operation, where she included 3-tier architecture. We were then out doing customer executive presentations on Ethernet, TCP/IP, Internet, high-speed routers, super-secure operation and 3-tier architecture (at a time when the communication group was fiercely fighting off client/server and distributed computing) and the communication group, token-ring, SNA and SAA forces were attacking us with all sorts of misinformation. The Endicott individual by then had a top floor, large corner office in Somers running SAA, and we would drop by periodically and tell him how badly his people were behaving.
3-tier architecture posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
some posts mentioning Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022c.html#4 Industrial Espionage
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#38 IBM Registered Confidential
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2019e.html#29 IBM History
https://www.garlic.com/~lynn/2019.html#83 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2013e.html#42 More Whistleblower Leaks on Foreclosure Settlement Show Both Suppression of Evidence and Gross Incompetence
https://www.garlic.com/~lynn/2009h.html#66 "Guardrails For the Internet"
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 27 Sep 2024 09:55:38 -1000

Peter Flass <peter_flass@yahoo.com> writes:
The IBM communication group was fiercely fighting off client/server and distributed computing and trying to block mainframe TCP/IP release. When that got overturned, they changed their tactic and claimed that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 processor. It was also made available on MVS by doing MVS simulation of the VM370 "diagnose" instruction.
I then do an RFC1044 implementation and, in some tuning tests at Cray Research between a Cray and an IBM 4341, get 4341 sustained channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
In the 90s, the IBM communication group hires a silicon valley contractor to implement tcp/ip support directly in VTAM, what he demo'ed had TCP running much faster than LU6.2. He was then told that everybody knows that LU6.2 is much faster than a "proper" TCP/IP implementation and they would only be paying for a "proper" implementation.
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 27 Sep 2024 11:13:06 -1000

re:
trivia: a late-80s univ study of mainframe VTAM found the LU6.2 implementation had a 160k-instruction pathlength (and 15 buffer copies), compared to unix/bsd (4.3 tahoe/reno) TCP with a 5k-instruction pathlength (and 5 buffer copies).
I was on Greg Chesson's XTP TAB and did further optimization with a CRC trailer protocol and an outboard XTP LAN chip, where the CRC was calculated as the packet flowed through and added/checked in the trailer. It also allowed for no buffer copies with scatter/gather (aka doing packet I/O directly from user memory).
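The point of putting the CRC in a trailer rather than a header is that it can be computed incrementally as bytes stream through, with no second pass over the data (which is what let an outboard chip add/check it on the fly). A minimal software sketch of the idea:

```python
# Minimal sketch of a CRC trailer protocol: the sender computes the CRC
# incrementally as the packet streams out and appends it as a trailer;
# the receiver does the same as bytes arrive and checks the final value.
# (XTP's outboard LAN chip did this in hardware as the packet flowed through.)
import struct
import zlib

def send_packet(payload: bytes) -> bytes:
    crc = 0
    out = bytearray()
    # process in chunks, as a streaming sender would
    for i in range(0, len(payload), 64):
        chunk = payload[i:i + 64]
        crc = zlib.crc32(chunk, crc)   # incremental: no second pass over data
        out += chunk
    out += struct.pack(">I", crc)      # CRC goes in the trailer, not a header
    return bytes(out)

def recv_packet(wire: bytes) -> bytes:
    payload, trailer = wire[:-4], wire[-4:]
    crc = 0
    for i in range(0, len(payload), 64):
        crc = zlib.crc32(payload[i:i + 64], crc)
    if struct.pack(">I", crc) != trailer:
        raise ValueError("CRC mismatch")
    return payload

msg = b"some packet payload " * 20
assert recv_packet(send_packet(msg)) == msg
```

A header CRC would force the sender to buffer and scan the whole packet before the first byte could go out; the trailer placement is what makes cut-through computation possible.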
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
posts mention VTAM/BSD-tcp/ip study
https://www.garlic.com/~lynn/2024e.html#71 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2022h.html#71 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022g.html#48 Some BITNET (& other) History
https://www.garlic.com/~lynn/2006l.html#53 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Emulating vintage computers Newsgroups: alt.folklore.computers Date: Sat, 28 Sep 2024 13:02:48 -1000

antispam@fricas.org (Waldek Hebisch) writes:
from original post:
(benchmarks are number of program iterations compared to reference
platform, not actual instruction count)
...
the industry standard MIPS benchmark had been the number of program iterations compared to one of the reference platforms (370/158-3 assumed to be one MIPS) ... not actual instruction count ... sort of normalizing across a large number of different architectures.
One consideration has been increasing processor rates w/o corresponding
improvement in memory latency. For instance, IBM documentation claimed
that half of the per-processor throughput increase going from z10 to
z196 was the introduction of some out-of-order execution (attempting
some compensation for cache miss and memory latency, features that had
been in other platforms for decades).
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
aka half of the 469MIPS/proc to 625MIPS/proc increase ... (625-469)/2; aka 78MIPS
per processor from z10 to z196 due to some out-of-order execution.
There have been some pubs noting that recent memory latency, measured in terms of processor clock cycles, is similar to 60s disk latency measured in terms of 60s processor clock cycles.
trivia: early 80s, I wrote a tome that disk relative system throughput had declined by an order of magnitude since the mid-60s (i.e. disks got 3-5 times faster while systems got 40-50 times faster). A disk division executive took exception and assigned the performance group to refute the claims. After a few weeks they came back and effectively said I had slightly understated the problem. They then respun the analysis into how to configure disks for increased system throughput (16Aug1984, SHARE 63, B874).
trivia2: a little over a decade ago, I was asked to track down the decision
to add virtual memory to all IBM 370s. I found the staff member to the
executive making the decision. Basically MVT storage management was so bad
that region sizes had to be specified four times larger than used. As a
result a typical 1mbyte 370/165 only ran four concurrent regions at a
time, insufficient to keep the 165 busy and justified. Going to MVT in
16mbyte virtual memory (VS2/SVS) allowed increasing the number of
regions by a factor of four (capped at 15 because of the 4bit storage
protect keys) with little or no paging ... similar to running MVT in a
CP67 16mbyte virtual machine (aka increasing overlapped execution while
waiting on disk I/O, the way out-of-order execution increases overlapped
execution while waiting on memory). post with some email extracts
about adding virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73
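The region-count reasoning can be put in a toy multiprogramming model: if each region spends some fraction of its time waiting on disk I/O, CPU utilization with n independent regions is roughly 1 - wait**n. The 80% wait figure below is an illustrative assumption, not a number from the post:

```python
# Toy multiprogramming model for the MVT->SVS reasoning above: with n
# independent regions each spending wait_fraction of the time in I/O wait,
# the CPU is idle only when all n are waiting at once.
# The 0.8 wait fraction is an illustrative assumption.

def cpu_utilization(wait_fraction: float, regions: int) -> float:
    return 1.0 - wait_fraction ** regions

for n in (4, 15):
    print(n, round(cpu_utilization(0.8, n), 3))
# With an 80% I/O-wait assumption: 4 regions keep the CPU busy ~59% of
# the time, 15 regions ~96% -- why quadrupling the number of concurrent
# regions (up to the 15 allowed by the 4bit protect keys) mattered.
```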
some recent post mentioning B874 share presentation
https://www.garlic.com/~lynn/2024e.html#116 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
PC-based IBM mainframe-compatible systems
https://en.wikipedia.org/wiki/PC-based_IBM_mainframe-compatible_systems
flex-es (gone 404 but lives on at wayback machine)
https://web.archive.org/web/20240130182226/https://www.funsoft.com/
some posts mention both hercules and flex-es
https://www.garlic.com/~lynn/2017e.html#82 does linux scatter daemons on multicore CPU?
https://www.garlic.com/~lynn/2012p.html#13 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2010e.html#71 Entry point for a Mainframe?
https://www.garlic.com/~lynn/2009q.html#29 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2009q.html#26 Check out Computer glitch to cause flight delays across U.S. - MarketWatch
https://www.garlic.com/~lynn/2003d.html#10 Low-end processors (again)
https://www.garlic.com/~lynn/2003.html#39 Flex Question
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Emulating vintage computers Newsgroups: alt.folklore.computers Date: Sat, 28 Sep 2024 14:01:37 -1000

re:
emulation trivia
Note upthread mentions helping Endicott do 138/148 ECPS ... basically
manually compiling selected code into "native" (micro)code running ten
times faster. Then in the late 90s I did some consulting for Fundamental
Software
https://web.archive.org/web/20240130182226/https://www.funsoft.com/
What is this zPDT? (and how does it fit in?)
https://www.itconline.com/wp-content/uploads/2017/07/What-is-zPDT.pdf
More recent versions of zPDT have added a "Just-In-Time" (JIT)
compiled mode to this. Some algorithm determines whether a section of
code should be interpreted or whether it would be better to invest
some more initial cycles to compile the System z instructions into
equivalent x86 instructions (to simplify the process somewhat). This
interpreter plus JIT compiler is what FLEX-ES used to achieve its high
performance. FLEX-ES also cached the compiled sections of code for
later reuse. I have not been able to verify that zPDT does this
caching also, but I suspect so.
... snip ...
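The interpret-vs-compile decision described in the quote can be sketched as a simple hot-path threshold scheme with a translation cache (the class, names, and threshold below are hypothetical illustrations, not from zPDT or FLEX-ES internals):

```python
# Hypothetical sketch of an interpreter-plus-JIT emulator loop: interpret a
# block until it proves "hot", then translate it once, cache the translation,
# and reuse the cached version on later executions.
# (Names and threshold are illustrative, not from zPDT or FLEX-ES.)

HOT_THRESHOLD = 10   # executions before a block is considered worth compiling

class Emulator:
    def __init__(self):
        self.exec_counts = {}        # block address -> interpreted-run count
        self.translation_cache = {}  # block address -> "compiled" function

    def translate(self, block):
        # stand-in for compiling target (e.g. System z) instructions to host code
        def compiled(state):
            for insn in block:
                insn(state)
        return compiled

    def run_block(self, addr, block, state):
        cached = self.translation_cache.get(addr)
        if cached:
            cached(state)            # reuse the cached translation
            return "jit"
        self.exec_counts[addr] = self.exec_counts.get(addr, 0) + 1
        if self.exec_counts[addr] >= HOT_THRESHOLD:
            self.translation_cache[addr] = self.translate(block)
        for insn in block:           # interpret this execution
            insn(state)
        return "interp"
```

A block that crosses the threshold gets "translated" once; later executions hit the cache, which is where the bulk of the speedup comes from.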
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TYMSHARE, Engelbart, Ann Hardy Date: 29 Sept, 2024 Blog: Facebook
When M/D was buying TYMSHARE,
I was also brought in to evaluate GNOSIS for its spin-off as KeyKos to
Key Logic, Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and she was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project.
... snip ...
GNOSIS
http://cap-lore.com/CapTheory/upenn/Gnosis/Gnosis.html
Tymshare (& IBM) & Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89
Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.
... snip ...
note: TYMSHARE made their VM370/CMS-based online computer conferencing
system "free" to (ibm mainframe user group) SHARE in Aug1976 as
VMSHARE, archives
http://vm.marist.edu/~vmshare
I would regularly drop in on TYMSHARE (and/or see them at the monthly meetings hosted by Stanford SLAC) and cut a deal with them to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on internal IBM systems and network (the biggest hassle was lawyers concerned that internal IBM employees would be contaminated by direct exposure to unfiltered customer information). Probably contributed to being blamed for online computer conferencing on the IBM internal network in the late 70s and early 80s (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me).
On one TYMSHARE visit, they demoed ADVENTURE that somebody found on the Stanford SAIL PDP10 system and ported to VM370/CMS, and I got a full source copy for putting up the executable on internal IBM systems (I used to send full source to anybody that demonstrated they got all points, and shortly there were versions with more points as well as a PLI port).
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
some past posts mentioning Tymshare, Ann Hardy, Engelbart
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2008s.html#3 New machine code
https://www.garlic.com/~lynn/aadsm17.htm#31 Payment system and security conference
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 3270 Terminals Date: 29 Sept, 2024 Blog: Facebook
3277/3272 had hardware response of .086sec ... it was followed by the 3278 that moved a lot of electronics back into the 3274 controller (reducing 3278 manufacturing cost), drastically increasing protocol chatter and latency and pushing hardware response to .3-.5sec (depending on amount of data). At the time there were studies showing quarter second response improved productivity. Some number of internal VM datacenters were claiming quarter second system response ... but you needed .164sec (or better) system response with a 3277 terminal to get quarter second response for the person (I was shipping enhanced production operating systems internally, getting .11sec system response). A complaint written to the 3278 Product Administrator got back a response that the 3278 wasn't for interactive computing but for "data entry" (aka electronic keypunch). The MVS/TSO crowd never even noticed; it was a really rare TSO operation that even saw 1sec system response. Later the IBM/PC 3277 hardware emulation card would get 4-5 times the upload/download throughput of a 3278 card.
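The response-time arithmetic above is simple subtraction — terminal hardware response plus system response must come in under the quarter-second target:

```python
# Perceived response = terminal hardware response + system response; to hit
# the quarter-second target, the system's share is whatever the terminal
# hardware leaves over.

TARGET = 0.25   # quarter-second perceived response

def max_system_response(hw_response):
    """System response budget remaining after terminal hardware latency."""
    return round(TARGET - hw_response, 3)

print(max_system_response(0.086))  # 3277: 0.164 -- .164sec or better needed
print(max_system_response(0.3))    # 3278: negative -- already over budget
```

With the 3278's .3-.5sec hardware response, the budget is blown before the system does anything at all.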
Also 3270 protocol was half-duplex ... so if you tried hitting a key while the screen was being updated, it would lock the keyboard and you would have to stop and reset. YKT did a FIFO box: unplug the 3277 keyboard from the 3277 display, plug the FIFO box into the display, and plug the keyboard into the FIFO box (there was enough electronics in the 3277 terminal that it was possible to do a number of adaptations, including the 3277GA), eliminating the keyboard-lock scenario.
trivia: 1980, IBM STL (since renamed SVL) was bursting at the seams and 300 people (w/3270 terminals) from the IMS group were being moved to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote 3270" but found the human factors totally unacceptable. I get con'ed into doing channel-extender (STL was running my enhanced systems) service so that channel-attached 3270 controllers could be placed at the offsite bldg with no perceptible difference in human factors between offsite and in STL. A side-effect was that 168 system throughput increased by 10-15%. The 3270 controllers had previously been spread across the 168 channels with DASD; moving the channel-attached 3270 controllers to channel-extenders significantly reduced channel busy (for the same amount of 3270 traffic), improving DASD (& system) throughput. There was consideration of moving all their 3270 controllers to channel-extenders.
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
recent posts mentioning 3272/3277 and 3274/3278 interactive
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#68 IBM 3270
https://www.garlic.com/~lynn/2024.html#42 Los Gatos Lab, Calma, 3277GA
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023f.html#78 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The boomer generation hit the economic jackpot Date: 29 Sept, 2024 Blog: Facebook
The boomer generation hit the economic jackpot. Young people will inherit their massive debts
Early last decade, the new (US) Republican speaker of the house publicly said he was cutting the budget for the agency responsible for recovering $400B in taxes on funds illegally stashed in overseas tax havens by 52,000 wealthy Americans (over and above new legislation after the turn of the century that provided for legally stashing funds overseas). Later there was news of a few billion in fines for the banks responsible for facilitating the illegal tax evasion ... but nothing on recovering the owed taxes or associated fines or jail sentences (significantly contributing to congress being considered the most corrupt institution on earth).
Earlier the previous decade, 2002 (shortly after the turn of the century), congress had let the fiscal responsibility act lapse (spending couldn't exceed revenue, on its way to eliminating all federal debt). A 2010 CBO report showed that during 2003-2009, spending increased $6T and taxes were cut $6T, for a $12T gap compared to a fiscally responsible budget (1st time taxes were cut to not pay for two wars) ... sort of a confluence of special interests wanting huge tax cut, military-industrial complex wanting huge spending increase, and Too-Big-To-Fail wanting huge debt increase (since then the US federal debt has close to tripled).
more refs:
https://www.icij.org/investigations/paradise-papers/
http://www.amazon.com/Treasure-Islands-Havens-Stole-ebook/dp/B004OA6420/
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
too-big-to-fail (too-big-to-prosecute, too-big-to-jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
past specific posts mentioning Paradise papers and Treasure Island Havens
https://www.garlic.com/~lynn/2021j.html#44 The City of London Is Hiding the World's Stolen Money
https://www.garlic.com/~lynn/2021f.html#56 U.K. Pushes for Finance Exemption From Global Taxation Deal
https://www.garlic.com/~lynn/2019e.html#99 Is America ready to tackle economic inequality?
https://www.garlic.com/~lynn/2019e.html#93 Trump Administration Scaling Back Rules Meant to Stop Corporate Inversions
https://www.garlic.com/~lynn/2018f.html#8 The LLC Loophole; In New York, where an LLC is legally a person, companies can use the vehicles to blast through campaign finance limits
https://www.garlic.com/~lynn/2018e.html#107 The LLC Loophole; In New York, where an LLC is legally a person
https://www.garlic.com/~lynn/2017h.html#64 endless medical arguments, Disregard post (another screwup)
https://www.garlic.com/~lynn/2017.html#52 TV Show "Hill Street Blues"
https://www.garlic.com/~lynn/2017.html#35 Hammond threatens EU with aggressive tax changes after Brexit
https://www.garlic.com/~lynn/2016f.html#103 Chain of Title: How Three Ordinary Americans Uncovered Wall Street's Great Foreclosure Fraud
https://www.garlic.com/~lynn/2016f.html#35 Deutsche Bank and a $10Bn Money Laundering Nightmare: More Context Than You Can Shake a Stick at
https://www.garlic.com/~lynn/2016.html#92 Thanks Obama
https://www.garlic.com/~lynn/2015e.html#94 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015c.html#56 past of nukes, was Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2014m.html#2 weird apple trivia
https://www.garlic.com/~lynn/2013m.html#66 NSA Revelations Kill IBM Hardware Sales In China
https://www.garlic.com/~lynn/2013l.html#60 Retirement Heist
https://www.garlic.com/~lynn/2013l.html#1 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013k.html#60 spacewar
https://www.garlic.com/~lynn/2013k.html#57 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#2 IBM Relevancy in the IT World
https://www.garlic.com/~lynn/2013j.html#26 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013j.html#3 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013i.html#81 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#65 The Real Snowden Question
https://www.garlic.com/~lynn/2013i.html#54 How do you feel about the fact that India has more employees than US?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Computer Wars: The Post-IBM World Date: 29 Sept, 2024 Blog: Facebook
CEO Learson had tried&failed to block the bureaucrats, careerists and MBAs from destroying Watsons' culture/legacy
... was greatly accelerated by the failing Future System project from
"Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
... note: FS was completely different and was going to replace all
370s (during FS, internal politics was killing off 370 efforts, claim
is that the lack of new 370 during FS is credited with giving the
clone 370 makers their market foothold). When FS implodes, there is a
mad rush to get stuff back into the 370 product pipelines, including
kicking off the quick&dirty 3033&3081 efforts in
parallel. Trivia: I continued to work on 360&370 stuff all during
FS, including periodically ridiculing what they were doing. Some more
FS detail:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
The damage was done in the 20yrs between Learson's failed effort and
1992 when IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (take off on
AT&T "baby bells" breakup decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex as CEO, who
(somewhat) reverses the breakup ... and uses some of the techniques
used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CSC Virtual Machine Work Date: 29 Sept, 2024 Blog: Facebook
As far as I know, science center reports were trashed when the science centers were shut down
I got hard copy of Comeau's presentation at SEAS and OCR'ed it:
https://www.garlic.com/~lynn/cp40seas1982.txt
Also another version at Melinda's history site:
https://www.leeandmelindavarian.com/Melinda#VMHist
and
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
We had the IBM HA/CMP product and subcontracted a lot of work out to CLaM; Comeau had left IBM and was doing a number of things, including founding CLaM. When Cambridge was shut down, CLaM took over the science center space.
CSC had wanted a 360/50 to modify with virtual memory, but all the spare 360/50s were going to the FAA ATC project ... so they got a 360/40 to modify and did CP40/CMS. It morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available.
I take a two credit hr intro to fortran/computers. The univ was getting a 360/67 for tss/360 to replace a 709/1401; temporarily, pending availability of the 360/67, the 1401 was replaced with a 360/30, and at the end of the intro class, I was hired to rewrite 1401 MPIO (unit record front-end for the 709) in 360 assembler for the 360/30 (OS360/PCP). The univ shutdown the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a lot of software&hardware manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program.
Within a year of taking the intro class, the 360/67 arrives and the univ hires me fulltime responsible for OS/360 (running on the 360/67 as a 360/65; tss/360 never came to production fruition). Student fortran had run under a second on the 709 but well over a minute with os360/MFT. I install HASP and it cuts the time in half. First sysgen was MFTR9.5. Then I start redoing stage2 sysgen to carefully place datasets and PDS members (to optimize arm seek and multi-track search), cutting another 2/3rds to 12.9 secs (student fortran never gets better than the 709 until I install univ of waterloo Watfor).
Jan1968, CSC came out to install CP67 at the univ (3rd install after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my weekend dedicated time, the first few months rewriting lots of CP67 for running OS/360 in a virtual machine. My OS360 test job stream ran 322secs on the bare machine, initially 856secs virtually (534secs CP67 CPU); I managed to get CP67 CPU down to 113secs. I then redo dispatching/scheduling (dynamic adaptive resource management), page replacement, thrashing controls, ordered disk arm seek, 2301/drum multi-page rotational ordered transfers (from 70-80 4k/sec to 270/sec peak), and a bunch of other stuff ... for CMS interactive computing. Then to further cut CMS CP67 CPU overhead I do a special CCW. Bob Adair criticizes it because it violates the 360 architecture ... and it has to be redone as a DIAGNOSE instruction (which is defined to be "model" dependent ... and so has the facade of a virtual machine model diagnose).
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
some posts mentioning computer work as undergraduate
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Tue, 01 Oct 2024 09:29:37 -1000
rbowman <bowman@montana.com> writes:
somewhat like IBM's failed "Future System" effort (replacing all 370s)
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
except FS was enormous amounts of microcode. The i432 group gave a talk at an early-80s ACM SIGOPS (at Asilomar) ... one of their issues was really complex stuff in silicon, where nearly every fix required new silicon.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some posts mentioning i432
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2021k.html#38 IBM Boeblingen
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018e.html#95 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017j.html#99 OS-9
https://www.garlic.com/~lynn/2017j.html#98 OS-9
https://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#61 Typesetting
https://www.garlic.com/~lynn/2016f.html#38 British socialism / anti-trust
https://www.garlic.com/~lynn/2016e.html#115 IBM History
https://www.garlic.com/~lynn/2016d.html#63 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#62 PL/I advertising
https://www.garlic.com/~lynn/2014m.html#107 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014k.html#23 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014c.html#75 Bloat
https://www.garlic.com/~lynn/2013f.html#33 Delay between idea and implementation
https://www.garlic.com/~lynn/2012n.html#40 history of Programming language and CPU in relation to each other
https://www.garlic.com/~lynn/2012k.html#57 1132 printer history
https://www.garlic.com/~lynn/2012k.html#14 International Business Marionette
https://www.garlic.com/~lynn/2011l.html#42 i432 on Bitsavers?
https://www.garlic.com/~lynn/2011l.html#15 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011l.html#2 68000 assembly language programming
https://www.garlic.com/~lynn/2011k.html#79 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011c.html#91 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011c.html#7 RISCversus CISC
https://www.garlic.com/~lynn/2010j.html#22 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2010h.html#40 Faster image rotation
https://www.garlic.com/~lynn/2010h.html#8 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#45 IA64
https://www.garlic.com/~lynn/2010g.html#1 IA64
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009o.html#46 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009o.html#18 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009o.html#13 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2009d.html#52 Lack of bit field instructions in x86 instruction set because of patents ?
https://www.garlic.com/~lynn/2008k.html#22 CLIs and GUIs
https://www.garlic.com/~lynn/2008e.html#32 CPU time differences for the same job
https://www.garlic.com/~lynn/2008d.html#54 Throwaway cores
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?
https://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006n.html#44 Any resources on VLIW?
https://www.garlic.com/~lynn/2006n.html#42 Why is zSeries so CPU poor?
https://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005q.html#31 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005k.html#46 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005d.html#64 Misuse of word "microcode"
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004e.html#52 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
https://www.garlic.com/~lynn/2002o.html#5 Anyone here ever use the iAPX432 ?
https://www.garlic.com/~lynn/2002l.html#19 Computer Architectures
https://www.garlic.com/~lynn/2002d.html#46 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#27 iAPX432 today?
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Tue, 01 Oct 2024 13:30:58 -1000
Lynn Wheeler <lynn@garlic.com> writes:
I continued to work on 360/370 all during FS, even periodically ridiculing what they were doing (which wasn't exactly career enhancing) ... during FS, 370 stuff was being killed off and the claim is that the lack of new 370 during FS gave the clone 370 system makers their market foothold. When FS finally imploded (one of the final nails was analysis that if 370/195 applications were redone for an FS machine made out of the fastest hardware available, they would have the throughput of a 370/145 ... about a 30times slowdown), there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the Q&D 3033&3081 efforts in parallel.
I have periodically claimed that John did 801/RISC to go the extreme opposite of Future System (mid-70s there was internal adtech conference where we presented 370 16-cpu multiprocessor and the 801/RISC group presented RISC).
I got dragged into helping with a 370 16-cpu multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 370/168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had effective 16-cpu support (at the time, MVS docs had 2-cpu system support with only 1.2-1.5 times the throughput of a 1-cpu system, while I had a number of 2-cpu systems with twice the throughput of a single cpu system), and the head of POK invites some of us to never visit POK again ... and tells the 3033 processor engineers to keep their heads down, no distractions. Note: POK doesn't ship a 16-cpu system until after the turn of the century (more than two decades later).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of RISC Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Tue, 01 Oct 2024 15:39:23 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
ROMP was originally targeted to be the DISPLAYWRITER follow-on, written in PL.8 and running cp.r ... when that was canceled (market moving to personal computing), they decided to pivot to the UNIX workstation market and got the company that had done the AT&T Unix port to IBM/PC for PC/IX ... to do one for ROMP ... some claim they had 200 PL.8 programmers and decided to use them to implement a ROMP abstract virtual machine, telling the company doing AIX that it would be much faster and easier to do it to the VM than to the bare hardware.
However, there was the IBM Palo Alto group doing UCB BSD port to 370 that got redirected to do BSD port to the (bare hardware) ROMP instead (in much less time and resources than either the abstract virtual machine or the AIX effort) as "AOS".
Late 80s, my wife and I got the HA/6000 project, originally for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing scientific/technical cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres that had VAXCluster support in the same source base with UNIX). Then the executive we reported to went over to head up Somerset (AIM; apple, ibm, motorola; single-chip RISC).
Early Jan92, in a meeting with the Oracle CEO, AWD/Hester tells Ellison we would have 16-system clusters by mid92 and 128-system clusters by ye92; however, by end Jan92, cluster scale-up was transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four systems (we leave IBM a few months later).
There had been complaints from the commercial mainframe side, possibly
contributing to the decision:
1993: eight-processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2BIPS/2016MIPS,
128-system: 16BIPS/16,128MIPS
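Minimal check of the cluster arithmetic (aggregate ratings are just the single-system MIPS times the cluster size):

```python
# Cluster aggregate = per-system MIPS times cluster size (the 1993 numbers
# quoted above).

es9000_982 = 8 * 51     # eight-processor mainframe: 408 MIPS total
rs6000_990 = 126        # single RS/6000-990, MIPS

cluster16 = 16 * rs6000_990    # 2016 MIPS (~2 BIPS)
cluster128 = 128 * rs6000_990  # 16128 MIPS (~16 BIPS)

print(cluster16, cluster128)
print(round(cluster128 / es9000_982, 1))  # 128-system cluster vs the mainframe
```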
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CSC Virtual Machine Work Date: 02 Oct, 2024 Blog: Facebook
re:
Trivia: A little over a decade ago, I was asked to track down the decision
to add virtual memory to all 370s and found the staff member to the
executive making the decision; basically, MVT storage management was so
bad that regions had to be specified four times larger than used, so a
typical 1mbyte 370/165 would only run four regions concurrently,
insufficient to keep the 165 busy (and justified). Going to MVT in a
16mbyte virtual address space (VS2/SVS) allowed the number of regions to
be increased by four times (capped at 15 because of 4bit storage protect
keys) with little or no paging (sort of like running MVT in a CP67
16mbyte virtual machine). Some of the email exchange tracking down
virtual memory for all 370s:
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
We were visiting POK a lot ... partly trying to get CAS justified in the 370 architecture, but I was also hammering the MVT performance group about their proposed page replacement algorithm (eventually they claimed it didn't make any difference because the paging rate would be 5/sec or less). Also would drop in on Ludlow, offshift, who was implementing the VS2/SVS prototype on a 360/67. It involved a little bit of code to build the 16mbyte virtual address table, enter/exit virtual address mode, and some simple paging. The biggest effort was EXCP/SVC0, where channel programs built in application space were passed to the supervisor for execution. Since they had virtual addresses, EXCP/SVC0 faced the same problem as CP67, making a copy of the channel program, replacing virtual addresses with real ... and he borrows a copy of CP67 CCWTRAN for the implementation.
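A rough sketch of what a CCWTRAN-style translation does (illustrative Python only, not CP67 or SVS code): copy each CCW of the channel program, pin the page its data address references, and substitute the real address before the I/O is started:

```python
# Illustrative sketch only (not CP67/SVS code): translating a virtual-address
# channel program for real execution -- copy each CCW, pin the page its data
# address touches, and rewrite the virtual data address to a real one.

PAGE = 4096

def translate_channel_program(ccws, page_table, pin):
    """ccws: list of (opcode, virtual_addr, count) tuples.
    page_table: virtual page number -> real frame number (assumed resident;
    a real implementation would page-in and fix on a miss).
    pin: callback to keep the page fixed for the duration of the I/O."""
    shadow = []
    for op, vaddr, count in ccws:
        vpage, offset = divmod(vaddr, PAGE)
        frame = page_table[vpage]
        pin(vpage)
        shadow.append((op, frame * PAGE + offset, count))
    return shadow
```

The real CCWTRAN also handled data areas spanning page boundaries (splitting one CCW into data-chained CCWs), TIC chains, and unpinning at I/O completion; those are omitted here.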
Near the end of the 70s, somebody in POK gets an award for fixing the MVS page replacement algorithm (the one I had complained about in the early 70s; by the late 70s the paging rate had significantly increased and it had started to make a difference).
SMP, tightly-coupled, multiprocessor (and/or compare&swap) posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts mentioning Ludlow, EXCP, CP67 CCWTRAN
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024.html#27 HASP, ASP, JES2, JES3
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#69 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#43 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023e.html#4 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#103 2023 IBM Poughkeepsie, NY
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#91 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#58 Computer Security
https://www.garlic.com/~lynn/2022.html#10 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2014d.html#54 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013i.html#47 Making mainframe technology hip again
https://www.garlic.com/~lynn/2013.html#22 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012l.html#73 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012i.html#55 Operating System, what is it?
https://www.garlic.com/~lynn/2011o.html#92 Question regarding PSW correction after translation exceptions on old IBM hardware
https://www.garlic.com/~lynn/2011d.html#72 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011.html#90 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2002p.html#51 Linux paging
https://www.garlic.com/~lynn/2002p.html#49 Linux paging
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 360/30, 360/65, 360/67 Work Date: 03 Oct, 2024 Blog: Facebook
I take a two-credit-hr intro to fortran/computers. The univ was getting a 360/67 (for tss/360) to replace a 709/1401; pending availability of the 360/67, the 1401 was temporarily replaced with a 360/30, and at the end of the intro class I was hired to rewrite the 1401 MPIO (unit record front-end for the 709) in 360 assembler for the 360/30 (OS360/PCP). The univ shutdown the datacenter on weekends and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a lot of software&hardware manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program ... ran stand-alone, loaded with the BPS card loader. I then add an assembler option to use OS/360 system services for I/O. The stand-alone version took 30mins to assemble, the OS/360 version took an hour (each DCB macro taking over five minutes to assemble). Later somebody claimed that the people implementing the assembler were told they only had 256bytes to implement the op-code lookup ... resulting in an enormous amount of disk I/O.
trivia: periodically I would come in Sat. morning and find production had finished early and the machine room dark with everything turned off. I would try to power on the 360/30 and it wouldn't complete. After lots of poring over documents and trial and error, I found I could put all the controllers in CE-mode, power on the 360/30, power on the individual controllers, and return each controller to normal mode.
Within a year of taking the intro class, the 360/67 shows up and I was hired fulltime responsible for OS/360 (TSS/360 hadn't come to production fruition) ... and I continued to have my dedicated weekend 48hrs (and monday classes still difficult). My first sysgen was MFT9.5. Note student fortran jobs took under a second on the 709 (tape->tape), but over a minute with OS/360. I install HASP, cutting the time in half. I then start redoing stage2 sysgen, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo Watfor.
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world (a couple hundred million in 360 systems), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Renton did have one 360/75; there was black rope around the area perimeter and guards when it was running classified jobs (and heavy black velvet draped over the console lights and 1403 printers). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room and install a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join IBM (instead of staying with the Boeing CFO).
Boeing Huntsville had a two-processor 360/67 SMP (and several 2250 displays for CAD/CAM) that was brought up to Seattle. It had (also) been acquired for TSS/360 but ran as two systems with MVT. They had also run into the MVT storage management problem (that later resulted in the decision to add virtual memory to all 370s) and had modified MVTR13 to run in virtual memory mode (but with no paging), using virtual memory to partially compensate for the MVT storage management problems. A little over a decade ago, I was asked to track down the 370 virtual memory decision and found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used, limiting a typical 1mbyte 370/165 to four concurrently running regions, insufficient to keep the 165 busy and justified. They found that moving MVT to a 16mbyte virtual address space (VS2/SVS, similar to running MVT in a CP67 16mbyte virtual machine) could increase the number of concurrent regions by a factor of four (capped at 15 because of the 4bit storage protect keys) with little or no paging.
trivia: in the early 80s, I was introduced to John Boyd and would
sponsor his briefings at IBM.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
One of his stories was about being very vocal that the electronics
across the trail wouldn't work. Then (possibly as punishment) he is
put in command of "spook base" (about the same time I'm at
Boeing). His biography says "spook base" was a $2.5B "windfall" for IBM
(ten times Renton) ... some ref:
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White
Before TSS/360 was decommitted, there was a claim that there were 1200 people on TSS/360 at a time when there were 12 people (including the secretary) in the Cambridge Science Center group doing CP67/CMS. CSC had wanted a 360/50 to hardware-modify with virtual memory support, but all the spare 50s were going to the FAA ATC project, and so had to settle for a 360/40. CP40/CMS morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available. Some CSC people come out to the univ to install CP67/CMS (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly get to play with it in my dedicated weekend time. Initially I rewrite a lot of CP67 to cut the overhead of running OS/360 in a virtual machine. My OS/360 test jobstream ran 322secs on the real hardware and initially 856secs virtually (534secs CP67 CPU). After a few months I managed to get it down to 435secs (CP67 CPU 113secs).
I then redo dispatching/scheduling (dynamic adaptive resource management), page replacement, thrashing controls, ordered disk arm seek, 2301 drum multi-page rotationally-ordered transfers (from 70-80 4k transfers/sec to a 270/sec peak), and a bunch of other stuff ... for CMS interactive computing. Then to further cut CMS's CP67 CPU overhead, I do a special CCW. Bob Adair criticizes it because it violates the 360 architecture ... and it has to be redone as a DIAGNOSE instruction (which the architecture defines to be "model" dependent ... so there is the facade of a virtual-machine-model DIAGNOSE).
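The ordered arm seek idea can be sketched like this (a toy illustration, not the CP67 code; the request queue and cylinder numbers are made up):

```python
def seek_distance(start, order):
    """Total arm travel (in cylinders) to service requests in order."""
    dist, pos = 0, start
    for cyl in order:
        dist += abs(cyl - pos)
        pos = cyl
    return dist

def ordered_seek(start, requests):
    """Instead of FIFO, service everything at/above the current arm
    position in one sweep outward, then the rest sweeping back
    ("elevator"), cutting total seek distance."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return up + down

fifo = [180, 40, 120, 10, 160]           # arrival order
swept = ordered_seek(100, fifo)          # [120, 160, 180, 40, 10]
assert seek_distance(100, swept) < seek_distance(100, fifo)
```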
Boyd postings and URL refs
https://www.garlic.com/~lynn/subboyd.html
Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some recent posts mentioning 709/1401, MPIO, 360/30, 360/67, CP67/CMS,
Boeing CFO, Renton datacenter,
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Byte ordering Newsgroups: comp.arch Date: Thu, 03 Oct 2024 15:33:54 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
... note that the RS/6000 didn't have a design supporting cache consistency for shared-memory multiprocessing ... (one of the reasons ha/cmp had to resort to cluster operation for scale-up)
https://en.wikipedia.org/wiki/PowerPC_600#PowerPC_620
https://wiki.preterhuman.net/The_Somerset_Design_Center
the executive we reported to when we were doing HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
went over to head up Somerset
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: stacks are not hard, The joy of FORTRAN-like languages Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 04 Oct 2024 17:07:09 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
repeated 16Mar2011 at Wash DC HILLGANG user group meeting.
In the morph from CP67->VM370, they dropped and/or simplified a lot of stuff. In 1974, I started migrating a bunch of stuff to a VM370R2 base for my internal CSC/VM ... including the kernel reorganization for SMP multiprocessor operation (but not the SMP support itself). Then for the VM370R3-based CSC/VM, I do the SMP support ... originally for the internal online sales&marketing HONE systems (before it was released to customers in VM370R4).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM Posts
https://www.garlic.com/~lynn/submisc.html#cscvm
a couple posts mentioning seas/hillgang presentation
https://www.garlic.com/~lynn/2021g.html#46 6-10Oct1986 SEAS
https://www.garlic.com/~lynn/2011e.html#3 Multiple Virtual Memory
some posts mentioning CP67 R3 free/fret subpool
https://www.garlic.com/~lynn/2019e.html#9 To Anne & Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2010h.html#21 QUIKCELL Doc
https://www.garlic.com/~lynn/2008h.html#53 Why 'pop' and not 'pull' the complementary action to 'push' for a stack
https://www.garlic.com/~lynn/2007q.html#15 The SLT Search LisT instruction - Maybe another one for the Wheelers
https://www.garlic.com/~lynn/2006r.html#8 should program call stack grow upward or downwards?
https://www.garlic.com/~lynn/2006j.html#21 virtual memory
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Future System, Single-Level-Store, S/38 Date: 05 Oct, 2024 Blog: Facebook
Amdahl wins the fight to make ACS 360-compatible. Folklore is that ACS/360 was then canceled because executives were worried it would advance the state of the art too fast and IBM would lose control of the market. Then Amdahl leaves IBM and starts his own clone mainframe company. The following includes some ACS/360 features that show up more than 20yrs later with ES/9000
Then there is the Future System effort, completely different from 370 and going to completely replace 370 (internal politics was killing off 370 efforts, which is claimed to have given the clone mainframe makers their market foothold). I had graduated and joined IBM Cambridge Scientific Center not long before FS started (I continued to work on 360&370 and periodically ridiculed FS). I got to continue attending user group meetings and also stopping by customers. The director of one of the largest financial industry, true-blue IBM mainframe datacenters liked me to stop by and talk technology. Then the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl system (a single Amdahl system in a large sea of "blue"). Up until then Amdahl had been selling into the technical/scientific market, but this would be the first Amdahl install in the commercial market. I was asked to go spend 6-12 months onsite at the customer (to help obfuscate why the customer was installing an Amdahl machine). I talk it over with the customer, and he says he would be happy to have me onsite, but it wouldn't change installing the Amdahl machine ... so I decline IBM's offer. I'm then told that the branch manager is a good sailing buddy of IBM's CEO and if I don't do this, I can forget having a career, raises and promotions.
trivia: Future System eventually implodes, more information
http://www.jfsowa.com/computer/memo125.htm
and
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
One of the final nails in the FS coffin (by the IBM Houston Science Center) was the analysis that if 370/195 applications were redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown). One of the FS features was single-level-store, possibly inherited from TSS/360 (and there appeared to be nobody in FS who knew how to beat highly tuned OS/360 I/O ... a large 300-disk mainframe configuration could never have worked). I had done a page-mapped filesystem for CP67/CMS, and would joke that I learned what not to do from TSS/360.
S/38 was a greatly simplified implementation and there was sufficient hardware performance headroom to meet the S/38 market requirements. One of the S/38 simplifications was treating all disks as a single single-level-store, which could include scatter allocation of a file across all disks ... as some S/38 configurations grew in number of disks, the downtime for backing up/restoring all disks as a single entity was becoming a major issue (contributing to S/38 being an early adopter of IBM RAID technology).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
some posts mentioning single-level-store, s/38, cms paged-mapped filesystem,
and RAID
https://www.garlic.com/~lynn/2024e.html#68 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2023g.html#3 Vintage Future System
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022.html#41 370/195
https://www.garlic.com/~lynn/2021k.html#43 Transaction Memory
https://www.garlic.com/~lynn/2017j.html#34 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2007t.html#72 Remembering the CDC 6600
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Future System, Single-Level-Store, S/38 Date: 05 Oct, 2024 Blog: Facebook
re:
other trivia: shortly after joining IBM, I was also asked to help with
370/195 simulate two processor machine using multithreading:
https://people.computing.clemson.edu/~mark/acs_end.html
Sidebar: Multithreading
In summer 1968, Ed Sussenguth investigated making the ACS/360 into a
multithreaded design by adding a second instruction counter and a
second set of registers to the simulator. Instructions were tagged
with an additional "red/blue" bit to designate the instruction stream
and register set; and, as was expected, the utilization of the
functional units increased since more independent instructions were
available.
IBM patents and disclosures on multithreading include:
US Patent 3,728,692, J.W. Fennel, Jr., "Instruction selection in a
two-program counter instruction unit," filed August 1971, and issued
April 1973.
US Patent 3,771,138, J.O. Celtruda, et al., "Apparatus and method for
serializing instructions from two independent instruction streams,"
filed August 1971, and issued November 1973. [Note that John Earle is
one of the inventors listed on the '138.]
"Multiple instruction stream uniprocessor," IBM Technical Disclosure
Bulletin, January 1976, 2pp. [for S/370]
... snip ...
Most codes ran the 370/195 at only half speed. The 195 had a 64-instruction pipeline with out-of-order execution but no branch prediction or speculative execution, so conditional branches drained the pipeline. Implementing multithreading, simulating a two-CPU multiprocessor with each simulated "CPU" running at half speed, could keep the 195 fully utilized (modulo: MVT/MVS software at the time claiming a two-CPU multiprocessor only had 1.2-1.5 times the throughput of a single processor). Then the decision was made to add virtual memory to all 370s, it was decided that adding virtual memory to the 195 couldn't be justified, and all new work on the 195 was canceled.
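The throughput rationale reduces to a back-of-envelope calculation (my illustration of the argument in the text, not measured data):

```python
# One instruction stream keeps the 195 pipeline only about half busy
# (conditional branches drain it), i.e. ~0.5 utilization.
single_stream_util = 0.5

# Two independent ("red/blue") streams can fill each other's pipeline
# bubbles -- each runs at half speed, but together they approach full
# hardware utilization:
two_thread_util = min(1.0, 2 * single_stream_util)
assert two_thread_util == 1.0

# So the simulated two-CPU 195 could deliver ~2x the throughput of a
# single stream -- while MVT/MVS software of the day was only getting
# 1.2-1.5x out of a real two-CPU multiprocessor.
assert two_thread_util / single_stream_util == 2.0
```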
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some recent 370/195 posts mentioning multithread effort
https://www.garlic.com/~lynn/2024e.html#115 what's a mainframe, was is Vax addressing sane today
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023f.html#89 Vintage IBM 709
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021k.html#46 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Future System, Single-Level-Store, S/38 Date: 05 Oct, 2024 Blog: Facebook
re:
Single-level-store doesn't provide high application throughput if done simply ... like just mapping a file image and then taking synchronous page faults.
Silverlake combines s/36 and s/38 (including dropping some s/38
features; the S/38 implementation had already greatly simplified the
FS stuff)
https://en.wikipedia.org/wiki/IBM_AS/400#Silverlake
https://en.wikipedia.org/wiki/IBM_AS/400#AS/400
I-system rebranding
https://en.wikipedia.org/wiki/IBM_AS/400#Rebranding
I had done a lot of os/360 performance work and rewritten a large amount of cp67/cms as an undergraduate in the 60s. Then when I graduate, I join the cambridge science center.
Note some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
had gone to Project MAC on the 5th flr to do MULTICS (which included
high-performance single-level-store)
https://en.wikipedia.org/wiki/Multics
others had gone to the science center on the 4th flr, doing virtual
machines, internal network, lots of interactive and performance work
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
At Cambridge, I thought I could do anything that the 5th flr could do, so I implement a high-performance page-mapped filesystem for CMS. The standard CMS filesystem was scatter allocate, single device, with synchronous single-record disk transfers (it could do multiple-record channel programs for sequential, contiguous allocation, but that wasn't likely with scatter allocate). I wanted to be able to support contiguous allocation as well as multiple asynchronous page transfers.
Then in early 80s, I got HSDT project, T1 and faster computer links (both terrestrial and satellite) with both RSCS/VNET and TCP/IP support.
RSCS/VNET leveraged the CP67 (and later VM370) spool file system, which shared a lot of implementation with the paging system. However, it used synchronous operations for 4k block transfers, limiting RSCS/VNET to about 30kbytes/sec (or around 300kbits/sec). I needed 3mbits/sec sustained for each full-duplex T1 link. I reimplement the VM370 spool file support in VS/Pascal, running in a virtual address space and using much of the CMS page-mapped filesystem API (that I had done more than a decade earlier) ... contiguous allocation, multiple-page asynchronous transfers, as well as multiple-buffer read-ahead and write-behind.
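The synchronous-4k-block bottleneck can be modeled with a toy calculation (my illustration; the latency and transfer-rate numbers are made-up assumptions, not measurements of the actual spool system):

```python
def sync_throughput(block=4096, latency_s=0.030, xfer_bps=1_500_000):
    """One synchronous block per request: full device latency
    (seek + rotation + software pathlength) paid for every 4k."""
    return block / (latency_s + block / xfer_bps)     # bytes/sec

def async_throughput(pages=8, block=4096, latency_s=0.030,
                     xfer_bps=1_500_000):
    """Contiguous allocation + multi-page asynchronous transfers:
    one latency amortized over many blocks."""
    nbytes = pages * block
    return nbytes / (latency_s + nbytes / xfer_bps)   # bytes/sec

# With these (assumed) numbers, batching 8 pages per request is more
# than 4x better than one synchronous 4k block per request.
assert async_throughput() > 4 * sync_throughput()
```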
As for TCP/IP, I was also working with the NSF director and was
supposed to get $20M to interconnect the NSF Supercomputer
Centers. Then Congress cuts the budget, some other things happen and
eventually an RFP (in part based on what we already had running) is
released. From the 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
trivia: late 80s, got the HA/6000 project, initially for NYTimes to
move their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I
rename it HA/CMP when I start doing technical/scientific cluster
scale-up with national labs and commercial cluster scale-up with RDBMS
vendors (Oracle, Sybase, Informix, Ingres that had VAXCluster and Unix
support in the same source base, I do distributed lock manager
supporting VAXCluster API semantics easing the move to HA/CMP). The
S/88 product administrator then starts taking us around to their
customers and also has me do a section for the corporate continuous
availability strategy document (it got pulled when both
Rochester/AS400 and POK/mainframe complain they couldn't meet the
requirements).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
trivia: Executive we reported to, moves over to head up Somerset (AIM,
Apple, IBM, Motorola, single chip power/pc)
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
Early Jan92, there is HA/CMP meeting with Oracle CEO where AWD/Hester
tells Ellison that we would have 16-system clusters by mid92 and
128-system clusters by ye92. Then late Jan92, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told we can't work with
anything with more than four processors (we leave IBM a few months
later).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CMS page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The Fall Of OS/2 Newsgroups: alt.folklore.computers Date: Sun, 06 Oct 2024 09:08:57 -1000
jgd@cix.co.uk (John Dallman) writes:
some archived posts
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup...
https://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup...
https://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup...
https://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup...
then boca contracted with Dataquest (since bought by Gartner) for a detailed study of the PC market, including a couple-hr videotaped round table of silicon valley PC experts ... I had known the person doing the study at Dataquest and was asked to be one of the PC experts (they promised to garble my details so boca wouldn't recognize me as an ibm employee). I did manage to clear it with my immediate management
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The Fall Of OS/2 Newsgroups: alt.folklore.computers Date: Sun, 06 Oct 2024 13:33:17 -1000
re:
The IBM communication group was fiercely fighting off client/server and distributed computing. Late 80s, a senior engineer in the disk division got a talk scheduled at the internal, world-wide, annual communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales, with data fleeing mainframe datacenters to more distributed-computing-friendly platforms ... and had come up with a number of solutions. However, the communication group was constantly vetoing them (the communication group had corporate strategic ownership/responsibility for everything that crossed datacenter walls).
note that workstation division had done their own cards for the PC/RT (PCAT-bus) including 4mbit token-ring card. Then for the RS/6000 microchannel workstations, AWD was told they couldn't do their own cards, but had to use PS2 microchannel cards. The communication group had severely performance kneecapped the PS2 microchannel cards ... example was that the $800 PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card ... and significantly lower throughput than the $69 10mbit Ethernet card.
trivia: The new IBM Almaden Research bldg had been extensively provisioned with CAT wiring, presuming use for 16mbit token-ring ... however they found that not only did (CAT-wiring) 10mbit Ethernet cards have much higher throughput than 16mbit T/R cards, but 10mbit ethernet LANs also had higher aggregate throughput and lower latency than 16mbit T/R.
Also, with the aggregate cost difference between the $69 Ethernet cards and the $800 16mbit T/R cards, Almaden could get nearly half a dozen high-performance tcp/ip routers ... each with 16 10mbit Ethernet interfaces and ibm mainframe channel interfaces, with options for T1&T3 telco interfaces and various high-speed serial fiber interfaces. The result was they could spread all the RS/6000 machines across a large number of Ethernet (tcp/ip) LANs ... with only a dozen or so machines sharing a LAN.
Summer 1988, ACM SIGCOMM published a study showing that with 30 10mbit ethernet stations ... all running a low-level device driver loop constantly sending minimum-sized packets ... aggregate effective LAN throughput only dropped from 8.5mbits/sec to 8mbits/sec.
For the fiber "SLA", RS/6000 had re-engineered & tweaked mainframe ESCON ... making it slightly faster (and incompatible with everything else) ... 220mbits/sec, full-duplex; the only thing they could use it with was other RS/6000s. We con one of the high-speed tcp/ip router vendors into adding a "SLA" interface option to their routers ... giving RS/6000-based servers a high-performance entree into the distributed computing environment.
In 1988, the IBM branch office had asked me if I could help LLNL (national lab) standardize some serial stuff they were playing with, which quickly becomes fibre-channel standards (FCS, including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. The RS/6000 SLA engineers were planning on improving SLA to 800mbit/sec ... when we convince them to join the FCS standard activity instead.
posts mentioning communication group fighting off client/server and
distributed computing trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
we were also out marketing 3-tier networking, ethernet, tcp/ip,
high-speed routers, security, etc
https://www.garlic.com/~lynn/subnetwork.html#3tier
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Wild Ducks Date: 06 Oct, 2024 Blog: Facebook1972, CEO Learson trying (and failing) to block the bureaucrats, careerists, and MBAs from destroying Watsons' culture/legacy ... including wild ducks
30 years of Management Briefings, 1958-1988
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
I've frequently quoted Number 1-72, 18Jan, 1972, pg160-163 (includes
THINK magazine article)
20 years later IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" (take-off
on the AT&T "baby bells" breakup a decade earlier) in preparation to
breaking up the company.
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX that
(somewhat) reverses the breakup.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
1973, How to Stuff a Wild Duck
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual memory Date: 07 Oct, 2024 Blog: FacebookNote some of the MIT CTSS/7094 people went to the 5th flr for Multics, and others to the IBM Cambridge Science Center on the 4th flr.
IBM bid 360/67 for Project MAC but lost to GE. Science Center had wanted a 360/50 to modify with virtual memory, but they were all going to FAA ATC, so had to settle for 360/40. I got hard copy from Comeau and OCR'ed it
https://www.garlic.com/~lynn/cp40seas1982.txt
also from Melinda's history site
https://www.leeandmelindavarian.com/Melinda#VMHist
and
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
When 360/67 came available, CP40/CMS morphs into CP67/CMS. Single
processor 360/67 was very similar to 360/65 with the addition of
virtual memory and control registers. Two processor 360/67 SMP had
more differences: multi-ported memory (allowing channels and
processors to do concurrent transfers) and a channel controller so
that all processors could access all channels (360/65 SMP & later 370
SMP had to simulate multiprocessor channel I/O by having two-channel
controllers connected to processor-dedicated channels at the same
address) .... much more in 360/67 functional characteristics:
https://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar
I had a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (unit record front end for 709/1401) in 360 assembler for 360/30 (replacing the 1401 temporarily until 360/67s arrived for tss/360) ... Univ. shutdown datacenter on weekends and I would have the place dedicated for 48hrs (although it made monday classes hard). I was given a bunch of hardware and software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The 360/67 arrives within a year of my taking the intro class and I was hired fulltime responsible for OS/360 (ran as 360/65, tss/360 never came to production). Student fortran ran under a second on the 709, but over a minute on 360/65 OS/360. My 1st sysgen was R9.5 and then I install HASP, cutting student fortran time in half. I then start redoing stage2 SYSGEN, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting time another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install WATFOR.
CSC then comes out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly play with it during my dedicated weekend time, rewriting lots of CP67 to optimize OS/360 running in a virtual machine. Test OS/360 ran 322secs on the bare machine and initially 856secs in virtual machine (CP67 CPU 534secs). Within a few months I got it down to 435secs (CP67 CPU 113secs). I then redo dispatching/scheduling (dynamic adaptive resource management), page replacement, thrashing controls, ordered disk arm seek, multi-page rotational ordered transfers for 2314/disk and 2301/drum (from single 4kbyte fifo to 270/sec peak), bunch of other stuff ... for CMS interactive computing. Then to further cut CMS CP67 CPU overhead I do a special CCW. Bob Adair criticizes it because it violates 360 architecture ... and it has to be redone as a DIAGNOSE instruction (which is defined to be "model" dependent ... and so have the facade of a virtual machine model DIAGNOSE).
some more detail for 1986 SEAS (European SHARE) ... also presented in
2011 at WashDC "Hillgang" user group:
https://www.garlic.com/~lynn/hill0316g.pdf
After joining CSC, one of my hobbies was enhanced production operating systems for internal datacenters (and the internal online sales&marketing support HONE system was a long time customer). I figured if the 5th flr could do a paged mapped filesystem for MULTICS, I could do one for CP67/CMS with lots of new shared segment functions. Then in the morph of CP67->VM370, lots of stuff from CP67 was greatly simplified and/or dropped (including SMP multiprocessor support). Then in 1974, I start moving lots of stuff to VM370R2 for my internal CSC/VM, including the kernel re-org needed for multiprocessor support as well as the page mapped filesystem and shared segment enhancements (a small subset of the shared segment work was picked up for VM370R3 as DCSS). Then for the VM370R3-base CSC/VM, I do 370 SMP support, initially for HONE (US HONE datacenters had been consolidated in Palo Alto with eight systems, and they were then able to add a 2nd processor to each system).
Early last decade, I was asked to track down the 370 virtual memory
decision and found a staff member for the executive making the
decision. Basically MVT storage management was so bad that region
sizes had to be specified four times larger than used, limiting
typical 1mbyte 370/165 to four concurrent running regions,
insufficient to keep the 165 busy and justified. They found that
moving MVT to a 16mbyte virtual address space (VS2/SVS, similar to
running MVT in a CP67 16mbyte virtual machine) could increase the
number of concurrent regions by a factor of four (capped at 15
because of the 4bit storage protect key) with little or no paging.
There was a little bit of code for making virtual tables, but the
biggest issue was that channel programs were built in application
space with virtual addresses passed to EXCP/SVC0. Ludlow was doing
the initial implementation on 360/67 and EXCP/SVC0 needed to make a
copy of channel programs with real addresses (similar to CP67 for
virtual machines), so he borrows CCWTRANS from CP67. Archived post
with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
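The CCWTRANS job can be sketched as: walk the application's channel program, translate each CCW's virtual data address to a real address (the page must be in and pinned), and build a "shadow" copy for the real channel, since channels work only with real addresses. A minimal Python sketch under those assumptions (all names hypothetical, not the actual CP67 code):

```python
PAGE_SIZE = 4096

def translate_channel_program(ccws, page_table):
    """Build a shadow copy of a channel program, replacing each CCW's
    virtual data address with a real address -- the essence of what
    CP67's CCWTRANS did for virtual machines (and EXCP/SVC0 needed
    for SVS).  ccws: list of (opcode, vaddr, count);
    page_table: virtual page number -> real page number."""
    shadow = []
    for opcode, vaddr, count in ccws:
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in page_table:
            raise KeyError("page must be paged in and pinned before I/O")
        raddr = page_table[vpn] * PAGE_SIZE + offset
        # the real channel sees only real addresses
        shadow.append((opcode, raddr, count))
    return shadow
```

The real code also had to split a CCW whose data area crossed a page boundary into data-chained pieces, since contiguous virtual pages need not be contiguous in real storage; that detail is omitted here.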
Trivia: Boeing Huntsville had gotten a two processor 360/67 SMP (originally for TSS/360) but ran it as two systems with MVT. They were already dealing with the MVT storage management problem and had modified MVTR13 to run in virtual memory mode (w/o paging, but it partially compensated for the MVT storage management problems).
I was also having a dispute with the MVT performance group about the page replacement algorithm; eventually they concluded that it wouldn't make any difference because SVS would do hardly any page faults (5/sec or less). Nearly a decade later, POK gives somebody an award for finally fixing it in MVS.
When the decision was made to add virtual memory to all 370s, CSC started a joint project with Endicott to simulate 370 virtual memory machines, "CP67H" (added to my production "CP67L"). Then changes were made for CP67 itself to run in the 370 virtual memory architecture, "CP67I". CP67L ran on the Cambridge real 360/67; in part because profs, staff, and students from boston/cambridge area institutions were using the system (to avoid leaking 370 details), CP67H ran in a CP67L virtual 360/67, and CP67I ran in a CP67H virtual 370. CP67I was in regular production use a year before the 1st engineering 370 (w/virtual memory) was operational. Then some people from San Jose came out to add 3330 and 2305 device support to CP67I ... for CP67SJ, which was in wide-spread use internally (even after VM370 was operational). Trivia: the CP67L, CP67H, and CP67I effort was also when the initial CMS incremental source update management was created.
Trivia: The original 370/145 had microcode for running an older DOS version (base/extent memory-bounds relocation, as pseudo virtual memory, sort of how the initial LPAR/PR/SM was implemented) and an IBM SE used it to implement virtual machine support (before 370 virtual memory).
Trivia: The original shared segment implementation was all done with modules (executables) in the CMS filesystem (shared segment DCSS was a trivial subset of the full capability)
Trivia: My original implementation allowed the same executables (including shared segments) to appear at different virtual addresses in different virtual address spaces ... but CMS extensively used OS compilers/assemblers with RLDs that had to be updated with fixed addresses at load time. I had to go through all sorts of hoops (code modification) to simulate the TSS/360 convention that all embedded addresses were displacements combined with a directory unique to every address space (where I didn't have source to fix the RLDs, the executable was restricted to a single fixed address).
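The contrast between the two conventions can be sketched in a few lines (hypothetical names; a minimal illustration, not the actual loader code):

```python
def resolve(displacement, segment_base):
    """TSS/360-style convention: embedded addresses are kept as
    displacements and combined at reference time with a per-address-space
    base, so one shared copy can appear at different virtual addresses
    in different address spaces."""
    return segment_base + displacement

def fix_rld(code, rld_offsets, load_addr):
    """OS/360 RLD convention: the loader patches absolute addresses into
    the executable image at load time, baking in one fixed address --
    the obstacle for address-space-independent shared segments."""
    patched = list(code)
    for off in rld_offsets:
        patched[off] += load_addr   # address now embedded in the image
    return patched
```

With `resolve`, two address spaces using bases 0x100000 and 0x200000 share the identical segment image; with `fix_rld`, each distinct load address would require its own patched (non-shared) copy.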
Trivia: one of the first mainstream IBM documents done with CMS SCRIPT was the 370 principles of operation. A command line option would either produce the full 370 architecture redbook (for distribution in 3-ring red binders) with lots of notes about justification, alternatives, implementation, etc ... or the principles of operation subset. At some point the 165 engineers were complaining that if they had to do the full 370 virtual memory architecture, the announce would have to slip six months ... eventually the decision was made to drop back to the 165 subset (and all the other models that had implemented the full architecture had to drop back to the subset, and any software using the dropped features had to be redone).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging, thrashing, page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
OS/360 adcon issues in page mapped operation
https://www.garlic.com/~lynn/submain.html#adcon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual memory Date: 07 Oct, 2024 Blog: Facebookre:
VM Assist provided a virtual supervisor mode where microcode directly executed some privileged instructions (using virtual machine rules) ... rather than the VM370 kernel having to simulate every virtual machine supervisor instruction.
After Future System imploded, Endicott cons me into helping with
138/148 microcode ECPS ... where the highest executed 6kbytes of
kernel 370 instructions are moved directly into microcode running ten
times faster ... old archived post with analysis for selecting the
kernel pieces (representing 79.55% of kernel execution)
https://www.garlic.com/~lynn/94.html#21
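The selection in that analysis amounts to a greedy cut-off: profile kernel execution, sort code paths by time spent, and take the hottest ones until the 6kbyte microcode budget is used. A sketch of that idea, assuming hypothetical profile data in the form (name, size in bytes, percent of kernel time):

```python
def pick_for_microcode(paths, budget_bytes=6144):
    """Greedy selection of kernel code paths to move into microcode:
    take paths in decreasing order of execution time until the byte
    budget is exhausted.  Returns (chosen names, bytes used,
    percent of kernel execution covered)."""
    chosen, used, covered = [], 0, 0.0
    for name, size, pct in sorted(paths, key=lambda p: p[2], reverse=True):
        if used + size <= budget_bytes:   # skip paths that don't fit
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered
```

This reflects the shape of the ECPS result (a small fraction of kernel bytes covering ~80% of execution), not the actual measurement data in the linked post.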
I eventually got permission to give presentations on how ECPS was implemented at user group meetings, including the monthly BAYBUNCH hosted by Stanford SLAC. The Amdahl people would corner me after the meetings for more information. They said that they had done MACROCODE ... basically a 370 instruction subset running in microcode mode (originally done to quickly respond to the numerous, trivial 3033 microcode changes required to run MVS) ... and were using it to implement HYPERVISOR ("multiple domain facility", a VM370 subset) targeted at being able to run both MVS & MVS/XA concurrently (IBM wasn't able to respond with 3090 LPAR/PR/SM until nearly a decade later).
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some recent posts mentioning Amdahl macrocode/hypervisor
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024c.html#17 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP
https://www.garlic.com/~lynn/2024.html#121 IBM VM/370 and VM/XA
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#63 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#103 More IBM Downfall
https://www.garlic.com/~lynn/2023g.html#100 VM Mascot
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023f.html#104 MVS versus VM370, PROFS and HONE
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#51 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual memory Date: 07 Oct, 2024 Blog: Facebookre:
MFT was mapped into a single 4mbyte virtual address space (VS1, sort of like running MFT in a CP67 4mbyte virtual machine).
First, MVT was mapped into a single 16mbyte virtual address space (VS2/SVS, sort of like running MVT in a CP67 16mbyte virtual machine, fixing the 1mbyte 370/165 being capped at four concurrent regions, insufficient to keep the system busy and justified) ... but the 4bit storage protect keys still capped things at 15 concurrent regions ... which increasingly became a problem as systems got larger and more powerful.
Both MFT and MVT EXCP/SVC0 required the equivalent of CP67 CCWTRANS to make a copy of passed channel programs, substituting real addresses for virtual (the initial SVS implementation borrowed CP67 CCWTRANS).
For VS2/SVS to get around the cap of 15 concurrent regions, they eventually moved each region into a private 16mbyte virtual address space, in theory removing the limit on concurrent regions. However, OS/360 had a heavily pointer-passing API, and so for MVS they mapped an 8mbyte image of the MVS kernel into every application 16mbyte virtual address space (leaving 8mbytes for the application). They then had to map each OS/360 subsystem service into its own private 16mbyte virtual address space (also with the kernel image taking 8mbytes). Again because of the pointer-passing API, a one mbyte common segment area was created for API areas passed between subsystems and applications. However, passing-area requirements were proportional to the number of subsystems and number of concurrent applications, and CSA becomes the multi-mbyte common system area. By 3033, CSA had grown to 5-6mbytes (plus the 8mbyte kernel image) leaving only 2-3mbytes for applications (and threatening to become 8mbytes, leaving zero for applications). This threat was putting enormous pressure on being able to ship and deploy 31bit MVS/XA as soon as possible (and have customers migrate).
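The squeeze is simple arithmetic: in a 16mbyte address space with an 8mbyte kernel image mapped in, every mbyte of CSA growth comes straight out of the application's share. A quick check using the figures from the text:

```python
ADDR_SPACE_MB = 16   # 24bit addressing
KERNEL_MB = 8        # MVS kernel image mapped into every address space

def app_space(csa_mb):
    """mbytes left for the application after kernel image and CSA."""
    return ADDR_SPACE_MB - KERNEL_MB - csa_mb

# by 3033, CSA had grown to 5-6 mbytes -> only 2-3 mbytes for applications
assert app_space(5) == 3
assert app_space(6) == 2
# had CSA reached 8 mbytes, zero would have remained for applications
assert app_space(8) == 0
```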
There is a SHARE song about the IBM pressure on customers to move
from VS2/SVS to VS2/MVS.
http://www.mxg.com/thebuttonman/boney.asp
Something similar showed up with forcing customers from MVS to MVS/XA.
Note in the wake of the implosion of Future System (during FS, internal politics was killing off 370 efforts; the claim is the lack of new 370 products gave clone 370 makers like Amdahl their market foothold), there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. The head of POK also convinces corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott manages to save the VM370 product mission for the mid-range, but had to recreate a development group from scratch; POK executives were then out trying to browbeat internal datacenters into migrating off VM370 to MVS).
Then in the efforts to force customers to migrate to MVS/XA, Amdahl was having better success on their machines, because their (microcoded VM370 subset) HYPERVISOR (multiple domain facility) allowed MVS and MVS/XA to be run concurrently on the same machine. POK had done a minimal software virtual machine subset for MVS/XA development and test ... and eventually deploys it as VM/MA (migration aid) and VM/SF (system facility), trying to compete with the high performance Amdahl HYPERVISOR (note IBM doesn't respond to HYPERVISOR until almost a decade later with LPAR/PR/SM on 3090).
a few posts mentioning mvs, common segment/system area, mvs/xa
posts
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual memory Date: 08 Oct, 2024 Blog: Facebookre:
Note: CSC came out to install CP67 (3rd installation after CSC itself
and MIT Lincoln Labs); it was Lincoln Labs that had developed LLMPS
and contributed it to the SHARE program library (at one time I had a
physical copy of LLMPS).
https://apps.dtic.mil/sti/tr/pdf/AD0650190.pdf
LLMPS was somewhat like a more sophisticated version of IBM DEBE and
Michigan had started out scaffolding MTS off LLMPS.
https://web.archive.org/web/20200926144628/michigan-terminal-system.org/discussions/anecdotes-comments-observations/8-1someinformationaboutllmps
other trivia: as a sophomore I took a two credit hour intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ had a 709 (tape->tape) and 1401 MPIO (unit record front end for the 709, physically moving tapes between 709 & 1401 drives). The univ was getting a 360/67 for tss/360 and got a 360/30 replacing the 1401 temporarily until the 360/67 arrived. Univ. shutdown datacenter on weekends and I had the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a bunch of hardware&software manuals and got to design and implement monitor, device drivers, interrupt handlers, error recovery, storage management, etc, and within a few weeks had a 2000 card assembler program. The 360/67 arrived within a year of taking the intro class and I was hired fulltime responsible for os/360 (tss never came to production fruition). Later, CSC came out to install CP67 and I mostly played with it during my dedicated weekend time.
CP67 came with 1052 and 2741 terminal support, including dynamically
determining terminal type and automagically switching the port
scanner terminal type with the controller SAD CCW. Univ. had some TTY
33&35 (trivia: the TTY port scanner for the IBM controller had
arrived in a Heathkit box) and I added TTY/ASCII support integrated
with the dynamic terminal type support. Then I wanted to have a
single dial-in phone number ("hunt group") for all terminals ...
didn't quite work, IBM had hardwired the controller line speed for
each port. Univ. then kicks off a clone controller effort: build a
channel interface board for an Interdata/3 programmed to emulate the
IBM controller with the addition of automagic line speed. This is
upgraded to an Interdata/4 for the channel interface and a cluster of
Interdata/3s for the port interfaces. Interdata and then Perkin/Elmer
market it as an IBM clone controller (and four of us get written up
as responsible for some part of the IBM clone controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
UofM: MTS
https://en.wikipedia.org/wiki/Michigan_Terminal_System
mentions MTS using a PDP8 programmed to emulate a mainframe terminal
controller
https://www.eecis.udel.edu/~mills/gallery/gallery7.html
and Stanford also did operating system for 360/67: Orvyl/Wylbur (a
flavor of Wylbur was also made available on IBM batch operating
systems).
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
a couple archived posts mentioning Interdata, MTS, LLMPS
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021h.html#65 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2006k.html#41 PDP-1
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual memory Date: 08 Oct, 2024 Blog: Facebookre:
Safeway trivia: the US Safeway datacenter was up in Oakland, with a loosely-coupled complex of multiple (I think still VS2/SVS) 168s (late 70s). They were having enormous throughput problems and had brought in all the usual IBM experts before asking me to come in. I was brought into a classroom with tables covered with large stacks of performance data from the various systems. After about 30mins of poring over the data, I noticed the activity for a specific (shared) 3330 peaked at 6-7 I/Os per second (aggregate for all systems sharing the disk) during the same periods as the throughput problems. I asked what the disk was ... it had a large PDS library for all store controller applications ... with a three cylinder PDS directory. Turns out for the hundreds of stores, loading a store controller app first required a PDS directory member search: on average two multi-track searches, 1st a full-cylinder search and then a half-cylinder search, followed by loading the member. A full-cylinder search was 19 revolutions (at 60/sec) or .317secs ... during which time the channel, controller and disk were busy and locked up; then a 2nd, half-cylinder search or .158secs (again channel, controller and disk locked up), followed by the seek and load of the member. Basically a max throughput of two store controller application loads per second for all the Safeway stores in the US.
I said I had encountered this OS/360 PDS directory multi-track search problem numerous times, going back to undergraduate days in the 60s. So we partitioned the single store controller dataset into multiple datasets ... and then replicated the datasets on multiple (non-shared) dedicated disks for each system.
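The numbers above follow directly from rotation speed: the 3330 spun at 3600rpm (60 revolutions/sec) with 19 tracks per cylinder, so a full-cylinder multi-track search takes 19 revolutions. A quick check of the arithmetic:

```python
REV_PER_SEC = 60        # 3330: 3600 rpm
TRACKS_PER_CYL = 19

full_cyl_search = TRACKS_PER_CYL / REV_PER_SEC   # ~.317 secs
half_cyl_search = full_cyl_search / 2            # ~.158 secs
# channel, controller, and disk are all locked up for the whole search
per_load = full_cyl_search + half_cyl_search     # ~.475 secs per app load
loads_per_sec = 1 / per_load                     # ~2 loads/sec max
```

Which matches the observed 6-7 I/Os per second on the shared 3330 (roughly three I/Os per application load) and the max of about two store controller loads per second.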
posts mentioning FBA, CKD, multi-track search
https://www.garlic.com/~lynn/submain.html#dasd
posts mentioning getting to play disk engineer in bldg14 (disk engineering)
and bldg15 (disk product test)
https://www.garlic.com/~lynn/subtopic.html#disk
misc past posts mentioning safeway incident:
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Virtual Memory Global LRU Date: 08 Oct, 2024 Blog: FacebookAs undergraduate back in the 60s, I extensively redid lots of CP67 (precursor to VM370), including page fault handling, page replacement algorithms, page thrashing controls, etc. Part of it was a Global LRU page replacement algorithm ... at a time when a lot of academic papers were being published on "Local LRU", page thrashing, working sets, etc. After graduating and joining IBM Cambridge Science Center, I started integrating a lot of my undergraduate stuff into the Cambridge production system (768kbyte 360/67, 104 pageable pages after fixed memory requirements). About the same time, the IBM Grenoble Science Center started modifying CP67 for their 1mbyte 360/67 (155 pageable pages) with an implementation (Grenoble APR1973 CACM article) that matched the 60s academic papers. Both Cambridge and Grenoble did a lot of performance work (and Grenoble forwarded me most of their performance data).
In the late 70s and early 80s, I worked with Vera Watson and Jim Gray on the original SQL/relational, System/R (before Vera didn't come back from the Annapurna climb and Jim left IBM for Tandem). Also looked at LRU simulation for paging systems, file caches, DBMS caches, etc, and pretty much confirmed that Global LRU would beat various flavors of Local LRU (did a modification to VM370 with a high performance monitor that could capture all record level I/O, of both VM370 and virtual machines). Part of the simulation was looking at various file caches (using the record level traces): disk level, controller level, channel level, and system level. Also found a lot of commercial batch systems had collections of files that were only used together on a periodic basis (daily, weekly, monthly, yearly; aka archived after use and brought back together as a collection).
At the Dec81 ACM SIGOPS meeting, Jim asked me to help a Tandem co-worker get his Stanford PhD that heavily involved GLOBAL LRU (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford to not award a PhD for anything involving GLOBAL LRU). Jim knew I had detailed stats on the Cambridge/Grenoble CP67 global/local LRU comparison (showing global significantly outperformed local). Grenoble and Cambridge had similar CMS interactive workloads, but my Cambridge system with 80-85 users outperformed (with better interactive response) the Grenoble system with 35 users (even though the Grenoble CP67 had 50 percent more real storage for paging).
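The global/local distinction comes down to where the replacement victim may be taken from: on a page fault, a global policy considers every pageable frame in real storage, while a local policy is restricted to the faulting process's own frames. A minimal sketch, assuming per-frame last-use timestamps (hypothetical names; production implementations use clock-style approximations of LRU rather than exact timestamps):

```python
def pick_victim_global(frames):
    """Global LRU: evict the least recently used page anywhere in real
    storage.  frames: list of (owner, page_id, last_used)."""
    return min(frames, key=lambda f: f[2])

def pick_victim_local(frames, faulting_owner):
    """Local LRU: candidates are restricted to the faulting process's
    own frames, even when another process holds much colder pages."""
    own = [f for f in frames if f[0] == faulting_owner]
    return min(own, key=lambda f: f[2])
```

With frames [("A","a1",10), ("A","a2",50), ("B","b1",1)], a fault by "A" under global LRU evicts B's cold page "b1", while local LRU forces A to steal its own "a1" and leaves "b1" idle in storage.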
Turns out that IBM executives blocked my sending a reply for nearly a year ... I hoped they were doing it as punishment for my being blamed for online computer conferencing on the internal network (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me), and not because they were taking sides in an academic dispute.
some refs:
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/CP-67
presentation I did at 86SEAS (European SHARE) and repeated 2011 for
WashDC Hillgang user group meeting
https://www.garlic.com/~lynn/hill0316g.pdf
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
paging related posts
https://www.garlic.com/~lynn/subtopic.html#clock
CSC/VM &/or SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning Jim Gray and Dec81 ACM SIGOPS meeting
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2023g.html#105 VM Mascot
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2019b.html#5 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018f.html#63 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#62 LRU ... "global" vs "local"
https://www.garlic.com/~lynn/2017j.html#78 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#66 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016e.html#2 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#0 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014l.html#22 Do we really need 64-bit addresses or is 48-bit enough?
https://www.garlic.com/~lynn/2014i.html#98 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2013k.html#70 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013i.html#30 By Any Other Name
https://www.garlic.com/~lynn/2012m.html#18 interactive, dispatching, etc
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2012g.html#21 Closure in Disappearance of Computer Scientist
https://www.garlic.com/~lynn/2011d.html#82 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011c.html#8 The first personal computer (PC)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Virtual Memory Global LRU Date: 08 Oct, 2024 Blog: Facebookre:
The person responsible for the internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s; his technology also used for the corporate-sponsored
univ. BITNET) and I transfer out to IBM San Jose Research in 1977. The
internal network started out as the Science Center wide-area network;
article by one of the inventors of GML (1969):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
misc other; SJMerc article about Edson (he passed aug2020) and "IBM'S
MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives
free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
We manage to put in (1st IBM) SJR (internal relay) gateway to Udel
CSNET in Oct1982 (before the big cutover to internetworking protocol
1Jan1983). Some old email
Date: 30 Dec 1982 14:45:34 EST (Thursday)
From: Nancy Mimno <mimno@Bbn-Unix>
Subject: Notice of TCP/IP Transition on ARPANET
To: csnet-liaisons at Udel-Relay
Cc: mimno at Bbn-Unix
Via: Bbn-Unix; 30 Dec 82 16:07-EST
Via: Udel-Relay; 30 Dec 82 13:15-PDT
Via: Rand-Relay; 30 Dec 82 16:30-EST
ARPANET Transition 1 January 1983
Possible Service Disruption
Dear Liaison,
As many of you may be aware, the ARPANET has been going through the
major transition of shifting the host-host level protocol from NCP
(Network Control Protocol/Program) to TCP-IP (Transmission Control
Protocol - Internet Protocol). These two host-host level protocols are
completely different and are incompatible. This transition has been
planned and carried out over the past several years, proceeding from
initial test implementations through parallel operation over the last
year, and culminating in a cutover to TCP-IP only 1 January 1983. DCA
and DARPA have provided substantial support for TCP-IP development
throughout this period and are committed to the cutover date.
The CSNET team has been doing all it can to facilitate its part in
this transition. The change to TCP-IP is complete for all the CSNET
host facilities that use the ARPANET: the CSNET relays at Delaware and
Rand, the CSNET Service Host and Name Server at Wisconsin, the CSNET
CIC at BBN, and the X.25 development system at Purdue. Some of these
systems have been using TCP-IP for quite a while, and therefore we
expect few problems. (Please note that we say "few", not "NO
problems"!) Mail between Phonenet sites should not be affected by the
ARPANET transition. However, mail between Phonenet sites and ARPANET
sites (other than the CSNET facilities noted above) may be disrupted.
The transition requires a major change in each of the more than 250
hosts on the ARPANET; as might be expected, not all hosts will be
ready on 1 January 1983. For CSNET, this means that disruption of mail
communication will likely result between Phonenet users and some
ARPANET users. Mail to/from some ARPANET hosts may be delayed; some
host mail service may be unreliable; some hosts may be completely
unreachable. Furthermore, for some ARPANET hosts this disruption may
last a long time, until their TCP-IP implementations are up and
working smoothly. While we cannot control the actions of ARPANET
hosts, please let us know if we can assist with problems, particularly
by clearing up any confusion. As always, we are <cic@csnet-sh> or
(617)497-2777.
Please pass this information on to your users.
Respectfully yours,
Nancy Mimno
CSNET CIC Liaison
... snip ... top of post, old email index
Later BITNET and CSNET merge.
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/CSNET
Early 80s, I also got funding for HSDT project, T1 and faster computer
links (both satellite and terrestrial). Note IBM communication
products had 2701 controller in the 60s that supported T1, but
transition to SNA/VTAM in mid-70s apparently had issues and
controllers were then capped at 56kbit links ... so have lots of
internal disputes with the communication group. I'm supporting both
RSCS/VNET and TCP/IP "high-speed" links ... also working with the NSF
director and was supposed to get $20M to interconnect the NSF
Supercomputer centers. Then congress cuts the budget, some other
things happen and eventually an RFP is released (in part based on what
we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET post
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Internal network trivia: at the time of the great switch-over, ARPANET had approx 100 IMPs and 255 hosts ... while internal network was rapidly approaching 1000 hosts all around the world
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 801/RISC, PC/RT, AS/400 Date: 08 Oct, 2024 Blog: Facebook
The Future System project was going to completely replace 370 with something completely different (I continued to work on 360&370 all during FS, periodically ridiculing what they were doing). When FS finally implodes there was mad rush to get stuff back into the 370 product pipelines. One of the final nails in the FS coffin was study by the IBM Houston Science Center that if applications from 370/195 were redone on FS machine made out of fastest available technology, it would have throughput of 370/145 (factor of 30 times slowdown). There are references to some of the FS people retreating to Rochester to do a vastly simplified (entry-level) FS machine, released as S/38 (plenty of room between available hardware performance and the 30 times slowdown, while still meeting throughput requirements of the entry market). More details:
When FS imploded I got con'ed into helping with a 370 16-cpu tightly-coupled, shared-memory, multiprocessor (I had added 2-cpu multiprocessor support to VM370R3, initially for online sales&marketing HONE datacenter that had eight systems, so they could add a 2nd processor to each system, and was getting twice the throughput of the single cpu systems). We then con the 3033 processor engineers into working on it in their spare time (lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-cpu support (at the time, MVS documentation said its 2-cpu support was only getting 1.2-1.5 times the throughput of single cpu systems; note POK doesn't ship 16-cpu multiprocessor until after turn of the century). Then the head of POK invites some of us to never visit POK again, and the 3033 engineers, "heads down and no distractions".
There was an internal advance technology conference where we had
presented the 370 16-cpu project and the 801/RISC group presented RISC
(I would periodic say that John Cocke did RISC to go to the opposite
extreme of FS complexity).
https://www.ibm.com/history/john-cocke
I was then asked to help with the 4300 white paper arguing that
instead of replacing the 138/148 CISC microprocessors with 801/RISC
(Iliad chip, for 4300s), VLSI technology had gotten to the point where
a 370 CPU could be implemented directly in circuits.
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
In the early 1980s, IBM management became concerned that IBM's large
number of incompatible midrange computer systems was hurting the
company's competitiveness, particularly against Digital Equipment
Corporation's VAX.[10] In 1982, a project named Fort Knox commenced,
which was intended to consolidate the System/36, the System/38, the
IBM 8100, the Series/1 and the IBM 4300 series into a single product
line based around an IBM 801-based processor codenamed Iliad, while
retaining backwards compatibility with all the systems it was intended
to replace.
... snip ...
https://en.wikipedia.org/wiki/IBM_AS/400#Silverlake
During the Fort Knox project, a skunkworks project was started at IBM
Rochester by engineers who believed that Fort Knox's failure was
inevitable. These engineers developed code which allowed System/36
applications to run on top of the System/38
... snip ..
https://en.wikipedia.org/wiki/IBM_AS/400#AS/400
On June 21, 1988, IBM officially announced the Silverlake system as
the Application System/400 (AS/400). The announcement included more
than 1,000 software packages written for it by IBM and IBM Business
Partners.[18] The AS/400 operating system was named Operating
System/400 (OS/400).[12]
... snip ...
There was 801/RISC ROMP (research, OPD) chip that was going to be used for a Displaywriter follow-on. When that got canceled (market moving to PCs), the decision was made to pivot to the unix workstation market and they hired the company that did the AT&T Unix port to IBM/PC as PC/IX, to do a port for ROMP ... which becomes "PC/RT" and "AIX". The IBM Palo Alto group had been in the process of doing a UCB BSD UNIX port to 370 and got redirected to PC/RT ... which ships as "AOS" (alternative to AIX). Palo Alto was also working with UCLA Locus, which ships as AIX/370 and AIX/386. I have PC/RT with megapel display in non-IBM booth at 1988 Internet/Interop conference. Then the follow-on to ROMP was RIOS for RS/6000 (and AIXV3, where they merge in a lot of BSD'isms).
Then Austin starts on RIOS chipset for RS/6000 UNIX workstation and we
get HA/6000 project, originally for the NYTimes to move their
newspaper (ATEX) system off DEC VAXCluster to RS/6000. I rename it
HA/CMP when I start doing technical/scientific scale-up with national
labs. and commercial scale-up with RDBMS vendors (Oracle, Sybase,
Informix, and Ingres that have VAXcluster support in same source base
with Unix). Then the S/88 product administrator starts taking us
around to their customers and also has me write a section for the
corporate continuous availability strategy document (it gets pulled
when both Rochester/AS400 and POK/mainframe complain that they can't
meet the requirements). The executive we reported to goes over to
head the Somerset AIM (Apple, IBM, Motorola)
https://en.wikipedia.org/wiki/AIM_alliance
effort to do a single-chip power/pc (including adopting the Motorola
RISC 88K cache and cache consistency for supporting shared-memory,
tightly-coupled, multiprocessor operation)
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
In early Jan1992, AWD/Hester tells Oracle CEO that we would have 16-system clusters by mid92 and 128-system clusters by ye92. However by the end of Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).
benchmarks: number of program iterations compared to reference
platform (not actual instruction count)
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-system: 16,128MIPS
late 90s, i86 cpu vendors do a hardware layer that translates i86
instructions into RISC micro-ops for actual execution, largely
negating difference between i86 & power
1999: single IBM PowerPC 440 hits 1,000MIPS
1999: single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003: single Pentium4 processor 9.7BIPS (9,700MIPS)
2010: E5-2600 XEON server blade, two chip, 16 processor aggregate
500BIPS (31BIPS/processor)
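The cluster figures above are straight linear scaling of the single-system number, and the ES/9000 per-processor figure is the aggregate divided by eight; a quick check of the arithmetic:

```python
# Linear cluster scaling behind the 1993 figures
# ("MIPS" here = program iterations relative to reference platform).
per_system = 126                     # RS6000/990
cluster16 = 16 * per_system          # 16-way cluster
cluster128 = 128 * per_system        # 128-system cluster
es9000_982_aggregate = 408           # eight-processor ES/9000-982
per_processor = es9000_982_aggregate / 8
print(cluster16, cluster128, per_processor)   # -> 2016 16128 51.0
```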
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
801/risc, fort knox, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
Interop88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370/168 Date: 09 Oct, 2024 Blog: Facebook
After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters and one of my long-time customers (starting not long after 23jun1969 unbundling announcement) was the online US branch office HONE systems (evolving into online world-wide sales and marketing HONE systems). In mid-70s, the (370/168) US HONE datacenters were consolidated in Palo Alto and the eight VM370 systems enhanced to "single-system image", shared DASD operation with load-balancing and fall-over across the complex. I then add multiprocessor support to my VM370R3-based CSC/VM, initially for HONE so they can add a 2nd CPU to each system (each 2-cpu system getting twice the throughput of single-CPU system), for 8-systems/16-cpus in single-system image complex. After the bay-area earthquake, the US HONE datacenter was replicated in Dallas, and then another replicated in Boulder. Trivia: when FACEBOOK first moved to silicon valley, it was into a new bldg built next door to the former IBM US HONE consolidated datacenter.
During FS, internal politics was killing off 370 efforts (credited
with giving clone 370 makers their market foothold). When FS implodes,
there is mad rush to get stuff back into the 370 product pipelines,
including kicking off quick&dirty 3033&3081 in parallel.
http://www.jfsowa.com/computer/memo125.htm
I also get talked into helping with a 370 16-cpu tightly-coupled, shared-memory, multiprocessor system and we con the 3033 processor engineers into helping in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) had (effective) 16-CPU multiprocessor support (MVS documents at the time said 2-cpu shared-memory multiprocessor support only had 1.2-1.5 times the throughput of a single processor; POK doesn't ship 16-CPU multiprocessor until after the turn of the century). The head of POK then invites some of us to never visit POK again, and the 3033 processor engineers, heads down and no distractions. The head of POK had also convinced corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA. Then some of the POK executives were trying to brow-beat internal datacenters (like HONE) to move off VM370 to MVS (Endicott finally manages to acquire the VM370 mission, but had to recreate a development group from scratch).
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (& SJR/VM, for internal datacenters) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE Posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370/168 Date: 10 Oct, 2024 Blog: Facebook
re:
After FS implosion, about the same time as 16-CPU effort, Endicott
cons me into helping with 138/148 ECPS microcode. Basically they
wanted the 6kbytes of VM370 kernel pathlengths for translation
directly to native microcode, about on a byte-for-byte basis and
running ten times faster. Old archived post with the initial
analysis, aka the top 6kbytes of kernel execution accounted for 79.55% of
kernel execution:
https://www.garlic.com/~lynn/94.html#21
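The selection in that analysis amounts to a greedy cut: profile the kernel, sort routines by CPU fraction, and take the hottest ones until the 6kbyte microcode budget is spent. A minimal sketch of that selection (routine names, sizes, and fractions below are invented for illustration; they are not the numbers from the archived post):

```python
# Greedy selection of hottest kernel paths for microcode translation.
# Input: (routine, size_bytes, cpu_fraction) from kernel profiling.
# Hypothetical sample data -- not the actual 94.html#21 figures.
profile = [
    ("dispatch",     1200, 0.25),
    ("page_fault",   1500, 0.20),
    ("free_storage",  900, 0.15),
    ("vio_sim",      1400, 0.12),
    ("priv_sim",     1000, 0.08),
    ("spool",        2000, 0.05),
]

def pick_for_microcode(profile, budget_bytes):
    """Take routines in decreasing CPU-fraction order until budget is spent."""
    chosen, used, covered = [], 0, 0.0
    for name, size, frac in sorted(profile, key=lambda r: -r[2]):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
            covered += frac
    return chosen, used, covered

chosen, used, covered = pick_for_microcode(profile, 6000)
print(chosen, used, round(covered, 2))
```

With the sample data, the top five routines fill the 6kbyte budget and cover roughly 80% of kernel CPU, the same shape of result as the 79.55% in the archived analysis.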
Also Endicott tried to convince corporate to allow them to ship every machine with VM370 pre-installed (something like LPAR/PRSM), but with POK convincing corporate that VM370 product should be killed, they weren't successful.
... oh and they weren't planning on telling the vm370 group of the shutdown/move until the very last minute (to minimize the number that might escape into the boston area). However the details managed to leak early and several escaped (joke was that the head of POK was a major contributor to DEC VAX/VMS). There then was a witch hunt for the source of the leak; fortunately for me, nobody gave up the leaker.
360(&370) microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 801/RISC, PC/RT, AS/400 Date: 10 Oct, 2024 Blog: Facebook
re:
My wife was roped into co-authoring the IBM response to a gov. RFI for large, super-secure, campus-like, distributed & client/server operation ... where she included "3-tier networking". We were then out doing customer executive presentations on Internet, TCP/IP, high-speed routers, Ethernet, and 3-tier networking and taking lots of misinformation barbs in the back from the SNA, SAA, and token-ring forces.
After FS imploded, Endicott asks me to help with 138/148 ECPS, select
the highest executed 6kbytes of vm370 kernel code for conversion to
straight microcode, running 10 times faster. Old archive post with
analysis (6k bytes represented 79.55% of kernel CPU).
https://www.garlic.com/~lynn/94.html#21
Then I got asked to run around the world presenting the 138/148 ECPS business case to US regional and WT business planners. Later the Endicott person con'ing me into the ECPS stuff was executive in charge of SAA (had large top floor corner office in Somers) and we would periodically drop in to complain about how badly his people were acting.
3tier networking posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
360(&370) microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
PC/RT AOS "C-compiler" ... blame it on me. The Los Gatos lab was doing lots of work with Metaware's TWS and two people used it to implement a 370 Pascal (later released as VS/Pascal) for VLSI tools. One of them leaves IBM and the other was looking at doing a C-language front end for the 370 Pascal compiler. I leave for a summer giving customer & IBM lab talks and classes all around Europe. When I get back, he had also left IBM for Metaware. Palo Alto was starting work on UCB BSD Unix for 370 and needed a 370 C-compiler. I suggest they hire Metaware (& the former IBMer). When they get redirected to PC/RT, they just have Metaware do a ROMP backend.
trivia: some time before, the Los Gatos lab had started work on the 801/RISC "Blue Iliad" single chip (1st 32bit 801/RISC) ... really large and hot and never came to production fruition (1st production 32bit 801/RISC was the 6chip RIOS). Note: although I was in SJR (had office there and later in Almaden), Los Gatos had given me part of a wing ... which I was allowed to keep for a time even after leaving IBM.
801/risc, iliad, romp, rios, power, power/pc, fort knox, etc
https://www.garlic.com/~lynn/subtopic.html#801
Frequently reposted in Internet & other IBM groups ... Early 80s, I
got the HSDT project, T1 and faster computer links (both terrestrial
and satellite) and would have lots of conflicts with the communication
group. Note in the 60s IBM had 2701 telecommunication controller that
supported T1, however in the 70s with transition to SNA/VTAM, issues
appeared to cap controllers at 56kbit/sec. Was also working with NSF
director and was supposed to get $20M to interconnect NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed, folklore
is that 5of6 members of corporate executive committee wanted to fire
me). The NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to modern internet. Trivia: somebody
had been collecting executive email about how SNA/VTAM could support
NSFNET T1 ... and forwarded it to me ... heavily snipped and redacted
(to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
The RFP called for T1 network, but the PC/RT links were 440kbits/sec (not T1) and they put in T1 trunks with telco multiplexers (carrying multiple 440kbit links) to call it a T1 network. I periodically ridiculed it, asking why they didn't call it a T5 network, since it was possible that some of the T1 trunks were in turn carried over T5 trunks. Possibly to shutdown some of the ridicule, they ask me to be the REDTEAM for the NSFNET T3 upgrade (a couple dozen people from half dozen labs were the BLUETEAM). At the final executive review, I presented first and then the blue team. 5min into the blue team presentation, the executive pounded on the table and said he would lay down in front of a garbage truck before he allows anything but the blue team proposal to go forward. I get up and walk out.
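The ridicule rests on simple arithmetic: each node's 440kbit link is well under T1 speed, and one T1 trunk's payload only carries a few of them. A quick sketch (the links-per-trunk figure is my own back-of-envelope calculation, not from the RFP):

```python
# How many 440kbit/sec PC/RT links fit in one T1 trunk's payload?
# T1 payload = 24 x 64kbit/sec DS0 channels = 1536kbit/sec
# (line rate 1544kbit/sec including framing).
t1_payload_kbit = 24 * 64
link_kbit = 440
links_per_trunk = t1_payload_kbit // link_kbit
print(t1_payload_kbit, links_per_trunk)   # -> 1536 3
```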
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Some posts mentioning Los Gatos, Metaware, TWS, BSD, AOS
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2017f.html#94 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2015g.html#52 [Poll] Computing favorities
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Virtual Memory Global LRU Date: 08 Oct, 2024 Blog: Facebook
re:
Other history
The IBM 23jun1969 unbundling announcement started to charge for (application) software, SE services, maint, etc. SE training used to include a sort of journeyman period as part of large group onsite at customer. With unbundling, they couldn't figure out how not to charge for trainee SEs onsite at customers. Thus was born US HONE, branch office online access to US HONE cp67 datacenters, where SEs could practice with guest operating systems running in CP67 virtual machines. CSC also ported APL\360 to CP67/CMS as CMS\APL, redoing APL\360 storage management and changing from 16k-32k swapable workspaces to large virtual memory demand paged operation, as well as implementing APIs for system services (like file I/O), enabling lots of real-world applications. HONE then started also offering sales&marketing CMS\APL-based applications, which shortly came to dominate all HONE use (and SE practice with guest operating systems just dwindled away) ... and HONE clones started appearing world-wide.
One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (and HONE as long-time customer back
to CP67 days). With the decision to add virtual memory to all 370s
... because MVT storage management was bad. I was asked to tract down
decision early last decade ... pieces of email exchange with staff
member to executive making the decision ... in this archived post
https://www.garlic.com/~lynn/2011d.html#73
and to produce VM370 product (in the morph of CP67->VM370 lots of features were simplified or dropped, including SMP support). In 1974, I started migrating lots of stuff from CP67 to R2-based VM370 (including kernel reorg for multiprocessor operation, but not multiprocessor operation itself) for my CSC/VM ... and US HONE datacenters were consolidated in Silicon Valley across the back parking lot from the Palo Alto Science Center. PASC had done APL migration to VM370/CMS as APL\CMS, the APL\CMS microcode assist for 370/145 (APL\CMS throughput claims as good as 370/168) and 5100 prototype ... and provided HONE with APL improvements. US HONE consolidation included implementing single-system image, loosely-coupled, shared DASD operation with load balancing and fall-over across the complex. I then implemented SMP, tightly-coupled multiprocessor operation for R3-based VM370 CSC/VM, initially for US HONE so they could add a 2nd processor to each system in their single-system image operations (16-cpus aggregate).
Note some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
had gone to Project MAC (wanted to transition from CTSS swapping to
paging) on the 5th flr to do MULTICS (which included high-performance
single-level-store)
https://en.wikipedia.org/wiki/Multics
others had gone to the science center on the 4th flr, doing virtual
machines, internal network, lots of interactive and performance work
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
CSC had assumed that they would be the IBM center for virtual memory
... but IBM lost to GE for Project MAC. CSC wanted a 360/50 to
modify for virtual memory, but all the spare 50s were going to FAA
ATC, so they had to settle for 360/40. I got hardcopy from Les and
OCR'ed it
https://www.garlic.com/~lynn/cp40seas1982.txt
They modified the 360/40 with virtual memory, creating CP40/CMS (CP - Control Program, CMS - Cambridge Monitor System). Later when the 360/67, standard with virtual memory (supposedly strategic for TSS/360), becomes available, CP40/CMS morphs into CP67/CMS. Cambridge had been working with people at MIT Lincoln Labs, which gets the 2nd CP67/CMS. Lots of places had ordered 360/67 for TSS/360, but it never met the marketing promises (at the time TSS/360 was decommited, claims were there were 1200 people on TSS/360 compared to the CSC CP67/CMS group of 12 people, including secretary). Lots of places used the machine as 360/65 with OS/360. Univ of Michigan and Stanford wrote their own virtual memory operating systems for their 360/67s.
As undergraduate, the univ. had a 360/67 and hired me fulltime responsible for OS/360. Then CSC comes out and installs CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly play with it during my stand-alone weekend windows (univ. shut down the datacenter for weekends, so I had the place dedicated, although 48hrs w/o sleep made Monday classes hard). Over the next few months I rewrite large amounts of CP67 code, initially concentrating on pathlengths for running OS/360 in virtual machine. I'm then invited to the Spring SHARE Houston meeting for the "public" CP67/CMS announce. CSC then has a one-week CP67/CMS class at the Beverly Hills Hilton; I arrive Sunday night and am asked to teach the CP67 class. It turns out the IBM CSC employees that were to teach had given notice to join a commercial online CP67/CMS service bureau. Within a year, some Lincoln Labs people form a 2nd commercial online CP67/CMS service bureau (both specializing in services for the financial industry).
Then there is an IBM CP67/CMS conference hosted in silicon valley for
local CP67/CMS customers. Trivia: before MS/DOS
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on IBM cp67/cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I think the Renton datacenter is possibly largest in the world (couple hundred million in 360 stuff), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the room and install a 360/67 for me to play with when I'm not doing other stuff). When I graduate I join the science center (instead of staying with the Boeing CFO).
Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist
history given at 1986 SEAS (EU SHARE) and 2011 Wash DC "Hillgang" user
group meeting
https://www.garlic.com/~lynn/hill0316g.pdf
Unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Purchase NY Date: 13 Oct, 2024 Blog: Facebook
Before leaving IBM we had some number of meetings in the Purchase building that IBM had picked up from Nestle (impressive marble(?) edifice, supposedly built for top executives). In 1992, we leave IBM the year that it had one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" (take-off on AT&T "baby bells" from the AT&T breakup a decade earlier) in preparation for breaking up IBM.
After leaving IBM, we had done some work on internet electronic commerce and were having some meetings with MasterCard and Intuit executives at MasterCard offices in Manhattan ... when MasterCard moves the meetings to their new hdqtrs (former IBM/Nestle bldg) in Purchase. IBM apparently was very desperate to raise cash and MasterCard claimed they got the bldg for less than what they paid to change all the door handle hardware in the bldg.
bldg history
https://en.wikipedia.org/wiki/Mastercard_International_Global_Headquarters#History
IBM would use the facility to centralize activities in the Westchester
area from 1985 to 1992 when it began moving employees to other
facilities as part of cost containment efforts.[2][16] By 1994 the
facility was purchased by MasterCard to serve as its global
headquarters.[17][18] MasterCard moved into 2000 Purchase Street in
October 1995 and in December 2001 acquired the 100 Manhattanville Road
facility to serve as its North American Region headquarters.[19]
... snip ....
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
some internet e-commerce
https://www.garlic.com/~lynn/subnetwork.html#gateway
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM/PC Date: 14 Oct, 2024 Blog: Facebook
... opel & ms/dos
... other trivia: before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on IBM cp/67-cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
(virtual machine) CP67 (precursor to vm370)
https://en.wikipedia.org/wiki/CP-67
other (virtual machine) history
https://www.leeandmelindavarian.com/Melinda#VMHist
internally, Boca said that they weren't interested in software for "Acorn" (IBM code-name) and a group of a couple dozen IBMers in silicon valley formed to do software (apps and operating systems) and would every month or so double check with Boca that they still weren't interested in doing software. Then at one point Boca tells the group that it had changed its mind and if anybody wanted to be involved in software for Acorn, they had to move to Boca ... only one person tried, but returned to Silicon Valley.
AWD (Advanced Workstation Division) was an IBU (Independent Business Unit, supposedly free from IBM bureaucracy, rules, overhead) and did the PC/RT and their own (PC/AT bus) cards, including a 4mbit token-ring card. Then for RS/6000 and microchannel, a senior VP on the executive committee instructed AWD that they couldn't do their own cards, but had to use standard PS2 microchannel cards (some derogatory things were said about the executive). The communication group was fiercely fighting off client/server and distributed computing and had heavily performance-kneecapped the PS2 cards. An example was the PS2 16mbit token-ring card having lower throughput than the PC/RT 4mbit token-ring card. Alternatives were the $69 10mbit Ethernet cards (AMD "Lance", Intel "82586", other chips) which had much higher throughput than both the $800 PS2 16mbit token-ring and the PC/RT 4mbit token-ring.
Late 80s/early 90s, I started posting, in (IBM/PC) internal online forums, PC adverts from the Sunday SJMN showing prices way below Boca projections (there was a joke that Boca lost $5 on every PS2 sold, but they were planning on making it up in volume). Then Boca hires Dataquest (since bought by Gartner) to do a future-of-PC-market study ... which included a video-tape recording of a multi-hour PC discussion round table by silicon valley experts. I had known the Dataquest person running the study for several years and was asked to be a silicon valley "expert" (they promised to garble my intro so Boca wouldn't recognize me as an IBM employee; I also cleared it with my local IBM management).
U2, SR71
https://www.lockheedmartin.com/en-us/who-we-are/business-areas/aeronautics/skunkworks.html
U2 trivia: USAF's "bomber gap" called for a 1/3rd increase in DOD budget to close the gap. U2 showed Eisenhower that the gap was fabricated, which contributed to his military-industrial(-congressional) complex warning speech.
Similar to Watson's "wild duck" culture, 1972, CEO Learson tried (&
failed) to block bureaucrats, careerists and MBAs from destroying the
Watson culture/legacy (20yrs later, IBM has one of the largest
losses in history of US companies and was being reorged into the 13
"baby blues" in preparation for breaking up the company)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
communication group contributed to IBM downfall posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Some posts mentioning IBM/PC, MS/DOS, Opel, Gates, CP/M, Kildall
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#112 43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#25 CTSS/7094, Multics, Unix, CP/67
https://www.garlic.com/~lynn/2024.html#4 IBM/PC History
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2019e.html#136 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019d.html#71 Decline of IBM
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM/PC Date: 14 Oct, 2024 Blog: Facebook
re:
got a 2741 at home mar1970 until summer of 1977 when it was replaced with a 300 baud cdi miniterm, in 1979 replaced with a 1200 baud ibm 3101 (topaz) glass teletype (dial into IBM pvm 3270 emulator). ordered ibm/pc the day employee purchase was announced but employee purchases had very long delivery; by the time it showed up the street price had dropped below what i paid in employee purchase. IBM then had special 2400baud encrypting modem cards for the "official" travel/home terminal program (security audits highlighted hotel phone closet vulnerabilities) ... and a sophisticated pcterm/pvm 3270 emulator
posts mentioning 2400baud encrypting modems and/or pcterm 3270
emulation
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2022.html#49 Acoustic Coupler
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2017g.html#36 FCC proposes record fine for robocall scheme
https://www.garlic.com/~lynn/2016b.html#101 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014j.html#60 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014j.html#25 another question about TSO edit command
https://www.garlic.com/~lynn/2014i.html#11 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014h.html#71 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2013l.html#23 Teletypewriter Model 33
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011d.html#6 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009c.html#30 I need magic incantation for a power conditioner
https://www.garlic.com/~lynn/2008n.html#51 Baudot code direct to computers?
https://www.garlic.com/~lynn/2007t.html#74 What do YOU call the # sign?
https://www.garlic.com/~lynn/2007o.html#66 The use of "script" for program
https://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
https://www.garlic.com/~lynn/2003p.html#44 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2003n.html#7 3270 terminal keyboard??
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: PROFS & VMSG Date: 15 Oct, 2024 Blog: Facebook
IBM internal network evolved out of the science center CP67 wide-area network centered in Cambridge ... ref from one of the inventors of GML in 1969 at Cambridge:
The science-center/internal network was larger than the arpanet/internet from
the beginning until sometime mid/late 80s; the technology was also used for the
corporate-sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
Edson was responsible for the technology over the years (he passed aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
misc other; SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY
WITH THE INTERNET" (gone behind paywall but lives free at wayback
machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website ... blocked from converting internal network to
tcp/ip (late 80s converted to sna/vtam instead)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
There were some number of line-oriented and 3270 fullscreen CMS apps during the 70s, including email clients. One popular in the latter half of the 70s was "VMSG". Then the PROFS group was out picking up internal apps for wrapping its menu interface around ... and picked up the source for a very early VMSG for the email client. When the VMSG author tried to offer the PROFS group a more mature, enhanced VMSG, they tried to get him separated from the IBM company. The whole thing quieted down when the VMSG author demonstrated his initials in every PROFS email in a non-displayed field. After that the VMSG author only shared the source with me and one other person.
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
gml, sgml, html, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
recent posts mentioning PROFS & VMSG
https://www.garlic.com/~lynn/2024e.html#99 PROFS, SCRIPT, GML, Internal Network
https://www.garlic.com/~lynn/2024e.html#48 PROFS
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#69 3270s For Management
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#32 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022f.html#64 Trump received subpoena before FBI search of Mar-a-lago home
https://www.garlic.com/~lynn/2022b.html#29 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2021j.html#83 Happy 50th Birthday, EMAIL!
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#86 IBM EMAIL
https://www.garlic.com/~lynn/2021i.html#68 IBM ITPS
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021e.html#30 Departure Email
https://www.garlic.com/~lynn/2021d.html#48 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#65 IBM Computer Literacy
https://www.garlic.com/~lynn/2021b.html#37 HA/CMP Marketing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 5100 and Other History Date: 15 Oct, 2024 Blog: Facebook
IBM 23jun1969 unbundling announcement started to charge for (application) software, SE services, maint, etc. SE training used to include a sort of journeyman period as part of a large group onsite at customer. With unbundling, they couldn't figure out how not to charge for trainee SEs at customer facilities. Thus was born US HONE, branch office online access to US HONE cp67 datacenters, where SEs could practice with guest operating systems running in CP67 virtual machines. CSC also ported APL\360 to CP67/CMS as CMS\APL, redoing APL\360 storage management and changing from 16k-32k swappable workspaces to large virtual memory demand paged operation, as well as implementing APIs for system services (like file I/O), enabling lots of real-world applications. HONE then started also offering sales&marketing CMS\APL-based applications, which shortly came to dominate all HONE use (and SE practice with guest operating systems just dwindled away) ... and HONE clones started appearing world-wide.
One of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters (and HONE as long-time customer back
to CP67 days). With the decision to add virtual memory to all 370s
... because MVT storage management was bad. I was asked to track down
the decision early last decade ... pieces of email exchange with staff
member to executive making the decision ... in this archived post
https://www.garlic.com/~lynn/2011d.html#73
and to produce the VM370 product (in the morph of CP67->VM370 lots of features were simplified or dropped, including SMP support). In 1974, I started migrating lots of stuff from CP67 to R2-based VM370 (including kernel reorg for multiprocessor operation, but not multiprocessor operation itself) for my CSC/VM ... and US HONE datacenters were consolidated in Silicon Valley (across the back parking lot from the Palo Alto Science Center). US HONE consolidation included implementing single-system image, loosely-coupled, shared DASD operation with load balancing and fall-over across the complex (originally done at Uithoorn HONE). I then implemented SMP, tightly-coupled multiprocessor operation for R3-based VM370 CSC/VM, initially for US HONE so they could add a 2nd processor to each system in their single-system image operations (16-cpus aggregate).
PASC had done the APL migration to VM370/CMS as APL\CMS, the APL microcode assist for 370/145 (with claims of 370/145 APL\CMS throughput as good as 370/168), and the 5100 prototype ... and provided HONE with APL improvements.
IBM 5100 ... at the Palo Alto Science Center
https://en.wikipedia.org/wiki/IBM_5100
Note: Los Gatos was the ASDD lab, then GPD VLSI & VLSI tools; in the early 80s
it had a number of GE CALMA systems.
https://en.wikipedia.org/wiki/Calma
Los Gatos had also done the LSM (Los Gatos State Machine) ... ran VLSI circuit logic simulation 50,000 times faster than 3033 (also had timer support so it could simulate asynchronous clocks and digital/analog chips, like thin film disk heads). At the time I was in San Jose Research (bldg28 on main plant site), but Los Gatos let me have part of a wing and other space in the basement.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
IBM unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC/VM and/or SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts mentioning IBM 5100
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023.html#117 IBM 5100
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022c.html#86 APL & IBM 5100
https://www.garlic.com/~lynn/2022.html#103 Online Computer Conferencing
https://www.garlic.com/~lynn/2021c.html#90 Silicon Valley
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history
https://www.garlic.com/~lynn/2020.html#40 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019.html#84 IBM 5100
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#116 Watch IBM's TV ad touting its first portable PC, a 50-lb marvel
https://www.garlic.com/~lynn/2018b.html#96 IBM 5100
https://www.garlic.com/~lynn/2017c.html#7 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2014e.html#20 IBM 8150?
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2013n.html#82 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2013c.html#44 Lisp machines, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#36 Lisp machines, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012l.html#79 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2012.html#10 Can any one tell about what is APL language
https://www.garlic.com/~lynn/2011j.html#48 Opcode X'A0'
https://www.garlic.com/~lynn/2011i.html#55 Architecture / Instruction Set / Language co-design
https://www.garlic.com/~lynn/2011e.html#58 Collection of APL documents
https://www.garlic.com/~lynn/2011d.html#59 The first personal computer (PC)
https://www.garlic.com/~lynn/2010c.html#54 Processes' memory
https://www.garlic.com/~lynn/2010c.html#28 Processes' memory
https://www.garlic.com/~lynn/2007d.html#64 Is computer history taugh now?
https://www.garlic.com/~lynn/2005m.html#2 IBM 5100 luggable computer with APL
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM TCM Date: 15 Oct, 2024 Blog: Facebook
Future System in the early part of the 70s was completely different from 370 and was going to completely replace 370 (internal politics was killing off 370 projects; the claim is that the lack of new IBM 370s during FS is credited with giving clone 370 system makers their market foothold). When FS finally implodes there is a mad rush to get stuff into the 370 product pipeline, including quick&dirty 3033&3081 ... some more info (including enormous increase in circuits)
i.e. 3033 started out remapping 168 logic to 20% faster chips and a 158 engine with just the integrated channel microcode for external channels (3031 was two 158 engines, one with just 370 microcode and one with just channel microcode; 3032 was a 168-3 with a 158 engine running the integrated channel microcode). 3081 was warmed-over FS stuff with an enormous number of circuits; TCMs were required to package it in a reasonable amount of space. Two-CPU 3081D aggregate processing was slower than the latest (air-cooled) Amdahl single CPU; they doubled the 3081 processor cache size for the 3081K, bringing 3081K 2-CPU aggregate processing to about the same as Amdahl's single processor (modulo MVS multiprocessor overhead, with MVS documentation listing 2-CPU throughput as only 1.2-1.5 times single processor throughput).
Also, after the FS failure, I'm asked to help with a 370 16-CPU multiprocessor project and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). I had recently added 2-CPU multiprocessor support to VM370, initially for the US HONE consolidated datacenter (eight system single-system image, loosely-coupled, shared DASD with load balancing and fall-over) so they could add a 2nd CPU to each system (and 2-CPU configurations were getting twice the throughput of a single CPU) for 16 CPUs total. In any case, everybody thought it was great until somebody told the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU tightly-coupled shared-memory operation (with SMP overhead increasing with number of processors and 2-CPU throughput only 1.2-1.5 times a single CPU; POK doesn't ship a 16-CPU SMP until after the turn of the century). Then the head of POK invites some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions. Once 3033 is out the door, the 3033 processor engineers start on trout/3090.
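A back-of-envelope sketch (my own illustration, not any IBM measurement) of why MVS 16-CPU looked so bleak: if 2-CPU MVS only delivered 1.2-1.5 times a single CPU, and one assumes (purely for illustration) that each doubling of CPUs retains that same multiplier, the compounding is brutal compared to the VM370 clusters getting roughly 2x per pair:

```python
# Illustrative SMP scaling sketch (assumption: the documented MVS 2-CPU
# speedup factor of 1.2-1.5x repeats at every doubling of CPU count).

def scaled_throughput(two_cpu_factor: float, ncpus: int) -> float:
    """Throughput in single-CPU units, assuming the 2-CPU speedup
    factor applies at each doubling (ncpus must be a power of two)."""
    doublings = ncpus.bit_length() - 1   # 16 CPUs -> 4 doublings
    return two_cpu_factor ** doublings

for factor in (1.2, 1.5):
    t16 = scaled_throughput(factor, 16)  # factor**4
    print(f"2-CPU factor {factor}: 16-CPU MVS ~ {t16:.2f}x a single CPU")

# Compare: eight VM370 systems x 2 CPUs each at ~2x per pair gives
# roughly 16x aggregate across the loosely-coupled complex.
```

Under this (crude) model a 16-CPU MVS would deliver only about 2x-5x a single CPU, which makes the "decades before effective 16-way support" remark easy to believe.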
FE had a bootstrap diagnostic/fix process, starting with scoping. With 3081 circuits all in TCMs, it was no longer scopable. They go to a "service processor" with lots of probes into the TCMs that can be used for diagnostics ... and the 3081 service processor could be diagnosed/scoped (and fixed).
Note by 2010, large cloud operations could have dozens of megadatacenters around the world, each with half a million or more server blades (several million processors) ... at the time frequently E5-2600 blades that benchmarked at 500BIPS; IBM base list price was $1815 (or $3.63/BIPS) ... but large clouds were assembling their own server blades for 1/3rd the cost of brand name blades, aka $603 (or $1.21/BIPS). The server system costs were so radically reduced that things like power and cooling were increasingly becoming the major megadatacenter costs. By comparison, the 2010 max configured z196 went for $30M and benchmarked at 50BIPS, 1/10th an E5-2600 blade; $600,000/BIPS.
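The price/performance arithmetic above (figures as given in the text) works out as:

```python
# Price/performance from the 2010 comparison: BIPS = billions of
# instructions/sec; dollars-per-BIPS = system price / benchmark BIPS.

def dollars_per_bips(price: float, bips: float) -> float:
    return price / bips

ibm_blade   = dollars_per_bips(1815, 500)        # IBM base list E5-2600 blade
cloud_blade = dollars_per_bips(603, 500)         # self-assembled at ~1/3rd cost
z196        = dollars_per_bips(30_000_000, 50)   # max-configured z196

print(f"IBM blade:   ${ibm_blade:.2f}/BIPS")     # $3.63/BIPS
print(f"cloud blade: ${cloud_blade:.2f}/BIPS")   # $1.21/BIPS
print(f"z196:        ${z196:,.0f}/BIPS")         # $600,000/BIPS
```

i.e. the mainframe came in at roughly five orders of magnitude more per unit of compute than a self-assembled cloud blade.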
The cloud operators were putting enormous pressure on server chip
makers to reduce power&cooling requirements/costs. Water cooling could
help with both cooling costs and space (packing more blades into a
rack and more racks into megadatacenter physical space). Also a new
generation of more power efficient chips could easily justify complete
replacement of all systems (power cost savings easily justifying the
system replacement costs).
https://www.datacenterknowledge.com/energy-power-supply/data-center-power-fueling-the-digital-revolution
https://www.datacenterknowledge.com/cooling/data-center-power-cooling-in-2023-top-stories-so-far
https://www.datacenterfrontier.com/hyperscale/article/55021675/the-gigawatt-data-center-campus-is-coming
In that same time-frame, there was industry press that server chip makers were shipping half their product directly to cloud operators and IBM unloads its server product business.
trivia: In the late 70s and early 80s, I was blamed for online computer conferencing on the internal network; it really took off the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Only about 300 actively participated but claims were that upwards of 25,000 were reading (also folklore that when the corporate executive committee was told, 5 of 6 wanted to fire me). One of the active 300 was the manager of the 3090 service processor (3092) ... instead of building the whole thing from scratch, it started out as a 4331 with modified VM370R6 and all service screens done in CMS IOS3270 ... which evolves into a pair of (redundant) 4361s.
Semi-facetious: When I 1st transferred to SJR, I got to wander around lots of IBM and customer datacenters in silicon valley, including disk bldgs 14/engineering and 15/product-test across the street. They were doing 7x24, prescheduled, stand-alone mainframe testing and had mentioned they recently tried MVS, but it had 15min MTBF (requiring manual re-ipl). I offered to rewrite the input/output supervisor so it was bullet-proof and never fail, so they could do any amount of on-demand concurrent testing, greatly improving productivity (downside was they got into the habit of blaming me when things didn't work like they expected and I had to spend increasing amounts of time playing disk engineer). There was lots of complaining about the GPD executive that had mandated a slow, inexpensive processor ("JIB-prime") for the 3880; while the 3880 had a special 3mbyte/sec hardware path for 3380 transfers, everything else was much slower than 3830 (with associated increases in channel busy).
3090 had configured the number of channels for target throughput, assuming the 3880 was the same as a 3830 but with 3mbyte transfer. When they found out how bad the 3880 really was, they realized they had to significantly increase the number of 3090 channels to achieve target throughput. This required an extra TCM and they semi-facetiously claimed they would bill the 3880 group for the increase in 3090 manufacturing costs. Eventually marketing respins the big increase in the number of 3090 channels as the 3090 being a wonderful I/O machine.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
some TCM posts
https://www.garlic.com/~lynn/2024e.html#131 3081 (/3090) TCMs and Service Processor
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021i.html#66 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017c.html#94 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#88 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2016f.html#86 3033
https://www.garlic.com/~lynn/2014k.html#11 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2014.html#31 Hardware failures (was Re: Scary Sysprogs ...)
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011m.html#21 Supervisory Processors
https://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009b.html#77 Z11 - Water cooling?
https://www.garlic.com/~lynn/2009b.html#22 Evil weather
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Postel, RFC Editor, Internet Date: 16 Oct, 2024 Blog: Facebook
Postel used to let me help with the periodically updated/re-issued STD1, which regularly had RFC numbers reserved ... even after later numbers were in use.
I had been brought into a small client/server startup as consultant; two
former Oracle people we had worked with when doing IBM's HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
were there responsible for the "commerce server" and they wanted to do financial transactions on the server. The startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks. I then did a talk on "Why The Internet Isn't Business Critical Dataprocessing" covering all the documentation, procedures, and software I had to do (Postel sponsored the talk at ISI; it was standing room only, lots of graduate students over from USC).
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
payment gateway for electronic commerce posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
past recent posts mentioning Postel
https://www.garlic.com/~lynn/2024e.html#85 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024d.html#97 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#37 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#34 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2021b.html#30 IBM Recruiting
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Telecommunication Controllers Date: 16 Oct, 2024 Blog: Facebook
A "baby-bell" had done a VTAM/NCP emulator running on distributed IBM S/1s: no single point of failure, T1 and faster link support, all resources "owned" out in the distributed S/1s with cross-domain protocol to mainframe VTAMs, and a channel interface to mainframes that simulated a 3725/NCP. Branch office & Boca conned me into looking at turning it into a type-1 product, with a later follow-on port to rack-mount RS/6000s. I gave a presentation on the work at the fall 1986 SNA architecture review board in Raleigh. The communication group was infamous for internal political dirty tricks and branch office/Boca had done every countermeasure they could think of. What the communication group did next could only be described as "truth is stranger than fiction".
Archived post with part of the fall 1986 ARB presentation:
https://www.garlic.com/~lynn/99.html#67
Part of presentation that the "baby bell" gave at spring '86 IBM
"COMMON" user group
https://www.garlic.com/~lynn/99.html#70
The communication group would constantly complain that the ARB presentation numbers were invalid, but couldn't say why. The S/1 numbers were taken from the "baby bell" production operation; the 3725/VTAM numbers were taken from the communication group's own configurators on HONE (I suggested that if there was some "real" problem, they needed to update their HONE configurators).
Early 70s at the science center, there was an attempt by Edson to get CPD to use the much more capable Series/1 "peachtree" processor for the 3705 (rather than the UC). Edson was responsible for the CP67-based science center wide-area network, which morphs into the non-SNA internal network (larger than arpanet/internet from the beginning until sometime mid/late 80s) ... also used for the corporate-sponsored univ "BITNET".
Following from one of the inventors of GML (1969) at the science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Mentions Ed's work for "internet" and then trying to move the internal
network to TCP/IP (instead of the late 80s move to SNA/VTAM)
https://en.wikipedia.org/wiki/Edson_Hendricks
misc other; SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY
WITH THE INTERNET" (gone behind paywall but lives free at wayback
machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website ... blocked from converting internal network to
tcp/ip (late 80s converted to sna/vtam instead)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
I had also gotten the HSDT project early 80s (T1 and faster computer links, both terrestrial and satellite). Note in the 60s, IBM had the 2701 controller which supported T1; however the move to SNA/VTAM in the mid-70s and associated issues seemed to have capped links at 56kbits ... as a result I tended to have periodic conflicts/battles with the communication group.
disclaimer: as undergraduate, the Univ had hired me fulltime responsible
for OS/360. The Univ was getting a 360/67 for TSS/360 (to replace a
709/1401); temporarily they got a 360/30 to replace the 1401 (getting 360
experience) pending availability of the 360/67. TSS/360 never met the
marketing promises and most installations used the 360/67 as a 360/65
running OS/360. The univ. shut down the datacenter on weekends and I would
have the machine room dedicated for 48hrs (although it made monday
classes hard). Then the science center came out to install CP67 (precursor
to VM370, 3rd install after CSC itself and MIT Lincoln Labs). I mostly
got to play with it during my weekend datacenter time. CP67 arrived
with 1052 and 2741 support with automagic terminal type identification
(using telecommunication controller SAD CCW to switch terminal type
port scanner). The univ. had some TTY/ASCII terminals (trivia: when
the ASCII port scanner arrived to install in the controller, it came in
a Heathkit box), so I added ASCII terminal support (integrated with
terminal type identification). I then wanted a single dial-in
phone number ("hunt group") for all terminals, but IBM had taken a
short-cut and hard-wired the line speed for each port. This kicks off the
Univ clone-controller project: build a channel interface board for an
Interdata/3 programmed to emulate IBM's controller, with the addition
that it could do automagic line-speed detection. It was later upgraded to an
Interdata/4 for the channel interface and a cluster of Interdata/3s for
the ports. Interdata and later Perkin-Elmer sell it as an IBM clone
controller, and four of us get written up as responsible for (some part
of) the IBM clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
Around the turn of the century, I run into a descendant of the box, handling nearly all dial-up merchant credit-card terminal calls east of the Mississippi.
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Telecommunication Controllers Date: 16 Oct, 2024 Blog: Facebook
re:
mid-80s, the communication group was fiercely fighting off client/server and distributed computing. They were also trying to block mainframe tcp/ip release; when that got reversed, they changed their strategy and, since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec using nearly a whole 3090 CPU (the MVS port was further degraded, being done by simulating the VM370 DIAGNOSE interface on top of the VM370 implementation). I then do RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, it got sustained (4341) channel throughput using only a modest amount of 4341 CPU (something like 500 times improvement in bytes moved per instruction executed).
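The "bytes moved per instruction executed" comparison above can be sketched as back-of-envelope arithmetic. Only the 44kbytes/sec figure comes from the post; the MIPS ratings, channel speed, and CPU fractions below are illustrative assumptions, so the exact factor the sketch produces depends entirely on them:

```python
# Back-of-envelope comparison of TCP/IP efficiency as bytes moved per
# instruction executed. All numbers except 44kbytes/sec are assumptions.

def bytes_per_instruction(bytes_per_sec, cpu_fraction, mips):
    """Bytes moved per instruction spent on the TCP/IP path."""
    return bytes_per_sec / (cpu_fraction * mips * 1_000_000)

# Base product: 44kbytes/sec aggregate using nearly a whole 3090 CPU
# (assumed ~15 MIPS engine)
base = bytes_per_instruction(44_000, cpu_fraction=1.0, mips=15.0)

# RFC1044 support: sustained 4341 channel throughput (assumed ~1.5MB/sec)
# using a modest fraction (assumed 30%) of a ~1 MIPS 4341
rfc1044 = bytes_per_instruction(1_500_000, cpu_fraction=0.3, mips=1.0)

print(f"base:        {base:.4f} bytes/instruction")
print(f"rfc1044:     {rfc1044:.3f} bytes/instruction")
print(f"improvement: ~{rfc1044 / base:.0f}x")
```

With any plausible choice of assumed MIPS and CPU fractions the improvement lands in the hundreds-to-thousands range, consistent with the order of magnitude the post describes.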
Late 80s, a univ. study comparing UNIX TCP and mainframe VTAM had UNIX TCP with a 5000-instruction pathlength and five buffer copies, while VTAM LU6.2 had a 160,000-instruction pathlength and 15 buffer copies. Then in the 90s, Raleigh hires a silicon valley contractor to implement TCP/IP directly in VTAM. What he demo'ed had TCP much faster than LU6.2. He was then told that everybody knows LU6.2 is much faster than a "proper" TCP/IP implementation and they would only be paying for a "proper" implementation.
HSDT in the early 80s was also working with the NSF director and was
supposed to get $20M to interconnect the NSF supercomputers. Then
congress cuts the budget, some other things happen and finally an RFP
is released (in part based on what we already had running). From the
28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed; folklore
is that 5 of 6 members of the corporate executive committee wanted to
fire me). The NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid), as regional networks connect in, it becomes
the NSFNET backbone, precursor to modern internet. Trivia: somebody
had been collecting executive (mis-information) email about how
SNA/VTAM could support NSFNET T1 ... and forwarded it to me
... heavily snipped and redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3081 & TCM Date: 17 Oct, 2024 Blog: Facebook
re:
The 3081 was going to be multiprocessor-only, with an enormous number
of circuits ... using some warmed-over Future System technology, i.e.
FS was to completely replace 370 and during FS, 370 efforts were being
killed off; when FS finally implodes, there is a mad rush to get stuff
back into the 370 product pipelines, including the quick&dirty
3033&3081 in parallel
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
The enormous number of circuits motivated the (water-cooled) TCMs, to package the machine in a reasonable physical volume.
Initially the 2-CPU 3081D aggregate MIPS was less than the Amdahl single processor; then the processor cache size was doubled for the 3081K and aggregate MIPS was about the same as the Amdahl 1-CPU (although IBM documents claimed MVS 2-CPU multiprocessor throughput was only 1.2-1.5 times the throughput of a single processor). Then there was concern that because ACP/TPF systems didn't have multiprocessor support, that whole market would move to Amdahl ... and IBM came out with the 3083, removing one of the 3081 CPUs (leaving just CPU0; initially just removing CPU1 raised worry that the box would be top-heavy and prone to tip over, so it had to be rewired to get CPU0 in the middle of the box). Then came the 3084, interconnecting two 3081s for a 4-CPU multiprocessor.
Amdahl trivia: After joining IBM, I continued to go to SHARE and drop in on IBM customers. The director of one of the largest financial industry IBM datacenters liked me to stop in and talk technology. At some point, the branch manager horribly offended the customer and they were ordering an Amdahl system (a single Amdahl in a vast sea of blue). Up until then, Amdahl had been selling into the scientific/technical/univ. market and this would be the 1st for "true blue", the commercial market. I was asked to go onsite for 6-12 months (to help obfuscate why the customer was ordering an Amdahl machine). I talked it over with the customer and then turned down the IBM offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and that if I didn't do this, I could forget career, raises, promotions.
note: Amdahl had won the battle to make ACS 360-compatible, but after
ACS/360 was killed (supposedly executives were concerned it would
advance the state of the art too fast and IBM would lose control of the
market), Amdahl leaves IBM (shortly before FS started)
https://people.computing.clemson.edu/~mark/acs_end.html
https://people.computing.clemson.edu/~mark/acs.html
https://people.computing.clemson.edu/~mark/acs_legacy.html
trivia: during FS, the claim is that the lack of new 370s gave the clone 370 makers their market foothold; there was also an enormous uptick in IBM 370 marketing FUD. I continued to work on 360&370 all during FS (including periodically ridiculing what they were doing). Then when FS implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. About the same time I got asked to help with a 16-cpu 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping the 370/168 circuit design to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-cpu support (aka significant multiprocessor overhead, increasing with the number of CPUs; POK doesn't ship a 16-CPU system until after the turn of the century) and some of us were invited to never visit POK again (and the 3033 processor engineers, heads down and no distractions). Once the 3033 was out the door, the 3033 processor engineers start on trout/3090.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning Future System, 3033, 3081, 3083, TCM
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 18 Oct 2024 07:48:47 -1000
rbowman <bowman@montana.com> writes:
note that John Foster Dulles played a major role in rebuilding Germany's
economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.
loc905-7:
Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.
loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism
... snip ...
From the law of unintended consequences: when the US 1943 Strategic Bombing program needed targets in Germany, it got plans and coordinates from Wall Street.
June 1940, Germany held a victory celebration at the NYC Waldorf-Astoria
with major industrialists; lots of them were there to hear how to do
business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/
loc1925-29:
One prominent figure at the German victory celebration was Torkild
Rieber, of Texaco, whose tankers eluded the British blockade. The
company had already been warned, at Roosevelt's instigation, about
violations of the Neutrality Law. But Rieber had set up an elaborate
scheme for shipping oil and petroleum products through neutral ports
in South America.
... snip ...
Intrepid also points the finger at Ambassador Kennedy ... they start bugging the US embassy because classified information was leaking to the Germans. They eventually identified a clerk as responsible but couldn't prove ties to Kennedy. However, Kennedy was claiming credit for Chamberlain capitulating to Hitler on many issues ... also making speeches in Britain and the US that Britain could never win a war with Germany and that if he were president, he would be on the best of terms with Hitler.
loc2645-52:
The Kennedys dined with the Roosevelts that evening. Two days later,
Joseph P. Kennedy spoke on nationwide radio. A startled public learned
he now believed "Franklin D. Roosevelt should be re-elected
President." He told a press conference: "I never made anti-British
statements or said, on or off the record, that I do not expect Britain
to win the war."
British historian Nicholas Bethell wrote: "How Roosevelt contrived the
transformation is a mystery." And so it remained until the BSC Papers
disclosed that the President had been supplied with enough evidence of
Kennedy's disloyalty that the Ambassador, when shown it, saw
discretion to be the better part of valor. "If Kennedy had been
recalled sooner," said Stephenson later, "he would have campaigned
against FDR with a fair chance of winning. We delayed him in London as
best we could until he could do the least harm back in the States."
... snip ...
The congressmen responsible for the US Neutrality Act claimed it was in reaction to the enormous (US) war profiteering they saw during WW1. The capitalists intent on doing business with Nazi Germany respun that as "isolationism" in major publicity campaigns.
... getting into the war
The US wasn't in the war and Stalin was effectively fighting the
Germans alone, worried that Japan would attack from the east
... opening up a second front. Stalin wanted the US to come in against
Japan (making sure Japan had limited resources to open up a 2nd front
against the Soviet Union). US assistant SECTREAS Harry Dexter White
was operating on behalf of the Soviet Union, and Stalin sent White a
draft of demands for the US to present to Japan that would provoke
Japan into attacking the US and drawing the US into the war.
https://en.wikipedia.org/wiki/Harry_Dexter_White#Venona_project
The demands were included in the Hull Note, which Japan received just
prior to the decision to attack Pearl Harbor. Hull note:
https://en.wikipedia.org/wiki/Hull_note#Interpretations
More Venona
https://en.wikipedia.org/wiki/Venona_project
Benn Steil in "The Battle of Bretton Woods" spends pages 55-58
discussing "Operation Snow".
https://www.amazon.com/Battle-Bretton-Woods-Relations-University-ebook/dp/B00B5ZQ72Y/
pg56/loc1065-66:
The Soviets had, according to Karpov, used White to provoke Japan to
attack the United States. The scheme even had a name: "Operation
Snow," snow referring to White.
... snip ...
... after the war
Later, in a somewhat replay of the 1940 celebration, there was a
conference of 5000 industrialists and corporations from across the US
at the Waldorf-Astoria; in part because they had gotten such a bad
reputation for the depression and supporting the Nazis, as part of
attempting to refurbish their horribly corrupt and venal image, they
approved a major propaganda campaign to equate Capitalism with
Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/
part of the result by the early 50s was adding "under god" to the
pledge of allegiance. slightly cleaned up version
https://en.wikipedia.org/wiki/Pledge_of_Allegiance
The Coming of American Fascism, 1920-1940
https://www.historynewsnetwork.org/article/the-coming-of-american-fascism-19201940
an old book that covers the early era covered in "Economists and the
Powerful": "Triumphant plutocracy; the story of American public life
from 1870 to 1920" (at the moment this part of the wayback machine is
still offline)
https://archive.org/details/triumphantpluto00pettrich
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Unbundling, Software Source and Priced Date: 18 Oct, 2024 Blog: Facebook
With the 23jun1969 unbundling announcement, IBM starts charging for (application) software (but was able to make the case that kernel software should still be free, still shipping source), software/system engineers, maint., etc. Then the Future System effort in the 1st part of the 70s was completely different from 370 and was going to completely replace 370; internal politics during FS was killing off 370 efforts ... and the lack of new 370 products during the period is credited with giving the clone 370 system makers their market foothold. With the implosion of FS, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick and dirty 3033&3081 efforts. Also, with FS having given the clone 370 makers their foothold, there was a decision to start charging for kernel software (starting initially w/kernel add-ons, but eventually culminating in the 80s with charging for all kernel software).
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. I continued to work on 360&370 stuff all during FS, including periodically ridiculing what they were doing.
Then came the decision to add virtual memory to all 370s and the decision to morph CP67 into VM370 (part of the morph was simplifying and/or dropping lots of features). One of the things that I had done for CP67 was automated benchmarking (tens of scripts that varied configuration, features, algorithms, and workload, with automated rebooting between benchmarks), and in 1974 I started moving loads of stuff to VM370R2, the first thing being automated benchmarking so I could have a baseline. However, VM370 wasn't able to complete the benchmark series w/o crashing and I then had to migrate CP67 kernel serialization and integrity features ... in order to get baseline benchmarks. I then migrated a bunch of pathlength operations, dynamic adaptive resource management (scheduling & dispatching), paging&thrashing control algorithms, I/O optimization, and the kernel reorg needed for multiprocessor support (but not the actual support). Then in 1975 for the VM370R3 base, I add a bunch more performance stuff along with multiprocessor support (initially for the consolidated US HONE datacenter, so they could add a 2nd CPU to each system in their eight-system, single-system-image, shared-DASD, loosely-coupled operation, for an aggregate 16 CPUs).
About that time, a bunch of my dynamic adaptive resource management stuff was selected as the initial guinea pig for kernel add-on software charging and I got to spend a lot of time with lawyers and planners talking about charging practices. Along the way I managed to slip in various things along with the multiprocessor kernel reorg (but not the actual multiprocessor support) for the VM370R3 add-on product. Then comes VM370R4 and they want to release multiprocessor support. However, part of the policy was that hardware support is still free AND free software can't require paid software (and the multiprocessor support depended on part of my priced performance add-on). The eventual solution was removing nearly 90% of the code from my priced add-on and moving it into the free VM370R4 base, along with the free multiprocessor hardware support.
In the 1st part of the 80s, the transition to all kernel software being priced was done, and then comes the "object code only" announcement (no more source) and the OCO-wars with customers.
trivia: A co-worker at the science center had done an APL-based analytical system model ... which was made available on the world-wide, branch office, online sales&marketing support HONE systems as the Performance Predictor (configuration and workload information is entered and branch people can ask what-if questions about the effect of changing configuration and/or workloads). The consolidated US HONE single-system-image operation used a modified version of the Performance Predictor to make load-balancing decisions. I also used it for 2000 automated benchmarks (taking 3 months elapsed time) in preparation for my initial kernel add-on release to customers. The first 1000 benchmarks had manually specified configuration and workload profiles uniformly distributed across known observations of real live systems (with 100 extreme combinations outside real systems). Before each benchmark, the modified Performance Predictor predicted the performance and then compared the results with its prediction (saving all values). The 2nd 1000 automated benchmarks had configuration and workload profiles generated by the modified Performance Predictor (searching for possible problem combinations).
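The predict-then-measure loop above can be sketched in miniature. This is a generic illustration, not the APL Performance Predictor itself: the `predict`/`run_benchmark` names and the toy thrashing formula are invented for the example.

```python
import random

def predict(profile):
    """Toy analytical model: relative throughput degrades once the
    aggregate working set exceeds real memory (a crude thrashing model)."""
    overcommit = max(0.0, profile["users"] * profile["ws_mb"] - profile["mem_mb"])
    return 1.0 / (1.0 + overcommit / profile["mem_mb"])

def run_benchmark(profile):
    """Stand-in for rebooting the system and running the scripted workload;
    returns the model value plus +/-5% measurement noise."""
    return predict(profile) * random.uniform(0.95, 1.05)

def sweep(profiles):
    """For each profile: predict first, then run, then record the error."""
    results = []
    for p in profiles:
        predicted = predict(p)           # prediction saved before the run
        measured = run_benchmark(p)      # then the benchmark itself
        results.append((p, predicted, measured, measured - predicted))
    return results

# uniformly-spread configuration/workload profiles, like the first 1000 runs
profiles = [{"users": u, "ws_mb": w, "mem_mb": 16}
            for u in (4, 16, 64) for w in (0.25, 1.0, 4.0)]

for p, pred, meas, err in sweep(profiles):
    print(f"users={p['users']:3d} ws={p['ws_mb']:5.2f}MB "
          f"predicted={pred:.3f} measured={meas:.3f} error={err:+.3f}")
```

Keeping every (profile, prediction, measurement) triple, as the post describes, is what lets the model be checked and refined against real runs.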
trivia2: as undergraduate I was hired fulltime by the univ., which shut down the datacenter on weekends, and I would have the place dedicated (48hrs w/o sleep made monday classes hard). The univ had replaced a 709/1401 with a 360/67 (for tss/360) but it ran with os/360 as a 360/65 and my first SYSGEN was R9.5. Student Fortran ran under a second on the 709 (tape->tape), but over a minute on OS/360. I install HASP and it is cut in half. Then w/R11, I start heavily customized SYSGEN STAGE2, carefully placing datasets and PDS members to optimize (disk) arm seek and multi-track search, cutting student fortran time by another 2/3rds to 12.9secs. It never got better than the 709 until I install UofWaterloo WATFOR.
Prior to graduation, CSC comes out to install CP67 (precursor to VM370, 3rd installation after CSC itself and MIT Lincoln Labs) and I mostly play with it during my dedicated weekend time. Initially I rewrite lots of CP67 to optimize OS/360 running in a virtual machine. The OS/360 stand-alone job stream ran in 322 secs, initially 856 secs virtually (534 secs CP67 CPU); after a few months I have CP67 CPU down to 113 secs. I then redo dispatching/scheduling (dynamic adaptive resource management), page replacement, thrashing controls, ordered disk arm seek, 2301 drum multi-page rotationally-ordered transfers (from 70-80 4k transfers/sec to 270/sec peak), and a bunch of other stuff ... for CMS interactive computing. Then, to further cut CMS CP67 CPU overhead, I do a special CCW. Bob Adair criticizes it because it violates the 360 architecture ... and it has to be redone as a DIAGNOSE instruction (which is defined to be "model" dependent ... and so we have the facade of a virtual-machine-model DIAGNOSE).
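The "ordered disk arm seek" above is, in modern terms, an elevator-style scheduler: service queued requests in cylinder order rather than arrival order. A minimal generic sketch (not the actual CP67 code) comparing total arm travel against FIFO:

```python
def fifo_movement(start, requests):
    """Total cylinders traveled servicing requests in arrival order."""
    pos, total = start, 0
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def ordered_seek_movement(start, requests):
    """One elevator sweep: service requests at-or-above the current arm
    position in ascending order, then the remainder in descending order."""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return fifo_movement(start, up + down)

queue = [183, 37, 122, 14, 124, 65, 67]   # pending cylinder requests
print("FIFO   :", fifo_movement(53, queue))           # prints 640
print("ordered:", ordered_seek_movement(53, queue))   # prints 299
```

Ordering the queue more than halves the arm travel in this example; the same idea applied to rotational position is what lifted the 2301 drum from 70-80 to ~270 page transfers/sec.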
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM (and/or SJR/VM) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging & thrashing optimization and control posts
https://www.garlic.com/~lynn/subtopic.html#clock
benchmark posts
https://www.garlic.com/~lynn/submain.html#benchmark
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 18 Oct 2024 17:49:13 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
re:
https://www.garlic.com/~lynn/2024f.html#51 The joy of Democracy
Plot for a coup to overthrow Roosevelt and install Smedley Butler as dictator; Butler blows the whistle
Business Plot
https://en.wikipedia.org/wiki/Business_Plot
https://en.wikipedia.org/wiki/Business_Plot#Committee_reports
https://en.wikipedia.org/wiki/Business_Plot#Prescott_Bush
Smedley Butler
https://en.wikipedia.org/wiki/Smedley_Butler
authored: War Is a Racket
https://en.wikipedia.org/wiki/War_Is_a_Racket
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM "THINK" Date: 20 Oct, 2024 Blog: Facebook
... also outside the box, "We are convinced that any business needs its wild ducks. And in IBM we try not to tame them." T.J. Watson, Jr.
1972, CEO Learson trying (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watsons' culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
I was introduced to John Boyd in the early 80s and used to sponsor his
briefings at IBM.
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://www.colonelboyd.com/understanding-war
https://en.wikipedia.org/wiki/OODA_loop
1989/1990, the Commandant of the Marine Corps leverages Boyd for a
make-over of the Corps (at the time IBM was desperately in need of a
make-over). 1992 (20yrs after Learson was trying to save IBM), IBM has
one of the largest losses in the history of US companies and was being
re-organized into the 13 "baby blues" (a take-off on the break-up of
AT&T into "baby bells" a decade earlier) in preparation for breaking up
the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but get a call from the bowels of Armonk
asking if we could help with the break-up. Before we get started, the
board brings in the former president of AMEX as CEO, who (somewhat)
reverses the breakup; ... and uses some of the techniques used at RJR
(ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
The 2016 IBM Centennial's 100 videos had one on "wild ducks" ... but it was about customer "wild ducks" ... IBM "wild ducks" had long since been expunged.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
... periodically being told I had no career, no promotions, no raises. After Boyd passes in 1997, the (then retired) former Commandant of the Marine Corps (passed last spring) would sponsor Boyd conferences at Marine Corps Univ. in Quantico
Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The joy of Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Sun, 20 Oct 2024 14:19:30 -1000
Lynn Wheeler <lynn@garlic.com> writes:
... then "Was Harvard responsible for the rise of Putin" ... after
the fall of the Soviet Union, those sent over to teach capitalism were
more intent on looting the country (and the Russians needed a Russian to
oppose US looting). John Helmer: Convicted Fraudster Jonathan Hay,
Harvard's Man Who Wrecked Russia, Resurfaces in Ukraine
http://www.nakedcapitalism.com/2015/02/convicted-fraudster-jonathan-hay-harvards-man-who-wrecked-russia-resurfaces-in-ukraine.html
If you are unfamiliar with this fiasco, which was also the true
proximate cause of Larry Summers' ouster from Harvard, you must read
an extraordinary expose, How Harvard Lost Russia, from Institutional
Investor. I am told copies of this article were stuffed in every
Harvard faculty member's inbox the day Summers got a vote of no
confidence and resigned shortly thereafter.
... snip ...
How Harvard lost Russia; The best and brightest of America's premier
university came to Moscow in the 1990s to teach Russians how to be
capitalists. This is the inside story of how their efforts led to
scandal and disgrace (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20130211131020/http://www.institutionalinvestor.com/Article/1020662/How-Harvard-lost-Russia.html
Mostly, they hurt Russia and its hopes of establishing a lasting
framework for a stable Western-style capitalism, as Summers himself
acknowledged when he testified under oath in the U.S. lawsuit in
Cambridge in 2002. "The project was of enormous value," said Summers,
who by then had been installed as the president of Harvard. "Its
cessation was damaging to Russian economic reform and to the
U.S.-Russian relationship."
... snip ...
There was a proposal to establish 5000 local bank operations around Russia (as part of promoting capitalism), each running approx $1M; it needed a sequence of financial dealings starting with $5B of Russian natural resources ... until financing was available for the bank institutions. All the efforts collapsed with the US-style kleptocracy capitalism (which has a long history predating the banana republics).
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
posts mentioning "How Harvard Lost Russia"
https://www.garlic.com/~lynn/2023e.html#35 Russian Democracy
https://www.garlic.com/~lynn/2023.html#17 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022g.html#50 US Debt Vultures Prey on Countries in Economic Distress
https://www.garlic.com/~lynn/2022f.html#76 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022c.html#37 The Lost Opportunity to Set Post-Soviet Russia on a Stable Course
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
https://www.garlic.com/~lynn/2021e.html#95 Larry Summers, the Man Who Won't Shut Up, No Matter How Wrong He's Been
https://www.garlic.com/~lynn/2021d.html#76 The "Innocence" of Early Capitalism is Another Fantastical Myth
https://www.garlic.com/~lynn/2019e.html#132 Ukraine's Post-Independence Struggles, 1991 - 2019
https://www.garlic.com/~lynn/2019e.html#92 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#69 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#52 The global economy is broken, it must work for people, not vice versa
https://www.garlic.com/~lynn/2019c.html#15 Don't forget how the Soviet Union saved the world from Hitler
https://www.garlic.com/~lynn/2019b.html#40 Has Privatization Benefitted the Public?
https://www.garlic.com/~lynn/2019.html#85 LUsers
https://www.garlic.com/~lynn/2018f.html#45 Why Finance Is Too Important to Leave to Larry Summers
https://www.garlic.com/~lynn/2018d.html#100 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018d.html#75 Nassim Nicholas Taleb
https://www.garlic.com/~lynn/2018c.html#50 Anatomy of Failure: Why America Loses Every War It Starts
https://www.garlic.com/~lynn/2018b.html#60 Revealed - the capitalist network that runs the world
https://www.garlic.com/~lynn/2018.html#82 DEC and HVAC
https://www.garlic.com/~lynn/2018.html#14 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2017k.html#66 Innovation?, Government, Military, Commercial
https://www.garlic.com/~lynn/2017j.html#35 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017i.html#82 John Helmer: Lunatic Russia-Hating in Washington Is 70 Years Old. It Started with Joseph Alsop, George Kennan and the Washington Post
https://www.garlic.com/~lynn/2017i.html#69 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017h.html#39 Disregard post (another screwup; absolutely nothing to do with computers whatsoever!)
https://www.garlic.com/~lynn/2017g.html#83 How can we stop algorithms telling lies?
https://www.garlic.com/~lynn/2017f.html#69 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017f.html#65 View of Russia
https://www.garlic.com/~lynn/2017f.html#63 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017b.html#83 Sleepwalking Into a Nuclear Arms Race with Russia
https://www.garlic.com/~lynn/2017.html#56 25th Anniversary Implementation of Nunn-Lugar Act
https://www.garlic.com/~lynn/2017.html#7 Malicious Cyber Activity
https://www.garlic.com/~lynn/2016h.html#38 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#3 Smedley Butler
https://www.garlic.com/~lynn/2016g.html#92 The Lessons of Henry Kissinger
https://www.garlic.com/~lynn/2016f.html#105 How to Win the Cyberwar Against Russia
https://www.garlic.com/~lynn/2016f.html#22 US and UK have staged coups before
https://www.garlic.com/~lynn/2016e.html#59 How Putin Weaponized Wikileaks to Influence the Election of an American President
https://www.garlic.com/~lynn/2016c.html#69 Qbasic
https://www.garlic.com/~lynn/2016c.html#7 Why was no one prosecuted for contributing to the financial crisis? New documents reveal why
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2016b.html#31 Putin holds phone call with Obama, urges better defense cooperation in fight against ISIS
https://www.garlic.com/~lynn/2016.html#73 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2016.html#39 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2016.html#16 1970--protesters seize computer center
https://www.garlic.com/~lynn/2015h.html#122 For those who like to regress to their youth? :-)
https://www.garlic.com/~lynn/2015h.html#91 Happy Dec-10 Day!!!
https://www.garlic.com/~lynn/2015h.html#70 Department of Defense Head Ashton Carter Enlists Silicon Valley to Transform the Military
https://www.garlic.com/~lynn/2015h.html#26 Putin's Great Crime: He Defends His Allies and Attacks His Enemies
https://www.garlic.com/~lynn/2015f.html#45 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#44 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#30 Analysis: Root of Tattered US-Russia Ties Date Back Decades
https://www.garlic.com/~lynn/2015b.html#8 Shoot Bank Of America Now---The Case For Super Glass-Steagall Is Overwhelming
https://www.garlic.com/~lynn/2015b.html#5 Swiss Leaks lifts the veil on a secretive banking system
https://www.garlic.com/~lynn/2015b.html#1 do you blame Harvard for Puten
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DB2 and MIP Envy Date: 21 Oct, 2024 Blog: Facebook
I was at SJR and worked with Jim Gray and Vera Watson on the original SQL/relational, System/R. The official next DBMS, "EAGLE", was to be the follow-on to IMS ... and I was able to do technology transfer to Endicott for SQL/DS ("under the radar" while the company was preoccupied with "EAGLE"). When "EAGLE" finally implodes, there is a request for how fast System/R could be ported to MVS. Eventually the port is released as "DB2", originally for "decision support" only.
Note System/R was being done with (IBM's internal) PLS ... a problem
was that during "Future System", internal politics were killing off
370 efforts, including PLS 370 group.
http://www.jfsowa.com/computer/memo125.htm
Jim Gray's "MIP Envy" tome including lack of PLS support, before
leaving IBM for Tandem
https://jimgray.azurewebsites.net/papers/mipenvy.pdf
and from IBMJargon
MIP envy - n. The term, coined by Jim Gray in 1980, that began the
Tandem Memos (q.v.). MIP envy is the coveting of other's facilities -
not just the CPU power available to them, but also the languages,
editors, debuggers, mail systems and networks. MIP envy is a term
every programmer will understand, being another expression of the
proverb The grass is always greener on the other side of the fence.
... snip ...
note: I was blamed for online computer conferencing on the internal network (larger than the arpanet/internet from the beginning until sometime mid/late 80s, in part because of the internal network's conversion to SNA/VTAM in the 2nd half of the 80s) in the late 70s and early 80s. "Tandem Memos" actually took off in the spring of 1981, after I distributed a trip report of a visit to Jim at Tandem (only about 300 actively participated, but claims were that 25,000 were reading; folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me).
some more about "Tandem Memos" in this long-winded post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Patents Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Tue, 22 Oct 2024 14:04:53 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
The claim was that the constitution set up the patent system to protect new innovative players from large monopolies (that nominally protect their status quo).
A patent linguistic study in the 90s found that 30% of computer/technology-related patents had very ambiguous language and were filed by patent trolls (set up by large status quo monopolies) in non-computer/non-technology categories.
The troll entities had no activity for which they could be sued ... but the trolls could sue other entities that appeared and might adversely affect the status quo of large monopolies (subverting the original purpose of the patent system) ... in addition to the trolls that were strictly scamming the patent system for financial gain.
previous mention of the plutocrat gilded age 1870-1920 and
countermeasures (still offline)
https://archive.org/details/triumphantpluto00pettrich
real paper copy
https://www.amazon.com/Triumphant-Plutocracy-Government-Economics-Autobiography/dp/1789873215
then the '29 crash. In jan2009 I was asked to HTML'ize the 30s Pecora
senate hearings (scanned the fall of 2008; they resulted in many
prison sentences) with lots of internal HREFs and URLs comparing what
happened this time with what happened then (comments that the new
congress might have an appetite to do something). I worked on it for
awhile and then got a call that it wouldn't be needed after all
(comment that Capitol Hill was totally buried under enormous mountains
of Wall Street cash). wiki (the actual scanned hearings should also
be there when the wayback machine fully returns):
https://en.wikipedia.org/wiki/Pecora_Commission
One of the comparisons was the 80s S&L crisis, which resulted in 30k
criminal referrals and 1k prison sentences. Bush senior claimed that
(while VP) he knew nothing about Iran-Contra because he was fulltime
deregulating the financial industry (resulting in the S&L crisis)
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of the family (note there have been claims
that republican supporters not only paid the $50k fine, but also the
$26M settlement ... and the Saudis helped bail out the brothers)
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260
Republicans and Saudis bailing out the Bushes.
then after the turn of the century, (one of the brothers, pres) Bush presided over an economic mess that was 70 times larger than the S&L crisis ... proportionally it should have had 2.1M criminal referrals and 70K jail terms ... but instead we got too big to fail, too big to prosecute and too big to jail.
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
too big to fail, too big to prosecute, too big to jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
Pecora &/or Glass-Steagall posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
posts mentioning constitution and patent system
https://www.garlic.com/~lynn/2023b.html#66 HURD
https://www.garlic.com/~lynn/2021k.html#99 The US Patent and Trademark Office should act now to catalyze innovation
https://www.garlic.com/~lynn/2021d.html#19 IBM's innovation: Topping the US patent list for 28 years running
https://www.garlic.com/~lynn/2018f.html#70 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018c.html#52 We the Corporations: How American Businesses Won Their Civil Rights
https://www.garlic.com/~lynn/2017k.html#68 Innovation?, Government, Military, Commercial
https://www.garlic.com/~lynn/2017h.html#83 Bureaucracy
https://www.garlic.com/~lynn/2017f.html#62 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016b.html#49 Corporate malfeasance
https://www.garlic.com/~lynn/2015.html#25 Gutting Dodd-Frank
https://www.garlic.com/~lynn/2013n.html#88 Microsoft, IBM lobbying seen killing key anti-patent troll proposal
https://www.garlic.com/~lynn/2013k.html#40 copyright protection/Doug Englebart
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Patents Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Tue, 22 Oct 2024 15:40:38 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I Origins of the Corporation. Although the corporate structure dates
back as far as the Greek and Roman Empires, characteristics of the
modern corporation began to appear in England in the mid-thirteenth
century.[4] "Merchant guilds" were loose organizations of merchants
"governed through a council somewhat akin to a board of directors,"
and organized to "achieve a common purpose"[5] that was public in
nature. Indeed, merchant guilds registered with the state and were
approved only if they were "serving national purposes."[6]
... snip ...
... however there has been significant pressure to give corporate
charters to entities operating in self-interest ... followed by
extending constitutional "people" rights to corporations. The supreme
court was scammed into extending 14th amendment rights to corporations
(with faux claims that was what the original authors had intended).
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a
scholar set out to identify every Fourteenth Amendment case heard by
the Supreme Court, the justices decided 28 cases dealing with the
rights of African Americans--and an astonishing 312 cases dealing with
the rights of corporations.
... snip ...
The Price of Inequality: How Today's Divided Society Endangers Our Future
https://www.amazon.com/Price-Inequality-Divided-Society-Endangers-ebook/dp/B007MKCQ30/
pg35/loc1169-73:
In business school we teach students how to recognize, and create,
barriers to competition -- including barriers to entry -- that help
ensure that profits won't be eroded. Indeed, as we shall shortly see,
some of the most important innovations in business in the last three
decades have centered not on making the economy more efficient but on
how better to ensure monopoly power or how better to circumvent
government regulations intended to align social returns and private
rewards
... snip ...
How Economists Turned Corporations into Predators
https://www.nakedcapitalism.com/2017/10/economists-turned-corporations-predators.html
Since the 1980s, business schools have touted "agency theory," a
controversial set of ideas meant to explain how corporations best
operate. Proponents say that you run a business with the goal of
channeling money to shareholders instead of, say, creating great
products or making any efforts at socially responsible actions such as
taking account of climate change.
... snip ...
A Short History Of Corporations
https://newint.org/features/2002/07/05/history
After Independence, American corporations, like the British companies
before them, were chartered to perform specific public functions -
digging canals, building bridges. Their charters lasted between 10 and
40 years, often requiring the termination of the corporation on
completion of a specific task, setting limits on commercial interests
and prohibiting any corporate participation in the political process.
... snip ...
... a residual of that is current law that funds/payments from
government contracts can't be used for lobbying. After the turn of the
century there was a huge upswing in private equity buying up beltway
bandits and government contractors; the PE owners then transfer every
cent possible to their own pockets, which can be used to hire
prominent politicians that lobby congress (including "contributions")
to give contracts to their owned companies (resulting in a huge
increase in gov. outsourcing to private companies) ... this can
snowball since gov. agencies aren't allowed to lobby (contributing to
claims that congress is the most corrupt institution on earth)
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."
... snip ...
... also promoting the Success of Failure culture (especially in the
military/intelligence-industrial complex)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gestner
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
military-industrial(-congressional) complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Patents Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Wed, 23 Oct 2024 01:15:21 -1000
re:
"Why Nations Fail"
https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/
The original settlement, Jamestown ... the English were planning on
emulating the Spanish model, enslaving the local natives to support
the settlement. Unfortunately the North American natives weren't as
cooperative and the settlement nearly starved. They then switched to
sending over some of the other populations from the British Isles
essentially as slaves ... the English Crown charters had them as
"leet-men" ... pg27:
The clauses of the Fundamental Constitutions laid out a rigid social
structure. At the bottom were the "leet-men," with clause 23 noting,
"All the children of leet-men shall be leet-men, and so to all
generations."
My wife's father was presented with a set of 1880 history books for some
distinction at West Point by the Daughters Of the 17th Century
http://www.colonialdaughters17th.org/
which refer to how, if it hadn't been for the influence of the Scottish settlers from the mid-atlantic states, the northern/English states would have prevailed and the US would look much more like England, with a monarch ("above the law") and strict class hierarchy.
Inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 3705 Date: 23 Oct, 2024 Blog: Facebook
re:
As a sophomore I took a two credit hour intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for 360/30. The univ had a 709 (tape->tape) and 1401 MPIO (unit record front end for the 709, physically moving tapes between 709 & 1401 drives). The univ was getting a 360/67 for tss/360 and got a 360/30 replacing the 1401 temporarily until the 360/67 arrived. The univ. shutdown the datacenter on weekends and I had the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a bunch of hardware&software manuals and got to design and implement monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program. The 360/67 arrived within a year of my taking the intro class and I was hired fulltime responsible for os/360 (tss never came to production fruition). Note student fortran ran under a second on the 709 (tape->tape), but well over a minute on OS360. First sysgen was OS360R9.5 (MFT) and I add HASP, which cuts the time in half. Then OS360R11 and I start doing custom stage2 sysgens to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs.
Later, CSC came out to install CP67 and I mostly played with it during my dedicated weekend time.
CP67 came with 1052 and 2741 terminal support, including dynamically
determining terminal type and automagically switching the port
scanner's terminal type with the controller SAD CCW. The univ. had
some TTY 33&35 (trivia: the TTY port scanner for the IBM controller
had arrived in a Heathkit box) and I added TTY/ASCII support
integrated with the dynamic terminal type support. Then I wanted to
have a single dial-in phone number ("hunt group") for all terminals
... it didn't quite work, since IBM had hardwired the controller line
speed for each port. The univ. then kicks off a clone controller
effort: build a channel interface board for an Interdata/3 programmed
to emulate the IBM controller with the addition of automagic line
speed. This is upgraded to an Interdata/4 for the channel interface
and a cluster of Interdata/3s for the port interfaces. Interdata and
then Perkin/Elmer market it as an IBM clone controller (and four of us
get written up as responsible for some part of the IBM clone
controller business).
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
trivia: turn of the century, tour of a datacenter that had a
descendant of one of the boxes that handled the majority of dial-up
credit-card terminal calls east of the Mississippi.
UofM: MTS
https://en.wikipedia.org/wiki/Michigan_Terminal_System
mentions MTS using a PDP8 programmed to emulate a mainframe terminal controller
https://www.eecis.udel.edu/~mills/gallery/gallery7.html
and Stanford also did an operating system for the 360/67: Orvyl/Wylbur (a flavor of Wylbur was also made available on IBM batch operating systems).
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
After graduating, I joined the science center. One of the co-workers
was responsible for the CP67-based science center wide area network
(that morphs into the internal corporate network; the technology was
also used for the corporate sponsored BITNET) ... reference by one of
the other members, who was one of the three inventing GML in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
The person responsible then tried to get CPD to use the (much more capable) S/1 "peachtree" processor for the 3705 (instead of the UC).
One of my hobbies after joining IBM was enhanced production operating systems, and (the branch office online sales&marketing) US HONE (morphing into clones world-wide) was a long time customer. In the mid-70s, the US HONE datacenters were consolidated in Palo Alto (across the back parking lot from IBM PASC; trivia: when facebook 1st moved into silicon valley, it was into a new bldg built next door to the former US HONE datacenter) ... where they upgrade to single-system-image, loosely-coupled, shared DASD operation with load-balancing and fall-over across the complex. Then shared-memory multiprocessor support was added so a 2nd processor could be added to each system in the complex (16 CPUs total).
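The load-balancing and fall-over across the complex can be illustrated with a toy sketch (my own invention, not HONE code; all names are hypothetical): new sessions go to the least-loaded live system, and a failed system's work is redistributed across the survivors.

```python
# Toy single-system-image cluster (hypothetical names, not HONE code):
# new sessions go to the least-loaded live system; on a failure the
# failed system's sessions are redistributed across the survivors.
class Cluster:
    def __init__(self, names):
        self.load = {n: 0 for n in names}   # active sessions per system

    def login(self):
        target = min(self.load, key=self.load.get)  # least-loaded
        self.load[target] += 1
        return target

    def fail_over(self, failed):
        orphaned = self.load.pop(failed)    # sessions on the dead system
        for _ in range(orphaned):           # rebalance onto survivors
            self.load[min(self.load, key=self.load.get)] += 1
```

With shared DASD the surviving systems can reach the same data, which is what makes this kind of session redistribution plausible at all.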
trivia: I got the HSDT project in the early 80s, T1 and faster
computer links (both satellite and terrestrial), with lots of
conflicts with the communication group. Note in the 60s, IBM had the
2701 telecommunication controller that supported T1 (1.5mbits/sec)
links, but the move to SNA/VTAM in the mid-70s apparently had issues
and controllers were capped at 56kbit links. Mid-80s, I was also asked
to take a "baby bell" simulated SNA/VTAM done on S/1 and turn it out
as an IBM type1 product (with the objective of moving it to rack mount
801/RISC RS6000); it had significantly more features, performance and
price/performance, and supported real networking (what the
communication group then did can only be described as truth is
stranger than fiction) ... part of fall86
presentation at SNA Architecture Review Board meeting in Raleigh
https://www.garlic.com/~lynn/99.html#67
and part of "baby bell" presentation at spring 1986 IBM COMMON user
group meeting:
https://www.garlic.com/~lynn/99.html#70
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
corporate internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
corporate sponsored univ bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Mainframe System Meter Date: 23 Oct, 2024 Blog: Facebook
... back when systems were rented/leased, charges were based on the system meter (which ran whenever CPUs and/or channels were busy). When moving CP67 to 7x24 operation, there was lots of work on terminal channel programs that would let the system meter stop when lines were otherwise idle, but immediately be active when there was data moving. Note the system had to be solidly idle for at least 400ms before the system meter would stop ... trivia: long after the transition from rent/lease to sales, MVS still had a timer task that woke up every 400ms.
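The 400ms rule can be modeled with a toy calculation (my own sketch; the real meter was hardware, not software): the meter keeps running from the start of any activity until 400ms after the last activity ends. The model also shows why a timer task popping every 400ms keeps the meter running continuously.

```python
# Toy model of the system meter's 400ms rule (my sketch, not IBM's
# implementation): the meter runs during any busy interval and for
# 400ms afterward; it stops only after 400ms of solid idle.
IDLE_THRESHOLD_MS = 400

def metered_time(busy, end_ms):
    """busy: sorted, non-overlapping (start_ms, stop_ms) intervals.
    Returns total ms the meter runs in [0, end_ms)."""
    total = 0
    meter_on_until = 0                  # meter runs through this time
    for start, stop in busy:
        run_until = min(stop + IDLE_THRESHOLD_MS, end_ms)
        if start >= meter_on_until:
            total += run_until - start  # meter had stopped; restart it
        else:
            total += max(0, run_until - meter_on_until)  # still running
        meter_on_until = max(meter_on_until, run_until)
    return total
```

For example, 1ms of activity every 400ms (as with the MVS timer task mentioned above) keeps `meter_on_until` advancing before it ever expires, so the meter never stops at all.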
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
recent posts mentioning system meter
https://www.garlic.com/~lynn/2024c.html#116 IBM Mainframe System Meter
https://www.garlic.com/~lynn/2024b.html#45 Automated Operator
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023d.html#78 IBM System/360, 1964
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2022g.html#93 No, I will not pay the bill
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#98 Mainframe Cloud
https://www.garlic.com/~lynn/2022f.html#23 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#2 IBM Games
https://www.garlic.com/~lynn/2022d.html#108 System Dumps & 7x24 operation
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#26 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#25 IBM Mainframe time-sharing
https://www.garlic.com/~lynn/2022b.html#22 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022.html#27 Mainframe System Meter
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021i.html#94 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl and other trivia Date: 23 Oct, 2024 Blog: FacebookAfter joining IBM, I continued to go to SHARE and drop in on IBM customers. The director of one of the largest financial industry IBM datacenters liked me to stop in and talk technology. At some point, the branch manager horribly offended the customer and they were ordering an Amdahl system (a single Amdahl in a vast sea of blue). Up until then, Amdahl had been selling into the scientific/technical/univ. market and this would be the 1st for the "true blue", commercial market. I was asked to go onsite for 6-12 months (to help obfuscate why the customer was ordering an Amdahl machine). I talked it over with the customer and then turned down the IBM offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do this, I could forget career, raises, and promotions.
note: Amdahl had won the battle to make ACS 360-compatible, but after
ACS/360 was killed (supposedly executives were concerned it would
advance the state of the art too fast and IBM would lose control of
the market), Amdahl leaves IBM (shortly before FS started)
https://people.computing.clemson.edu/~mark/acs_end.html
https://people.computing.clemson.edu/~mark/acs.html
https://people.computing.clemson.edu/~mark/acs_legacy.html
trivia: FS was completely different from 370 and was going to replace
it; internal politics were also killing off 370 efforts. The claim was
that the lack of new 370s during the FS period gave the clone 370
makers their market foothold, and there was an enormous uptick in IBM
370 marketing FUD. I continued to work on 360&370 all during FS
(including periodically ridiculing what they were doing). Then when FS
implodes, there was a mad rush to get stuff back into the 370 product
pipelines, including kicking off the quick&dirty 3033&3081 efforts in
parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
About the same time, I got asked to help with a 16-cpu 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping the 370/168 circuit design to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-cpu support (aka significant multiprocessor overhead, increasing with the number of CPUs; at the time, MVS documentation said that 2-CPU systems only had 1.2-1.5 times the throughput of a single CPU; POK doesn't ship a 16-CPU system until after the turn of the century). Some of us were invited to never visit POK again (and the 3033 processor engineers were told: heads down and no distractions). Once the 3033 was out the door, the 3033 processor engineers start on trout/3090.
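The 1.2-1.5x figure implies heavy multiprocessor overhead. A toy contention model (my own illustration, not IBM's measurements) shows why 16-way support looked decades away: assume each extra CPU adds a fixed contention cost against every peer, fit that cost to the observed 2-CPU speedup, then extrapolate.

```python
# Toy serialization model: throughput(n) = n / (1 + c*(n-1)) in
# single-CPU units, where c is a per-peer contention cost.
# (Illustrative only -- real MVS overhead had many sources.)
def fit_contention(n, speedup):
    """Solve throughput(n) = speedup for the contention cost c."""
    return (n / speedup - 1) / (n - 1)

def throughput(n, c):
    return n / (1 + c * (n - 1))

c = fit_contention(2, 1.3)            # MVS 2-CPU at ~1.3x of one CPU
print(round(throughput(16, c), 2))    # a 16-way delivers under 2x one CPU
```

Under this crude model a 16-CPU MVS would have delivered less throughput than two unconstrained CPUs, consistent with why the 16-way project was a non-starter for POK.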
trivia: later I was at IBM San Jose Research and my brother was the Apple regional marketing rep (largest physical region in CONUS) and I would get invited to business dinners when he came into town (got to argue MAC design with developers even before announce). One of his ploys at customers was fawning over the great IBM mugs and being willing to trade two Apple mugs for one IBM mug. He also figured out how to dial into the IBM S/38 that ran Apple to track manufacturing and delivery schedules.
One of my sons was at DLI over the hill and brought me a couple of ("russian") mugs ... I made the mistake of bringing them into work and leaving them on my desk (never leave anything in an IBM office unless the door was locked). I also had an IBM FE tool briefcase left in the office and individual tools kept disappearing.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Welcome to the defense death spiral Date: 23 Oct, 2024 Blog: FacebookWelcome to the defense death spiral. At the current spending rate, in another generation we will have a lot of rich contractors and no aircraft or naval fleets to speak of.
Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I Origins of the Corporation. Although the corporate structure dates
back as far as the Greek and Roman Empires, characteristics of the
modern corporation began to appear in England in the mid-thirteenth
century.[4] "Merchant guilds" were loose organizations of merchants
"governed through a council somewhat akin to a board of directors,"
and organized to "achieve a common purpose"[5] that was public in
nature. Indeed, merchant guilds registered with the state and were
approved only if they were "serving national purposes."[6]
... snip ...
... however there has been significant pressure to give corporate
charters to entities operating in self-interest ... followed by
extending constitutional "people" rights to corporations. The supreme
court was scammed into extending 14th amendment rights to corporations
(with faux claims that was what the original authors had intended).
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a
scholar set out to identify every Fourteenth Amendment case heard by
the Supreme Court, the justices decided 28 cases dealing with the
rights of African Americans--and an astonishing 312 cases dealing with
the rights of corporations.
... snip ...
The Price of Inequality: How Today's Divided Society Endangers Our Future
https://www.amazon.com/Price-Inequality-Divided-Society-Endangers-ebook/dp/B007MKCQ30/
pg35/loc1169-73:
In business school we teach students how to recognize, and create,
barriers to competition -- including barriers to entry -- that help
ensure that profits won't be eroded. Indeed, as we shall shortly see,
some of the most important innovations in business in the last three
decades have centered not on making the economy more efficient but on
how better to ensure monopoly power or how better to circumvent
government regulations intended to align social returns and private
rewards
... snip ...
How Economists Turned Corporations into Predators
https://www.nakedcapitalism.com/2017/10/economists-turned-corporations-predators.html
Since the 1980s, business schools have touted "agency theory," a
controversial set of ideas meant to explain how corporations best
operate. Proponents say that you run a business with the goal of
channeling money to shareholders instead of, say, creating great
products or making any efforts at socially responsible actions such as
taking account of climate change.
... snip ...
A Short History Of Corporations
https://newint.org/features/2002/07/05/history
After Independence, American corporations, like the British companies
before them, were chartered to perform specific public functions -
digging canals, building bridges. Their charters lasted between 10 and
40 years, often requiring the termination of the corporation on
completion of a specific task, setting limits on commercial interests
and prohibiting any corporate participation in the political process.
... snip ...
... a residual of that is current law that funds/payments from
government contracts can't be used for lobbying. After the turn of the
century there was a huge upswing in private equity buying up beltway
bandits and government contractors; the PE owners then transfer every
cent possible to their own pockets, which can be used to hire
prominent politicians to lobby congress (including "contributions") to
give contracts to their owned companies (resulting in a huge increase
in gov. outsourcing to private companies) ... this can snowball since
gov. agencies aren't allowed to lobby (contributing to claims that
congress is the most corrupt institution on earth)
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."
... snip ...
... also promoting the Success of Failure culture (especially in the
military/intelligence-industrial complex)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/
we were peripherally involved; summer 2002, got a call asking if we would respond to an (unclassified) BAA that was about to close (from IC-ARDA, since renamed IARPA). We get our response in, have some meetings showing we could do what was required, and then nothing. It wasn't until the Success of Failure articles that we had an idea what was going on.
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Private Equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
Success of Failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
recent posts mentioning IC-ARDA BAA
https://www.garlic.com/~lynn/2023e.html#40 Boyd OODA-loop
https://www.garlic.com/~lynn/2023d.html#11 Ingenious librarians
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022c.html#120 Programming By Committee
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2022b.html#114 Watch Thy Neighbor
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021f.html#68 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2019e.html#129 Republicans abandon tradition of whistleblower protection at impeachment hearing
https://www.garlic.com/~lynn/2019e.html#54 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019e.html#40 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2019.html#49 Pentagon harbors culture of revenge against whistleblowers
https://www.garlic.com/~lynn/2018e.html#6 The Pentagon Is Building a Dream Team of Tech-Savvy Soldiers
https://www.garlic.com/~lynn/2017i.html#11 The General Who Lost 2 Wars, Leaked Classified Information to His Lover--and Retired With a $220,000 Pension
https://www.garlic.com/~lynn/2017h.html#23 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2017c.html#47 WikiLeaks CIA Dump: Washington's Data Security Is a Mess
https://www.garlic.com/~lynn/2017c.html#5 NSA Deputy Director: Why I Spent the Last 40 Years In National Security
https://www.garlic.com/~lynn/2017b.html#35 Former CIA Analyst Sues Defense Department to Vindicate NSA Whistleblowers
https://www.garlic.com/~lynn/2017.html#64 Improving Congress's oversight of the intelligence community
https://www.garlic.com/~lynn/2016h.html#96 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2016b.html#62 The NSA's back door has given every US secret to our enemies
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Distributed Computing VM4341/FBA3370 Date: 23 Oct, 2024 Blog: FacebookVM4341&FBA3370: early 80s, large corporations were starting to order hundreds of VM4341/FBA3370 systems at a time (sort of the leading edge of the coming distributed computing tsunami), placing them out in (non-datacenter) departmental areas (inside IBM, conference rooms were becoming a scarce commodity, being taken over by distributed computing VM4341s) ... and there was lots of interest in MVS getting into the market. One problem was that the new CKD disks were datacenter 3380s, while the new non-datacenter (capable) disks were FBA3370s. IBM finally came out with the CKD3375 (for MVS) ... basically CKD simulation on the 3370. It didn't do MVS much good; for distributed computing, VM shops were looking at scores of systems per support staff while MVS was still scores of support staff per system.
posts mentioning VM4341/FBA3370 distributed computing tsunami
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024c.html#29 Wondering Why DEC Is The Most Popular
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#55 Vintage IBM 5100
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#59 370 Architecture Redbook
https://www.garlic.com/~lynn/2021j.html#51 3380 DASD
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018c.html#80 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2018.html#41 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017j.html#88 Ferranti Atlas paging
https://www.garlic.com/~lynn/2017c.html#87 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016b.html#32 Query: Will modern z/OS and z/VM classes suffice for MVS and VM/370
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
https://www.garlic.com/~lynn/2014g.html#83 Costs of core
https://www.garlic.com/~lynn/2014f.html#40 IBM 360/370 hardware unearthed
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014d.html#108 The IBM Strategy
https://www.garlic.com/~lynn/2014c.html#60 Bloat
https://www.garlic.com/~lynn/2013j.html#86 IBM unveils new "mainframe for the rest of us"
https://www.garlic.com/~lynn/2013i.html#15 Should we, as an industry, STOP using the word Mainframe and find (and start using) something more up-to-date
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2012l.html#78 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#76 END OF FILE
https://www.garlic.com/~lynn/2012j.html#2 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Welcome to the defense death spiral Date: 24 Oct, 2024 Blog: Facebookre:
Eisenhower's goodbye speech included a warning about the military-industrial(-congressional) complex ... aka take-over by financial engineering.
Old F22 news: F22 hangar empress (2009) "Can't Fly, Won't Die"
http://nypost.com/2009/07/17/cant-fly-wont-die/
Pilots call high-maintenance aircraft "hangar queens." Well, the
F-22's a hangar empress. After three expensive decades in development,
the plane meets fewer than one-third of its specified
requirements. Anyway, an enemy wouldn't have to down a single F-22 to
defeat it. Just strike the hi-tech maintenance sites, and it's game
over. (In WWII, we didn't shoot down every Japanese Zero; we just sank
their carriers.) The F-22 isn't going to operate off a dirt strip with
a repair tent.
But this is all about lobbying, not about lobbing bombs. Cynically,
Lockheed Martin distributed the F-22 workload to nearly every state,
employing under-qualified sub-contractors to create local financial
stakes in the program. Great politics -- but the result has been a
quality collapse.
... snip ...
F22 stealth coating was subject to moisture ... and there were jokes
about not being able to take the F22 out in the rain. Before the move
of the Tyndall F22s to Hawaii (and before all the Tyndall storm
damage) ... there were articles about the heroic efforts of the
Tyndall F22 stealth maintenance bays dealing with the backlog of F22
coating maintenance.
http://www.tyndall.af.mil/News/Features/Display/Article/669883/lo-how-the-f-22-gets-its-stealth/
and Boeing contaminated by financial engineering in the MD take-over
... The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than
engineers. And though Boeing was the buyer, McDonnell Douglas
executives somehow took power in what analysts started calling a
"reverse takeover." The joke in Seattle was, "McDonnell Douglas bought
Boeing with Boeing's money."
... snip ...
Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the
company's estimable engineering legacy. He had mountains of evidence
to support his position, mostly acquired via Boeing's 1997 acquisition
of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft
plant in Long Beach and a CEO who liked to use what he called the
"Hollywood model" for dealing with engineers: Hire them for a few
months when project deadlines are nigh, fire them when you need to
make numbers. In 2000, Boeing's engineers staged a 40-day strike over
the McDonnell deal's fallout; while they won major material
concessions from management, they lost the culture war. They also
inherited a notoriously dysfunctional product line from the
corner-cutting market gurus at McDonnell.
... snip ...
Boeing's travails show what's wrong with modern
capitalism. Deregulation means a company once run by engineers is now
in the thrall of financiers and its stock remains high even as its
planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
some recent posts mentioning Boeing and financial engineering
https://www.garlic.com/~lynn/2024e.html#76 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2023e.html#11 Tymshare
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022g.html#64 Massachusetts, Boeing
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#11 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#39 Crash Course
https://www.garlic.com/~lynn/2019e.html#33 Boeing's travails show what's wrong with modern capitalism
https://www.garlic.com/~lynn/2019d.html#39 The Roots of Boeing's 737 Max Crisis: A Regulator Relaxes Its Oversight
https://www.garlic.com/~lynn/2019d.html#20 The Coming Boeing Bailout?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Welcome to the defense death spiral Date: 24 Oct, 2024 Blog: Facebookre:
Sophomore in the 60s, I took a two credit-hr intro to computers; at the end of the semester, the Univ. hires me to implement software. The datacenter shut down on weekends and I would have the whole datacenter dedicated (although 48hrs w/o sleep made monday classes hard). Within a year of taking the intro class, the 709/1401 configuration had been replaced with a large 360/65 and I was hired fulltime responsible for systems (continuing to have my dedicated 48hr weekend time). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the consolidation of all dataprocessing into an independent business unit (at the time, I thought it was the largest in the world). When I graduate, I join IBM (instead of staying with the Boeing CFO). In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. Boyd's biography has him at "spook base" about the same time that I was at Boeing ... and that "spook base" was a $2.5B "wind-fall" for IBM (ten times Boeing dataprocessing when I was there).
Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
some recent posts mentioning working in Boeing CFO office, John Boyd,
and "spook base"
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023g.html#28 IBM FSD
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021k.html#70 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021h.html#35 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#29 Online Computer Conferencing
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM "THINK" Date: 24 Oct, 2024 Blog: Facebookre:
we were doing HA/6000, originally for NYTimes so they could move their
newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it
HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs and commercial cluster scale-up with RDBMS vendors (Oracle,
Sybase, Informix, and Ingres ... that happened to have VAXCluster
support in the same source base with unix ... I do a distributed lock
manager that supported VAXCluster lock semantics to ease the ports).
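The VAXCluster lock semantics in question are the six VMS lock modes and their compatibility rules. A minimal sketch of the grant check (based on the standard VMS $ENQ compatibility matrix; this is my illustration, not the actual HA/CMP distributed lock manager code):

```python
# The six VAXCluster (VMS $ENQ) lock modes and their compatibility
# matrix -- the semantics a distributed lock manager had to reproduce
# so RDBMS VAXCluster code could port with minimal change.
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]
COMPAT = {  # held mode -> set of requested modes grantable alongside it
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},  # null: compatible with all
    "CR": {"NL", "CR", "CW", "PR", "PW"},        # concurrent read
    "CW": {"NL", "CR", "CW"},                    # concurrent write
    "PR": {"NL", "CR", "PR"},                    # protected read
    "PW": {"NL", "CR"},                          # protected write
    "EX": {"NL"},                                # exclusive
}

def grantable(requested, held_modes):
    """A request is granted only if compatible with every held lock."""
    return all(requested in COMPAT[h] for h in held_modes)

print(grantable("PR", ["CR", "PR"]))  # True: shared readers coexist
print(grantable("EX", ["CR"]))        # False: exclusive must wait
```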
Early Jan1992, we have a meeting with the Oracle CEO, where AWD/Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. A friend at the time was TA to the FSD president and we were periodically dropping in on him (he was working 1st shift in the pres. office and 2nd shift writing ADA code for the latest FAA effort), keeping FSD updated on what we were doing.
FSD then tells the Kingston supercomputer group that they were going
with HA/CMP scale-up (code name: MEDUSA) for the government
... apparently triggering the decision that HA/CMP cluster scale-up
was being transferred to Kingston for announce as an IBM supercomputer
(for technical/scientific *ONLY*) and we were told we couldn't work on
anything with more than four processors (we leave IBM a few months
later).
Date: Wed, 29 Jan 92 18:05:00
To: wheeler
MEDUSA uber alles...I just got back from IBM Kingston. Please keep me
personally updated on both MEDUSA and the view of ENVOY which you
have. Your visit to FSD was part of the swing factor...be sure to tell
the redhead that I said so. FSD will do its best to solidify the
MEDUSA plan in AWD...any advice there?
Regards to both Wheelers...
... snip ... top of post, old email index
... and within a couple days, if not hrs, of the email, (MEDUSA) cluster scale-up was transferred (his "redhead" was a reference to my wife).
I then send out email, a little premature it turns out since we were
about to be kneecapped ... copy in this archived post from two decades
ago.
https://www.garlic.com/~lynn/2006x.html#email920129
.... note the S/88 product administrator had started taking us around to their customers and also had me write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the requirements ... which likely contributed to the kneecapping).
some of the guys from multics on the 5th flr left and did Stratus ... IBM logo'ed it as the S/88
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Patents Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 25 Oct 2024 07:46:58 -1000re:
Capitalism and social democracy ... have pros & cons and can be used for
checks & balances ... example, On War
https://www.amazon.com/War-beautifully-reproduced-illustrated-introduction-ebook/dp/B00G3DFLY8/
loc394-95:
As long as the Socialists only threatened capital they were not
seriously interfered with, for the Government knew quite well that the
undisputed sway of the employer was not for the ultimate good of the
State.
... snip ....
i.e. the government needed the general population's standard of living
sufficient that soldiers were willing to fight to preserve their way
of life. The capitalists' tendency was to reduce the workers' standard
of living to the lowest possible ... below what the government needed
for soldier motivation ... and it therefore needed the socialists as a
counterbalance to the capitalists in raising the general population's
standard of living. This fight played out in the 30s, with American
Fascists opposing all of FDR's "new deals". The Coming of American
Fascism, 1920-1940
https://www.historynewsnetwork.org/article/the-coming-of-american-fascism-19201940
The truth, then, is that Long and Coughlin, together with the
influential Communist Party and other leftist organizations, helped
save the New Deal from becoming genuinely fascist, from devolving into
the dictatorial rule of big business. The pressures towards fascism
remained, as reactionary sectors of business began to have significant
victories against the Second New Deal starting in the late 1930s. But
the genuine power that organized labor had achieved by then kept the
U.S. from sliding into all-out fascism (in the Marxist sense) in the
following decades.
... snip ...
aka "The Coming of American Fascism" shows that the socialists
countered the "New Deal" becoming fascist ... which had been the
objective of the capitalists ... and possibly contributed to forcing
them further into the Nazi/fascist camp. When The Bankers Plotted To
Overthrow FDR
https://www.npr.org/2012/02/12/145472726/when-the-bankers-plotted-to-overthrow-fdr
The Plots Against the President: FDR, A Nation in Crisis, and the Rise
of the American Right
https://www.amazon.com/Plots-Against-President-Nation-American-ebook/dp/B07N4BLR77/
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTH (not) Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 25 Oct 2024 22:59:28 -1000Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
I then used a conditional assembly option to implement a 2nd version with os/360 system services (get/put, dcb macros, etc). The stand-alone version took 30mins to assemble, the OS/360 version took 60mins to assemble (each DCB macro taking 5-6mins). Later I was told that the IBMers implementing the assembler had been told that they only had 256bytes for instruction lookup (and each DCB macro expansion involved an enormous number of 2311 disk I/Os).
Within a year, the 360/67 shows up (replacing 709 & 360/30) but tss/360 never came to production fruition, so it ran as a 360/65 with os/360 ... and I was hired fulltime responsible for os/360 (keeping my dedicated weekend time). Originally student fortran ran in under a second on the 709, but initially took over a minute on the 360/67 with os/360. My first sysgen was R9.5; I add HASP and it cuts student fortran time in half. Then with the R11 sysgen, I start redoing stage2 to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.
Along the way, CSC comes out to install (virtual machine) CP/67 (3rd
installation after CSC itself and MIT Lincoln Labs) and I mostly got
to play with it during my weekend dedicated time.
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
https://en.wikipedia.org/wiki/CP-67
I initially rewrote lots of CP67 to optimize running os/360 in a virtual machine. My os/360 test stream ran 322secs on real hardware and initially ran 856secs in a virtual machine (534secs CP67 CPU); within a few months I have CP67 CPU down to 113secs and I'm invited to the Spring Houston IBM mainframe user group meeting to participate in the CP67 announcement.
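As a back-of-envelope sanity check of those timings (just arithmetic on the numbers quoted above, not anything from the original benchmark records):

```python
# Check the CP67 virtualization-overhead figures quoted in the post.
real_hw = 322        # secs: OS/360 test stream on bare hardware
virtual_total = 856  # secs: same stream run in a CP67 virtual machine
cp67_initial = virtual_total - real_hw   # CP67 CPU overhead
cp67_tuned = 113     # secs: CP67 overhead after the rewrites

print(cp67_initial)                         # 534
print(round(cp67_tuned / cp67_initial, 2))  # 0.21, i.e. ~79% reduction
```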
Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the consolidation of all dataprocessing into an independent business unit. I thought the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM 360 stuff; 360/65s were arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room. Also a lot of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the room to install a 360/67 for me to play with when I wasn't doing other stuff).
some recent posts mentioning 709/1401, mpio, boeing cfo, renton, etc
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTH (not) Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Sat, 26 Oct 2024 09:10:57 -1000
Lynn Wheeler <lynn@garlic.com> writes:
when I graduate, I join the science center (instead of staying with Boeing CFO).
Later I transfer out to SJR and get to wander around silicon valley datacenters, including disk bldgs 14 (engineering) and 15 (product test) across the street. They were doing 7x24, prescheduled, stand-alone testing and mention that they had recently tried MVS, but it had 15min MTBF (in that environment). I offer to rewrite the I/O supervisor so it is bullet proof and never fails, so they can do any amount of on-demand, concurrent testing, greatly improving productivity. Downside: I had to spend an increasing amount of time playing disk engineer diagnosing hardware development issues. Bldg15 gets early engineering systems for disk I/O testing and got both 3033 and 4341. In Jan1979 (well before 4341 customer ship), I get con'ed into doing a benchmark on the 4341 for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
I also get con'ed into working with Jim Gray and Vera Watson on the original SQL/relational DBMS, System/R. The official next DBMS, "EAGLE", was to be the follow-on to IMS ... but we were able to do technology transfer to Endicott for SQL/DS ("under the radar" while the company was preoccupied with "EAGLE"). When Jim leaves for Tandem, he tries to palm off a bunch of stuff on me, including supporting BofA, which was in a System/R joint study and getting 60 4341s, sort of the leading edge of the coming distributed computing tsunami (large corporations start ordering hundreds at a time for placing out in departmental areas; inside IBM, conference rooms were in short supply, being converted to VM/4341 rooms). Note after "EAGLE" finally implodes, there is a request for how fast System/R could be ported to MVS. Eventually the port is released as "DB2", originally for "decision support" only.
Late 80s (decade after the 4341 benchmark), got the HA/6000 project, originally for NYTimes to convert their newspaper system (ATEX) from DEC VAXcluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres that have VAXCluster support in the same source base with UNIX; I do a distributed lock manager supporting VAXCluster semantics to ease the port). Early Jan1992, we have a meeting with the Oracle CEO, and AWD/Hester tells Ellison that we would have 16-system clusters by mid92 and 128-system clusters by ye92. Then by end of Jan1992 we get corporate kneecapping, with cluster scale-up being transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't do anything with more than four processors (we leave IBM a few months later).
trivia: email sent out possibly just hrs before the kneecapping,
referencing that IBM FSD had agreed to strategic HA/CMP cluster scale-up
(code name: MEDUSA) for gov. customers ...
alt.folklore.computers(/comp.arch) two decade old archived posting
https://www.garlic.com/~lynn/2006x.html#3
with copy of the email
https://www.garlic.com/~lynn/2006x.html#email920129
HA/CMP (MEDUSA) (mainframe/6000 comparison) footnote:
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; (HA/CMP clusters) 16-system: 2016MIPS,
128-system: 16,128MIPS
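The cluster aggregates in that footnote are just linear multiples of the quoted per-system rating (such aggregate MIPS comparisons assume linear scaling across cluster members):

```python
# Verify the 1993 MIPS footnote arithmetic (numbers from the post;
# assumes linear scaling across HA/CMP cluster members, as the
# footnote's aggregate figures do).
rs6000_990 = 126                  # MIPS, single RS6000/990
assert rs6000_990 * 16 == 2016    # 16-system HA/CMP cluster
assert rs6000_990 * 128 == 16128  # 128-system cluster
es9000_982 = 8 * 51               # eight-processor ES/9000-982
print(es9000_982)                 # 408 MIPS
```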
posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning original relational/sql System/R
https://www.garlic.com/~lynn/submain.html#systemr
posts mentioning HA/CMP
https://www.garlic.com/~lynn/subtopic.html#hacmp
recent related posts in threads
https://www.garlic.com/~lynn/2024f.html#68 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#59 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#58 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#57 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#55 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#53 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#51 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#22 stacks are not hard, The joy of FORTRAN-like languages
https://www.garlic.com/~lynn/2024f.html#18 The joy of RISC
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#16 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#7 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#2 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#142 The joy of FORTRAN
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 2250 Hypertext Editing System Date: 26 Oct, 2024 Blog: Facebook
IBM 2250 Hypertext Editing System
copied from private facebook group posting ... and trivia note:
2250M4 .... is 2250 with 1130 computer as "controller" (rather than
mainframe direct connection, 1130 would then have some connection to
mainframe). Cambridge Science Center had one and ported the PDP1 space
war game to 1130:
https://www.computerhistory.org/pdp-1/08ec3f1cf55d5bffeb31ff6e3741058a/
https://en.wikipedia.org/wiki/Spacewar%21
same person responsible for the 60s CP67 science center "wide area"
network (that morphs into the corporate internal network ... larger
than arpanet/internet from just about the beginning until sometime
mid/late 80s ... technology also used for the corporate sponsored
univ. BITNET). Account by one of the inventors of GML in 1969 at the
science center:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
url ref for CP67 wide-area network, was The Reason Why and the First Published Hint (evolution of SGML from GML)
originally CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
had been re-implemented for CP67/CMS (precursor to VM370/CMS) as
"SCRIPT"; after GML was invented in 1969, GML tag processing was
added to SCRIPT
CERN was using Univ. Waterloo CMS SGML processor when HTML was
invented
http://infomesh.net/html/history/early/
first webserver in the US was on Stanford SLAC (CERN sister
institution) VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal corporate network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
past posts mentioning PDP1 space war ported to 2250m4 at science
center
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#52 IBM Vintage 1130
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2018f.html#72 Jean Sammet — Designer of COBOL – A Computer of One's Own – Medium
https://www.garlic.com/~lynn/2013g.html#72 DEC and the Bell System?
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2011n.html#9 Colossal Cave Adventure
https://www.garlic.com/~lynn/2011g.html#45 My first mainframe experience
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2003m.html#14 Seven of Nine
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: According to Psychologists, there are four types of Intelligence Date: 26 Oct, 2024 Blog: Facebook
According to Psychologists, there are four types of Intelligence:
Late 90s, I was rep to the financial industry standards organization and at a meeting hosted by a major DC K-street congressional lobbying group. During the meeting I was asked to step out, taken to an office and introduced to somebody from a NJ ethnic group, who told me some investment bankers had asked him to talk to me; they were expecting a $2B return on an upcoming Internet IPO and my (public) criticism was predicted to have a 10% downside ($200M), and would I please shut up. I then went to some Federal law enforcement officers and was told that a significant percentage of investment bankers were amoral sociopaths (a major criteria for "success").
Also some investment bankers had walked away "clean" from the 80s S&L Crisis and were then running Internet IPO mills (invest a few million, hype, IPO for a couple billion, needed to fail to leave the field clear for the next round of IPOs), and were predicted to next get into securitized loans&mortgages (the "economic mess" after the turn of the century).
... 1972, IBM CEO Learson trying (but failing) to block bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
two decades later, IBM has one of the largest losses in the history of
US companies and was being reorganized into the 13 "baby blues"
(take-off on the "baby bells" from the AT&T breakup a decade earlier)
in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but get a call from the bowels of Armonk
(corp hdqtrs) asking if we could help with the break-up. Before we get
started, the board brings in the former president of AMEX as CEO, who
(somewhat) reverses the breakup.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 2250 Hypertext Editing System Date: 28 Oct, 2024 Blog: Facebook
re:
other trivia: early 80s got the HSDT project (T1 and faster computer
links, both satellite and terrestrial), with some amount of conflict with
the communication group; note in the 60s, IBM had the 2701
telecommunication controller that supported T1 (1.5mbits/sec) links,
then with the move to SNA/VTAM in the 70s, issues appear that cap controller
links at 56kbits/sec. Was working with the NSF director and was supposed to
get $20M to interconnect the NSF supercomputer centers. Then congress
cuts the budget, some other things happen and finally a RFP is
released (in part based on what we already had running). Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
NCSA was one of the OASC supercomputer centers; its software included Mosaic:
http://www.ncsa.illinois.edu/enabling/mosaic
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
Trivia: somebody had been collecting executive (mis-information) email
about how IBM SNA/VTAM could support NSFNET T1 ... and forwarded it to
me ... heavily snipped and redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
Note earlier NSF had been asking us to make presentations to the supercomputing centers and one was Berkeley; NSF had recently made a grant to UC for a UCB supercomputing center ... but the regents' bldg plan had UCSD getting the next new bldg ... so it became the UCSD Supercomputer Center (operated by General Atomics).
Some of the NCSA people had come out for a small client/server startup, Mosaic Corporation. NCSA complained about use of the name, and it was changed to NETSCAPE (use of "NETSCAPE" acquired from another company in silicon valley). I had left IBM and was brought in as a consultant; two of the (former) Oracle people (that we had worked with on our HA/CMP cluster scale-up project) were there responsible for something called "commerce server" and wanted to do payment transactions. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce "gateway" posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM IMS Date: 28 Oct, 2024 Blog: Facebook
IMS nostalgia: my wife (before we met) was in the gburg FS group reporting to Les Comeau (interconnect), then when FS implodes, she goes over to the JES group reporting to Crabtree, one of the catchers for ASP/JES3 and then co-author of the JESUS (JES unified system) specification (all the features of JES2&JES3 that the respective customers couldn't live w/o ... for whatever reason, never came to fruition). She was then con'ed into going to POK responsible for loosely-coupled (her Peer-Coupled Shared Data) architecture. She didn't remain long because 1) periodic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX) except for IMS hot-standby (she has a story of asking Vern Watts who he would ask permission of to do hot-standby; he replies nobody, he would just tell them when it was all done).
my IMS (about the same time): STL was bursting at the seams and 300 people (& 3270s) from the IMS group were being moved to an offsite bldg with dataprocessing back to the STL datacenter. They had tried remote 3270, but found the human factors totally unacceptable. I get con'ed into doing channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg (with no perceptible difference in human factors between in STL and offsite). trivia: the 3270 controllers had been spread across the mainframe channels with the 3830 disk controllers ... but moving the 3270 controllers to channel extenders significantly reduced channel busy interference with disk I/O (for the same amount of 3270 activity), increasing system throughput by 10-15% (as a result, STL considered using channel-extenders for *ALL* 3270 controllers).
I had also been working with Jim Gray and Vera Watson on (original sql/relational) System/R and when Jim leaves IBM for Tandem ... he wanted me to pick up responsibility for System/R joint study with BofA (getting 60 4341s for distributed operation) and DBMS consulting with the IMS group.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
original sql/relational, system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
some other IMS & Vern Watts posts
https://www.garlic.com/~lynn/2024e.html#140 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#65 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#79 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024b.html#80 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024.html#27 HASP, ASP, JES2, JES3
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#72 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#51 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#76 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#45 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023c.html#48 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#47 IBM ACIS
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2
https://www.garlic.com/~lynn/2022h.html#1 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#74 IBM/PC
https://www.garlic.com/~lynn/2022d.html#51 IBM Spooling
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#20 Telum & z16
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022b.html#11 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#114 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021i.html#19 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021i.html#1 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network
https://www.garlic.com/~lynn/2021e.html#76 WEB Security
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening
https://www.garlic.com/~lynn/2021b.html#72 IMS Stories
https://www.garlic.com/~lynn/2021.html#55 IBM Quota
https://www.garlic.com/~lynn/2011l.html#22 IBM IMS - Vern Watts
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2007q.html#14 Does software life begin at 40? IBM updates IMS database
https://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Prodigy Date: 28 Oct, 2024 Blog: Facebook
Prodigy history 1984-2001
Earlier online service ... TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
Tymshare, Inc was a time-sharing service and third-party hardware
maintenance company. Competing with companies such as CompuServe,
Service Bureau Corporation and National CSS. Tymshare developed and
acquired various technologies, such as data networking, electronic
data interchange (EDI), credit card and payment processing, and
database technology.[1] It was headquartered in Cupertino in
California, from 1964 to 1984.
... snip ...
... in aug1976, they started providing their CMS-based online computer
conferencing system to (IBM mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE, archives here
http://vm.marist.edu/~vmshare
I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE (& later PCSHARE) files for putting up on internal systems and the internal network (including the world-wide, branch office, sales&marketing support HONE systems; after joining IBM, one of my hobbies was enhanced production systems for internal datacenters and HONE was a long time customer). Most difficult were the lawyers, who were concerned internal employees could be contaminated by exposure to unfiltered customer information. I was also blamed for online computer conferencing in the late 70s and early 80s on the internal network (larger than arpanet/internet from the beginning until sometime mid/late 80s, when they started converting the internal network to SNA/VTAM).
Genesis of internal network was the Cambridge Science Center 60s
CP67-based wide-area network ... account by one of the GML (precursor
to SGML & HTML) inventors at the science center
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
as the 60s science center wide-area network expands, it morphs into the IBM internal network (technology also used for the corporate sponsored univ. BITNET).
other TYMSHARE trivia:
Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project
... snip ...
Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89
Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.
... snip ...
trivia: I was brought in to evaluate GNOSIS as part of spin-off to Key Logic.
Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
post mentioning GML, SGML, HTML
https://www.garlic.com/~lynn/submain.html#sgml
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
corporate sponsored univ BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
virtual machine based, online commercial service bureaus
https://www.garlic.com/~lynn/submain.html#online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Trump's MSG event draws comparisons to 1939 Nazi rally Date: 29 Oct, 2024 Blog: Facebook
Trump's MSG event draws comparisons to 1939 Nazi rally
Americans hold a Nazi rally in Madison Square Garden
https://www.history.com/this-day-in-history/americans-hold-nazi-rally-in-madison-square-garden
Donald Trump's Racist NYC Rally Was Vile. It Was Also Political Suicide
https://www.yahoo.com/news/opinion-donald-trump-racist-nyc-024926623.html
The media is finally calling Trump rallies what they are
https://www.dailykos.com/stories/2024/10/28/2280259/-The-media-is-finally-calling-Trump-rallies-what-they-are-Racist-as-hell
Walz compares Trump's Madison Square Garden rally to 1939 pro-Nazi event
https://thehill.com/homenews/campaign/4956168-walz-trump-madison-square-garden-rally/
Capitalism and social democracy ... have pros & cons and can be
used for checks & balances ... example, On War
https://www.amazon.com/War-beautifully-reproduced-illustrated-introduction-ebook/dp/B00G3DFLY8/
loc394-95:
As long as the Socialists only threatened capital they were not
seriously interfered with, for the Government knew quite well that the
undisputed sway of the employer was not for the ultimate good of the
State.
... snip ...
i.e. the government needed general population standard of living
sufficient that soldiers were willing to fight to preserve their way
of life. Capitalists' tendency was to reduce worker standard of living
to the lowest possible ... below what the government needed for
soldier motivation ... and therefore needed socialists as
counterbalance to the capitalists in raising the general population
standard of living. Saw this fight play out in the 30s, American Fascists
opposing all of FDR's "new deals". The Coming of American Fascism,
1920-1940
https://www.historynewsnetwork.org/article/the-coming-of-american-fascism-19201940
The truth, then, is that Long and Coughlin, together with the
influential Communist Party and other leftist organizations, helped
save the New Deal from becoming genuinely fascist, from devolving into
the dictatorial rule of big business. The pressures towards fascism
remained, as reactionary sectors of business began to have significant
victories against the Second New Deal starting in the late 1930s. But
the genuine power that organized labor had achieved by then kept the
U.S. from sliding into all-out fascism (in the Marxist sense) in the
following decades.
... snip ...
aka "Coming of America Fascism" shows socialists countered the "New
Deal" becoming fascist ... which had been the objective of the
capitalists ... and possibly contributed to forcing them further into
the Nazi/fascist camp. When The Bankers Plotted To Overthrow FDR
https://www.npr.org/2012/02/12/145472726/when-the-bankers-plotted-to-overthrow-fdr
The Plots Against the President: FDR, A Nation in Crisis, and the Rise
of the American Right
https://www.amazon.com/Plots-Against-President-Nation-American-ebook/dp/B07N4BLR77/
"Why Nations Fail"
https://www.amazon.com/Why-Nations-Fail-Origins-Prosperity-ebook/dp/B0058Z4NR8/
original settlement, Jamestown ... the English planned on emulating
the Spanish model, enslaving the local natives to support the
settlement. Unfortunately the North American natives weren't as
cooperative and the settlement nearly starved. The English then
switched to sending over some of the other populations from the
British Isles essentially as slaves ... the English Crown charters had
them as "leet-men" ... pg27:
The clauses of the Fundamental Constitutions laid out a rigid social
structure. At the bottom were the "leet-men," with clause 23 noting,
"All the children of leet-men shall be leet-men, and so to all
generations."
... snip ...
My wife's father was presented with a set of 1880 history books for
some distinction at West Point by the Daughters of the 17th Century
http://www.colonialdaughters17th.org/
which observe that if it hadn't been for the influence of the Scottish
settlers from the mid-atlantic states, the northern/English states
would have prevailed and the US would look much more like England,
with a monarch ("above the law") and strict class hierarchy.
The Great Scandal: Christianity's Role in the Rise of the Nazis
http://churchandstate.org.uk/2016/04/the-great-scandal-christianitys-role-in-the-rise-of-the-nazis/
note that John Foster Dulles played a major role rebuilding Germany's
economy, industry, and military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc865-68:
In mid-1931 a consortium of American banks, eager to safeguard their
investments in Germany, persuaded the German government to accept a
loan of nearly $500 million to prevent default. Foster was their
agent. His ties to the German government tightened after Hitler took
power at the beginning of 1933 and appointed Foster's old friend
Hjalmar Schacht as minister of economics.
loc905-7:
Foster was stunned by his brother's suggestion that Sullivan &
Cromwell quit Germany. Many of his clients with interests there,
including not just banks but corporations like Standard Oil and
General Electric, wished Sullivan & Cromwell to remain active
regardless of political conditions.
loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace
Seligman, was equally disturbed. In October 1939, six weeks after the
Nazi invasion of Poland, he took the extraordinary step of sending
Foster a formal memorandum disavowing what his old friend was saying
about Nazism
... snip ...
From the law of unintended consequences: when the US 1943 Strategic Bombing program needed targets in Germany, it got plans and coordinates from Wall Street.
June1940, Germany had a victory celebration at the NYC Waldorf-Astoria
with major industrialists; lots of them were there to hear how to do
business with the Nazis.
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/
loc1925-29:
One prominent figure at the German victory celebration was Torkild
Rieber, of Texaco, whose tankers eluded the British blockade. The
company had already been warned, at Roosevelt's instigation, about
violations of the Neutrality Law. But Rieber had set up an elaborate
scheme for shipping oil and petroleum products through neutral ports
in South America.
... Intrepid also points the finger at Ambassador Kennedy ... they
started bugging the US embassy because classified information was
leaking to the Germans. They eventually identified a clerk as
responsible but couldn't prove ties to Kennedy. However, Kennedy was
claiming credit for Chamberlain capitulating to Hitler on many issues
... also making speeches in Britain and the US that Britain could
never win a war with Germany and that if he were president, he would
be on the best of terms with Hitler ... loc2645-52:
The Kennedys dined with the Roosevelts that evening. Two days later,
Joseph P. Kennedy spoke on nationwide radio. A startled public learned
he now believed "Franklin D. Roosevelt should be re-elected
President." He told a press conference: "I never made anti-British
statements or said, on or off the record, that I do not expect Britain
to win the war."
British historian Nicholas Bethell wrote: "How Roosevelt contrived the
transformation is a mystery." And so it remained until the BSC Papers
disclosed that the President had been supplied with enough evidence of
Kennedy's disloyalty that the Ambassador, when shown it, saw
discretion to be the better part of valor. "If Kennedy had been
recalled sooner," said Stephenson later, "he would have campaigned
against FDR with a fair chance of winning. We delayed him in London as
best we could until he could do the least harm back in the States."
... snip ...
The congressmen responsible for the US Neutrality Act claimed it was in reaction to the enormous (US) war profiteering they saw during WW1. The capitalists intent on doing business with Nazi Germany respun that as "isolationism" in a major publicity campaign.
... getting into the war
The US wasn't in the war; Stalin was effectively fighting the Germans
alone and worried that Japan would attack from the east ... opening up
a second front. Stalin wanted the US to come in against Japan (making
sure Japan had limited resources to open up a 2nd front against the
Soviet Union). US Assistant Treasury Secretary Harry Dexter White was
operating on behalf of the Soviet Union, and Stalin sent White a draft
of demands for the US to present to Japan that would provoke Japan
into attacking the US, drawing the US into the war.
https://en.wikipedia.org/wiki/Harry_Dexter_White#Venona_project
The demands were included in the Hull Note, which Japan received just
prior to the decision to attack Pearl Harbor. Hull note:
https://en.wikipedia.org/wiki/Hull_note#Interpretations
More Venona
https://en.wikipedia.org/wiki/Venona_project
Benn Steil in "The Battle of Bretton Woods" spends pages 55-58 discussing "Operation Snow".
https://www.amazon.com/Battle-Bretton-Woods-Relations-University-ebook/dp/B00B5ZQ72Y/
pg56/loc1065-66:
The Soviets had, according to Karpov, used White to provoke Japan to
attack the United States. The scheme even had a name: "Operation
Snow," snow referring to White.
... after the war
Later, in a somewhat replay of the 1940 celebration, there was a
conference of 5,000 industrialists and corporations from across the US
at the Waldorf-Astoria; in part because they had gotten such a bad
reputation for the depression and supporting the Nazis, as part of
attempting to refurbish their horribly corrupt and venal image, they
approved a major propaganda campaign to equate Capitalism with
Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/
Part of the result by the early 50s was adding "under god" to the
Pledge of Allegiance. Slightly cleaned up version:
https://en.wikipedia.org/wiki/Pledge_of_Allegiance
An old book covering the early era, cited in "Economists and the
Powerful": "Triumphant plutocracy; the story of American public life
from 1870 to 1920" (wayback machine seems to be fully back)
https://archive.org/details/triumphantpluto00pettrich
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
some recent posts mentioning "Coming of American Fascism"
https://www.garlic.com/~lynn/2024f.html#68 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#51 The joy of Democracy
https://www.garlic.com/~lynn/2024c.html#108 D-Day
https://www.garlic.com/~lynn/2024c.html#49 Left Unions Were Repressed Because They Threatened Capital
https://www.garlic.com/~lynn/2022g.html#28 New Ken Burns Documentary Explores the U.S. and the Holocaust
https://www.garlic.com/~lynn/2022g.html#19 no, Socialism and Fascism Are Not the Same
https://www.garlic.com/~lynn/2022e.html#62 Empire Burlesque. What comes after the American Century?
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#56 "We are on the way to a right-wing coup:" Milley secured Nuclear Codes, Allayed China fears of Trump Strike
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021b.html#91 American Nazis Rally in New York City
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#34 Fascism
https://www.garlic.com/~lynn/2020.html#0 The modern education system was designed to teach future factory workers to be "punctual, docile, and sober"
https://www.garlic.com/~lynn/2019e.html#161 Fascists
https://www.garlic.com/~lynn/2019e.html#112 When The Bankers Plotted To Overthrow FDR
https://www.garlic.com/~lynn/2019e.html#107 The Great Scandal: Christianity's Role in the Rise of the Nazis
https://www.garlic.com/~lynn/2019e.html#96 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#63 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#43 Corporations Are People
https://www.garlic.com/~lynn/2019d.html#75 The Coming of American Fascism, 1920-1940
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Registered Confidential and "811" Date: 29 Oct, 2024 Blog: Facebook
I had a (double locked) full drawer of registered "811" documents (370/xa publication dates nov1978). In the period (for offending various people) I was being told I had no career, no promotions, no raises in IBM ... so when a head hunter called to ask me to interview for the position of TA to the president of an operation selling clone 370 systems (subsidiary of an operation on the other side of the pacific), I said what the heck. Sometime during the interview, the question of advanced next generation architecture came up. I changed the subject to an ethics improvement I had recently submitted for the business conduct guidelines (which had to be read and signed every year) ... which shut down the interview.
However that wasn't the end of it: the US gov filed charges against the foreign parent for industrial espionage and I got a 3hr interview with an FBI agent (I appeared as a visitor on the bldg log). I related the whole story ... and suggested somebody in IBM site security might be leaking names of people with registered documents.
other trivia: not long after joining IBM, there was a new CSO that had come from gov. service (head of presidential detail) and I was asked to travel some with him discussing computer security (while little bits of physical security rubbed off on me).
circa 1980, IBM brought a trade-secret lawsuit against a disk clone maker for a couple billion dollars ... for having acquired detailed unannounced new (3380) disk drive documents. The judge ruled that IBM had to show security proportional to risk ... or "security proportional to value" ... i.e. a normal person, tempted on finding something not adequately protected and selling it for money, couldn't be blamed (analogous to requiring fences around swimming pools because children can't be expected not to jump into an unprotected pool).
Security Proportional To Risk posts
https://www.garlic.com/~lynn/submisc.html#security.proportional.to.risk
posts mentioning registered confidential and 811:
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023.html#59 Classified Material and Security
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022d.html#35 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022c.html#4 Industrial Espionage
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#47 IBM Conduct
https://www.garlic.com/~lynn/2021k.html#125 IBM Clone Controllers
https://www.garlic.com/~lynn/2021j.html#38 IBM Registered Confidential
https://www.garlic.com/~lynn/2021d.html#86 Bizarre Career Events
https://www.garlic.com/~lynn/2021b.html#12 IBM "811", 370/xa architecture
https://www.garlic.com/~lynn/2019e.html#75 Versioning file systems
https://www.garlic.com/~lynn/2019e.html#29 IBM History
https://www.garlic.com/~lynn/2019.html#83 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2017f.html#35 Hitachi to Deliver New Mainframe Based on IBM z Systems in Japan
https://www.garlic.com/~lynn/2017e.html#67 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2014f.html#27 Complete 360 and 370 systems found
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2011c.html#67 IBM Future System
https://www.garlic.com/~lynn/2006f.html#20 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2005s.html#26 IEH/IEB/... names?
https://www.garlic.com/~lynn/2005f.html#42 Moving assembler programs above the line
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/aepay10.htm#20 Security Proportional to Risk (was: IBM Mainframe at home)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Registered Confidential and "811" Date: 29 Oct, 2024 Blog: Facebook
re:
... re: jupiter; STL had called a corporate 2-day review of jupiter
for 1&2dec1983. Unfortunately I had previously scheduled an all day
talk by John Boyd on dec1 in the SJR auditorium (open to all IBMers).
Some more info here:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
https://en.wikipedia.org/wiki/OODA_loop
URLs and posts referencing Boyd
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Fiscal Impact of the Harris and Trump Campaign Plans Date: 29 Oct, 2024 Blog: Facebook
The Fiscal Impact of the Harris and Trump Campaign Plans
2002, congress lets the fiscal responsibility act lapse (spending can't exceed tax revenue), which had been on its way to eliminating all federal debt.
2005, the Comptroller General was including in speeches that nobody in congress was capable of middle-school arithmetic (for how badly they were savaging the budget).
2010, CBO had a report that 2003-2009, tax revenue was cut by $6T and spending was increased by $6T, for a $12T gap compared to a fiscally responsible budget (the first time taxes were cut to not pay for two wars); sort of a confluence of special interests wanting a huge tax cut, the military-industrial complex wanting a huge spending increase, and Too-Big-To-Fail wanting a huge debt increase (the TBTF bailout was done by the Federal Reserve providing over $30T in ZIRP funds, which TBTF then invested in treasuries).
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
too-big-to-fail, too-big-to-prosecute, too-big-to-jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
federal reserve chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
ZIRP funds posts
https://www.garlic.com/~lynn/submisc.html#zirp
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: CP67 And Source Update Date: 29 Oct, 2024 Blog: Facebook
From CP67 days, VM provided both assembled TXT decks and the source updates. VM introduced monthly PLC tapes (aggregate collections of updates). CP67 had "update" that would apply a single UPDATE file to the source and then compile/assemble the resulting source program.
After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters (including the world-wide, online, sales&marketing support HONE systems) ... and I could distribute updates over the internal network.
With the decision to add virtual memory to all 370s, there was also the morph of CP67->VM370 (although it simplified and/or eliminated lots of features). My production CP67 was the "L" level updates. A joint project with Endicott was to update CP67 with "H" level updates that provided 370 virtual memory virtual machines (CP67H ran in a 360/67 virtual machine under CP67L; the issue was to prevent leakage of unannounced 370 virtual memory, since there were professors, staff and students from Boston/Cambridge institutions using the system). Then there were the "I" level updates that changed CP67 to run on the 370 virtual memory architecture ... running in a CP67H 370 virtual machine.
CP67I in a CP67H 370 virtual machine in a CP67L 360/67 virtual machine, with CP67L running on a real 360/67, was in regular production use a year before the first engineering 370 with virtual memory was operational.
This also required extending source update from a single update file to an ordered sequence of update files.
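The multi-level idea can be sketched as applying each update file, in order, to the output of the previous one. Below is a minimal sketch in Python, loosely modeled on CMS UPDATE-style control cards ("./ I n" inserts after sequence number n, "./ D n [m]" deletes a range); the card format here is a simplification for illustration, not the actual CMS syntax.

```python
# Minimal sketch of multi-level source update: apply an ordered
# sequence of update files to a base source. Control cards are a
# simplified stand-in for CMS UPDATE syntax.

def apply_update(source, update):
    """source: list of (seqno, text) pairs; update: list of card strings."""
    out = list(source)

    def index_of(seq):
        # find the position of a sequence number in the current source
        for i, (s, _) in enumerate(out):
            if s == seq:
                return i
        raise ValueError(f"sequence number {seq} not found")

    i = 0
    while i < len(update):
        card = update[i]
        if card.startswith("./ I "):            # insert after seqno
            seq = int(card.split()[2])
            pos = index_of(seq) + 1
            i += 1
            new = []
            while i < len(update) and not update[i].startswith("./"):
                new.append((None, update[i]))   # inserted lines unsequenced
                i += 1
            out[pos:pos] = new
        elif card.startswith("./ D "):          # delete seqno (range)
            parts = card.split()
            lo = index_of(int(parts[2]))
            hi = index_of(int(parts[3])) if len(parts) > 3 else lo
            del out[lo:hi + 1]
            i += 1
        else:
            i += 1                              # ignore unknown cards
    return out

def apply_update_sequence(source, updates):
    # "multi-level": each update applies to the result of the previous
    for u in updates:
        source = apply_update(source, u)
    return source
```

With a base of three sequenced lines, an insert-level followed by a delete-level composes exactly as the ordered-sequence scheme describes: later updates see (and can modify) the effects of earlier ones.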
Later, three people from San Jose came out and added 3330 & 2305 device support for CP67SJ, which was in use on internal real 370 virtual memory machines long after VM370 became available.
Mid-80s, Melinda contacted me about getting a copy of the original
multi-level source update implementation ... which I was able to pull
off archive tape and email to her.
https://www.leeandmelindavarian.com/Melinda#VMHist
She was lucky; triple redundant replicated copies of the archive were in the Almaden Research tape library. Not long after, Almaden had an operational problem where random tapes were being mounted as scratch and I lost nearly a dozen tapes ... including all three redundant copies of my archive.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
misc. past posts mentioning cms multi-level source update (and
almaden datacenter operational problem)
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024b.html#7 IBM Tapes
https://www.garlic.com/~lynn/2024.html#39 Card Sequence Numbers
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022c.html#83 VMworkshop.og 2022
https://www.garlic.com/~lynn/2022.html#61 File Backup
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021.html#22 Almaden Tape Library
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2018e.html#65 System recovered from Princeton/Melinda backup/archive tapes
https://www.garlic.com/~lynn/2017i.html#76 git, z/OS and COBOL
https://www.garlic.com/~lynn/2017d.html#52 Some IBM Research RJ reports
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014e.html#28 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2014d.html#19 Write Inhibit
https://www.garlic.com/~lynn/2014.html#20 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013h.html#9 IBM ad for Basic Operating System
https://www.garlic.com/~lynn/2013f.html#73 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013b.html#61 Google Patents Staple of '70s Mainframe Computing
https://www.garlic.com/~lynn/2012k.html#72 Any cool anecdotes IBM 40yrs of VM
https://www.garlic.com/~lynn/2012i.html#22 The Invention of Email
https://www.garlic.com/~lynn/2011f.html#80 TSO Profile NUM and PACK
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#89 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#39 1971PerformanceStudies - Typical OS/MFT 40/50/65s analysed
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2006w.html#42 vmshare
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Rise and Fall of the 'IBM Way' Date: 29 Oct, 2024 Blog: Facebook
The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
my 2022 rendition of 1972 CEO Learson trying (& failing) to block the
bureaucrats, careerists, and MBAs from destroying the Watson
culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
20 years later, IBM had one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" (a
take-off on the AT&T "baby bells" breakup a decade earlier) in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but got a call from the bowels of Armonk
asking if we could help with the breakup. Before we got started, the
board brought in the former AMEX president as CEO, who (somewhat)
reversed the breakup.
AMEX was in competition with KKR for a private equity (LBO) take-over
of RJR, and KKR won. Then KKR was having trouble with RJR and hired
away the AMEX president to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then the IBM board hired the former AMEX president to try and save the
company; he used some of the same techniques used at RJR.
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Registered Confidential and "811" Date: 30 Oct, 2024 Blog: Facebook
re:
trivia: The communication group was fiercely fighting off client/server and distributed computing and trying to block release of mainframe TCP/IP support. When that was overturned, they changed their strategy and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec while using nearly a whole 3090 processor. Then a version was ported to MVS by simulating VM370 diagnose instructions (aggravating the CPU overhead). I then did RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, was getting sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
more trivia: early 80s, I got the HSDT project, T1 and faster computer
links (both satellite and terrestrial) ... and periodic conflicts with
the communication group. Note in the 60s, IBM had the 2701
telecommunication controller that supported T1 (1.5mbits/sec) links;
however, with the transition to SNA/VTAM in the 70s, issues appeared
to cap controllers at 56kbit/sec links. I was also working with the
NSF director and was supposed to get $20M to interconnect the NSF
supercomputer centers. Then congress cut the budget, some other things
happened, and eventually an RFP was released, in part based on what we
already had running. From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.
One of the 1st HSDT internal T1 links was between Los Gatos lab and
Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston, which had a bunch of FPS boxes, some with
40mbyte/sec disk arrays ... from a later Cornell proposal
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a
supercomputer proposal to NSF with IBM to produce a processor array of
FPS boxes attached to an IBM 3090 mainframe with the name lCAP.
... snip ...
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Rise and Fall of the 'IBM Way' Date: 30 Oct, 2024 Blog: Facebook
re:
trivia: 70s, US auto makers were being hit hard by foreign makers with cheap imports. Congress established import quotas that were to provide US makers with significant profits for completely remaking themselves (however, they just pocketed the profits and continued business as usual; the early 80s saw calls for a 100% unearned profit tax). The foreign makers determined that, at the quotas set, they could sell as many high-end cars (with greater profits, which further reduced downward pressure on US car prices). At the time, the car business was taking 7-8yrs to come out with a new design (from initial concept to rolling off the line), and the foreign makers cut it in half to 3-4yrs.
In 1990, GM had the "C4 taskforce" to (finally?) completely remake themselves, and since they were planning on leveraging technology, technology companies were invited to participate; I was one from IBM (the other was a POK mainframe IBMer). At the time, foreign makers were in the process of cutting elapsed time from 3-4yrs to 18-24months (while the US was still at 7-8yrs), giving foreign makers a significant advantage in the transition to new designs incorporating new technologies and changing customer preferences. Offline, I would chide the mainframe rep about what their contribution was, since they had some of the same problems.
Roll forward two decades, and the bailout shows they still hadn't been able to make the transition.
C4 auto taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Who First Thought Of Using GMT/UTC For System Clock? Newsgroups: alt.folklore.computers Date: Wed, 30 Oct 2024 09:07:03 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
trivia: some of the MIT CTSS/7094 people went to Project MAC for Multics on the 5th flr; others went to the IBM science center on the 4th flr.
from early 370 principles of operation:
Thus, the operator can enable the setting of all clocks in the
configuration by using the switch of any CPU in the configuration.
... the time to which a clock value of zero corresponds. January 1,
1900, 0 A.M. Greenwich Mean Time is recommended as the standard epoch
for the clock, although some early support of the TOD clock is not
based on this epoch.
... snip ...
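That recommended epoch can be illustrated with a small conversion sketch. Assuming the standard S/370 TOD clock format (bit 51 increments every microsecond, so the top 52 bits are microseconds since 1900-01-01 0 A.M. GMT), converting to and from Unix time is just the 1900-to-1970 offset of 2,208,988,800 seconds (the same constant NTP uses):

```python
# Sketch: converting between an S/370-style 64-bit TOD clock value
# and Unix time. Bit 51 of the TOD clock increments every
# microsecond, so (tod >> 12) is microseconds since 1900-01-01 GMT.

EPOCH_DELTA_SECONDS = 2_208_988_800  # seconds from 1900-01-01 to 1970-01-01

def tod_to_unix(tod: int) -> float:
    # shift off the sub-microsecond bits, then rebase 1900 -> 1970
    microseconds_since_1900 = tod >> 12
    return microseconds_since_1900 / 1_000_000 - EPOCH_DELTA_SECONDS

def unix_to_tod(unix_seconds: float) -> int:
    # rebase 1970 -> 1900, scale to microseconds, restore bit position
    us = round((unix_seconds + EPOCH_DELTA_SECONDS) * 1_000_000)
    return us << 12
```

A TOD value of zero then lands exactly on the 1900 epoch, i.e. Unix time -2,208,988,800.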
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Who First Thought Of Using GMT/UTC For System Clock? Newsgroups: alt.folklore.computers Date: Wed, 30 Oct 2024 13:13:24 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
some kernel software only had local time as a displacement from GMT ... and had to be reassembled and rebooted each time that displacement changed.
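A hypothetical illustration of that design (not actual code from any system): the GMT displacement is a constant fixed at build time, so any change, e.g. for daylight saving, means editing the source, rebuilding, and rebooting.

```python
# Sketch of the flaw described above: local time kept only as a
# build-time displacement from GMT. The constant is baked in when
# the "kernel" is assembled; there is no runtime way to change it.

import time

LOCAL_GMT_OFFSET_SECONDS = -5 * 3600   # fixed at build time

def local_now() -> float:
    # GMT seconds plus the baked-in displacement; changing the
    # displacement requires editing this source and "rebooting"
    return time.time() + LOCAL_GMT_OFFSET_SECONDS
```

Keeping the clock itself in GMT and applying the displacement per-request (or per-user) is what avoids the reassemble-and-reboot cycle.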
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Rise and Fall of the 'IBM Way' Date: 30 Oct, 2024 Blog: Facebook
re:
Future System accelerated downfall, Ferguson & Morris, "Computer Wars:
The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
more
http://www.jfsowa.com/computer/memo125.htm
I had continued to work on 360&370 all during "Future System", even periodically ridiculing what they were doing (which wasn't a particularly career-enhancing activity).
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Thu, 31 Oct 2024 12:42:53 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
in theory they promise to stop doing it ... but if they were found to continue ... they would be prosecuted ... but in fact, several TBTF money-laundering cases were settled with repeated "deferred prosecution" (as if previous cases had never existed).
How a big US bank laundered billions from Mexico's murderous drug gangs
https://www.theguardian.com/world/2011/apr/03/us-bank-mexico-drug-gangs
Are some banks too big to jail?
https://www.bbc.com/news/business-20757032
Banks Financing Mexico Gangs Admitted in Wells Fargo Deal
https://www.bloomberg.com/news/articles/2010-06-29/banks-financing-mexico-s-drug-cartels-admitted-in-wells-fargo-s-u-s-deal
Wall Street Is Laundering Drug Money And Getting Away With It
http://www.huffingtonpost.com/zach-carter/megabanks-are-laundering_b_645885.html?show_comment_id=53702542
Money Laundering and the Global Drug Trade are Fueled by the
Capitalist Elites
http://dandelionsalad.wordpress.com/2010/07/23/money-laundering-and-the-global-drug-trade-are-fueled-by-the-capitalist-elites-by-tom-burghardt/
Money Laundering and the Global Drug Trade are Fueled by the
Capitalist Elites
http://www.globalresearch.ca/index.php?context=va&aid=20210
The Banksters Laundered Mexican Cartel Drug Money
http://www.economicpopulist.org/content/banksters-laundered-mexican-cartel-drug-money
... a couple more gone 404, but still at wayback machine:
Too Big to Jail - How Big Banks Are Turning Mexico Into Colombia
https://web.archive.org/web/20100808141220/http://www.taipanpublishinggroup.com/tpg/taipan-daily/taipan-daily-080410.html
Banks Financing Mexico Drug Gangs Admitted in Wells Fargo Deal
https://web.archive.org/web/20100701122035/http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/06/28/bloomberg1376-L4QPS90UQVI901-6UNA840IM91QJGPBLBFL79TRP1.DTL
... and has continued
HSBC moved vast sums of dirty money after paying record laundering
fine. FinCEN Files probe reveals Europe's biggest bank aided
massive Ponzi scheme while on probation over ties to drug kingpins.
https://www.icij.org/investigations/fincen-files/hsbc-moved-vast-sums-of-dirty-money-after-paying-record-laundering-fine/
Global banks defy U.S. crackdowns by serving oligarchs, criminals and
terrorists. The FinCEN Files show trillions in tainted dollars flow
freely through major banks, swamping a broken enforcement system.
https://www.icij.org/investigations/fincen-files/global-banks-defy-u-s-crackdowns-by-serving-oligarchs-criminals-and-terrorists/
Too Big To Fail, Too Big To Prosecute, Too Big To Jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
other posts in this thread
https://www.garlic.com/~lynn/2024f.html#51 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#53 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#55 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#57 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#58 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#59 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#68 The joy of Patents
posts mentioning TBTF and money laundering for drug cartels enabling
military equipment and violence
https://www.garlic.com/~lynn/2024e.html#39 The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
https://www.garlic.com/~lynn/2024e.html#7 For Big Companies, Felony Convictions Are a Mere Footnote
https://www.garlic.com/~lynn/2024d.html#59 Too-Big-To-Fail Money Laundering
https://www.garlic.com/~lynn/2024b.html#77 Mexican cartel sending people across border with cash to buy these weapons
https://www.garlic.com/~lynn/2024.html#58 Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
https://www.garlic.com/~lynn/2024.html#19 Huge Number of Migrants Highlights Border Crisis
https://www.garlic.com/~lynn/2022f.html#104 American Real Estate Was a Money Launderer's Dream. That's Changing
https://www.garlic.com/~lynn/2021h.html#58 Mexico sues US gun-makers over flow of weapons across border
https://www.garlic.com/~lynn/2021h.html#13 'A Kleptocrat's dream': US real estate a safe haven for billions in dirty money, report says
https://www.garlic.com/~lynn/2021g.html#88 Mexico sues US gun-makers over flow of weapons across border
https://www.garlic.com/~lynn/2019c.html#60 America's Monopoly Crisis Hits the Military
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2016c.html#41 Qbasic
https://www.garlic.com/~lynn/2016.html#46 Thanks Obama
https://www.garlic.com/~lynn/2015f.html#56 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#80 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption
https://www.garlic.com/~lynn/2015d.html#75 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption
https://www.garlic.com/~lynn/2014i.html#27 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014c.html#99 Reducing Army Size
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SHARE User Group Meeting October 1968 Film Restoration, IBM 360 Date: 31 Oct, 2024 Blog: Facebook
for IBM Computer "Old Timers Only" - SHARE User Group Meeting October 1968 Film Restoration, IBM 360
I was at both the spring and fall '68 SHARE meetings (major/full meetings were twice a year with two mini-meetings in-between).
I had taken a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in 360 assembler for 360/30. The univ was getting a 360/67 for tss/360 to replace a 709/1401 combo and temporarily got a 360/30 to replace the 1401 (pending availability of 360/67). The univ. shut down the datacenter on weekends and I would have the whole place to myself, although 48hrs w/o sleep made monday classes hard. They gave me a bunch of hardware&software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Within a few weeks I had a 2000 card assembler program.
Then within a year of the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production and the 360/67 ran as a 360/65 with os/360). First sysgen was for R9.5. Student fortran jobs ran under a second on the 709 but initially over a minute on os360/360/67. I install HASP ("POWER" equivalent for OS/360) and it cuts the time in half. I start redoing OS/360 sysgen, carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9sec. Student fortran never got better than the 709 until I install Univ. Waterloo WATFOR.
Then CSC came out to install (virtual machine) CP67/CMS
(precursor to VM370, 3rd install after CSC itself and MIT Lincoln
Labs) and I mostly played with it during my dedicated weekend time,
originally rewriting a bunch of CP67 pathlengths to optimize CP67
running in virtual machine. OS360 jobstream was 322 secs on real
machine and initially 856secs in virtual machine (534secs CP67
CPU). After a couple months I got CP67 CPU down to 113secs (from 534)
... and was asked to attend CP67 "official" announcement at spring '68
SHARE meeting in Houston ... where I gave presentations on both OS/360
optimization and CP67 optimization work (I then updated the
presentations for the fall '68 SHARE meeting in Atlantic City). Pieces
of spring '68 presentation in this archived post
https://www.garlic.com/~lynn/94.html#18
trivia: a decade ago I was asked to track down decision to add virtual
memory to all 370s and found former staff to executive making the
decision; basically MVT storage management was so bad that regions had
to be specified four times larger than used, as a result a standard
1mbyte 370/165 only ran four concurrently executing regions at a time,
insufficient to keep system busy and justified; mapping MVT to 16mbyte
virtual memory allowed increasing concurrent executing regions by
factor of four (capped at 15 because of 4bit storage protect keys) with
little or no paging. Old archived post with pieces of email exchange,
including SPOOL discussion both 360 and pre-360.
https://www.garlic.com/~lynn/2011d.html#73
Before graduation, several SHARE contacts made me job offers, primarily for the OS/360 optimization work ... but I took a position with a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit). I thought the Renton datacenter possibly the largest in the world, a couple hundred million in 360 stuff; 360/65s arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room (somebody recently joked that Boeing was installing 360/65s like other companies installed keypunches). When I graduated, I joined CSC (instead of staying with the Boeing CFO).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
misc. posts mentioning MPIO, Watfor, Boeing CFO, Renton
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Internal Network Date: 31 Oct, 2024 Blog: Facebook
co-worker responsible for science center wide-area network
old reference by one of the inventors of GML (precursor to sgml and
then morphs into html at cern) at the cambridge science center in
1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
gml, sgml, html, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SHARE User Group Meeting October 1968 Film Restoration, IBM 360 Date: 01 Nov, 2024 Blog: Facebook
re:
VS2/SVS (MVT in 16mbyte virtual address space) was nearly identical to
running MVT in CP67 16mbyte virtual machine with similar results and
several customers had seen that result in early 70s. Also Tom Simpson
(of HASP fame) had done RASP, MFT running in virtual memory, but also
with a page-mapped filesystem (rather than VS1&VS2 keeping OS/360
filesystem). See reference for more
https://www.garlic.com/~lynn/2011d.html#73
there was only one copy of the MVT kernel in an MP ... and a 360/65 MP could have 2mbytes; that 2nd mbyte was essentially all available for region execution.
note: charlie had invented the (MP) compare&swap instruction (chosen since his initials were CAS) when he was doing CP67 SMP light weight kernel locking for 360/67 at the cambridge science center. There were then a number of visits to the 370 architecture owners in POK to get CAS added to 370 ... however they said the POK favorite son operating system people (MVT) claimed that 360 "TEST&SET" was sufficient. The 370 architecture owners then said that to get CAS added to 370, we had to come up with uses other than SMP kernel locking (thus was born its use in multitasking applications, like large DBMS, whether running on a single processor or multiprocessor), and they extended it to both single word (CS) and double word (CDS) forms.
Also: MVT (& MVS) documentation claimed that their multiprocessor support only had 1.2-1.5 times the throughput of a single processor (while we got 2 times single-processor throughput with CP67 and later VM370). disclaimer: 360/67 was originally for TSS/360, but the TSS/360 kernel was extremely bloated, so a 1mbyte single processor had lots of page thrashing; adding a 2nd processor and 2nd mbyte, the claim was that it got 3.7 times the throughput of a single processor (not so much the 2nd processor as the 2nd mbyte).
Later trivia: 1st half of 70s was IBM's Future System, totally
different from 370s and was to completely replace 370, internal
politics was killing off 370 efforts and the lack of new 370 products
during FS is claimed to have given the clone 370 makers their market
foothold. When FS finally implodes there is mad rush to get stuff back
into the 370 product pipelines including kicking off quick&dirty
3033&3081 efforts.
http://www.jfsowa.com/computer/memo125.htm
I also get asked to help with a 16-CPU multiprocessor and we con the 3033 processor engineers into working on it in their spare time (lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was really great until somebody tells the head of POK that it could be decades before the POK favorite son operating system (now "MVS") had (effective) 16-cpu support (aka high 2-cpu MP overhead getting only 1.2-1.5 times the throughput of 1-cpu; POK doesn't ship a 16-cpu machine until after the turn of the century). He invites some of us to never visit POK again and the 3033 processor engineers are directed to no more distractions, heads down on 3033.
Other trivia: When I was at Boeing, the Boeing Huntsville 360/67 2-CPU SMP was brought up to Seattle. Boeing Huntsville had gotten the 360/67 2-CPU SMP for TSS/360 with lots of 2250 graphic displays for CAD/CAM ... but ran it as two MVT (single CPU) 360/65 systems, and found long running CAD/CAM apps further aggravated the MVT storage management problem; they had modified MVTR13 to run in virtual memory mode ... no paging, but using virtual address space to alleviate some of the MVT storage management problems (sort of a precursor to VS2/SVS for 370).
SMP, tightly coupled, shared memory multiprocessor support
https://www.garlic.com/~lynn/subtopic.html#smp
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Email and PROFS Date: 01 Nov, 2024 Blog: Facebook
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. After the 23jun1969 unbundling announcement, HONE was originally for online branch office access for SEs to practice with guest operating systems running in CP67 virtual machines. The science center had also ported APL\360 to CP67/CMS for CMS\APL (fixing storage management, expanding workspaces from 16kbytes to virtual memory size and adding an API for system services like file I/O, enabling lots of real world apps) and HONE started using it for CMS\APL-based sales&marketing support apps, which came to dominate all HONE activity (and guest operating system use dwindled away).
PROFS group was picking up early versions of internal apps to wrap menus around (attracting the less computer literate) and picked up a very early version of VMSG for the email client. When the VMSG author tried to offer PROFS a much enhanced version, they tried to have him separated from the company. It all quieted down when he showed that all PROFS email had his initials in a non-displayed field. After that he only shared his source with me and one other person.
VMSG author also did Parasite/Story in the 70s ... programmed terminal
emulator with HLLAPI-like facility (well before IBM/PC) ... automagic
log into systems and run scripts. Lots more detail in this old
archived post
https://www.garlic.com/~lynn/2001k.html#35
and example story to automagically log into retain and download
information
https://www.garlic.com/~lynn/2001k.html#36
23jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
enhanced production CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
some posts mentioning internal nework, profs, vmsg, parasite, story
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Email and PROFS Date: 01 Nov, 2024 Blog: Facebook
re:
recent reply in group post, co-worker responsible for the science
center 60s wide-area network that morphs into the corporate internal
network and technology used for the corporate sponsored univ BITNET
https://www.garlic.com/~lynn/2024f.html#89 IBM Internal Network
we transfer to SJR in 1977, I get to wander around datacenters in silicon valley including disk bldg14 (engineering) and bldg15 (product test) across the street. At the time they were running 7x24, prescheduled, stand-alone testing and mentioned they had recently tried MVS, but it had 15min MTBF (requiring manual re-ipl). I offer to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). Bldg15 then gets the 1st engineering 3033 (outside POK processor engineering) and since testing only requires a percent or two of CPU, we scrounge up a 3830 controller and string of 3330s for our own private online service.
archived post about internal network passing 1000 nodes
https://www.garlic.com/~lynn/2006k.html#8
I get the HSDT project in the early 80s, T1 and faster computer links (both
terrestrial and satellite), some number of disputes with communication
group. IBM had 2701 telecommunication controllers in the 60s that
supported T1 (1.5mbits/sec), but with transition to SNA/VTAM in the 70s,
issues appeared to cap controller links at 56kbits/sec. Was working
with the NSF director and was supposed to get $20M to interconnect the
NSF supercomputer centers. Then congress cuts the budget, other things
happen and finally an RFP is released, in part based on what we already had
running. From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of actual numbers, was Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Fri, 01 Nov 2024 16:14:20 -1000
The Natural Philosopher <tnp@invalid.invalid> writes:
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Email and PROFS Date: 01 Nov, 2024 Blog: Facebook
re:
trivia: some of the MIT CTSS/7094 people went to 5th flr for MIT Project MAC and Multics. Others went to the IBM Cambridge Science Center on the 4th flr and did virtual machines (CP40 which morphs into CP67, precursor to VM370), internal network, GML, online apps, performance tools, etc.
CTSS EMAIL history
https://www.multicians.org/thvv/mail-history.html
IBM CP/CMS had electronic mail as early as 1966, and was widely used
within IBM in the 1970s. Eventually a PROFS product evolved in the
1980s.
... snip ...
The IBM 360/67 and CP/CMS
https://www.multicians.org/thvv/360-67.html
Electronic Mail and Text Messaging in CTSS, 1965 - 1973
https://www.multicians.org/thvv/anhc-34-1-anec.html
Early 70s, my 1st non-US HONE install was a brand new bldg in La Defense, landscaping yet to be done, all brown dirt ... one of the hardest problems was figuring out how to log back into the states to read email.
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
enhanced production CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SHARE User Group Meeting October 1968 Film Restoration, IBM 360 Date: 02 Nov, 2024 Blog: Facebook
re:
well, 1972, CEO Learson tried (and failed) to block the bureaucrats,
careerists and MBAs from destroying watson culture/legacy, lot more
detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
When I transferred to IBM SJR, I got to wander around IBM & non-IBM datacenters, including disk bldg14 (engineering) and bldg15 (product test) across the street. They had been running 7x24, pre-scheduled, stand-alone testing and had mentioned that they had tried MVS, but it had 15min MTBF (requiring manual re-IPL) in that environment. I offered to rewrite input/output supervisor, making it bullet-proof and never fail, allowing any amount of ondemand concurrent testing (greatly improving productivity).
bldg15 tended to get very early engineering systems and got an engineering 3033 and then an engineering 4341. Jan1979, I was con'ed into doing a benchmark on the 4341 for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). A small cluster of 4341s also had the aggregate throughput of a 3033 along with significantly smaller footprint, cost, power and cooling.
I was blamed for online computer conferencing in the late 70s and
early 80s on the internal network. It really took off the spring of
1981 when I distributed a trip report of a visit to Jim Gray at
Tandem. When the corporate executive committee was told, there was
something of an uproar (folklore: 5of6 wanted to fire me), with some
task forces that resulted in official online conferencing software and
officially sanctioned moderated forums. One of the observations:
Date: 04/23/81 09:57:42
To: wheeler
your ramblings concerning the corp(se?) showed up in my reader
yesterday. like all good net people, i passed them along to 3 other
people. like rabbits interesting things seem to multiply on the
net. many of us here in pok experience the sort of feelings your mail
seems so burdened by: the company, from our point of view, is out of
control. i think the word will reach higher only when the almighty $$$
impact starts to hit. but maybe it never will. its hard to imagine one
stuffed company president saying to another (our) stuffed company
president i think i'll buy from those inovative freaks down the
street. '(i am not defending the mess that surrounds us, just trying
to understand why only some of us seem to see it).
bob tomasulo and dave anderson, the two poeple responsible for the
model 91 and the (incredible but killed) hawk project, just left pok
for the new stc computer company. management reaction: when dave told
them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they
tried to keep bob by telling him he shouldn't go (the reward system in
pok could be a subject of long correspondence). when he left, the
management position was 'he wasn't doing anything anyway. '
in some sense true. but we haven't built an interesting high-speed
machine in 10 years. look at the 85/165/168/3033/trout. all the same
machine with treaks here and there. and the hordes continue to sweep
in with faster and faster machines. true, endicott plans to bring the
low/middle into the current high-end arena, but then where is the
high-end product development?
... snip ...
1992 (20yrs after Learson tried to save the company), IBM had one of
the largest losses in the history of US companies and was being
reorged into the 13 "baby blues" (sort of takeoff on AT&T "baby bells"
breakup a decade earlier) in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from bowels of Armonk asking if
we could help with the breakup. Before we get started, the board
brings in the former AMEX president as CEO who (somewhat) reverses the
breakup.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Learson Management Briefing pg160-163, 30yrs of ibm management
briefings
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
... and text rendition of Learson's poster:
+-----------------------------------------+
|            "BUSINESS ECOLOGY"           |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|              because it -               |
|                                         |
|            POISONS the mind             |
|            STIFLES the spirit           |
|        POLLUTES self-motivation         |
|               and finally               |
|          KILLS the individual.          |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ." by T. Vincent Learson, Chairman
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Cloud Exit: 42% of Companies Move Data Back On-Premises Date: 02 Nov, 2024 Blog: Facebook
Cloud Exit: 42% of Companies Move Data Back On-Premises
(BIPS/MIPS here are benchmark numbers of program iterations compared to a reference platform, not actual instruction counts.)
Large cloud operators have massive scale and put lots of effort into optimizing hardware system costs and automation; power and cooling were becoming major megadatacenter costs (resulting in significant pressure on chip makers to significantly reduce computation power consumption and cooling). Also, clouds tended to build out to handle large on-demand spikes (and needed components where power could drop to zero but were instantly on when needed).
2010: IBM z196 max configured mainframe, $30M and 50BIPS ($600,000/BIPS). IBM E5-2600 server blade base list price $1815 and 500BIPS ($3.63/BIPS). Major cloud operators were claiming that they assembled their own systems for 1/3rd the cost of brand name systems (i.e. E5-2600 blade for $605 or $1.21/BIPS).
A large cloud operator could have dozens of megadatacenters around the world, each megadatacenter with half million or more blades (@500BIPS/blade and $1.21/BIPS) with millions of cores(/processors), and enormous automation (operated with 70-80 staff). Any significant improvement in new chip generation computation power use, easily justified complete replacement of all systems.
Shortly after 2010, industry press claimed major server chip makers were shipping at least half their product to large megadatacenters ... and IBM unloads its server business.
Since then there are businesses offering to replicate cloud-like environments, with similar configurations for in-house datacenters ... leveraging the enormous cost savings with components used by large clouds.
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
trivia: late last century, the i86 vendors went to a hardware layer that
translated i86 instructions into RISC micro-ops for actual execution
... largely negating the throughput advantage of RISC processors.
1999: single IBM PowerPC 440 hits 1,000MIPS (>six times each Dec2000
IBM z900 mainframe processor)
1999: single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003: max. configured IBM mainframe z990, 32 processor aggregate 9BIPS
(281MIPS/proc)
2003: single Pentium4 processor 9.7BIPS (>max configured z990)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: SHARE User Group Meeting October 1968 Film Restoration, IBM 360 Date: 03 Nov, 2024 Blog: Facebook
re:
note comment about 308x technology being warmed over Future System
http://www.jfsowa.com/computer/memo125.htm
The original two-processor 3081D was slower than the Amdahl single processor; they doubled the processor cache sizes for the 3081K, which clocked about the same as the Amdahl single processor (minus the MVS multiprocessor overhead, where two-processor throughput was only 1.2-1.5 times a single processor). Then came the 3084, a two-3081K lashup ... in theory four times a 3081K processor, about the same as an Amdahl two-processor ... however, going from 2-processor to 4-processor brought software & hardware interference from three other processors rather than from just one. MVS redid kernel storage allocation to be cache-line aligned, with allocated area sizes rounded up to a multiple of the cache-line size (reducing cross-cache hardware interference). Then they had to do the same for VM370, claiming a 4-5% system throughput improvement.
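The kernel storage change described above (align each allocation to a cache-line boundary and round its size up to a line multiple, so control blocks touched by different processors never share a line) can be sketched as a toy allocator. This is a hypothetical illustration, not the actual MVS/VM370 code, and the 64-byte line size is an assumption for the example:

```python
CACHE_LINE = 64   # assumed line size for illustration; 308x-era sizes differed

class LineAlignedAllocator:
    """Toy bump allocator: every block starts on a cache-line boundary and
    occupies a whole number of lines, so no two blocks share a line
    (avoiding cross-cache invalidation traffic between processors)."""
    def __init__(self):
        self.next_free = 0    # toy address space starting at 0 (aligned)
    def alloc(self, size):
        rounded = -(-size // CACHE_LINE) * CACHE_LINE   # ceil to line multiple
        addr, self.next_free = self.next_free, self.next_free + rounded
        return addr, rounded

a = LineAlignedAllocator()
addr1, size1 = a.alloc(100)   # request 100 bytes -> rounded up to 128
addr2, size2 = a.alloc(24)    # request 24 bytes  -> rounded up to 64
assert addr1 % CACHE_LINE == 0 and addr2 % CACHE_LINE == 0
assert addr2 >= addr1 + size1   # the two blocks cannot share a cache line
```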
oft repeated trivia: in the morph from CP67->VM370, they simplified and/or dropped a lot of stuff. In 1974, I started migrating lots of stuff from CP67 to a VM370R2-based system (including the kernel structure reorg for multiprocessor operation, but not the actual multiprocessor support) for my internal CSC/VM distribution. Then in 1975, for a VM370R3-based system, I added multiprocessor support, originally for US HONE. The US HONE datacenters had been consolidated in Palo Alto and upgraded to a single-system-image, loosely-coupled, shared-DASD complex with load-balancing and fall-over support. Adding multiprocessor support allowed adding a 2nd processor to each system ... for 16-CPU total operation. I could get 2-CPU system throughput at least twice that of a 1-CPU system with a combination of extremely efficient SMP parallelization pathlengths and some cache-affinity tricks (reduced cache misses increasing processor throughput, frequently more than offsetting the SMP overhead).
For some reason much of that was undone for VM370 SP1, which came with the 3081 and was targeted at TPF/ACP customers; it decreased throughput for all other VM370 SMP customers by 10-15%. I got called in by a very long-time, large TLA gov. customer (dating back to the very early CP67 days) to try and help revert VM370 SMP to pre-SP1. One of the issues was that SP1 had a hack with 3270 I/O improving interactive response (trying to mask the throughput decrease). However, this particular customer was all high-speed glass teletypes ... so the 3270 I/O hack didn't help at all.
old email ref
https://www.garlic.com/~lynn/2001f.html#email830420
Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Engine-Cars Newsgroups: alt.folklore.computers Date: Sun, 03 Nov 2024 10:42:14 -1000
Bud Frede <frede@mouse-potato.com> writes:
The foreign makers determined that, with the quotas set, they could sell just as many higher-end cars (with greater profits), which further reduced any downward pressure on US car prices and increased US industry profits, which they pocketed. At the time, the car business was taking 7-8yrs to come out with a new design (from initial work to rolling off the line), usually with two efforts running in parallel, offset 3-4yrs (to simulate something new more often), and the foreign makers had cut their elapsed time in half to 3-4yrs (as part of a completely different process).
In 1990, there was a "C4 taskforce" (by the US makers) to look at (finally?) completely remaking themselves, and since they were planning on leveraging technology, technology companies were invited to participate; I was one invitee from IBM (the other was a POK mainframe IBMer).
At the time, the foreign makers were in the process of cutting elapsed time in half again, from 3-4yrs to 18-24 months (while the US was still at 7-8yrs), giving the foreign makers a significant advantage in transitioning to new designs incorporating new technologies and changing customer preferences. The long US development time was aggravated by the fact that most US makers had spun off their parts businesses ... and were finding that equipment that was part of 8-year-old designs had changed, and they had further delays adapting designs for current equipment.
Offline, I would chide the IBM mainframe rep about what IBM's contribution was, since IBM had some of the same problems.
Roll forward two decades and bailouts showed they still hadn't been able to make the transition.
C4 auto taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of actual numbers, was Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Sun, 03 Nov 2024 10:55:36 -1000
rbowman <bowman@montana.com> writes:
... and railroads scamming the supreme court
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiii/loc45-50:
IN DECEMBER 1882, ROSCOE CONKLING, A FORMER SENATOR and close
confidant of President Chester Arthur, appeared before the justices of
the Supreme Court of the United States to argue that corporations like
his client, the Southern Pacific Railroad Company, were entitled to
equal rights under the Fourteenth Amendment. Although that provision
of the Constitution said that no state shall "deprive any person of
life, liberty, or property, without due process of law" or "deny to
any person within its jurisdiction the equal protection of the laws,"
Conkling insisted the amendment's drafters intended to cover business
corporations too.
pg36/loc726-28:
On this issue, Hamiltonians were corporationalists--proponents of
corporate enterprise who advocated for expansive constitutional rights
for business. Jeffersonians, meanwhile, were populists--opponents of
corporate power who sought to limit corporate rights in the name of
the people.
pg229/loc3667-68:
IN THE TWENTIETH CENTURY, CORPORATIONS WON LIBERTY RIGHTS, SUCH AS
FREEDOM OF SPEECH AND RELIGION, WITH THE HELP OF ORGANIZATIONS LIKE
THE CHAMBER OF COMMERCE.
... snip ...
False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/
I Origins of the Corporation. Although the corporate structure dates
back as far as the Greek and Roman Empires, characteristics of the
modern corporation began to appear in England in the mid-thirteenth
century.[4] "Merchant guilds" were loose organizations of merchants
"governed through a council somewhat akin to a board of directors," and
organized to "achieve a common purpose"[5] that was public in
nature. Indeed, merchant guilds registered with the state and were
approved only if they were "serving national purposes."[6]
... snip ...
... however there has been significant pressure to give corporate
charters to entities operating in self-interest ... followed by
extending constitutional "people" rights to corporations. The supreme
court was scammed into extending 14th amendment rights to corporations
(with faux claims that was what the original authors had intended).
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a
scholar set out to identify every Fourteenth Amendment case heard by
the Supreme Court, the justices decided 28 cases dealing with the
rights of African Americans--and an astonishing 312 cases dealing with
the rights of corporations.
... snip ...
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Engine-Cars Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Sun, 03 Nov 2024 11:25:52 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Iconoclast: A Neuroscientist Reveals How to Think Differently
https://www.amazon.com/Iconoclast-Neuroscientist-Reveals-Think-Differently/dp/1422115011
pg125/loc1324-28:
The Model T became possible only when Ford heard about a new type of
steel that was being smelted in France. French steel contained a secret
ingredient, vanadium, which made it three times stronger than regular
steel. This changed everything for Ford. As with other iconoclasts, his
perception of the automobile instantly changed when he saw what could be
done with a vehicle that weighed a third less. Now, little gas engines
that struggled to pull a heavy car suddenly weren't so anemic anymore. A
little engine could do a lot with a car that didn't weigh very much. The
Model T was released in 1908, and within the first year, Ford had sold
10,607 of them, more than any other manufacturer.
Reducing the weight by a third, so a little engine could do a lot, made
the Model T a significantly more attractive vehicle.
https://en.wikipedia.org/wiki/Model_T
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of Engine-Cars Newsgroups: alt.folklore.computers Date: Mon, 04 Nov 2024 03:21:05 -1000
Lynn Wheeler <lynn@garlic.com> writes:
a side effect of US makers being able to significantly increase their car prices (from the 70s to the 80s) was that prices far outstripped the rise in wages, and they needed to move from 36-month auto loans to 60- and 72-month loans ... but financial institutions wouldn't make 60/72-month loans w/o warranties matching the life of the loan ... and US makers were forced to significantly improve US auto quality, since otherwise the auto business would be losing money on warranty costs (more than offsetting the increase in profits from the price increases).
other parts of "The joy of" threads
https://www.garlic.com/~lynn/2024e.html#142 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#144 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#145 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#2 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#7 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#8 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#16 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#17 The joy of FORTRAN
https://www.garlic.com/~lynn/2024f.html#18 The joy of RISC
https://www.garlic.com/~lynn/2024f.html#22 stacks are not hard, The joy of FORTRAN-like languages
https://www.garlic.com/~lynn/2024f.html#51 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#53 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#55 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#57 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#58 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#59 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#68 The joy of Patents
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#70 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#87 The joy of Democracy
https://www.garlic.com/~lynn/2024f.html#93 The joy of actual numbers, was Democracy
https://www.garlic.com/~lynn/2024f.html#99 The joy of actual numbers, was Democracy
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Father, Son & CO. My Life At IBM And Beyond Date: 04 Nov, 2024 Blog: Facebook
Father, Son & CO. My Life At IBM And Beyond
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
my 2022 rendition, 1972 CEO Learson trying (& failed) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
20 years later IBM has one of the largest losses in the history of US
companies and was being reorganized into the 13 "baby blues" (take-off
on the AT&T "baby bells" breakup a decade earlier) in preparation
to breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO who (somewhat)
reverses the breakup.
note AMEX was in competition with KKR for the LBO (private-equity)
take-over of RJR; KKR wins, then runs into some difficulties and
hires away the AMEX president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
later as IBM CEO, uses some of the same methods used at RJR:
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Former AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of actual numbers, was Democracy Newsgroups: alt.folklore.computers Date: Mon, 04 Nov 2024 06:55:17 -1000
Lynn Wheeler <lynn@garlic.com> writes:
sometime in the 80s, a gimmick by corporations with large unionized work forces was to create a corporate structure with a parent company and multiple subsidiaries, with the unionized work force in a separate subsidiary and financials structured so the unionized-work-force subsidiary operated near break-even or at a loss; US auto makers, large construction equipment makers, airlines.
Equipment and auto makers could sell to the separate subsidiary at wholesale, where nearly all the profit was booked. In the case of airlines, nearly all the profit was booked in the computerized ticket sale subsidiary.
in the mid-90s, some airline operations were operating at a loss because
of the increase in fuel prices, while the parent company still had
significant profit (more than offsetting the "losses") at the ticket
subsidiary. later an airline operation even declared bankruptcy, dumping
its union workforce pensions on the US gov.
https://www.pbgc.gov/
with retirees seeing a 2/3rds cut in their pension payments.
a large construction equipment company took it a step further and incorporated its distributorship in an offshore tax haven. US manufacturing sold to the distributorship at wholesale; the distributorship's sales to US customers at retail (with all the profit) were booked in the offshore tax haven ... equipment continued to be shipped directly from US manufacturing to US customers, but nearly all the profit was booked offshore. then there were threats to even move manufacturing out of the US (being able to further cut the wholesale prices and increase the profits in tax havens).
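The transfer-pricing mechanism described above reduces to a toy calculation; all the figures below are invented purely for illustration:

```python
# Toy transfer-pricing illustration (all dollar figures hypothetical).
cost_to_make = 70    # US manufacturing cost per unit
wholesale    = 75    # price the US plant charges the offshore distributorship
retail       = 100   # price the distributorship charges US customers

us_profit       = wholesale - cost_to_make   # booked (and taxed) in the US
offshore_profit = retail - wholesale         # booked in the tax haven
print(us_profit, offshore_profit)            # most of the margin lands offshore
```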
then saw some airlines incorporating their computerized ticket corporation in offshore tax haven.
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
some posts mentioning pbgc.gov
https://www.garlic.com/~lynn/2020.html#2 Office jobs eroding
https://www.garlic.com/~lynn/2017f.html#4 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#93 United Air Lines - an OODA-loop perspective
https://www.garlic.com/~lynn/2016f.html#100 D.C. Hivemind Mulls How Clinton Can Pass Huge Corporate Tax Cut
https://www.garlic.com/~lynn/2016e.html#98 E.R. Burroughs
https://www.garlic.com/~lynn/2016b.html#98 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2016b.html#83 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2014m.html#8 weird apple trivia
https://www.garlic.com/~lynn/2010d.html#46 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2010b.html#24 Happy DEC-10 Day
https://www.garlic.com/~lynn/2008.html#65 As Expected, Ford Falls From 2nd Place in U.S. Sales
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Father, Son & CO. My Life At IBM And Beyond Date: 04 Nov, 2024 Blog: Facebook
re:
Learson Management Briefing
Management Briefing
Number 1-72: January 18,1972
ZZ04-1312
TO ALL IBM MANAGERS:
Once again, I'm writing you a Management Briefing on the subject of
bureaucracy. Evidently the earlier ones haven't worked. So this time
I'm taking a further step: I'm going directly to the individual
employees in the company. You will be reading this poster and my
comment on it in the forthcoming issue of THINK magazine. But I wanted
each one of you to have an advance copy because rooting out
bureaucracy rests principally with the way each of us runs his own
shop.
We've got to make a dent in this problem. By the time the THINK piece
comes out, I want the correction process already to have begun. And
that job starts with you and with me.
Vin Learson
... snip ...
Learson Management Briefing pg160-163, 30yrs of ibm management briefings
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
1st half of 70s "Future System" (F/S, FS) project
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
more FS info
http://www.jfsowa.com/computer/memo125.htm
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
... and text rendition of Learson's poster:
+-----------------------------------------+
|            "BUSINESS ECOLOGY"           |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|          is your worst enemy            |
|          because it -                   |
|                                         |
|          POISONS the mind               |
|          STIFLES the spirit             |
|          POLLUTES self-motivation       |
|          and finally                    |
|          KILLS the individual.          |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ." by T. Vincent Learson, Chairman
From: Lynn Wheeler <lynn@garlic.com> Subject: NSFnet Date: 03 Nov, 2024 Blog: Facebook
I got the HSDT project in the early 80s: T1 and faster computer links (both terrestrial and satellite), and some number of disputes with the communication group. IBM had 2701 telecommunication controllers in the 60s that supported T1 (1.5mbits/sec), but with the transition to SNA/VTAM in the 70s, issues appeared that capped controller links at 56kbits/sec. I was working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cut the budget, other things happened, and finally the RFP was released, in part based on what we already had running. From the 28Mar1986 Preliminary Announcement:
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet.
The winning bid put in PC/RT routers with 440kbit/sec links ... and then, to make it look like T1, installed T1 trunks with telco multiplexors.
trivia: one of the HSDT issues with T1 (and faster) internal links was the corporate requirement that all links be encrypted, and the difficulty getting link encryptors ... especially for links faster than T1.
When the latest IBM mainframe came out (the two-CPU 3081K), I did some software DES benchmarks ... which ran at 150kbytes/sec ... aka it would have required both processors dedicated to handle a full-duplex T1 link.
Later I worked on an encryptor that would handle 30mbits/sec and cost less than $100 to build.
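The T1 arithmetic above is easy to check; a back-of-envelope sketch (reading the 150kbytes/sec software DES figure as per-processor is an assumption):

```python
# Can a two-CPU 3081K keep a full-duplex T1 link encrypted in software?
T1_BPS = 1_544_000               # T1 line rate, bits/sec
one_way = T1_BPS / 8             # ~193 kbytes/sec in each direction
full_duplex = 2 * one_way        # ~386 kbytes/sec to encrypt plus decrypt
des_per_cpu = 150_000            # benchmarked software DES, bytes/sec per CPU
print(full_duplex / des_per_cpu) # roughly 2.6 CPUs -- both 3081K processors
                                 # (and then some) fully dedicated
```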
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NSFnet Date: 04 Nov, 2024 Blog: Facebook
re:
Didn't know the executive directly; he owned some amount of POK software development ... early spring81 there were huge customer issues with VM/SP stability ... that he was trying to address.
... note, in the 1st half of the 70s, there was the Future System
effort, which was completely different and was going to completely
replace the existing IBM systems (I continued to work on 370 all
during FS, including periodically ridiculing it)
http://www.jfsowa.com/computer/memo125.htm
when that implodes there was a mad rush to get products back into the
370 product pipeline. The head of POK (high-end mainframes) also
convinced corporate to kill the vm370 product, shutdown the vm370
development group, and transfer all the people to POK for
MVS/XA. Endicott managed to save the VM370 product mission (for
mid-range mainframes), but had to recreate a development group from
scratch (contributing to VM/SP stability issues). Some of this shows
up in VMSHARE archives (TYMSHARE provided their VM370/CMS computer
conferencing system starting in Aug1976)
http://vm.marist.edu/~vmshare
The next new high-end mainframe was 3081, originally multiprocessor
only, but MVS/XA initially was late (so ran in 370 mode instead of
XA/370). The market issue was that the Amdahl 1-CPU system had a
higher MIP rate than the 3081D 2-CPU. They doubled the processor
cache, which brought the aggregate 3081K 2-CPU to about the same as
the Amdahl 1-CPU system. However IBM had a market
segment involving airlines & transactions with its ACP/TPF systems
that didn't have multiprocessor support and there was concern that
whole market would move to Amdahl 1-CPU 370s. POK helped with
enhancements for VM/SP multiprocessor support, that would improve
ACP/TPF throughput running in single processor virtual machine
... helping motivate ACP/TPF customers to use 3081s; however it
degraded performance for most other multiprocessor (3081 or earlier)
VM/SP customers.
I recently posted in a "private" mainframe group about getting called
in by a long-time (back to 60s virtual machines), high-end gov. agency
customer to revert the ACP/TPF enhancements. It mentions that in the
morph from CP67->VM370, lots of features were simplified or dropped
(like multiprocessor support). I started adding things back into
VM370, including multiprocessor support in 1975 on a VM370R3 base for
internal datacenters.
https://www.garlic.com/~lynn/2024f.html#97 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
online, commercial, virtual-machine-based services
https://www.garlic.com/~lynn/submain.html#online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NSFnet Date: 04 Nov, 2024 Blog: Facebook
re:
Another MP story: after FS implodes, I got asked to help with a 16-CPU multiprocessor project, and we con the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-CPU support (i.e. MVS documentation said that their 2-CPU throughput was only 1.2-1.5 times that of 1-CPU ... while I was getting 2 times the throughput; POK doesn't ship a 16-CPU machine until after the turn of the century). The head of POK then invites some of us to never visit again ... and directs the 3033 processor engineers: heads down and no distractions.
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
A co-worker at the science center was responsible for the science
center CP67 "wide-area" network, which morphed into the IBM internal
network (larger than arpanet/internet until sometime mid/late 80s,
about the time they forced it to convert to SNA/VTAM); the technology
was also used for the corp-sponsored univ BITNET. An item from one of
the inventors of GML (precursor to SGML, HTML, XML, etc) at the
science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
co-worker (we transfer to IBM SJR on the west coast in 1977, he passes
aug2020):
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMerc article about Edson and "IBM'S MISSED OPPORTUNITY WITH THE
INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website ... blocked from converting internal network to
tcp/ip
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
The SNA/VTAM group had been fiercely fighting off client/server and distributed computing and trying to block the release of IBM mainframe TCP/IP support. When that got reversed, they changed tactics and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate 44kbytes/sec throughput using nearly a whole 3090 CPU. I then did RFC1044 enhancements and, in tuning tests at Cray Research between a Cray and an IBM 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
trivia: for the NSFNET upgrade to T3, I was asked to be the red team, and a couple dozen people from a half dozen labs around the world were the blue team. At the final executive review, I presented first. Then 5-10 minutes into the blue team presentation, the executive pounded on the table and said he would lay down in front of a garbage truck before he let anything but the blue team proposal go forward. I (and a couple others) got up and left.
for another look at IBM (public IBM group)
https://www.garlic.com/~lynn/2024f.html#102 Father, Son & CO. My Life At IBM And Beyond
https://www.garlic.com/~lynn/2024f.html#104 Father, Son & CO. My Life At IBM And Beyond
Learson trying (& failed) to block the bureaucrats, careerists, and MBAs from
destroying the watson legacy/culture
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Father, Son & CO. My Life At IBM And Beyond Date: 05 Nov, 2024 Blog: Facebook
re:
Then IBM becomes a financial engineering company
Stockman; The Great Deformation: The Corruption of Capitalism in
America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus needing it for redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
more financial engineering company
IBM deliberately misclassified mainframe sales to enrich execs,
lawsuit claims. Lawsuit accuses Big Blue of cheating investors by
shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business -
and moving said sales to its strategic business segments - in
violation of securities regulations.
... snip ...
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
other recent posts mentioning IBM financial engineering company:
https://www.garlic.com/~lynn/2024e.html#124 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#77 The Death of the Engineer CEO
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#105 IBM 360
https://www.garlic.com/~lynn/2022f.html#105 IBM Downfall
https://www.garlic.com/~lynn/2022d.html#83 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022c.html#91 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2022c.html#46 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2022b.html#115 IBM investors staged 2021 revolt over exec pay
https://www.garlic.com/~lynn/2022b.html#52 IBM History
https://www.garlic.com/~lynn/2022.html#108 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#75 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#11 General Electric Breaks Up
https://www.garlic.com/~lynn/2021k.html#3 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#101 Who Says Elephants Can't Dance?
https://www.garlic.com/~lynn/2021i.html#80 IBM Downturn
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NSFnet Date: 05 Nov, 2024 Blog: Facebook

re:
Last product at IBM was HA/CMP, it started out as HA/6000 for the
NYTimes to move their newspaper (ATEX) system off VAXCluster to
RS/6000. I rename it HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
when I start doing technical/scientific cluster scale-up with national
labs and commercial cluster scale-up with RDBMS vendors (that had
VAXCluster support in same source base with UNIX, I do a distributed
lock manager supporting VAXCluster semantics to ease the port).
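For illustration only, the six VMS/VAXCluster lock modes and their standard compatibility matrix can be sketched like this; a minimal sketch, not the actual HA/CMP distributed lock manager code (names and layout are mine):

```python
# Hypothetical sketch of VAXCluster-style lock-mode compatibility:
# NL=null, CR=concurrent read, CW=concurrent write, PR=protected read,
# PW=protected write, EX=exclusive.
MODES = ("NL", "CR", "CW", "PR", "PW", "EX")

# COMPAT[held][requested]: can a new request be granted alongside a held lock?
COMPAT = {
    "NL": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": True},
    "CR": {"NL": True, "CR": True, "CW": True, "PR": True, "PW": True, "EX": False},
    "CW": {"NL": True, "CR": True, "CW": True, "PR": False, "PW": False, "EX": False},
    "PR": {"NL": True, "CR": True, "CW": False, "PR": True, "PW": False, "EX": False},
    "PW": {"NL": True, "CR": True, "CW": False, "PR": False, "PW": False, "EX": False},
    "EX": {"NL": True, "CR": False, "CW": False, "PR": False, "PW": False, "EX": False},
}

def grantable(held, requested):
    """Grant a request only if it is compatible with every lock already held."""
    return all(COMPAT[h][requested] for h in held)
```

Providing these semantics on Unix is what eased porting the RDBMS vendors' existing VAXCluster support.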
Early jan1992, have meeting with Oracle CEO where AWD/Hester tells Ellison that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. Then mid-jan, update IBM (gov) FSD about status and they tell the IBM Kingston supercomputer group that they were going with HA/CMP for gov customers. End of Jan, cluster scale-up is transferred for announce as IBM supercomputer (for "technical/scientific ONLY") and we are told we can't do anything with more than four processors (we leave IBM a few months later).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360/65 and 360/67 Date: 07 Nov, 2024 Blog: Facebook

I take a 2 credit-hr intro to fortran/computers and at end of semester, univ hires me to rewrite 1401 MPIO for 360/30. The univ was getting a 360/67 for tss/360 to replace 709/1401 and got a 360/30 temporarily replacing the 1401 (pending availability of the 360/67). Univ. shuts down the datacenter on weekends and I get the machine room dedicated, although 48hrs w/o sleep makes monday classes hard. Within a few weeks, I have a 2000-card assembler program.
then within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for OS/360 (the 360/67 running as a 360/65, tss/360 never comes to fruition). Student fortran ran under a second on the 709 (tape->tape), initially well over a minute on OS/360. I install HASP, cutting the time in half. I then start doing highly modified stage2 sysgens to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.
Then CSC comes out and installs CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly play with it during my weekend stand-alone time. Over the next few months I rewrite large amounts of CP67 code, initially concentrating on pathlengths for running OS/360 in a virtual machine. I'm then invited to the Spring SHARE Houston meeting for the "public" CP67/CMS announce. CSC then has a one week CP67/CMS class at the Beverly Hills Hilton; I arrive Sunday night and am asked to teach the CP67 class. It turns out the IBM CSC employees that were to teach had given notice to join a commercial online CP67/CMS service bureau. Within a year, some Lincoln Labs people form a 2nd commercial online CP67/CMS service bureau (both specializing in services for the financial industry).
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world (couple hundred million in 360 stuff), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (some joke that Boeing was getting 360/65s like other companies got keypunches), although they have a single 360/75 used for classified work. Lots of politics between the Renton director and the CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the room and install a 360/67 for me to play with when I'm not doing other stuff). They also bring the Boeing Huntsville 2-CPU 360/67 up to Seattle. When I graduate I join the science center (instead of staying with the Boeing CFO).
back in the days of rent/lease, charges were based on the system meter (meter didn't run in CE-mode). All cpu and channel activity needed to be idle for 400ms before the system meter stopped. Lots of work on CP67 in the transition to 7x24, running dark room unattended offshift, including reworking terminal channel programs so the system meter could stop when idle (but instant on when characters arrive). Trivia: long after IBM changed from rent/lease to sales, MVS still had a timer task that woke up every 400ms (which would have prevented the system meter from ever stopping).
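The system-meter rule above can be put in a toy model: the meter keeps running from each burst of cpu/channel activity until a full 400ms of idle has passed. The 400ms constant is from the post; event times and everything else here are illustrative.

```python
IDLE_THRESHOLD_MS = 400

def meter_running_time(events, end_ms):
    """events: sorted ms timestamps of cpu/channel activity.
    Returns total ms the system meter runs through end_ms."""
    running = 0
    for i, t in enumerate(events):
        nxt = events[i + 1] if i + 1 < len(events) else end_ms
        # the meter runs until the next activity, or until 400ms of idle elapses
        running += min(nxt - t, IDLE_THRESHOLD_MS)
    return running

# a timer task waking every 400ms keeps the meter running continuously
timer_task = list(range(0, 10_000, 400))
```

A single burst on an otherwise idle system bills only 400ms, while the MVS-style 400ms timer task keeps the meter (and the rental charges) running the whole time.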
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and the US online sales and marketing support HONE systems were a long time customer. In the morph of CP67->VM370, lots of stuff is simplified and/or dropped and in 1974, I start adding stuff back in, starting with a VM370R2 base for my CSC/VM. US HONE consolidates their datacenters in Palo Alto (when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter), with VM370 enhanced for single-system image, loosely-coupled, shared DASD operation with load-balancing and fall-over across the complex. I then add multiprocessor support to my CSC/VM, initially for HONE so they can add a 2nd 168 processor to each system (16 CPUs total).
bitsavers cp67
https://bitsavers.org/pdf/ibm/360/cp67/
360 functional characteristics, including 65 and 67
https://bitsavers.org/pdf/ibm/360/functional_characteristics/
trivia: this gives some market compute cycle ranking, from the end of
ACS. Amdahl had won the battle to make ACS 360-compatible (folklore is
it was canceled when executives thought it would advance the
state-of-the-art too fast and IBM would lose control of the market;
Amdahl then leaves IBM).
https://people.computing.clemson.edu/~mark/acs_end.html
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CP67L, CSC/VM, SJR/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some posts mentioning 709/1401, MPIO, Fortran, CP67/CMS, Boeing CFO,
Renton, HONE
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Japan Technology Date: 08 Nov, 2024 Blog: Facebook

after graduating and joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters and the online sales&marketing support HONE system was a long time customer ... and I got some of my 1st overseas trips when HONE asked me to go along for non-US installs (early ones in the 70s were Paris and Tokyo).
In early 80s, got HSDT project, T1 and faster computer links (both
terrestrial and satellite) which resulted in some amount of disputes
with the SNA/VTAM group (note in the 60s IBM had 2701 controllers that
supported T1 ... 1.5mbits/sec; with the transition to SNA/VTAM in the
70s, issues apparently capped controllers at 56kbits). HSDT was also
having some custom hardware built on the other side of Pacific. Week
before I was to visit, Raleigh sent out announcement about new
"networking" forum with the following definition:
low-speed: 9.6kbits/sec,
medium speed: 19.2kbits/sec,
high-speed: 56kbits/sec,
very high-speed: 1.5mbits/sec
monday morning on wall of conference room on the other side of
pacific, there were these definitions:
low-speed: <20mbits/sec,
medium speed: 100mbits/sec,
high-speed: 200mbits-300mbits/sec,
very high-speed: >600mbits/sec
Also had tour/demo of a cdrom player line, surface mount technology,
optical drives, advanced FEC ... $300 retail, more advanced than
computer stuff going for tens of thousands.
trivia: In late 70s and early 80s, I was blamed for online computer conferencing on the IBM internal network (larger than arpanet/internet from the start until sometime mid/late 80s, about the time the internal network was forced to convert to SNA/VTAM). Folklore is when the corporate executive committee was told, 5 of 6 wanted to fire me. One of the outcomes was official online discussion group software and approved&moderated forums.
co-worker at the cambridge science center responsible for the science
center wide-area CP67-network (which morphs into ibm internal
network), reference by one of the people at the science center
responsible for inventing GML (precursor to SGML, HTML, etc) in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
co-worker ... passed in aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM etc
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
some past posts mentioning new Raleigh forum
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2019b.html#96 Journey from Idea to Practice: Internetworking and Protocols
https://www.garlic.com/~lynn/2019b.html#79 IBM downturn
https://www.garlic.com/~lynn/2019b.html#23 Online Computer Conferencing
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
https://www.garlic.com/~lynn/2010e.html#11 Crazed idea: SDSF for z/Linux
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Email and PROFS Date: 09 Nov, 2024 Blog: Facebook

re:
... early use was also a "distributed" network development project between CSC & Endicott to add (virtual memory) 370 virtual machine emulation to CP67: CP67L runs on the real 360/67; CP67H, modified to support virtual memory 370 virtual machines, runs in a CP67L 360/67 virtual machine; and CP67I, modified to run with the 370 virtual memory architecture, runs in a CP67H 370 virtual machine (which in turn runs in a CP67L 360/67 virtual machine). The reason for not running CP67H directly on the real 360/67 was that the CSC system also had professors, staff, and students from Boston/Cambridge area institutions using it, and the extra layer provided security against leaking unannounced 370 virtual memory. CP67I was in regular production use a year before the first engineering 370 with virtual memory was operational (IPL'ing CP67I was one of the 1st test cases on that machine). Later, three engineers from San Jose add 3330&2305 device support to CP67I for CP67SJ.
trivia: more than a decade ago, I was asked to track down the decision
to add virtual memory to all 370s and found the staff member to the
executive making the decision. Basically MVT storage management was so
bad that regions frequently had to be specified four times larger than
used, so a typical 1mbyte 370/165 only ran four regions concurrently,
insufficient to keep the system busy and justified. Running MVT in a
16mbyte virtual address space (VS2/SVS) would allow the number of
concurrent regions to be increased by a factor of four (capped at 15
total because of the 4bit storage protect key) with little or no
paging, analogous to running MVT in a CP67 16mbyte virtual
machine. Old archived newsgroup post with pieces of email exchange:
https://www.garlic.com/~lynn/2011d.html#73
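The region arithmetic above can be sketched back-of-envelope; the sizes and factors are the post's illustrative numbers, not measurements.

```python
# 4-bit storage protect key -> 16 key values; key 0 reserved for the
# system, leaving at most 15 concurrently protected regions.
protect_key_bits = 4
max_regions = 2 ** protect_key_bits - 1   # 15

regions_on_1mbyte_165 = 4   # MVT on a typical 1mbyte 370/165
factor = 4                  # regions specified ~4x larger than actually used
# moving MVT into a 16mbyte virtual address space (VS2/SVS) allows ~4x
# the concurrent regions with little paging, but the protect key caps it
svs_regions = min(regions_on_1mbyte_165 * factor, max_regions)
```

The 4x increase runs straight into the 15-region cap, which is the pressure that later drives VS2/SVS to VS2/MVS (an address space per region).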
In POK, Ludlow was implementing the prototype SVS on a 360/67 ... a little bit of code to build virtual memory tables and do simple paging ... the biggest piece of code was EXCP/SVC0, the same issue that CP67 had: making copies of channel programs, replacing virtual addresses with real ... and he crafts the CP67 "CCWTRANS" into EXCP.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal net
https://www.garlic.com/~lynn/subnetwork.html#internalnet
other posts mentioning modifying CP67 to emulate 370 virtual machines
https://www.garlic.com/~lynn/2024f.html#80 CP67 And Source Update
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 370 Virtual Memory Date: 10 Nov, 2024 Blog: Facebook

initially, 370s came w/o virtual memory (VS ... virtual storage). More than a decade ago, I was asked to track down the decision to add virtual memory to all 370s and found the staff member to the executive making the decision. Basically MVT storage management was so bad that regions frequently had to be specified four times larger than used, so a typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. Running MVT in a 16mbyte virtual address space (VS2/SVS) would allow the number of concurrent regions to be increased by a factor of four (capped at 15 total because of the 4bit storage protect key) with little or no paging, analogous to running MVT in a CP67 16mbyte virtual machine. Old archived newsgroup post with pieces of email exchange:
In POK, Ludlow was implementing the prototype SVS on a 360/67 ... a little bit of code to build virtual memory tables and do simple paging ... the biggest piece of code was EXCP/SVC0, the same issue that CP67 had: making copies of channel programs, replacing virtual addresses with real ... and he crafts the CP67 "CCWTRANS" into EXCP.
370/165 processor engineers were complaining that if they had to ship the full 370 virtual memory architecture, announce and ship would slip by six months. Eventually the decision was made to retrench 370 virtual memory to the 165 subset ... and the other processors already implementing the full 370 virtual memory architecture had to regress to the 165 subset (and any software implementing full support had to drop back to the 165 subset).
then larger 370s needed more than 15 concurrent regions (limited by
the 4bit storage protect key in a single 16mbyte virtual address
space), and VS2/SVS morphs into VS2/MVS ... each region gets its own
separate 16mbyte virtual address space (separate address spaces
isolate regions instead of storage protect keys, so no cap of 15
concurrent regions). However the decision then was to replace 370 with
"Future System", completely different from 370 (and internal politics
was killing off 370 efforts). When FS implodes, there is a mad rush to
get stuff back into the 370 product pipelines ... including kicking
off the quick&dirty 3033 & 3081 in parallel
http://www.jfsowa.com/computer/memo125.htm
The MVS system structure was getting increasingly bloated, threatening to completely take over every 16mbyte address space, leaving nothing for the application ... motivating MVS/XA: 31bit/2gbyte address spaces and some other features to compensate for the enormous MVS bloat. At the same time the head of POK convinces corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (trivia: XA/370 architecture documents were referred to as "811" for their Nov1978 pub date). Endicott eventually was able to save the VM370 product mission for the mid-range, but had to recreate a VM370 development group from scratch.
After MVS/XA was shipping, customers weren't converting as planned. Amdahl, which had created its microcoded HYPERVISOR ("multiple domain") support (a virtual machine subset), was having better success with MVS/XA conversion, able to run MVS and MVS/XA concurrently (IBM wasn't able to respond with PR/SM&LPAR for 3090 until nearly a decade later). An Amdahl single processor was faster than the two-processor 3081D (IBM doubles the 3081 processor caches for the two-processor 3081K so aggregate MIPS is about the same as Amdahl's single processor, although MVS, with its even more bloated two-processor overhead, was documented as only 1.2-1.5 times the throughput of a single processor). POK had done a very limited virtual machine system, VMTOOL, for MVS/XA testing; VMTOOL eventually was released as VM/MA (i.e. migration aid, later VM/SF), able to run MVS & MVS/XA concurrently on the same machine, trying to respond to Amdahl's HYPERVISOR.
POK then proposes a few hundred person group to upgrade VMTOOL to the feature, function and performance of VM370 for VM/XA. Endicott had a counter (an internal sysprog in Rochester had added XA/370 support to VM370), but POK prevails.
trivia: SHARE song when customers weren't moving from MVT&SVS to MVS
(as IBM POK had planned)
http://www.mxg.com/thebuttonman/boney.asp
other trivia: after FS implodes, Endicott cons me into helping with
the 138/148 microcode assist (find the 6kbytes of highest-executed
vm370 pathlengths for moving to microcode with a 10:1 speed up). Old
archived post w/initial analysis: 6kbytes of instructions accounted
for 79.55% of kernel execution
https://www.garlic.com/~lynn/94.html#21
then Endicott cons me into going around the world presenting the business case for the 138/148 "ECPS". Endicott then tries to convince corporate to let them preinstall VM370 on every 138 & 148 shipped ... but with POK actively working on getting the VM370 product killed, they weren't successful.
recent post mentioning joint CSC/Endicott development project to
implement support for virtual memory 370 virtual machines in CP67
https://www.garlic.com/~lynn/2024f.html#112
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Posts mentioning CP67L, CSC/VM, and/or SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
some past posts mentioning 138/148 "ECPS" and wanting to pre-install
VM370 on every machine shipped
https://www.garlic.com/~lynn/2024f.html#38 IBM 370/168
https://www.garlic.com/~lynn/2024e.html#129 IBM 4300
https://www.garlic.com/~lynn/2024e.html#113 370/125, Future System, 370-138/148
https://www.garlic.com/~lynn/2024e.html#33 IBM 138/148
https://www.garlic.com/~lynn/2024.html#61 VM Microcode Assist
https://www.garlic.com/~lynn/2023g.html#78 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#50 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#95 370/148 masthead/banner
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#47 370/125 and MVCL instruction
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2018.html#93 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2016d.html#62 PL/I advertising
https://www.garlic.com/~lynn/2013.html#67 Was MVS/SE designed to confound Amdahl?
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2009r.html#51 "Portable" data centers
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: REXX Date: 10 Nov, 2024 Blog: Facebook

long ago and far away (spring '82) ... before rex was renamed and released as rexx, I wanted to show it wasn't just another pretty scripting language; the demo was to redo a very large assembler program (program failure and dump analysis) in three months elapsed time working half time, with ten times the function and ten times the performance (some hacks to have the interpreted language running faster than assembler). I finished early, so decided to implement some automated scripts that searched for common failure signatures.
I thought that it would be released to customers, but for whatever
reason it wasn't (even though nearly every PSR and internal datacenter
was using it; this was early in the OCO-wars, "object code only"
... customers complaining that source would no longer be
available). I did manage to get approval to give user group
presentations on how I did the implementation ... and within a few
months, similar implementations started appearing. I eventually did
get a request from the 3090 service processor (3092) group to release
it on the service processor.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
some old email
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223
in this archived post
https://www.garlic.com/~lynn/2010e.html#32
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Internal network and "tele" Date: 10 Nov, 2024 Blog: Facebook

Late 70s, we used to have friday after-work discussions about how to get the large computer-illiterate population to use computers, and one of the things we came up with was an online telephone book. Jim Gray would spend a week writing the application with the requirement that it return the information in less time than it took to open the paper book on the desk (radix partition search) and I would spend a week writing the functions that convert softcopy of the location-specific paper books to "tele" format ... then started merging email addresses into the tele phone books. Afterwards we were told somebody had submitted a request to corporate for a few tens of millions for an organization and dedicated mainframe to provide similar function.
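A radix-partition lookup of the kind described can be sketched as below: entries are partitioned by leading character, so a lookup probes only one small sorted partition. This is a hedged illustration, not Jim Gray's implementation; the names and numbers are invented.

```python
from bisect import bisect_left
from collections import defaultdict

def build(entries):
    """Partition (name, number) entries by leading character, sorted within."""
    parts = defaultdict(list)
    for name, number in entries:
        parts[name[0]].append((name, number))
    for part in parts.values():
        part.sort()
    return parts

def lookup(parts, name):
    """Probe one partition, then binary search within it."""
    part = parts.get(name[0], [])
    i = bisect_left(part, (name, ""))
    if i < len(part) and part[i][0] == name:
        return part[i][1]
    return None

directory = build([("gray", "x200"), ("jones", "x300"), ("smith", "x100")])
```

The radix step shrinks the search space before any comparison happens, which is how the response-time target (faster than opening a paper book) becomes plausible on period hardware.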
trivia: some of the MIT CTSS/7094 people went to 5th flr for MIT Project MAC and Multics. Others went to the IBM Cambridge Science Center on the 4th flr and did virtual machines (CP40 which morphs into CP67, precursor to VM370), internal network, GML, online apps, performance tools, etc.
CTSS EMAIL history
https://multicians.org/thvv/mail-history.html
IBM CP/CMS had electronic mail as early as 1966, and was widely used
within IBM in the 1970s. Eventually a PROFS product evolved in the
1980s.
... snip ...
co-worker at the cambridge science center responsible for the science
center wide-area CP67-network (which morphs into ibm internal network,
larger than arpanet/internet from the beginning until mid/late 80s,
about the time the internal network was forced to convert to
SNA/VTAM), reference by one of the people at the science center
responsible for inventing GML (precursor to SGML, HTML, etc) in 1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
co-worker ... passed in aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
Ed and I transferred to SJR on the west coast ... and in Oct1982 SJR got the first gateway to the non-IBM csnet, which had gateways to other parts of the networking world (beginning to consolidate with the cutover to internetworking 1jan1983).
At the 1Jan1983 cutover of arpanet from IMPs & host protocol to
internetworking, it had about 100 network IMPs and 255 hosts, while
the internal network was shortly to pass 1000 nodes. Old archived post
with list of world-wide corporate locations that added one or more
network nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML (precursor to SGML, HTML, etc) posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NASA Shuttle & SBS Date: 11 Nov, 2024 Blog: Facebook

I got the HSDT project early 80s, T1 and faster computer links (both terrestrial and satellite), lots of conflict with the communication group. Note IBM had 2701 telecommunication controllers in the 60s that supported T1 links. Then mid-70s, IBM got SNA/VTAM and issues appeared to cap controllers at 56kbit links (terrestrial, but also satellite problems even at 56kbits/sec). I guess as a result I got invited to the VIP stands for the 41-D launch that included SBS4 ... HSDT was getting a transponder on the SBS-4/SBS-D. Buzz Aldrin was sitting behind us escorting some Lockheed dignitaries; my youngest asked him for a signature and got a rude response ... later Buzz apologized.
Assumed one of the SNA/VTAM issues was the window-based pacing algorithm that drastically limited outstanding packets ... this shows up late 80s with the 3737 attempt to handle even short-haul terrestrial T1. The 3737 had a pseudo-VTAM that simulated a CTCA VTAM and would immediately ACK packets, trying to spoof the host VTAM limit; it had a boatload of Motorola 68k processors and memory and would then use non-SNA to the remote 3737 (which still peaked at 2mbits/sec, while US full-duplex T1 aggregate was 3mbits/sec and EU full-duplex was 4mbits/sec).
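The window-pacing limit above is simple arithmetic: throughput cannot exceed window * packet_size / round_trip_time. Window size, packet size, and RTTs in this sketch are assumptions for illustration, not measured SNA/VTAM or HSDT values.

```python
def max_throughput_bps(window_pkts, pkt_bytes, rtt_sec):
    """Ceiling on throughput when only `window_pkts` can be outstanding."""
    return window_pkts * pkt_bytes * 8 / rtt_sec

T1_BPS = 1_500_000
# the same fixed window over a short-haul link vs a satellite hop
terrestrial = max_throughput_bps(window_pkts=7, pkt_bytes=2048, rtt_sec=0.06)
satellite = max_throughput_bps(window_pkts=7, pkt_bytes=2048, rtt_sec=0.60)
# a window that fills a short-haul T1 delivers roughly a tenth of that
# over a ~600ms satellite round trip
```

Rate-based pacing sidesteps the window limit entirely by spacing transmissions to a measured rate, which is why it adapts to round-trip latency, link speed, and congestion.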
HSDT almost immediately had gone to dynamic adaptive rate-based pacing (that could adjust to round-trip latency, link speed, and congestion). IBM had set up SBS in the 70s with two other partners, but the communication group (SNA/VTAM) turned out to mostly have a stranglehold on (satellite) computer links. trivia: so many IBM (bureaucrats) transferred to SBS that it had as many levels of management for 2000 people as IBM had for 400,000 people (joke about being a nearly linear vertical organization, half the people director or above).
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
posts mentioning SBS-4 & 41-D
https://www.garlic.com/~lynn/2023c.html#0 STS-41-D and SBS-4
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#110 private thread drift--Re: Demolishing the Tile Turtle
https://www.garlic.com/~lynn/2011g.html#20 TELSTAR satellite experiment
https://www.garlic.com/~lynn/2010i.html#69 Favourite computer history books?
https://www.garlic.com/~lynn/2010c.html#57 watches
https://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
https://www.garlic.com/~lynn/2008m.html#19 IBM-MAIN longevity
https://www.garlic.com/~lynn/2007p.html#61 Damn
https://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006m.html#16 Why I use a Mac, anno 2006
https://www.garlic.com/~lynn/2005h.html#21 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2000b.html#27 Tysons Corner, Virginia
1972, Learson tries (and fails) to block the bureaucrats, careerists,
and MBAs from destroying Watson culture/legacy. Two decades later, IBM
has one of the largest losses in US corporate history and was being
re-orged into the 13 "baby blues" (take-off on AT&T "baby bells" in
its breakup a decade earlier) in preparation for breaking up the
company.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NETSYS Internet Service Provider Date: 11 Nov, 2024 Blog: Facebook
"online service provider" and "network service provider" ... back to the 60s. This has "internet service provider" Nov1989.
trivia: Pagesat provided full usenet feed over satellite. I got a deal for a free feed in return for doing unix & windows drivers for the pagesat modem ... and an article in Boardwatch magazine.
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downturn and Downfall Date: 11 Nov, 2024 Blog: Facebook
1972, Learson tries (and fails) to block the bureaucrats, careerists, and MBAs from destroying the Watson culture/legacy.
... Learson Management Briefing pg160-163, 30yrs of ibm management
briefings
http://www.bitsavers.org/pdf/ibm/generalInfo/IBM_Thirty_Years_of_Mangement_Briefings_1958-1988.pdf
It was accelerated by the Future System project failing
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
more info
http://www.jfsowa.com/computer/memo125.htm
note, I continued to work on 360&370 all during the Future System
period, including periodically ridiculing what they were doing (which
wasn't career enhancing).
two decades later (1992), IBM has one of the largest losses in the
history of US companies and was being reorged into the 13 "baby blues"
(take-off on the AT&T "baby bells" from the AT&T breakup a decade
earlier) in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup. Before we get started, the
board brings in the former AMEX president as CEO to try and save the
company, who (somewhat) reverses the breakup.
note AMEX was in competition with KKR for the LBO (private-equity)
take-over of RJR and KKR wins; KKR then runs into some difficulties and
hires away the AMEX president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
later, as IBM CEO, he uses some of the same methods used at RJR:
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
1996 MIT Sloan The Decline and Rise of IBM
https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/?switch_view=PDF
1995 l'Ecole de Paris The rise and fall of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
1993 Computer Wars: The Post-IBM World
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394/
Note late 80s, a senior disk engineer got a talk scheduled at the world-wide, annual, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly vetoed by the communication group (which had corporate strategic ownership of everything that crossed datacenter walls and was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). The communication group stranglehold on datacenters wasn't just limited to disks, and a couple years later (1992), IBM has one of the largest losses in the history of US companies.
trivia: One of the disk division executives' partial workarounds to the communication group stranglehold was investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to visit his investments to see if we could provide any help.
slightly related comment (in this group) a little earlier today
https://www.garlic.com/~lynn/2024f.html#116 NASA Shuttle & SBS
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
posts mentioning communication group fiercely fighting to preserve
their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
text rendition of Learson's poster
+-----------------------------------------+
|           "BUSINESS ECOLOGY"            |
|                                         |
|            +---------------+            |
|            |  BUREAUCRACY  |            |
|            +---------------+            |
|                                         |
|           is your worst enemy           |
|             because it -                |
|                                         |
|            POISONS the mind             |
|           STIFLES the spirit            |
|        POLLUTES self-motivation         |
|             and finally                 |
|         KILLS the individual.           |
+-----------------------------------------+
"I'M Going To Do All I Can to Fight This Problem . . ." by T. Vincent Learson, Chairman
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downturn and Downfall Date: 12 Nov, 2024 Blog: Facebook
re:
co-worker at the cambridge science center responsible for the science
center wide-area CP67-network (which evolves into ibm internal
network, larger than arpanet/internet from the beginning until
mid/late 80s, about the time the internal network was forced to
convert to SNA/VTAM), reference by one of the people at the science
center responsible for inventing GML (precursor to SGML, HTML, etc) in
1969
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ....
co-worker ... passed in aug2020
https://en.wikipedia.org/wiki/Edson_Hendricks
Ed and I transfer out to SJR in 1977 and I get HSDT project in the
early 80s, T1 and faster computer links (both terrestrial and
satellite), bringing conflicts with the communication group. Note in
the 60s, IBM had the 2701 telecommunication controller that supported T1,
but IBM then moves to SNA/VTAM in the 70s, and issues apparently capped
controller links at 56kbits/sec. Was working with the NSF director and
was supposed to get $20M to interconnect the NSF Supercomputing
Centers, then congress cuts the budget, some other things happen and
then an RFP is released (in part based on what we already had
running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of actual numbers, was Democracy Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Wed, 13 Nov 2024 13:32:01 -1000
The Natural Philosopher <tnp@invalid.invalid> writes:
US assistant SECTREAS Harry Dexter White was operating on behalf of the
Soviet Union and Stalin sends White a draft of demands for US to present
to Japan that would provoke Japan into attacking US and drawing US into
the war, Soviets were already fighting Germany nearly alone and wanted
to preclude an attack by Japan.
https://en.wikipedia.org/wiki/Harry_Dexter_White#Venona_project
demands were included in the Hull Note which Japan received just prior
to the decision to attack Pearl Harbor, hull note
https://en.wikipedia.org/wiki/Hull_note#Interpretations
More Venona
https://en.wikipedia.org/wiki/Venona_project
Benn Steil in "The Battle of Bretton Woods" spends pages 55-58
discussing "Operation Snow".
https://www.amazon.com/Battle-Bretton-Woods-Relations-University-ebook/dp/B00B5ZQ72Y/
pg56/loc1065-66:
The Soviets had, according to Karpov, used White to provoke Japan to
attack the United States. The scheme even had a name: "Operation
Snow," snow referring to White.
... snip ...
... from truth is stranger than fiction and law of unintended
consequences that come back to bite you, much of the radical Islam &
ISIS can be considered our own fault, VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:
There was also a calculated decision to use the Saudis as surrogates
in the cold war. The United States actually encouraged Saudi efforts
to spread the extremist Wahhabi form of Islam as a way of stirring up
large Muslim communities in Soviet-controlled countries. (It didn't
hurt that Muslim Soviet Asia contained what were believed to be the
world's largest undeveloped reserves of oil.)
... snip ...
Saudi radical extremist Islam/Wahhabi loosened on the world ... bin
Laden & 15of16 9/11 were Saudis (some claims that 95% of extreme Islam
world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism
Mattis somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:
Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting
the Shah and swearing hostility against the United States. That same
year, the Soviet Union was pouring troops into Afghanistan to prop up
a pro-Russian government that was opposed by Sunni Islamist
fundamentalists and tribal factions. The United States was supporting
Saudi Arabia's involvement in forming a counterweight to Soviet
influence.
... snip ...
and internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:
But al-Qaeda did maintain unusually close ties with our allies the
Saudis, a fact that the Bush White House worked suspiciously hard to
suppress as we went to war with two other countries.
... snip ...
other trivia:
Winston Churchill on the current mess in middle east and Persia/Iran
dating back to before WW1 ... started with the move from 13.5in to 15in
guns;
https://www.amazon.com/World-Crisis-Winston-Churchills-Collection-ebook/dp/B00FFD2DP2/
loc2012-14:
From the beginning there appeared a ship carrying ten 15-inch guns,
and therefore at least 600 feet long with room inside her for engines
which would drive her 21 knots and capacity to carry armour which on
the armoured belt, the turrets and the conning tower would reach the
thickness unprecedented in the British Service of 13 inches.
loc2087-89:
To build any large additional number of oil-burning ships meant basing
our naval supremacy upon oil. But oil was not found in appreciable
quantities in our islands. If we required it, we must carry it by sea
in peace or war from distant countries.
loc2151-56:
This led to enormous expense and to tremendous opposition on the Naval
Estimates. Yet it was absolutely impossible to turn back. We could
only fight our way forward, and finally we found our way to the
Anglo-Persian Oil agreement and contract, which for an initial
investment of two millions of public money (subsequently increased to
five millions) has not only secured to the Navy a very substantial
proportion of its oil supply, but has led to the acquisition by the
Government of a controlling share in oil properties and interests
which are at present valued at scores of millions sterling, and also
to very considerable economies, which are still continuing, in the
purchase price of Admiralty oil.
... snip ...
In the 50s, Iran's popularly elected government wanted to examine the
terms of the British oil contract. Kermit Roosevelt
https://en.wikipedia.org/wiki/Kermit_Roosevelt,_Jr.
helps with coup that installs the Shah (in exchange for supporting the
existing status quo)
https://en.wikipedia.org/wiki/1953_Iranian_coup_d%27%C3%A9tat
... and Schwarzkopf (senior; junior participated in Desert Storm)
training of the secret police to help keep the Shah in power (eventually an
uprising against the violent, repressive government)
https://en.wikipedia.org/wiki/SAVAK
CIA Director Colby wouldn't approve the "Team B" analysis (exaggerated
USSR military capability) and Rumsfeld got Colby replaced with Bush, who
would approve "Team B" analysis (justifying huge DOD spending increase),
after getting Colby replaced, Rumsfeld resigns as white house chief of staff
to become SECDEF (and is replaced as chief of staff by his assistant Cheney)
https://en.wikipedia.org/wiki/Team_B
former CIA director H.W. is VP, he and Rumsfeld are involved in
supporting Iraq in the Iran/Iraq war
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including WMDs (note picture of Rumsfeld with Saddam)
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war
VP and former CIA director repeatedly claims no knowledge of
http://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair
because he was fulltime administration point person deregulating
financial industry ... creating S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260
In the early 90s, H.W. is president and Cheney is SECDEF. Sat. photo
recon analyst told white house that Saddam was marshaling forces to
invade Kuwait. White house said that Saddam would do no such thing and
proceeded to discredit the analyst. Later the analyst informed the white
house that Saddam was marshaling forces to invade Saudi Arabia, now the
white house has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/
... roll forward ... Bush2 is president and presides over the huge cut
in taxes, huge increase in spending, explosion in debt, the economic
mess (70 times larger than his father's S&L crisis, trivia: S&L crisis
had 1000 convictions with jailtime; proportionally, the economic mess
should have had 70,000) and the forever wars, Cheney is VP, Rumsfeld is
SECDEF and one of the Team B members is deputy SECDEF (and major
architect of Iraq policy).
https://en.wikipedia.org/wiki/Paul_Wolfowitz
Before the Iraq invasion, the cousin of white house chief of staff Card
... was dealing with the Iraqis at the UN and was given evidence that
WMDs (tracing back to US in the Iran/Iraq war) had been
decommissioned. The cousin shared it with Card and others ... then was
locked up in a military hospital; the book was published in 2010 (4yrs
before the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/
NY Times series from 2014: the decommissioned WMDs (tracing back to US
support in the Iran/Iraq war) had been found early in the invasion, but the
information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html
note the military-industrial complex had wanted a war so badly that
corporate reps were telling former eastern block countries that if they
voted for IRAQ2 invasion in the UN, they would get membership in NATO
and (directed appropriation) USAID (can *ONLY* be used for purchase of
modern US arms, aka additional congressional gifts to the MIC, not in the
DOD budget). From the law of unintended consequences, the invaders were
told to bypass ammo dumps looking for WMDs, when they got around to
going back, over a million metric tons had evaporated (showing up later
in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war
https://www.garlic.com/~lynn/submisc.html#perpetual.war
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
too-big-to-fail (too-big-to-prosecute, too-big-to-jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
some recent posts mentioning Saudi, Wahhabi, Iraq
https://www.garlic.com/~lynn/2024d.html#93 Why Bush Invaded Iraq
https://www.garlic.com/~lynn/2024c.html#85 New 9/11 Evidence Points to Deep Saudi Complicity
https://www.garlic.com/~lynn/2023c.html#63 Inside the Pentagon's New "Perception Management" Office to Counter Disinformation
https://www.garlic.com/~lynn/2023b.html#45 'Saddam was terrible but we had security': the Iraq war 20 years on
https://www.garlic.com/~lynn/2023.html#10 History Is Un-American. Real Americans Create Their Own Futures
https://www.garlic.com/~lynn/2022h.html#123 Wars and More Wars: The Sorry U.S. History in the Middle East
https://www.garlic.com/~lynn/2022e.html#107 Price Wars
https://www.garlic.com/~lynn/2022d.html#43 Iraq War
https://www.garlic.com/~lynn/2022c.html#115 The New New Right Was Forged in Greed and White Backlash
https://www.garlic.com/~lynn/2022.html#97 9/11 and the Road to War
https://www.garlic.com/~lynn/2021j.html#112 Who Knew ?
https://www.garlic.com/~lynn/2021j.html#90 Afghanistan Proved Eisenhower Correct
https://www.garlic.com/~lynn/2021j.html#57 After 9/11, the U.S. Got Almost Everything Wrong
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021i.html#49 The Counterinsurgency Myth
https://www.garlic.com/~lynn/2021i.html#38 The Accumulated Evil of the Whole: That time Bush and Co. made the September 11 Attacks a Pretext for War on Iraq
https://www.garlic.com/~lynn/2021i.html#18 A War's Epitaph. For Two Decades, Americans Told One Lie After Another About What They Were Doing in Afghanistan
https://www.garlic.com/~lynn/2021h.html#62 An Un-American Way of War: Why the United States Fails at Irregular Warfare
https://www.garlic.com/~lynn/2021h.html#42 Afghanistan Down the Drain
https://www.garlic.com/~lynn/2021g.html#4 Donald Rumsfeld, The Controversial Architect Of The Iraq War, Has Died
https://www.garlic.com/~lynn/2021f.html#71 Inflating China Threat to Balloon Pentagon Budget
https://www.garlic.com/~lynn/2021f.html#65 Biden takes steps to rein in 'forever wars' in Afghanistan and Iraq
https://www.garlic.com/~lynn/2021f.html#59 White House backs bill to end Iraq war military authorization
https://www.garlic.com/~lynn/2021e.html#42 The Blind Strategist: John Boyd and the American Art of War
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downturn and Downfall Date: 13 Nov, 2024 Blog: Facebook
re:
IBM becomes a financial engineering company
Stockman; The Great Deformation: The Corruption of Capitalism in
America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback
contraption on steroids. During the five years ending in fiscal 2011,
the company spent a staggering $67 billion repurchasing its own
shares, a figure that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ....
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind
paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM
Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus, needing the money for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
more financial engineering company
IBM deliberately misclassified mainframe sales to enrich execs,
lawsuit claims. Lawsuit accuses Big Blue of cheating investors by
shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business -
and moving said sales to its strategic business segments - in
violation of securities regulations.
... snip ...
... and some trivia
Late 80s, the last product we did at IBM started out as HA/6000.
Nick Donofrio approves the project and budget, originally for NYTimes to
move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it
HA/CMP when I start doing technical/scientific cluster scale-up with
national labs and commercial cluster scale-up with RDBMS vendors
(Oracle, Sybase, Informix, and Ingres) who have VAXCluster support in the
same source base with Unix. Early Jan1992 in an Oracle meeting, AWD/Hester
tells the Oracle CEO we would have 16-system clusters by mid92 and
128-system clusters by ye92. Mid-Jan I update FSD on the work with the
national labs, and the TA to the FSD President then tells the Kingston
supercomputer group that FSD would be going with HA/CMP for gov. Then
late Jan1992, cluster scale-up is transferred for announce as IBM
supercomputer (for technical/scientific only) and we are told we can't
work on anything with more than four processors (we leave IBM a few
months later). Contributing were complaints from mainframe DB2 that if
we were allowed to continue, it would be at least 5yrs ahead of them.
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS;
(HA/CMP clusters) 16-system: 2016MIPS, 128-system: 16,128MIPS
RS6000/RIOS were still single-processor systems. The executive we
reported to with HA/CMP had gone over to head up Somerset (AIM: Apple,
IBM, Motorola), which adapts the Motorola RISC 88k multiprocessor bus
for Power/PC, and we start seeing Power multiprocessor systems for
cluster configurations. 1999, a single PowerPC 440 processor benchmarks
at 1000MIPS (1BIPS), six times the Dec2000 IBM z900 mainframe processor
(note: all MIPS/BIPS numbers are the industry standard benchmark based
on number of program iterations compared to a reference platform).
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
some posts mentioning Somerset, AIM, multiprocessor, power/pc
https://www.garlic.com/~lynn/2024f.html#36 IBM 801/RISC, PC/RT, AS/400
https://www.garlic.com/~lynn/2024f.html#21 Byte ordering
https://www.garlic.com/~lynn/2024e.html#130 Scalable Computing
https://www.garlic.com/~lynn/2024e.html#121 IBM PC/RT AIX
https://www.garlic.com/~lynn/2024e.html#105 IBM 801/RISC
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#14 801/RISC
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#93 PC370
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#57 Vintage RISC
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#67 VM Microcode Assist
https://www.garlic.com/~lynn/2024.html#52 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#44 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023b.html#28 NEC processors banned for 386 industrial espionage?
https://www.garlic.com/~lynn/2022d.html#105 Transistors of the 68000
https://www.garlic.com/~lynn/2021d.html#47 Cloud Computing
https://www.garlic.com/~lynn/2015.html#45 z13 "new"(?) characteristics from RedBook
https://www.garlic.com/~lynn/2013n.html#59 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
https://www.garlic.com/~lynn/2013f.html#29 Delay between idea and implementation
https://www.garlic.com/~lynn/2012f.html#55 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
https://www.garlic.com/~lynn/2010j.html#2 Significant Bits
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downturn and Downfall Date: 14 Nov, 2024 Blog: Facebook
re:
history IBM CEOs
https://en.wikipedia.org/wiki/List_of_IBM_CEOs
1972, Learson tries (and fails) to block the bureaucrats, careerists,
and MBAs from destroying the Watson culture/legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler
One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters; the online sales&marketing support HONE systems were long-time customers. I also got to continue to attend SHARE and visit lots of customer sites. The director of one of the large customer financial datacenters on the east coast liked me to drop by and talk technology. At one point the IBM branch manager horribly offended the customer and in retaliation they ordered an Amdahl system (a single Amdahl in a large sea of blue). Up until then, Amdahl had been selling into the technical/scientific/univ market and this would be the first commercial "true blue" install.
I was asked to go onsite for 6-12 months (to help obfuscate why the customer was installing an Amdahl machine). I talked it over with the customer and then declined IBM's offer. I was then told that the branch manager was a good sailing buddy of the IBM CEO and if I didn't do it, I could forget having promotions, raises, and an IBM career (note I never talked directly with the IBM CEO, so couldn't be positive that this wasn't some claim fabricated by the mid-70s IBM bureaucracy).
Note in the 60s, Amdahl had won the battle to make IBM's ACS
360-compatible. Then shortly after ACS/360 was killed, Amdahl leaves IBM
to found his own 360 clone computer company
https://people.computing.clemson.edu/~mark/acs_end.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
IBM Cambridge Scientific Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM, etc posts
https://www.garlic.com/~lynn/submisc.html#cscvm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NETSYS Internet Service Provider Date: 14 Nov, 2024 Blog: Facebook
re:
trivia: commercial companies made (tax free/deductible) donations to
NSFnet that were approx 4-5 times the value of the winning RFP
(commercial use could have impacted that tax free/deductible status):
<NIS.NSF.NET> [NSFNET] NETUSE.TXT
Interim 3 July 1990
NSFNET
Acceptable Use Policy
The purpose of NSFNET is to support research and education in and
among academic institutions in the U.S. by providing access to unique
resources and the opportunity for collaborative work.
This statement represents a guide to the acceptable use of the NSFNET
backbone. It is only intended to address the issue of use of the
backbone. It is expected that the various middle level networks will
formulate their own use policies for traffic that will not traverse
the backbone.
(1) All use must be consistent with the purposes of NSFNET.
(2) The intent of the use policy is to make clear certain cases which
are consistent with the purposes of NSFNET, not to exhaustively
enumerate all such possible uses.
(3) The NSF NSFNET Project Office may at any time make determinations
that particular uses are or are not consistent with the purposes of
NSFNET. Such determinations will be reported to the NSFNET Policy
Advisory Committee and to the user community.
(4) If a use is consistent with the purposes of NSFNET, then
activities in direct support of that use will be considered consistent
with the purposes of NSFNET. For example, administrative
communications for the support infrastructure needed for research and
instruction are acceptable.
(5) Use in support of research or instruction at not-for-profit
institutions of research or instruction in the United States is
acceptable.
(6) Use for a project which is part of or supports a research or
instruction activity for a not-for-profit institution of research or
instruction in the United States is acceptable, even if any or all
parties to the use are located or employed elsewhere. For example,
communications directly between industrial affiliates engaged in
support of a project for such an institution is acceptable.
(7) Use for commercial activities by for-profit institutions is
generally not acceptable unless it can be justified under (4)
above. These should be reviewed on a case-by-case basis by the NSF
Project Office.
(8) Use for research or instruction at for-profit institutions may or
may not be consistent with the purposes of NSFNET, and will be
reviewed by the NSF Project Office on a case-by-case basis.
... snip ...
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
posts mentioning NSFNET tax-free contributions
(non-commercial/acceptable use)
https://www.garlic.com/~lynn/2024e.html#85 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2023g.html#67 Waiting for the reference to Algores creation documents/where to find- what to ask for
https://www.garlic.com/~lynn/2023f.html#11 Internet
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2022h.html#91 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2021k.html#130 NSFNET
https://www.garlic.com/~lynn/2021h.html#110 The Foundation of the Internet: TCP/IP Turns 40
https://www.garlic.com/~lynn/2021h.html#66 CSC, Virtual Machines, Internet
https://www.garlic.com/~lynn/2021h.html#36 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2017k.html#32 Net Neutrality
https://www.garlic.com/~lynn/2014m.html#93 5 Easy Steps to a High Performance Cluster
https://www.garlic.com/~lynn/2014j.html#79 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014j.html#76 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014.html#3 We need to talk about TED
https://www.garlic.com/~lynn/2013n.html#18 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013d.html#52 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#88 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2010g.html#75 What is the protocal for GMT offset in SMTP (e-mail) header header time-stamp?
https://www.garlic.com/~lynn/2010e.html#64 LPARs: More or Less?
https://www.garlic.com/~lynn/2010b.html#33 Happy DEC-10 Day
https://www.garlic.com/~lynn/2007l.html#67 nouns and adjectives
https://www.garlic.com/~lynn/2006j.html#46 Arpa address
https://www.garlic.com/~lynn/2006j.html#34 Arpa address
https://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
https://www.garlic.com/~lynn/aadsm12.htm#23 10 choices that were critical to the Net's success
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Any interesting PDP/TECO photos out there?
Newsgroups: alt.sys.pdp11, alt.folklore.computers
Date: Fri, 15 Nov 2024 15:21:41 -1000
"Carlos E.R." <robin_listas@es.invalid> writes:
The univ shut down the datacenter on weekends and I would have the machine room dedicated all weekend, although 48hrs w/o sleep made Monday classes hard. They gave me a bunch of hardware & software manuals and I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc., and within a few weeks had a 2000-card assembler program.
I quickly learned to clean the tape drives and 1403 printer and to disassemble and clean the 2540 card reader/punch when I first came in Saturday morning. Sometimes when I arrived, the place would be dark: production work had finished early and they had shut the place down. Sometimes the 360/30 wouldn't power up; through reading manuals and trial and error, I learned to put all the controllers in CE mode, power on the 360/30 and each controller individually, and then take the controllers out of CE mode.
The 360/67 came in within a year of my taking the intro class, and the univ. hires me fulltime responsible for os/360 (tss/360 never came to fruition, so it ran as a 360/65 with os/360; I continued to get the machine room dedicated for weekends). Student fortran ran under a second on the 709 but initially over a minute with os/360. I install HASP, cutting the time in half. I then start redoing stage2 sysgen to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR.
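The dataset-placement idea above can be illustrated with a toy model (hypothetical dataset names, an illustrative 200-cylinder pack, not actual disk geometry or OS/360 sysgen mechanics): clustering the frequently referenced datasets on adjacent cylinders sharply cuts average arm travel versus scattering them across the pack.

```python
# Toy sketch of the stage2-sysgen placement idea: hot datasets placed
# on adjacent cylinders reduce average arm movement compared with the
# same datasets scattered across the pack. Numbers are illustrative.

def avg_seek(cylinders, refs):
    """Average arm movement (in cylinders) over a reference string."""
    pos = 0
    total = 0
    for ds in refs:
        total += abs(cylinders[ds] - pos)  # distance arm must travel
        pos = cylinders[ds]                # arm now parked at dataset
    return total / len(refs)

# Same three hot datasets, two placements on a 200-cylinder pack
scattered = {"SYS1.SVCLIB": 10, "SYS1.LINKLIB": 100, "SYS1.PROCLIB": 190}
clustered = {"SYS1.SVCLIB": 98, "SYS1.LINKLIB": 100, "SYS1.PROCLIB": 102}

# A reference string that keeps bouncing between the hot datasets
refs = ["SYS1.LINKLIB", "SYS1.SVCLIB", "SYS1.LINKLIB",
        "SYS1.PROCLIB", "SYS1.LINKLIB"] * 20

print(avg_seek(scattered, refs))  # → 73.0 cylinders per reference
print(avg_seek(clustered, refs))  # → 2.6 cylinders per reference
```

The same principle held for PDS member ordering and multi-track search: put what the workload touches most where the arm (or the search) already is.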
Then, before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, placing all dataprocessing in an independent business unit. I thought the Renton datacenter was the largest in the world: a couple hundred million in 360 stuff, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (somebody joked that Boeing was buying 360/65s like other companies bought keypunches).
Posts mentioning fortran, 709/1401, 360/67, hasp, watfor, boeing cfo,
and renton
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023e.html#88 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#101 Operating System/360
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com>
Subject: Adventure Game
Date: 15 Nov, 2024
Blog: Facebook
... speaking of Adventure, I used to drop in on TYMSHARE periodically and/or see them at the monthly BAYBUNCH meetings (hosted by Stanford SLAC). TYMSHARE had provided their VM370/CMS-based online computer conferencing system to the (mainframe user group) SHARE starting in Aug1976 as VMSHARE ... archives here:
I cut a deal with TYMSHARE to get monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on internal corporate systems and network.
On one visit to TYMSHARE, they demo'ed "ADVENTURE", which somebody had found on the Stanford SAIL PDP10 and ported to CMS ... I got a copy of the source and executable for putting up on internal systems.
commercial online, virtual machine-based services
https://www.garlic.com/~lynn/submain.html#online
internal corporate network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning tymshare, adventure, and vmshare
https://www.garlic.com/~lynn/2024f.html#11 TYMSHARE, Engelbart, Ann Hardy
https://www.garlic.com/~lynn/2024e.html#143 The joy of FORTRAN
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP
--
virtualization experience starting Jan1968, online at home since Mar1970