List of Archived Posts
2025 Newsgroup Postings (05/11 - )
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Interactive Response
- Living Wage
Interactive Response
Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
Some other details (from recent post) ... related to quarter second
response
https://www.garlic.com/~lynn/2025b.html#115 SHARE, MVT, MVS, TSO
Early MVS days, CERN did an MVS/TSO comparison with VM370/CMS, with a
1974 presentation of the analysis at SHARE ... inside IBM, copies of
the presentation were stamped "IBM Confidential - Restricted" (2nd
highest security classification), only available on a "need to know"
basis (for those that didn't directly get a copy at SHARE).
MVS/TSO trivia: late 70s, SJR got a 370/168 for MVS and a 370/158 for
VM/370 (replacing MVT on a 370/195) and several strings of 3330s, all
with two-channel-switch 3830s connecting to both systems .... but
strings & controllers were labeled MVS or VM/370, with strict rules
that there be no MVS use of the VM/370 controllers/strings. One
morning, an MVS 3330 was placed on the wrong 3330 string and within a
few minutes operations was getting irate phone calls from all over the
bldg about what had happened to response. Analysis showed the problem
was the MVS 3330 (OS/360 filesystem's extensive use of multi-track
search locks up the controller and all drives on that controller)
placed on a VM/370 3330 string, and there were demands that the
offending MVS 3330 be moved. Operations said they would have to wait
until offshift. Then a single-pack VS1 system (highly optimized for
running under VM370, including handshaking) was put up on an MVS
string and brought up on the loaded 370/158 VM370 ... and was able to
bring the MVS 168 to a crawl ... alleviating a lot of the problems for
VM370 users (operations almost immediately agreed to move the
offending MVS 3330).
Trivia: one of my hobbies after joining IBM was highly optimized
operating systems for internal datacenters. In the early 80s, there
were increasing studies showing quarter second response improved
productivity. The 3272/3277 had .086sec hardware response. Then the
3274/3278 was introduced with lots of the 3278 electronics moved back
into the 3274 controller, cutting 3278 manufacturing costs but
significantly driving up coax protocol chatter ... increasing hardware
response to .3sec-.5sec depending on the amount of data (making
quarter second impossible to achieve). Letters to the 3278 product
administrator complaining about interactive computing got a response
that the 3278 wasn't intended for interactive computing but data entry
(sort of an electronic keypunch). The 3272/3277 required .164sec
system response for the human to see quarter second. Fortunately I had
numerous IBM systems in silicon valley with .11sec (90th percentile)
system response. I don't believe any TSO users ever noticed the 3278
issues, since they rarely ever saw even one second system
response. Later, IBM/PC 3277 emulation cards had 4-5 times the
upload/download throughput of 3278 emulation cards.
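A minimal sketch of the response-time arithmetic above (python,
illustrative only, not from the original post): the system-response
budget is just the quarter-second target minus the terminal hardware
response.

TARGET = 0.25  # quarter-second human-perceived response

def system_budget(hw_response, target=TARGET):
    # remaining system-response budget after terminal hardware response
    return target - hw_response

print(system_budget(0.086))                    # 3272/3277: 0.164 sec budget
print(system_budget(0.3), system_budget(0.5))  # 3274/3278: negative, i.e. impossible
print(0.086 + 0.11)                            # .086 hardware + .11 system = .196 sec seen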
Future System was going to completely replace 370s:
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project
https://people.computing.clemson.edu/~mark/fs.html
and internal politics were killing off 370 efforts (the lack of new
370s is credited with giving the clone 370 makers their market
foothold). When FS implodes there is a mad rush to get stuff back into
the 370 product pipelines, including kicking off the quick&dirty
3033&3081 efforts in parallel. The head of POK was also lobbying
corporate to kill the VM370 product, shutdown the development group
and transfer all the people to POK for MVS/XA (Endicott eventually
manages to save the VM370 product mission, but has to recreate a
development group from scratch).
Endicott starts on XEDIT for release to customers. I send Endicott
email asking might they consider one of the internal 3270 fullscreen
editors instead, which were much more mature, had more function and
were faster. RED, NED, XEDIT, EDGAR, etc. had similar capability
("EDIT" was the old CP67/CMS editor) ... but a simple cpu usage test
that I did (summary from '79) of the same set of operations on the
same file by all the editors showed the following cpu use (at the
time, "RED" was my choice):
RED 2.91/3.12
EDIT 2.53/2.81
NED 15.70/16.52
XEDIT 14.05/14.88
EDGAR 5.96/6.45
SPF 6.66/7.52
ZED 5.83/6.52
Endicott's reply was that it was the RED author's fault that it was
so much better than XEDIT and therefore it should be his
responsibility to bring XEDIT up to the RED level.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
internal CP67L, CSC/VM, SJR/VM
https://www.garlic.com/~lynn/submisc.html#cscvm
posts mentioning RED, NED, XEDIT, EDGAR, SPF, ZED:
https://www.garlic.com/~lynn/2024.html#105 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2011p.html#112 SPF in 1978
https://www.garlic.com/~lynn/2011m.html#41 CMS load module format
https://www.garlic.com/~lynn/2006u.html#26 Assembler question
https://www.garlic.com/~lynn/2003d.html#22 Which Editor
some posts mentioning .11sec system response and 3272/3277
https://www.garlic.com/~lynn/2025b.html#47 IBM Datacenters
https://www.garlic.com/~lynn/2025.html#127 3270 Controllers and Terminals
https://www.garlic.com/~lynn/2025.html#75 IBM Mainframe Terminals
https://www.garlic.com/~lynn/2025.html#69 old pharts, Multics vs Unix
https://www.garlic.com/~lynn/2024f.html#12 3270 Terminals
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024.html#92 IBM, Unix, editors
https://www.garlic.com/~lynn/2023g.html#70 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023e.html#0 3270
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#92 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2013l.html#65 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012n.html#37 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2011d.html#53 3270 Terminal
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 11 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
Trivia: in the 90s, i86 chip makers implemented on-the-fly, pipelined
translation of i86 instructions to RISC micro-ops for execution,
largely negating the performance difference with RISC systems. Also,
Somerset/AIM (apple, ibm, motorola) was formed to do a single chip
801/RISC (with the motorola 88k bus and cache supporting
multiprocessor configurations). The industry benchmark is number of
program iterations compared to a reference platform (for MIPS rating);
the per-processor figures are just aggregate divided by processor
count (arithmetic sketched after the list); 1999:
• single core IBM PowerPC 440, 1BIPS
• single core Pentium3, 2.054BIPS
and Dec2000:
• IBM z900, 16 processors 2.5BIPS (156MIPS/processor)
2010:
• IBM z196, 80 processors, 50BIPS (625MIPS/processor)
• E5-2600 server blade (two 8-core XEON chips) 500BIPS (30BIPS/core)
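The per-processor arithmetic from the list above, as a small sketch
(figures from the post; the division is mine):

# per-processor rate = aggregate BIPS / processor count
systems = {
    "z900 (Dec2000)": (2.5, 16),
    "z196 (2010)":    (50.0, 80),
    "E5-2600 blade":  (500.0, 16),   # two 8-core XEON chips
}
for name, (bips, n) in systems.items():
    print(f"{name}: {bips * 1000 / n:.0f} MIPS/processor")
# z900 -> 156, z196 -> 625, E5-2600 -> 31250 (i.e. ~30BIPS/core)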
Note: no CKD DASD has been made for decades, all being emulated on
industry fixed-block devices (increasingly SSD).
Cache miss/memory latency, when measured in count of processor
cycles, is similar to 60s disk latency measured in count of 60s
processor cycles (memory is the new disk). The current equivalent to
60s multitasking is things like out-of-order execution, branch
prediction, speculative execution, etc (and to further improve things,
translating CPU instructions into RISC micro-ops for actual execution
scheduling). Note that individual instructions can take multiple
cycles (translation, broken into multiple parts, etc) ... but there is
a large amount of concurrent pipelining .... so the processor can
complete one instruction per cycle, even while it might take 10-50
cycles to process each individual instruction.
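A toy illustration of that latency-vs-throughput point (the 20-cycle
latency is an assumed number, purely for illustration): once the
pipeline is full, one instruction completes per cycle even though each
one spends many cycles in flight.

# toy pipeline model: assumed 20-cycle per-instruction latency,
# one new instruction issued per cycle
LATENCY = 20   # cycles each instruction spends in the pipeline
N = 1000       # instructions executed

total_cycles = LATENCY + (N - 1)   # fill the pipeline once, then one completion/cycle
print(total_cycles / N)            # ~1.02 cycles/instruction, despite 20-cycle latency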
60s undergraduate: I took a 2 credit hr intro to fortran/computers
and at the end of the semester was hired to reimplement 1401 MPIO on a
360/30. The univ was getting a 360/67 for tss/360, replacing 709/1401,
and temporarily got a 360/30 replacing the 1401. The univ. shutdown
the datacenter on weekends and I had the whole place dedicated,
although 48hrs w/o sleep affected monday classes. I was given a pile
of hardware & software manuals and got to design/implement my own
monitor, device drivers, interrupt handlers, error recovery, storage
management, etc ... and within a few weeks had a 2000 card assembler
program. The 360/67 arrived within a year of my taking the intro class
(tss/360 never came to production, so it ran as a 360/65) and I was
hired fulltime responsible for os/360. Student fortran ran under a
second on the 709 (tape->tape) but over a minute on os/360. I install
HASP and cut the time in half. I then start redoing MFTR11 STAGE2
SYSGEN, carefully placing datasets and PDS members to optimize seeks
and multi-track search, cutting another 2/3rds to 12.9secs. Student
Fortran never got better than the 709 until I install UofWaterloo
WATFOR (single-step monitor, batch card tray of jobs, ran at 20,000
cards/min on 360/65).
CSC comes out to install CP67/CMS (precursor to VM370; 3rd site after
CSC itself and MIT Lincoln Labs). I mostly got to play with it during
my weekend dedicated time, starting out rewriting pathlengths for
running the os360 virtual machine; the os360 job stream ran 322secs on
the real hardware and initially 856secs in a virtual machine (CP67 CPU
534secs); after a couple months I had reduced CP67 CPU from 534secs to
113secs. I then start rewriting the dispatcher, scheduler and paging,
adding ordered seek queuing (from FIFO) and multi-page transfer
channel programs (from FIFO, optimized for transfers/revolution,
getting the 2301 paging drum from 70-80 4k transfers/sec to a peak of
270).
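A back-of-envelope sketch of why chained, rotationally-ordered
transfers beat one-page-per-I/O FIFO (the rotation rate and
slots-per-revolution below are my assumptions, purely illustrative;
only the 70-80/sec and 270/sec figures come from the post):

# assumed drum parameters, for illustration only
REVS_PER_SEC = 60.0   # assumed rotation rate
SLOTS_PER_REV = 9     # assumed 4k page slots passing the heads per revolution
REV = 1.0 / REVS_PER_SEC

# one request per I/O (FIFO): avg half-revolution rotational delay + one slot transfer
fifo_per_io = REV / 2 + REV / SLOTS_PER_REV
print(1 / fifo_per_io)   # ~98/sec ceiling; with per-I/O software overhead, 70-80/sec

# chained channel program servicing queued requests in rotational order:
print(0.5 * SLOTS_PER_REV * REVS_PER_SEC)   # 270/sec if ~half the slots per rev are queued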
CP67 originally came with automatic terminal type identification,
supporting 1052 & 2741 terminals. The univ. had some TTYs, so I
integrated ASCII support (including automatic terminal type). I then
wanted a single dial-in number ("hunt group") for all terminals, but
IBM had taken a short-cut: while the terminal-type port scanner could
be changed, the baud rate was hardwired for each port. That kicks off
a univ project to do a clone controller: build a channel interface
board for an Interdata/3 programmed to emulate the IBM controller, but
including port auto-baud (later upgraded to an Interdata/4 for the
channel interface with a cluster of Interdata/3s for the port
interfaces). Interdata (and later Perkin-Elmer) sell it as a clone
controller, and four of us get written up as responsible for (some
part of) the IBM clone controller business.
I then add terminal support to HASP for MVT18, with an editor
emulating "CMS EDIT" syntax for simple CRJE.
In a prior life, my wife was in the GBURG JES group reporting to
Crabby & one of the ASP "catchers" for JES3; also co-author of the
JESUS (JES Unified System) specification (all the features of JES2 &
JES3 that the respective customers couldn't live w/o; for various
reasons it never came to fruition). She was then con'ed into
transferring to POK responsible for mainframe loosely-coupled
architecture (Peer-Coupled Shared Data). She didn't remain long
because of 1) periodic battles with the communication group trying to
force her to use VTAM for loosely-coupled operation and 2) little
uptake (until much later with SYSPLEX and Parallel SYSPLEX), except
for IMS hot-standby (she has a story about asking Vern Watts who he
would ask for permission; he replied "nobody" ... he would just tell
them when it was all done).
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
360&370 clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
IBM Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Peer-Coupled Shared Data Architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
other trivia: I got to wander around silicon valley datacenters after
transferring to SJR, including disk bldg14/engineering and
bldg15/product test across the street. They were running prescheduled,
7x24, stand-alone testing and mentioned that they had recently tried
MVS ... but it had a 15min MTBF (in that environment), requiring
manual re-ipl. I offer to rewrite the I/O supervisor to be bullet
proof and never fail, allowing any amount of on-demand, concurrent
testing, greatly improving productivity. I also drastically cut the
queued I/O redrive pathlength (1/10th the MVS time from interrupt to
redrive SIOF) and significantly enhanced multi-channel path efficiency
(in addition to never fail).
IBM had a guideline that a new generation product had to have
performance (should be more, but) not more than 5% less than the
previous. Initial testing of the 3880 showed it failed. It supported
"data streaming" channels (previous channels did an end-to-end
hand-shake for every byte; data-streaming cut overhead by going to
multiple bytes per hand-shake, higher datarate but less processing)
... and so they were able to get away with a much slower processor
than in the 3830. However, the slower processing significantly
increased controller channel busy for every other kind of operation,
including from the end of channel program data transfer to presenting
the ending interrupt (a significant increase in time from SIOF to
channel program ending interrupt), reducing aggregate I/O
throughput. In an attempt to mask the problem, they changed the 3880
to present the ending interrupt early and do the final controller
cleanup overlapped with operating system interrupt processing overhead
(modulo the niggling problem of finding a controller error and needing
to present "unit check" with an interrupt).
The whole thing tested fine with MVS ... the enormous MVS
interrupt-to-redrive pathlength was more than enough to mask the 3880
controller fudge. However the 3880 fudge didn't work for me: I would
hit the 3880 with a redrive SIOF long before it was done, which it
then had to respond to with CU-busy; I then had to requeue the request
and wait for the CUE interrupt (indicating the controller was really
free). I periodically pontificated that a lot of the XA/370
architecture was to mask MVS issues (and my 370 redrive pathlength was
close to the elapsed time of the XA/370 hardware redrive).
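A minimal sketch of that redrive interaction (my reconstruction for
illustration, not IBM code): a fast redrive catches the 3880 still
doing its deferred cleanup, gets CU-busy, and the request has to be
requeued until the CUE (control unit end) interrupt.

from collections import deque

class Controller:
    def __init__(self, cleanup_busy):
        self.cleanup_busy = cleanup_busy   # still finishing the previous operation?
    def siof(self, req):
        if self.cleanup_busy:
            return "CU-BUSY"               # controller not really free yet
        return "STARTED"

def redrive(queue, ctlr):
    # called from the device-end interrupt handler: start the next queued I/O
    if not queue:
        return None
    req = queue.popleft()
    if ctlr.siof(req) == "CU-BUSY":
        queue.appendleft(req)              # requeue, wait for the CUE interrupt
        return "WAIT-CUE"
    return req

q = deque(["read-A", "read-B"])
print(redrive(q, Controller(cleanup_busy=True)))    # WAIT-CUE (fast redrive loses)
print(redrive(q, Controller(cleanup_busy=False)))   # read-A starts after CUE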
Slightly other issues ... getting within a few months of the 3880
first customer ship (FCS), FE (field engineering) had a test of 57
simulated errors that were likely to occur; MVS was (still) failing in
all 57 cases (requiring manual re-ipl) and in 2/3rds of the cases
there was no indication of what caused the failure.
I then did an (IBM internal only) research report on all the I/O
integrity work, and it was impossible to believe the enormous grief
that the MVS organization caused me for mentioning the MVS 15min
MTBF.
... trivia: MVS wrath at my mentioning VM370 "never fail" & MVS
"15min MTBF" ... remember, POK had only recently convinced corporate
to kill VM370, shutdown the group and transfer all the people to POK
for MVS/XA. Endicott had managed to save the VM370 product mission
(for mid-range) but was still recreating a development group from
scratch and bringing it up to speed ... can find comments about ibm
code quality during the period in the vmshare archive
http://vm.marist.edu/~vmshare
getting to play disk engineer in bldgs14/15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
... also, as an undergraduate in the 60s, the univ hired me fulltime
responsible for OS/360 on their 360/67 (running as a 360/65) ... which
had replaced 709/1401. Student fortran ran under a second on the
709. Initially on OS/360, it was well over a minute. I install HASP,
cutting the time in half. I then start redoing the (MFTR11) stage2
sysgen to carefully place datasets and PDS members to optimize arm
seek and multi-track search, cutting another 2/3rds to
12.9secs. Student fortran never got better than the 709 until I
install UofWaterloo WATFOR.
It turns out a major part of that 12.9 secs was that OS/360 had a
major implementation goal of running in minimal real-storage
configurations, so things like the file OPEN SVC had a long string of
SVCLIB modules that had to be sequentially loaded ... I got tens of
seconds of performance improvement by carefully placing those SVCLIB
members (both for the multi-track search of the PDS directory and the
actual loading).
One of my problems was PTFs that replaced SVCLIB and LINKLIB PDS
members, disturbing the careful placement; student fortran would start
inching back up towards 20secs (from 12.9) and I would have to do a
mini-sysgen to get the ordering restored.
some other recent posts mentioning student fortran, 12.9secs, WATFOR
https://www.garlic.com/~lynn/2025b.html#121 MVT to VS2/SVS
https://www.garlic.com/~lynn/2025b.html#98 Heathkit
https://www.garlic.com/~lynn/2025b.html#59 IBM Retain and other online
https://www.garlic.com/~lynn/2025.html#103 Mainframe dumps and debugging
https://www.garlic.com/~lynn/2025.html#91 IBM Computers
https://www.garlic.com/~lynn/2025.html#26 Virtual Machine History
https://www.garlic.com/~lynn/2025.html#8 IBM OS/360 MFT HASP
https://www.garlic.com/~lynn/2024g.html#98 RSCS/VNET
https://www.garlic.com/~lynn/2024g.html#69 Building the System/360 Mainframe Nearly Destroyed IBM
https://www.garlic.com/~lynn/2024g.html#62 Progenitors of OS/360 - BPS, BOS, TOS, DOS (Ways To Say How Old
https://www.garlic.com/~lynn/2024g.html#54 Creative Ways To Say How Old You Are
https://www.garlic.com/~lynn/2024g.html#39 Applications That Survive
https://www.garlic.com/~lynn/2024g.html#29 Computer System Performance Work
https://www.garlic.com/~lynn/2024g.html#17 60s Computers
https://www.garlic.com/~lynn/2024g.html#0 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#124 Any interesting PDP/TECO photos out there?
https://www.garlic.com/~lynn/2024f.html#110 360/65 and 360/67
https://www.garlic.com/~lynn/2024f.html#88 SHARE User Group Meeting October 1968 Film Restoration, IBM 360
https://www.garlic.com/~lynn/2024f.html#69 The joy of FORTH (not)
https://www.garlic.com/~lynn/2024f.html#52 IBM Unbundling, Software Source and Priced
https://www.garlic.com/~lynn/2024f.html#29 IBM 370 Virtual memory
https://www.garlic.com/~lynn/2024f.html#20 IBM 360/30, 360/65, 360/67 Work
https://www.garlic.com/~lynn/2024f.html#15 CSC Virtual Machine Work
https://www.garlic.com/~lynn/2024e.html#136 HASP, JES2, NJE, VNET/RSCS
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#117 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
an anecdote: 1979, a major national grocer (hundreds of stores
organized in regions) was having a severe performance problem ... and
after bringing through all the standard IBM corporate performance
experts ... they got around to asking me. The datacenter had several
CECs in a loosely-coupled configuration (each CEC with the stores from
a couple dedicated regions). I'm brought into a classroom with tables
piled high with CEC system activity reports. After more than 30mins, I
notice a specific 3330 DASD peaking at 7-8 I/Os per second (activity
summed across all the CEC activity reports) during the worst
performance periods. I asked what it was. It was shared DASD (across
all CECs) with the store controller apps PDS dataset, which had a 3cyl
PDS directory.
Then it was obvious ... every store controller app load, for hundreds
of stores, was doing a multi-track search of the PDS directory
averaging 1.5cyls (3330: 60revs/sec, 19 tracks/cyl): first search a
full cyl, 19/60=.317sec, 2nd search avg half a cyl, 9.5/60=.158sec
... the multi-track search for each store controller app load taking
.475sec (and multi-track search locks the device, controller, and
channel for the duration). The shared DASD was effectively limited to
two store controller app loads per second (for the hundreds of
stores), taking an avg of four multi-track search I/Os and .95secs
(during which the DASD, controller, and channel were blocked), with
the other 3-4 I/Os per second making up the rest of the 7-8 I/Os per
second (for the shared DASD across all CECs).
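The same arithmetic as a small sketch (figures from the anecdote):

# 3330: 60 revs/sec, 19 tracks/cylinder; 3-cylinder PDS directory,
# so a member lookup averages ~1.5 cylinders of multi-track search
REVS_PER_SEC = 60
TRACKS_PER_CYL = 19

first_cyl = TRACKS_PER_CYL / REVS_PER_SEC         # full-cylinder search: .317 sec
second_cyl = (TRACKS_PER_CYL / 2) / REVS_PER_SEC  # avg half-cylinder: .158 sec
per_load = first_cyl + second_cyl                 # .475 sec of search per app load

print(per_load)       # ~0.475 sec with device, controller and channel locked
print(2 * per_load)   # ~0.95 sec: two app loads/sec nearly saturate the path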
The solution was to partition the store controller app PDS dataset
into multiple files and provide a dedicated set on a non-shared 3330
(and non-shared controller) for each CEC.
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
pure random: 1988, the branch office asks if I could help LLNL
(national lab) standardize some serial stuff they were working with
(including long runs from machine rooms to high performance large
graphics in offices), which quickly becomes the fibre-channel standard
("FCS", including some stuff I had done in 1980; initially 1gbit
transfer, full-duplex, aggregate 200mbyte/sec). Then POK finally gets
their stuff shipped with ES/9000 as ESCON (when it is already
obsolete, 17mbytes/sec). Then POK becomes involved with FCS and
defines a heavy-weight protocol that eventually ships as FICON.
The latest public benchmark I found was z196 "Peak I/O" getting 2M
IOPS using 104 FICON (about 20,000 IOPS/FCS). About the same time, a
FCS was announced for E5-2600 server blades claiming over a million
IOPS (two such FCS having higher throughput than 104 FICON). Also, IBM
pubs said that SAPs (system assist processors that actually do the
I/O) should be kept to 70% CPU (or around 1.5M IOPS). Also, no CKD
DASD has been made for decades, all being simulated on industry
standard fixed-block devices.
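The per-channel arithmetic from those figures (the division is mine):

z196_peak_iops = 2_000_000
ficon_channels = 104
print(z196_peak_iops / ficon_channels)       # ~19,200 IOPS per FICON, i.e. about 20,000

native_fcs_iops = 1_000_000                  # claimed for a single E5-2600 blade FCS
print(2 * native_fcs_iops > z196_peak_iops)  # True: two native FCS top 104 FICON

print(0.70 * z196_peak_iops)                 # 1.4M: the 70% SAP cap, roughly the ~1.5M IOPS above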
FICON ... overview
https://en.wikipedia.org/wiki/FICON
IBM System/Fibre Channel
https://www.wikiwand.com/en/articles/IBM_System/Fibre_Channel
Fibre Channel
https://www.wikiwand.com/en/articles/Fibre_Channel
FICON is a protocol that transports ESCON commands, used by IBM
mainframe computers, over Fibre Channel. Fibre Channel can be used to
transport data from storage systems that use solid-state flash memory
storage medium by transporting NVMe protocol commands.
... snip ...
Evolution of the System z Channel
https://web.archive.org/web/20170829213251/https://share.confex.com/share/117/webprogram/Handout/Session9931/9934pdhj%20v0.pdf
the above mentions zHPF, a little more similar to what I had done in
1980 and also in the original native FCS; early documents claimed
something like a 30% throughput improvement ... pg39 claims an
increase in 4k IOs/sec for z196 from 20,000/sec per FCS to 52,000/sec
and then 92,000/sec.
https://web.archive.org/web/20160611154808/https://share.confex.com/share/116/webprogram/Handout/Session8759/zHPF.pdf
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
posts mentioning zHPF
https://www.garlic.com/~lynn/2025b.html#111 System Throughput and Availability II
https://www.garlic.com/~lynn/2025b.html#110 System Throughput and Availability
https://www.garlic.com/~lynn/2025.html#81 IBM Bus&TAG Cables
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2018f.html#21 IBM today
https://www.garlic.com/~lynn/2017d.html#88 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2016g.html#28 Computer hard drives have shrunk like crazy over the last 60 years -- here's a look back
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012m.html#13 Intel Confirms Decline of Server Giants HP, Dell, and IBM
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS;
others went to the IBM Cambridge Science Center on the 4th flr and did
virtual machines, the internal network, a bunch of performance work
(some evolving into capacity planning), and invented GML in 1969
(after a decade it morphs into ISO standard SGML, and after another
decade morphs into HTML at CERN). CSC 1st wanted a 360/50 to
hardware-modify with virtual memory, but all the spare 50s were going
to FAA/ATC, so they had to settle for a 360/40 and did CP40/CMS. When
the 360/67, standard with virtual memory, becomes available, CP40/CMS
morphs into CP67/CMS (precursor to VM370).
The 3272/3277 had hardware response of .086sec ... and I had a bunch
of VM370s inside IBM with 90th percentile system response of .11
seconds ... giving .196secs for the human response. For the 3274/3278,
they moved a lot of the hardware back into the 3274, reducing 3278
manufacturing cost but really driving up coax protocol latency, and
hardware response becomes .3sec-.5sec, depending on the amount of
data. Letters to the 3278 "product administrator" got a response that
the 3278 wasn't for interactive computing but data entry (MVS/TSO
users never noticed because it was really rare that they saw even 1sec
system response).
PROFS trivia: the PROFS group went around gathering internal apps to
wrap their menus around and picked up a very early copy of VMSG for
the email client. Then when the VMSG author tried to offer them a much
enhanced version, they wanted him shutdown & fired. It all quieted
down when he demonstrated his initials in a non-displayed field in
every PROFS email. After that, he only shared his source with me and
one other person.
While I was at SJR, I also worked with Jim Gray and Vera Watson on
the original SQL/relational (System/R, originally all done on
VM370). Then, while the company was preoccupied with the next great
DBMS ("EAGLE"), we managed to do a tech transfer to Endicott for
SQL/DS. Later, when "EAGLE" imploded, there was a request for how fast
System/R could be ported to MVS ... eventually released as DB2
(originally for decision support only). Fall of 1980, Jim left IBM for
Tandem and tried to palm off a bunch of stuff on me.
Also in the early 80s, I got the HSDT project, T1 and faster computer
links (both terrestrial and satellite, even a double-hop satellite
link between the west coast and europe) and lots of conflicts with the
communication group (in the 60s, IBM had the 2701 telecommunication
controller supporting T1; the 70s move to SNA/VTAM and related issues
capped controller links at 56kbits/sec). I was working with the NSF
director and was supposed to get $20M to interconnect the NSF
supercomputer centers; then congress cuts the budget, some other
things happen, and eventually an RFP is released, in part based on
what we already had running. The communication group was fiercely
fighting off client/server and distributed computing and we weren't
allowed to bid.
trivia: the NSF director tried to help by writing the company a
letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior
VP and director of Research, copying the IBM CEO) with support from
other gov. agencies ... but that just made the internal politics worse
(as did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.
The communication group also tried to block the release of mainframe
TCP/IP support; when that failed, they said that since they had
corporate ownership of everything that crossed datacenter walls, it
had to be shipped through them; what shipped got aggregate
44kbytes/sec using nearly a whole 3090 processor.
I then did the enhancements for RFC1044 support, and in some tuning
tests at Cray Research between a Cray and a 4341, got 4341 sustained
channel throughput using only a modest amount of the 4341 CPU
(something like 500 times improvement in bytes moved per instruction
executed).
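A minimal sketch of what that bytes-per-instruction comparison means
(the 3090 instruction rate below is an assumption purely for
illustration; the 44kbytes/sec and roughly-500-times figures are from
the post):

ASSUMED_3090_IPS = 20e6               # assumed single-processor instruction rate

stock = 44_000 / ASSUMED_3090_IPS     # bytes moved per instruction, stock stack
print(stock)                          # ~0.002 bytes/instruction
print(500 * stock)                    # ~1.1 bytes/instruction with RFC1044 support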
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CP67L, CSC/VM, SJR/VM systems for internal datacenters posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
Lots of places got 360/67s for tss/360, but most places just used
them as 360/65s for OS/360. CSC did CP67, and a lot of places started
using 360/67s for CP67/CMS. UofMichigan did their own virtual memory
system (MTS) for the 360/67 (later ported to 370 as
MTS/370). Stanford did their own virtual memory system (ORVYL) for the
360/67, which included the WYLBUR editor .... WYLBUR was later ported
to MVS.
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some posts mentioning 360/67, cp67/cms, michigan, mts, stanford, wylbur
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2013e.html#63 The Atlas 2 and its Slave Store
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv
https://www.garlic.com/~lynn/2011i.html#63 Before the PC: IBM invents virtualisation (Cambridge skunkworks)
https://www.garlic.com/~lynn/2010j.html#67 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2008h.html#78 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
--
virtualization experience starting Jan1968, online at home since Mar1970
Interactive Response
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interactive Response
Date: 12 May, 2025
Blog: Facebook
re:
https://www.garlic.com/~lynn/2025c.html#0 Interactive Response
https://www.garlic.com/~lynn/2025c.html#1 Interactive Response
https://www.garlic.com/~lynn/2025c.html#2 Interactive Response
https://www.garlic.com/~lynn/2025c.html#3 Interactive Response
https://www.garlic.com/~lynn/2025c.html#4 Interactive Response
https://www.garlic.com/~lynn/2025c.html#5 Interactive Response
https://www.garlic.com/~lynn/2025c.html#6 Interactive Response
https://www.garlic.com/~lynn/2025c.html#7 Interactive Response
ORVYL and WYLBUR
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
... this wiki entry looks like work in progress
https://en.wikipedia.org/wiki/Talk%3AORVYL_and_WYLBUR
Orvyl is a time sharing monitor that took advantage of the paging
capability of the IBM 360/67 at Stanford's Campus Computing center. It
was written by Roger Fajman with major improvements by Richard Carr, I
believe over the summer in 1968. Wylbur is a text editor and
time-sharing system. Wylbur was originally written by John Borgelt
alongside Richard, again, I believe, in the summer of 1968. Milten
monitored and supervised all the computer terminal input ports that
allowed multiple users access to Wylbur and Orvyl. John Halperin wrote
Milten. Joe Wells for Wylbur and John Halperin for Milten converted
them to run under MVT on the 360/91 at SLAC and eventually OS/VS2 when
SLAC obtained the two 360/168 computers. Joe made major improvements
to Wylbur including the 'Exec' file capability that allowed one to
script and run Wylbur Commands. He also built automatic file recovery
for when the entire MVT/MVS system crashed which was not infrequent.
This made Joe very popular with the SLAC physics community. John
extended Milten to operate hundreds of terminals using an IBM3705
communications controller. These changes were eventually back-ported
to the campus version of Wylbur when Orvyl was retired.
... snip ...
... trivia: "Metz" frequently mentioned (in the wiki), is (also)
person from online "bit.listserv.ibm-main" that asked me to track down
decision to add virtual memory to all 370s ... archived post w/pieces
of email exchange with staff member to executive making the decision
https://www.garlic.com/~lynn/2011d.html#73
... trivia2: At Dec81 SIGOPS, Jim Gray (I had worked with Jim & Vera
Watson on the original SQL/Relational, System/R, before he left for
Tandem fall 1980) asked if I could help Richard (a Tandem co-worker)
get his Stanford PhD; it involved global LRU page replacement ... and
there was an ongoing battle with the "local LRU page replacement"
forces. I had a huge amount of data from the 60s & early 70s with both
"global" and "local" implementations done for CP67. Late 70s & early
80s, I had been blamed for online computer conferencing on the
internal network; it really took off spring 1981 when I distributed a
trip report about a visit to Jim at Tandem. While only 300 directly
participated, claims were that 25,000 were reading, and folklore is
that when the corporate executive committee was told, 5of6 wanted to
fire me. In any case, IBM executives blocked me from sending the reply
for nearly a year.
... trivia3: In a prior life, my wife was in the Gburg JES group,
reporting to Crabby, and one of the catchers for ASP (to turn it into
JES3); also co-author of the JESUS (JES Unified System) specification
(all the features of JES2 & JES3 that the respective customers
couldn't live w/o; for various reasons it never came to fruition). She
was then con'ed into transferring to POK responsible for mainframe
loosely-coupled architecture (Peer-Coupled Shared Data). She didn't
remain long because of 1) periodic battles with the communication
group trying to force her to use VTAM for loosely-coupled operation
and 2) little uptake (until much later with SYSPLEX and Parallel
SYSPLEX), except for IMS hot-standby (she has a story about asking
Vern Watts who he would ask for permission; he replied "nobody" ... he
would just tell them when it was all done).
virtual memory, page replacement, paging posts
https://www.garlic.com/~lynn/subtopic.html#clock
original sql/relational implementation, System/R
https://www.garlic.com/~lynn/submain.html#systemr
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
loosely-coupled, hot-standby posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
Living Wage
From: Lynn Wheeler <lynn@garlic.com>
Subject: Living Wage
Date: 13 May, 2025
Blog: Facebook
In the 90s, congress asked the GAO for studies on paying workers
below a living wage ... the GAO report found it cost
(city/state/federal) govs an avg of $10K/worker/year .... basically
working out to an indirect gov. subsidy to their employers. The
interesting thing is that it has been 30yrs since that report ... and
I have yet to see congress this century ask the GAO to update the
study.
https://www.gao.gov/assets/hehs-95-133.pdf
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970