From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Mon, 31 Mar 2003 08:35:36 -0700
Newsgroups: bit.listserv.vmesa-l
Subject: early vnet & exploit

At 8:08 AM 3/31/2003 -0600, Graeme Moss wrote:
the internal network was larger than the arpanet/internet up until sometime in '85. part of the reason was that it had native, efficient line drivers and didn't have a lot of the architectural bugs that JES2/NJE came up with. It also provided the support for the logical equivalent of a gateway; both the arpanet (with IMPs, until the great 1/1/83 switchover to internet protocol) and JES2 required a homogeneous implementation.

The JES2/NJE implementation was especially onerous. NJE started out with the JES2 internal 255-entry pseudo device table ... and any entries left over could be used for network node definitions (possibly 180-200 entries). At the time that JES2/NJE first shipped to customers, the internal network was well over 255 nodes. NJE also had the unfortunate characteristic that if it saw something that had either the origin or the destination node not in the local internal table .... NJE trashed it (so it effectively couldn't operate as any kind of network intermediate node).

The other characteristic was that NJE jumbled various different protocols together in the header. Slight header variations between releases could crash each other's MVS systems (there is a famous incident of a file originating in san jose crashing an mvs system in hursley). Eventually the MVS/NJE nodes were relegated to end-nodes behind VNET intermediate nodes. A crop of special VNET/NJE (non-native) line drivers grew up, specific to each different release of NJE .... where it was the responsibility of the VNET/NJE line drivers to provide a canonical translation of the NJE header and convert it to the exact format required by the JES2/NJE on the other end of the line.
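the canonical-translation role of those VNET/NJE line drivers might be sketched roughly like this (a toy python sketch; the release names "nje_v1"/"nje_v2" and the "priority" field are invented for illustration, not actual NJE header formats):

```python
# each (non-native) VNET/NJE line driver knows one specific JES2/NJE
# release; it converts incoming headers to a canonical form and emits
# outgoing headers in the exact variant its peer requires.

def to_canonical(hdr, release):
    h = dict(hdr)
    if release == "nje_v1":          # hypothetical release lacking the field
        h.setdefault("priority", 0)  # supply a canonical default
    return h

def from_canonical(h, release):
    out = dict(h)
    if release == "nje_v1":
        out.pop("priority", None)    # peer would mishandle an unknown field
    return out

def relay(hdr, in_release, out_release):
    """what a VNET intermediate node does between two JES2 end-nodes"""
    return from_canonical(to_canonical(hdr, in_release), out_release)
```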
misc 1/1/83 discussions:
https://www.garlic.com/~lynn/internet.htm#22 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
past discussions of internal network, nje, jes2, vnet, etc
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003.html#68 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003d.html#59 unix
misc. discussions of size of internal network:
https://www.garlic.com/~lynn/97.html#2 IBM 1130 (was Re: IBM 7090--used for business or science?)
https://www.garlic.com/~lynn/97.html#26 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#109 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#30 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#39 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#34 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001h.html#34 D
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#28 Title Inflation
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#56 E-mail 30 years old this autumn
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001l.html#45 Processor Modes
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#23 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002n.html#35 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2003.html#68 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of project maintenance tools -- what and when?
Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10
Date: Mon, 31 Mar 2003 15:59:23 GMT

dpeschel@eskimo.com (Derek Peschel) writes:
when i encountered cp/67 and cms, it already had update and compare commands (but nothing from compare could be used by update). after i joined csc, the multi-level update scheme was developed ... and one of the MIT students did the parallel merge support which had some diff support (however, the parallel merge support never propagated into the vm/370 version or shipped to customers).
sometime after the mid-70s there was a diff command developed internally ... and after a presentation at share ... something similar was developed and made available on the waterloo/share tape (for lots of stuff, if you couldn't get it released from internal, make a technology presentation at share ... and have some of the share community reimplement it).
A very specific motivation for the diff command was release-to-release transition. Standard product procedure for a new release was to permanently apply all accumulated service and other updates to the base source file and then freshly resequence it by 1000. Customers had loads of their own source update files that were no longer usable.

So the process was to take all of the previous, base product release source and updates and create a temporary source file (using the old sequence numbers). Then run a diff between that file (with old sequence numbers) and the source file from the new release (which could have new development that hadn't shown up in previous files distributed to customers). When that diff/update file was applied to the (old) temporary source file ... it would result in the equivalent executable to the new release .... but with the "old" sequence numbers. Customers then (manually) reconciled any update conflicts (with their local updates and anything from the new release).
There was a companion program that I called reseq ... which, given two otherwise identical source files that only differed in the sequence field, would take any number of updates that applied to one of the base source files and convert their sequence numbers to correspond with the other source file.
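a rough sketch of what reseq had to do (toy python; the fixed-column sequence field in columns 73-80 and the "./" control-card syntax here are simplified assumptions, not the actual update format):

```python
def build_seq_map(old_src, new_src):
    """pair the two otherwise-identical source files line by line and
    map each old sequence field (assumed columns 73-80) to the new one"""
    return {o[72:80]: n[72:80] for o, n in zip(old_src, new_src)}

def reseq(update_lines, seq_map):
    """rewrite sequence-number references in './' update control cards
    (simplified syntax) so the update applies to the other base file"""
    out = []
    for line in update_lines:
        if line.startswith("./"):
            line = " ".join(seq_map.get(tok, tok) for tok in line.split())
        out.append(line)
    return out
```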
previous refs:
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of project maintenance tools -- what and when?
Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10
Date: Mon, 31 Mar 2003 16:14:55 GMT

dpeschel@eskimo.com (Derek Peschel) writes:
it got distribution in the commercial sector and a large number of university datacenters; however, I didn't see a lot of bleed-over into the academic community. for instance, a significant fraction of share membership was university datacenters .... and a lot of stuff found on the share waterloo tape was from university datacenters ... and even some amount of vmshare computer conferencing in the mid & late '70s was by people from university datacenters.
i don't have any feeling for how many people would have read the stuff about update et al. Misc. other stuff done at CSC was the internal network and script (both with dot runoff as well as "markup language"). the internal network was announced as a product and there were other vendors that implemented interfaces to it (like the university community with respect to bitnet and earn). script spawned a number of script clones on a number of other platforms .... and of course everybody knows that gml begat sgml which begat html, et al.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch
Date: Mon, 31 Mar 2003 18:36:34 GMT

hack@watson.ibm.com (hack) writes:
I later expanded that ... where each process was given a secondary
pseudo address space into which various kernel control blocks could be
mapped. One example was the backing-store/disk-map tables for all of
the process's virtual pages. If the process was suspended .... and its
virtual pages written out ... depending on load .... the tables
mapping those pages on disk could also be written out. This shipped as
part of the resource manager:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#35 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
there was also a "disk cleaner" (page migration) that would check for low-usage pages on higher-speed transfer devices (like fixed-head disks or drums) and migrate them to lower speed disks.
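the core of such a cleaner pass might look like this (a toy python sketch; the usage-count bookkeeping and threshold are assumptions, not the actual resource-manager implementation):

```python
def migrate(fast_slots, slow_slots, ref_count, threshold):
    """one 'disk cleaner' pass: pages on the fast (drum/fixed-head)
    device whose recent reference count is under threshold get moved
    to slower disk, freeing scarce high-speed slots"""
    moved = []
    for page in list(fast_slots):
        if ref_count.get(page, 0) < threshold:
            fast_slots.remove(page)
            slow_slots.add(page)
            moved.append(page)
    return moved
```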
at least one vm/370 time-sharing service bureau expanded that support to include "all" process control blocks .... allowing a process to be checkpointed to disk ... migrated to a different processor complex with access to the same disk-pool .... or even migrated to a different processor complex with transfer over a network (aka waltham to san fran ... cases where a processor complex had to be brought down 3rd shift over the weekend for scheduled maintenance).
random past discussions of paging kernel & other pieces:
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#54 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 18:51:23 GMT

hack@watson.ibm.com (hack) writes:
Under severe real storage constraints it would just update the lower level tables and return .... then individual pages would be brought in via the standard virtual page fault operation. For a totally unconstrained environment (at least comparing the mmap specification to the current real storage size and the concurrent paging activity) .... it would do the mmap and then start a pre-fetch for all pages before returning to the process ... either immediately or delayed by some amount. If the process touched a page before the prepage was complete .... then the standard virtual serialization would do the right thing. It could also choose to prepage a subset (along with hints) and either return immediately or delayed.
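that decision logic amounts to something like the following (python sketch; the specific thresholds and the return-value names are invented for illustration):

```python
def mmap_strategy(n_pages, free_frames, paging_rate, rate_limit):
    """choose how to service an mmap request:
       'lazy'    - just build the tables, let pages fault in on demand
       'prepage' - also start a prefetch of all pages
       'partial' - prefetch a subset, demand-fault the rest
    touching a page before its prefetch completes simply falls into
    the normal page-fault serialization, which does the right thing"""
    if free_frames < n_pages or paging_rate > rate_limit:
        return "lazy"        # severely constrained: pure demand paging
    if free_frames >= 4 * n_pages:
        return "prepage"     # totally unconstrained: prefetch everything
    return "partial"
```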
random other refs:
https://www.garlic.com/~lynn/submain.html#mmap
In the vm/370 version, i added the semantics to the executable creation command (genmod) to allow it to specify "shared segments". the loader (loadmod) would use that information (saved by genmod as part of the executable control information) to provide the appropriate specification in the mmap api.
a small subset of the "sharing" code ... part of the restructure of
table handling in the cp kernel ... and a lot of the cms cleanup for
execute only code (like embedded work areas had to be removed) was
shipped in release 3 of vm/370 under something called discontiguous
shared segments:
https://www.garlic.com/~lynn/2000.html#18 Computer of the century
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002g.html#59 Amiga Rexx
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 21:36:17 GMT

"Glen Herrmannsfeldt" writes:
I started doing no-dup in the late 70s .... based on whether the high-speed backing store was heavily constrained or not. This was sort of a follow-on to the stuff that I released in the resource manager for doing page migration from high-speed/low-latency devices to lower-speed/higher-latency devices. Note that the dynamics allowed switching between "dup" & "no-dup" strategies based on resource bottlenecks (i.e. "no-dup" traded off secondary storage space for more writes).
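the dup vs no-dup trade-off can be sketched as follows (toy python; real bookkeeping was per-slot in kernel tables, this just tracks set membership):

```python
def page_in(page, memory, backing, no_dup):
    """no-dup: release the backing-store slot on page-in, so at most
    one copy of the page exists; dup: leave the disk copy valid"""
    memory.add(page)
    if no_dup:
        backing.discard(page)

def page_out(page, memory, backing, dirty):
    """with dup, a clean page that still has a valid disk copy can be
    stolen without a write; otherwise it must be (re)written --
    this is where no-dup pays with extra writes"""
    memory.discard(page)
    wrote = dirty or page not in backing
    if wrote:
        backing.add(page)
    return wrote
```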
Note that this also applied to the 3880-11/ironwood control unit page-cache. It was relatively easy to overload the ironwood cache on large systems .... so a "no-dup" strategy in conjunction with a "destructive" read would extend the "no-duplication" not only to the physical disk surface ... but also to the intermediate controller cache (aka a "destructive read" indicated to the controller to remove the record from cache after transfer, if it happened to be in cache). A dup strategy could have the page in processor memory, in the controller cache, as well as on some disk platter surface.
I also did a rewrite of the SYSOWN tables .... so that "high-speed" and "low-speed" could be configurable from an allocation/deallocation standpoint. The standard SYSOWN entries were full devices ... and high/low speed was by device type ... so both allocation and deallocation strategies were based on SYSOWN index & device type. I created a different structure for allocation that allowed finer control definition on a per-area basis. For instance this allowed differentiation between an electronic-store emulated disk and a "real" disk of the same device type.
Note that the "swapper" (actually "big pages") implemented on both VM and MVS in the early 80s was a no-dup algorithm. A "big page" was a collection of 4k pages that fit on a track ... it also had some characteristics of a log-structured filesystem .... in that it always wrote to a new location closest to the current head position (its primary objective was NOT to conserve scarce disk page space ... but to try and minimize arm movement).
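the placement objective (minimize arm motion, not space) might be sketched as (toy python; modeling tracks by cylinder number only is a simplification):

```python
def place_big_page(free_tracks, arm_cylinder):
    """big-page writer: always write the full-track 'big page' to the
    free track whose cylinder is nearest the current arm position
    (a moving-cursor, log-structured flavor); the goal is minimal
    seek distance, not conserving page space"""
    return min(free_tracks, key=lambda cyl: abs(cyl - arm_cylinder))
```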
misc. syspag/migration ref:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
lots of past postings on "dup" vis-a-vis "no-dup" strategies
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures
misc. 3880-11/ironwood refs:
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
misc. "big pages"
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Mon, 31 Mar 2003 15:10:49 -0700
Subject: recent cp67/vm370 walks down memory lane
Newsgroups: bit.listserv.vmesa-l

from alt.folklore.computers and comp.arch newsgroups .... fyi
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#2 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#4 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: alt.folklore.computers
Date: Mon, 31 Mar 2003 22:32:20 GMT

Charles Shannon Hendrix writes:
Everybody saw the whole source plus update files as distinct filesystem objects. Testing could be done with the same exact filesystem objects as used in the production system ... just by having a local version of the configuration/control file ... specifying the addition of the local "test" update files.
There were some regression testing issues if somebody slipped some new production files into one of the "lower-level" (aka earlier applied) auxiliary control files (aka like product maintenance). However, since they were distinct filesystem objects .... it was also possible to temporarily make local copies of the affected auxiliary control files, commenting out application of new things (like product maintenance updates) until appropriate regression testing/review was completed.
one of the rules people tried to follow was to keep the distinct filesystem objects with their original date/time and NEVER change them (as closely as practical, everything became a new incremental update).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 23:28:02 GMT

"Glen Herrmannsfeldt" writes:
... the mmap stuff for the cms filesystem (early '70s) ... had some adjustment that would either force to new location after initial read or "page in place" ... from its original filesystem location.
in the mmap for cms filesystem version that was released in the mid-80s as part of xt/370 .... it was always left to "page in place" .... since the cms filesystem emulated disk and the cp paging emulated disk on the pc harddisk were nominally the same (part of the issue in the xt/370 configuration was that the available real memory for paging on the xt/370 machine was often smaller than the cms executable being loaded ... which in the non-mmap implementation ... aka simulated real i/o ... resulted in extremely long delays ... effectively loading little bits in ... writing them out to a new location and then reading in additional little bits ... until the virtual address space had been populated).
sort of start of os/2 interaction ... from long ago and far away (just
before release of os/2 1.00 in december of 1987).
Date: 11/24/87 17:35:50
To: wheeler
FROM: ????
Dept ???, Bldg ??? Phone: ????, TieLine ????
SUBJECT: VM priority boost
got your name thru ??? ??? who works with me on OS/2. I'm
looking for information on the (highly recommended) VM technique of
goosting priority based on the amount of interaction a given user is
bringing to the system. I'm being told that our OS/2 algorithm is
inferior to VM's. Can you help me find out what it is, or refer me
to someone else who may know?? Thanks for your help.
Regards,
???? (????? at BCRVMPC1)
... snip ...
os/2 history
http://www.os2bbs.com/os2news/OS2Warp.html
http://www.os2bbs.com/os2news/OS2History.html
random xt/at/370 posts:
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#53 S/370 PC board
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 23:47:10 GMT

"Glen Herrmannsfeldt" writes:
1) earlier was a full-track log write that played games with CKD so that the write started at the first record under the head (aka the search was not for a specific record) ... read/recovery could also do a full-track read starting at the first record under the head (data inside the record allowed recovery to figure out what the original record sequence was). It could beat some of the later full-track caches .... which started cache loading as soon as the head had settled ... but would actually transfer in processor sequence.
2) a disk/dasd controller store-in cache that was replicated and battery-backed ... aka the processor would get an early indication that the write was complete as soon as it was in the controller cache ... and the cache could then do a lazy write ... replicated storage and battery backup allowed for various kinds of failure recovery
.....
there were other discussions of big pages .... driving 3380s at close to transfer rate (optimal head & arm scheduling) ... with 10 4k pages per big page (3380 40k track). there were some numbers about systems easily, routinely hitting over 2000 4k-page transfers per second.
as in the previous big page discussion ... possibly 30-40 percent of such page transfers were unnecessary ... however real storage wasn't the real constraint, it was disk arm latency. the efficiency from doing multiple page transfers in a single disk operation more than offset the overhead of the potentially unnecessary transfers (and the associated unnecessary occupancy of real storage). In effect, real storage and transfer rate were traded off for arm motion and rotational delay optimization.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 01:59:31 GMT

John Ahlstrom writes:
initial loading of a program could select an arbitrary address location ... and then pass control to the application ... and eventually the application would get all the general purpose registers it was using for addressing appropriately initialized. however, the kernel/system had no real good idea what that might be ... and which registers it would need to swizzle.
also the os/360 standard program convention included storage objects that occupied the application address space, called adcons .... for instance the program could

    l    r15,=a(sub1)
    balr r14,r15

which would load the address pointer to sub1 into general purpose register 15 and then branch and link to the subroutine (storing the address after the BALR instruction in R14). The program image on disk had these storage objects specifically identified and the value recorded as a displacement from some value. As part of program loading, the loader would "resolve" these relocatable adcons into absolute adcons before program invocation.
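A minimal sketch, with hypothetical structures, of that load-time resolution step: each adcon's location in the image and its displacement from the module origin are known, and the loader patches in the absolute address:

```python
# Sketch of resolving relocatable adcons into absolute adcons at load
# time. The (offset, displacement) record format here is hypothetical.

def resolve_adcons(image, adcons, load_address):
    """image: bytearray program image; adcons: list of (offset_in_image,
    displacement_from_origin). Patches each adcon to an absolute address."""
    for offset, displacement in adcons:
        absolute = load_address + displacement
        image[offset:offset + 4] = absolute.to_bytes(4, "big")
    return image

image = bytearray(16)
resolve_adcons(image, [(8, 0x0200)], load_address=0x20000)
assert int.from_bytes(image[8:12], "big") == 0x20200   # =a(sub1) now absolute
```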
this created quite a problem for me when doing floatable shared segments .... i.e. the same shared program objects present in multiple different virtual address spaces at possibly different virtual addresses. Basically I had to replace all the "relocatable" adcons with "absolute" adcons that were the displacement from some reference value known to be present in some register. The displacement value would then be added to some register contents at runtime. The sequence would then look something like:
    lr   r15,r12
    a    r15,=a(sub1-base)
    balr r14,r15

where the application knew that r12 contained the current value of "base" for that specific address space.
The problem that I was up against was that there was a 16mbyte
virtual address space. Having each shared object occupy the same
virtual address in each address space ... eventually implied that
when defining the shared objects ... they had to be very carefully
allocated pre-defined addresses ... since on any specific system the
aggregate size of all possible shared objects was going to be larger
than 16mbytes. Situations arose where some programming processes
needed combinations of multiple different shared objects mapped into
the same virtual address space. basically address storage object
(adcon) resolution could be
1) early resolution
2) medium late resolution
3) runtime resolution
Early resolution fixed the shared object address location at the
time it was defined (and fixed all the adcons when the shared object
was initially defined and written to disk). This is in effect the
original cp/67, the original vm/370 and much of the VM stuff today.
medium late resolution would have the shared object addresses fixed by the first process/address space to load it. However, this easily leads to a deadly embrace ... example: different processes need two libraries that are in shared r/o storage. the first process initially loads library1 at location N (resolving all address constants at the first loading of that shared library). the second process initially loads library2 at location N (resolving all address constants at the first loading of that shared library). The first process now tries to also load library2 .... but can't because of the address conflict at location N (and the shared images are already bound on first load). This problem led to early resolution.
runtime resolution .... the adcons are never changed ... but are absolute displacements from some location that can float in different address spaces. actual use is resolved at runtime. This allowed any process to have any combination of shared objects up to a total of 16mbytes. By comparison the early resolution severely restricted the combination of different shared objects. There was severe installation management effort involved in assigning which shared objects used the same addresses ... to avoid situations where some process might need concurrent availability of such shared objects.
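The medium-late deadly embrace above can be sketched (class and names hypothetical) ... the first loader of a shared library binds its address for every later address space:

```python
# Sketch of medium-late resolution: the first process to load a shared
# library fixes its address for everyone, which can deadlock layouts.

class SharedLibraryPool:
    def __init__(self):
        self.bound_at = {}       # library -> address fixed by first loader

    def load(self, process_map, library, preferred_addr):
        addr = self.bound_at.setdefault(library, preferred_addr)
        if addr in process_map and process_map[addr] != library:
            raise RuntimeError(f"{library} bound at {addr:#x}, already occupied")
        process_map[addr] = library

pool = SharedLibraryPool()
p1, p2 = {}, {}
pool.load(p1, "library1", 0x100000)      # p1 binds library1 at N
pool.load(p2, "library2", 0x100000)      # p2 binds library2 at the same N
try:
    pool.load(p1, "library2", 0x200000)  # p1 now also needs library2 ... stuck
except RuntimeError as e:
    print(e)
```

runtime resolution avoids this because nothing is ever bound to a fixed virtual address ... each address space adds its own base at use time.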
misc. past discussions of floating/relocatable shared segments:
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Security Certifications? Newsgroups: comp.security.misc Date: Tue, 01 Apr 2003 03:32:16 GMTAnne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 16:27:30 GMTJan C. Vorbrüggen writes:
... start BALR call drift ...
I had already done a trick with the call macro. The original (cp/67) call macro always used an svc 8/12 convention (supervisor call). It would interrupt and allocate a new register save area and some other stuff and then call off to the called routine. The called routine would svc12 ... which would deallocate the dynamic save area.
doing lots of performance measurements ... I noticed some relatively short path length subroutines that always returned, never called anything else, and were non-interruptable; they also had very high frequency call rates. they could effectively get by with a static savearea. So I modified them to use a BALR (branch and link, single instruction) call convention with a "static" savearea. I also modified the CALL macro to check a list of BALR routines ... and generate a BALR instead of a SVC8. I had earlier re-written the cp/67 svc8/12 implementation, cutting about 70% of its pathlength ... resulting in possibly 5-10 percent overall savings in kernel cpu utilization. I had also implemented the ability to dynamically extend the SVC save areas ... originally there were one hundred pre-allocated ... and if those ever ran out ... the system crashed and burned.
For some critical, high-usage routines, the BALR change eliminated the SVC pathlength altogether, for another 5-10% savings in kernel cpu utilization.
ok, where do you put a static savearea for the BALR routines ... especially if you are in a multiprocessor environment with finegrain locking and possible parallel execution.
so slightly related drift in this thread
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?
360/370 had 16 general purpose registers ... 15 that could be used as base/address registers. If you specified register zero in an address or index usage .... it didn't indicate the contents of register zero ... but no register at all. In effect, the 12-bit displacement would be with respect to the first 4k bytes of storage.
in the real hardware this is where the processor hardware interrupt and misc. other stuff goes on (maybe the first 512 bytes of page zero). low-level interrupt handlers tended to use other parts of page zero for something like a temporary register savearea. The interrupt handler is going to need some registers to do its work ... save status in a permanent area ... allocate dynamic save area, etc. So a temporary savearea is typically reserved somewhere in page 0 for interrupt handlers ... until they've done enuf work to save status wherever it might need to be permanently. I just assigned/reserved a free location in page zero for the BALR routines.
the 360/67 for multiprocessing had a single linear address space. However, it wouldn't work trying to have more than one processor tramping around in the same real page zero. As a result, each processor had a page zero prefix register. You loaded a page number into the page zero prefix register ... and that processor started using that real page for its nominal "page zero" activities (including addressing when there wasn't an address register). The kernel code had the necessary logic so that as it was initializing more than one processor ... it was allocating different real pages for use as page zero by different processors. so a real page (that was loaded into a prefix register) could be addressed by two different values: its real page number and its page zero alias.
This was changed for 370 multiprocessor. if an attempt was made to address the real page indicated by the prefix register ... it was redirected to the ... "real, real page zero". In 360, once in multiprocessor mode ... it was no longer possible to access the "real, real page zero". The double reverse translation for 370 prefix register allowed access to the real, real page zero. This was done on the assumption that multiprocessing kernel software might use the area indicated by the multiprocessor prefix register for some sort of system wide multiprocessing coordination operations.
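The two prefixing behaviors might be sketched as follows (addresses and function names are illustrative; "reverse prefixing" here is shorthand for the 370 redirection described above):

```python
# Sketch of page-zero prefixing as described: 360/67 style loses access
# to the real, real page zero; 370 style redirects references to the
# prefix page back to real page zero, keeping it reachable.

PAGE = 4096

def prefix_translate_360(addr, prefix_page):
    if addr < PAGE:                                   # page-zero reference
        return prefix_page * PAGE + addr              # goes to the prefix page
    return addr                                       # real, real page 0 lost

def prefix_translate_370(addr, prefix_page):
    if addr < PAGE:                                   # page-zero reference
        return prefix_page * PAGE + addr
    if prefix_page * PAGE <= addr < (prefix_page + 1) * PAGE:
        return addr - prefix_page * PAGE              # "reverse" prefixing
    return addr

assert prefix_translate_360(0x10, 5) == 5 * PAGE + 0x10
assert prefix_translate_370(5 * PAGE + 0x10, 5) == 0x10  # real page 0 reachable
```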
... end BALR call drift ...
so the logic for pageable kernel was to establish a kernel boundary
address where all kernel calls with addresses less than the boundary
went straight to that address. however any kernel calls that were
higher than the kernel boundary address went thru the pageable logic.
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
so pageable kernel routines had to be crafted following certain procedures .... and they had to be appropriately placed in the sequence of the kernel build. In the svc8 (call) interrupt handler, if the called-to address was greater than the kernel boundary value, it was treated as a pageable kernel call; if it was less than the boundary value, it was treated as a non-pageable kernel call. On an svc12 (return) ... if the interrupting-from address was greater than the kernel boundary value, it was treated as a pageable kernel routine; if the interrupting-from address was less than the kernel boundary value, it was treated as a non-pageable kernel routine.
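The boundary test is simple enough to sketch (boundary value and names hypothetical):

```python
# Sketch of the svc8/svc12 boundary test: one address comparison
# decides fixed vs pageable handling.

KERNEL_BOUNDARY = 0x30000   # assumed end of the fixed (non-pageable) kernel

def svc8_call(called_addr):
    if called_addr >= KERNEL_BOUNDARY:
        return "pageable"    # bring in / pin the target page, then branch
    return "fixed"           # branch straight to the resident routine

def svc12_return(returning_from_addr):
    if returning_from_addr >= KERNEL_BOUNDARY:
        return "pageable"    # unpin the pageable routine's page
    return "fixed"

assert svc8_call(0x2F000) == "fixed"
assert svc8_call(0x31000) == "pageable"
```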
so originally on 768k 360/67 ... with 192 4k pages ... the fixed kernel was about 30 pages leaving around 160 4k pages. then dynamic storage out of fixed real memory ... bookkeeping for each process, process virtual memory tables, etc ... could be another 30-60 pages (depending on load); say leaving 120 4k pages. The original implementation that I did on cp/67 was taking "console functions" ... and fixing them up for pageable kernel operation; originally about 5-6 4k pages .... say about five percent of real storage. this didn't ship in cp/67 ... but an updated version of it shipped in vm/370.
later for some additional real-storage constraints ... i also
did pageable "control blocks" ... basically various process-specific
stuff that was laying around consuming real storage that wasn't
needed. this was part of the resource manager. ... again previous
description:
https://www.garlic.com/~lynn/2003f.html#3 ALpha performance, why?
also
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#35 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
slightly related is a.f.c thread on source/project maintenance procedures:
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#77 unix
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#2 History of project maintenance tools -- what and when?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 16:42:28 GMTTerje Mathisen writes:
Over ten years later ... somebody from MVS called to say that they had just gotten a big corporate award for changing mvs from a deterministic sweep of pages from real storage to only doing it if real storage was constrained ... and could they do something similar for VM. I indicated to him that I had never *not* done it that way ... and that was the way vm/370 (& my prior cp/67 rewrite) had always done it (I even had an argument about that with some of the pok people back before the initial release of os/vs2/svs). I made some facetious comment that instead of POK giving a big award for fixing an obvious bug ... the people responsible for the bug (needing fixing) should have done the honorable thing and at least returned the past ten years' salary to the corporation ... and then rewritten their wills to forfeit all of their worldly possessions.
some topic drift regard DataHub related to certain pc company
starting with the letter N:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 16:52:28 GMTmemorymorass@aol.com (Paul A. Clayton) writes:
the real i/o system ran with real addresses ... i/o from a virtual
process had to have all the references translated from virtual into
copied structures with real addresses ... and the associated virtual
pages pinned/fixed in real storage. then the i/o was scheduled, then
all the stuff was unpinned/unfixed and necessary status addresses
translated back from real to virtual. I made use of the
infrastructure for pinning/unpinning pages for real I/O operations
... for managing pages that were part of the pageable kernel.
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
as per above ... i later extended these pseudo virtual address tables for also paging internal control blocks associated with processes.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 17:08:18 GMTpmxtow@merlot.uucp (Thomas Womack) writes:
we had used the internal version for a number of things for 4-5 years before it was released as a product.
one of the things it was used on was taking (real storage) os/360 apl\360 and converting it to run on cms (released as cms\apl) and in a virtual memory environment. the traces and the reduction ... repackaged the module sequence for optimal virtual memory operation (using some fancy fortran code doing complex cluster analysis).
It could also produce storage/execution traces ... both instruction
and data references. There were these six foot swaths of storage
references on the backside of 1403 greenbar paper ... taped together
and covering the walls of the hallways of 4th floor, 545 tech sq.
Typical display was each horizontal line was 2000 instructions and the
storage was scaled to fit the vertical, 6-foot line (floor to ceiling).
https://www.garlic.com/~lynn/subtopic.html#545tech
One of the issues was that apl\360 allowed real storage workspaces that were 16kbytes or 32kbytes that were swapped in and out. all assignments went to a new storage location ... when all storage in the workspace was assigned ... it would garbage collect and compress all variables back down to low storage. this storage management technique caused a lot of problems in a virtual memory environment .... where you might have an apl application that was maybe 20-100k ... but was operating with a 2mbyte to possibly 16mbyte "workspace". It was guaranteed to cause all sorts of page thrashing. The vs/repack trace of apl ... running down the halls, showed a very distinct saw-tooth effect ... a lot of access down at low storage and a very distinct pattern that ran from low storage to high storage (over time) ... and then a solid vertical line as garbage collection occurred.
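That saw-tooth reference pattern can be sketched (the page-touch model here is a simplification; real apl\360 assigned variable-sized storage, not whole pages):

```python
# Sketch of the apl\360 pattern: every assignment touches fresh
# storage until the workspace is exhausted, then garbage collection
# compacts back to low storage and the sweep starts over.

def sawtooth_touches(workspace_pages, live_pages, n_assignments):
    """Returns the page number touched by each assignment."""
    touched, next_page = [], live_pages
    for _ in range(n_assignments):
        if next_page == workspace_pages:      # workspace full:
            next_page = live_pages            # compact, restart at low storage
        touched.append(next_page)
        next_page += 1
    return touched

# With a big virtual workspace, references sweep the whole thing over
# and over ... touching far more pages than the live data needs.
print(sawtooth_touches(8, 2, 14))
```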
it was also used extensively on doing tuning of various products ... STL used it on IMS (database, transaction).
random past vs/repack refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 17:20:31 GMTPeter da Silva writes:
in that sense it was much more like dynamic demand paging ... but the membership of a (40kbyte) "big page" was effectively dynamically determined on page-out (and then on a page fault for any 4k page within a specific 10-page "big page", all ten pages were fetched).
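The fetch policy can be sketched (structures hypothetical): membership is recorded at page-out time, and a fault on any member brings in the whole group in one disk operation:

```python
# Sketch of big-page fetch: a fault on any member page brings in all
# ten pages that were written out together.

def handle_fault(faulting_page, big_page_map, resident):
    """big_page_map: page -> tuple of the 10 pages paged out together."""
    group = big_page_map.get(faulting_page, (faulting_page,))
    resident.update(group)      # one disk op fetches the whole big page
    return group

big_page_map = {p: tuple(range(100, 110)) for p in range(100, 110)}
resident = set()
handle_fault(103, big_page_map, resident)
assert resident == set(range(100, 110))   # all ten members now resident
```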
in the past ... swapping tended to be slightly more characteristic of contiguous memory allocation infrastructures ... like the real memory apl\360 workspaces mentioned in the previous post.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: History of project maintenance tools -- what and when? Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10 Date: Tue, 01 Apr 2003 19:14:57 GMT"Shmuel (Seymour J.) Metz" writes:
it wasn't so much stuff that i was involved in doing research as an employee ... it was stuff that i had done as an undergraduate 8 years earlier.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could somebody use SCSH, Sheme, or Lisp to create the "Lispm" architecture. Newsgroups: comp.lang.scheme.scsh,comp.lang.scheme,comp.lang.lisp,comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 19:22:58 GMTSymbolics_XL1201_Sebek_Budo_Kafka@hotmail.com (Franz Kafka) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Disk prefetching Newsgroups: comp.arch Date: Tue, 01 Apr 2003 19:37:10 GMTThomas writes:
later there were two models of 2305 fixed-head disk ... the two had the same number of physical heads, rotated at the same speed, had the same transfer rate; but one had half the number of tracks as well as half total data capacity ... but it also had half the rotational latency (exercise left to the student).
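One plausible answer to the exercise (an assumption on my part, since the text leaves it to the student): with the same heads, speed, and transfer rate but half the tracks and half the capacity, the half-capacity model presumably recorded data under head pairs offset 180 degrees, so the nearest copy is never more than half a revolution away. The arithmetic, assuming 6,000 rpm:

```python
# Rotational latency arithmetic for the two hypothetical 2305 models;
# the 6,000 rpm figure is an assumption for illustration.

rev_ms = 60_000 / 6_000           # one revolution: 10.0 ms
full_model_avg = rev_ms / 2       # expected wait: half a revolution, 5.0 ms
half_model_avg = rev_ms / 4       # copies 180 degrees apart: quarter rev, 2.5 ms

print(full_model_avg, half_model_avg)   # 5.0 2.5
```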
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 01 Apr 2003 23:13:41 GMTJohn Ahlstrom writes:
in the pageable kernel case, a source module had to be 4kbytes or less
and not cross a page boundary. the paging system could bring the
kernel storage image into an arbitrary real page .... and execution
then took place with that real address. It could be paged out and
paged back in at a totally different real address. This required
carefully following some specific coding conventions for pageable
kernel routines.
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance
the location independence for relocatable/floating (read-only) shared
segment is/was similar to that of pageable kernel routines .... it was
code that ran with virtual addressing on ... but the same exact
storage image could appear in multiple different virtual address
spaces concurrently ... possibly at different virtual addresses in
each address space. As a result all address resolution had to be with
respect to the address position in whatever address space it was
currently operating in. Again this required relatively specific coding
conventions.
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance
Note however, most of the effort I put into adapting CMS code to
relocation/floating (read-only) shared segments ... wasn't so much
making it address location independent ... but reworking various
pieces of code to also make it free of any storage modifications (aka
in some vernacular, re-entrant). In addition to various CMS system
routines that had to be sanitized, another example that i reworked was
browse, fulist, and ios3270; specific reference:
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
random other browse, fulist, ios3270 references:
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#60 Living legends
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: "Super-Cheap" Supercomputing Newsgroups: comp.arch Date: Wed, 02 Apr 2003 05:37:50 GMTGreg Pfister writes:
cambridge also put in the support for making system calls ... and it was released as cms\apl. the system call stuff caused quite a bit of consternation among the apl purists ... since it violated some number of the original apl principles.
palo alto science center then took cms\apl and did some number of things to it, including revamping the system call stuff into a shared variable paradigm ... as well as doing the apl microcode assist for the 370/145. this was released as apl\cms and then apl\sv. A lot of apl applications ran as fast on a 370/145 with the apl microcode assist as they did on a 370/168 w/o apl microcode assist (not quite ten times).
Across the back parking lot from the palo alto science center was hone, probably for a time, the largest single system cluster in the world. It had something like 40,000 userids and supported all the branch and field people in the US. In addition, HONE system was cloned and deployed in a number of other countries (in a couple cases, I hand carried it) around the world supporting branch and field people all over the world.
The major environment for the branch and field people was a large subsystem environment written in APL called sequoia (possibly one of the most used APL applications of all time) ... and within sequoia ran a lot of support tools ... like machine configurators (allowing branch office people to configure and order machines for customers). A lot of sequoia would have run as fast on a 370/145 with apl m'code assist as on 370/168s .... but there was some amount of sequoia which wasn't addressed by the apl m'code assist.
some amount of discussions w/regard to hone & apl
https://www.garlic.com/~lynn/subtopic.html#hone
note that the person that was primarily responsible for the 145 apl
microcode assist was also fundamentally responsible for FORTQ
... which became FORTHX.
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
when we were working on ECPS ... a kernel microcode assist for the
138/148 (follow-on to the 135/145), he did a special microcode
PSW/instruction-address sampler for us on the 145 ... that helped
identify where the CP kernel was spending its time (there were
actually two technologies ... one was the microcode psw sampler
... the other was some software kernel instrumentation):
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
and
https://www.garlic.com/~lynn/submain.html#mcode
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Could somebody use SCSH, Sheme, or Lisp to create the "Lispm" architecture. Newsgroups: comp.lang.scheme.scsh,comp.lang.scheme,comp.lang.lisp,comp.arch,alt.folklore.computers Date: Wed, 02 Apr 2003 14:48:25 GMTcstacy@dtpq.com (Christopher C. Stacy) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Wed, 02 Apr 2003 17:05:17 GMTJan C. Vorbrüggen writes:
somebody either does the svc8 or not. if they didn't do the svc8, the system would crash regardless ... since the appropriate linkage process wouldn't have been done. if they did the svc8 ... it would all automagically work. also, remember that the called routine returns via executing an svc12 ... which also requires that all the necessary processing has been performed. If you branched directly to a standard called routine (not executing an svc8), all the calling structures wouldn't have been appropriately set up ... and then there would be a problem when the called routine executed an svc12 for the return.
for pageable kernel, the support was added to the svc8 call routine (aka the svc interrupt routine that handled the svc8 instruction). All of the pageable kernel was positioned after the non-pageable kernel and there was a known boundary address separating the two. if the address of the called routine was greater than the boundary address, the svc8 call routine did the appropriate pageable kernel handling stuff. the return/svc12 handling routine ... would deallocate the dynamic register savearea ... and if the from address was greater than the boundary address, then it would perform the necessary housekeeping.
the call macro would do
l 15,=a(kernel-routine)
svc 8
the svc interrupt handling routine would check for a code "8" ... and
then go off to the call processing. the call processing would
dynamically allocate a storage area for register save area ... which
also contained the linkage information as to the calling routine. It
would then go off to the called routine. If =a(kernel-routine)
address was greater than the end of the fixed-kernel address, it would
execute the appropriate pageable code support.
The non-pageable kernel started at location zero and was contiguous. All pageable called-to addresses were greater than the non-pageable kernel boundary address. The location where a pageable kernel routine was loaded was pretty indeterminate .... except since the non-pageable kernel locations were fixed, starting at location zero and contiguous, it was also known that the return address from a pageable kernel routine always had to be greater than the boundary address also (aka a pageable kernel routine could never be loaded at a location less than the highest address of the non-pageable kernel).
aka ... it wasn't the responsibility of the call or return macro to do the appropriate pageable kernel stuff .... it was the responsibility of the supervisor routine that handled call/returns.
so lets say ... somebody branched directly to any routine (modulo BALR routines) ... instead of invoking SVC8 ... and for some reason the lack of appropriate linkage stuff didn't crash immediately ... when the called routine invoked svc12 there would be a problem .. since (at least) the appropriate linkage information wouldn't have been initialized ... and there would likely be some sort of failure with the return sequence.
There is another glitch specific to the pageable kernel processing.
As per:
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
the pageable kernel mechanism made use of the real I/O support logic. For real I/O, there is a "TRANS" done with the lock option i.e. the page is checked for being in real storage, if not, it is brought in, and then it has its pinned/lock count incremented. For the period that it takes to perform the real i/o operation, the page is pinned in real storage. When the real i/o operation involving that virtual page is complete the pinned/lock count is decremented. Pages aren't eligible for replacement (removal from real storage) if there is a lock count greater than zero. There is also a system failure if a lock count ever goes negative.
So because there could be multiple kernel threads simultaneously executing in a pageable kernel module ... and potentially any of those threads might be suspended for one reason or another .... the SVC8 routine not only does a "TRANS" operation on the pageable routine ... but also specifies the lock option; incrementing the pinned/lock count. The svc12/return processing does a lock count decrement on the page. It is likely that if the system survived a direct branch to a pageable module (potentially because somebody didn't do a svc8 call for some reason) ... and the called routine returned with an svc12 ... and the svc12 routine didn't fail because there was no appropriate setup by an svc8 call ... then at least when the svc12 routine called the page lock decrement routine ... there would be a high probability that the count would go negative (because it hadn't been appropriately incremented) and the system would fail. The pin/unpin lock increment/decrement occurs regardless of whether it is a constrained or unconstrained environment.
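The pin/lock bookkeeping described above can be sketched (class and names hypothetical): pages with a positive count are ineligible for replacement, and a count going negative is a system failure:

```python
# Sketch of pinned/lock counts: pin on TRANS-with-lock (svc8 path),
# unpin on svc12 return; negative count means mismatched svc8/svc12.

class PageLocks:
    def __init__(self):
        self.count = {}

    def pin(self, page):                 # TRANS with the lock option
        self.count[page] = self.count.get(page, 0) + 1

    def unpin(self, page):               # svc12 return path decrements
        self.count[page] = self.count.get(page, 0) - 1
        if self.count[page] < 0:
            raise SystemError(f"lock count negative for page {page}")

    def replaceable(self, page):         # eligible for page replacement?
        return self.count.get(page, 0) == 0

locks = PageLocks()
locks.pin(42); assert not locks.replaceable(42)
locks.unpin(42); assert locks.replaceable(42)
try:
    locks.unpin(42)                      # svc12 without a matching svc8 pin
except SystemError:
    pass                                 # ... the system fails
```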
the call/return macros were modified to handle calls to BALR routines differently than SVC8/12 routines .... not because of the pageable/non-pageable issue .... but BALR called routines didn't require dynamically allocated storage for register save area ... and of course, BALR called routines couldn't be pageable.
earlier reply with details of calls handled by supervisor calls (in
order to have dynamically allocated storage for register save area) as
well as description of the BALR call changes:
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
your concern about in-line code in the calling routine ... and/or in-line code generated by the CALL macro doesn't directly concern the support for the pageable kernel ... since the CP kernel requires that all calls (modulo my original changes for selective BALR calls) be performed by the svc8 interrupt routine (the CP kernel calling convention requires the svc8 mechanism to dynamically allocate storage for the register save area ... as well as other misc. housekeeping). The svc8 interrupt routine masks all the housekeeping mechanism associated with the pageable kernel.
however, there is some in-line logic/code with regard to the changes for BALR calls (instead of supervisor interrupt). The call macro contains a list of BALR call routines. So if there is a statement:
   CALL  DMKFREE

the call macro checks the argument against a table internal to the macro ... and will generate the code:

   l     r15,=a(dmkfree)
   balr  r14,r15

rather than

   l     r15,=a(dmkfree)
   svc   8

the balr instruction branches directly to dmkfree ... and puts the return address in r14 .... instead of generating a supervisor call.
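the macro-time decision can be modeled as a table lookup; a toy sketch (the routine list here is an invented illustrative subset, not the actual macro's table):

```python
# Toy model of the CALL macro's expansion decision: names found in the
# macro's internal BALR table get a direct branch-and-link; everything else
# goes through the SVC 8 supervisor-call path.
BALR_ROUTINES = {"DMKFREE", "DMKFRET"}   # illustrative subset

def expand_call(target):
    target = target.upper()
    code = [f"L R15,=A({target})"]
    if target in BALR_ROUTINES:
        code.append("BALR R14,R15")      # direct branch, return addr in R14
    else:
        code.append("SVC 8")             # supervisor call does the linkage
    return code

print(expand_call("DMKFREE"))   # branch-and-link expansion
print(expand_call("DMKSCN"))    # hypothetical non-BALR routine: SVC path
```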
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 17:38:39 GMT

Joe Morris writes:
note that while the RFC is out ... i normally update my index based on the corresponding rfc-editor announcements, which aren't due for at least a couple more hours.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 18:01:37 GMT

Anne & Lynn Wheeler writes:
having worked on the early SSL stuff for electronic commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
I've observed that the CA infrastructure is now in something of a catch-22 situation. A major purpose for ssl server domain name certificates is to address some integrity issues with the domain name infrastructure. However, when somebody applies to a certification authority for a ssl server domain name certificate, the certification authority has to check with the authority agency for domain name ownership ... which is the domain name infrastructure.
so there are a number of proposals for improving the integrity of the domain name infrastructure ... some of them essentially from the certification authority industry ... so that they can better trust the information that they are certifying. note however, that in improving the integrity of the domain name infrastructure ... they are also reducing the justification for needing SSL server domain name certificates.
past posts observing the catch-22:
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#pkiart2 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#5 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm13.htm#26 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2001l.html#22 Web of Trust
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002j.html#59 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002m.html#30 Root certificate definition
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#29 SSL questions
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 02 Apr 2003 22:32:58 GMT

minor pageable kernel footnote to this thread:
the cp/67 kernel, vm/370 kernel, and even the cms kernel build all used a modified version of the BPS loader. Basically the appropriate assembly file outputs were batched together with the BPS loader; the BPS loader was invoked, which then read all the files, resolving all their symbols and relocatable fields and creating a memory image of the kernel. The BPS loader then branched to an entry point in what was just loaded ... which had the responsibility of finding the disk boot location and writing the memory image out to disk ... with things appropriately patched so that on a disk boot, the process would be exactly reversed.
note the bps (Basic Programming System) loader is possibly the earliest of the "systems" built for the 360 ... and was supposedly targeted at just reading real 80col cards and being able to work in 8K (16K?) real storage configurations.
So when I was hacking the cp/67 kernel originally for pageable kernel ... i was splitting up all these existing routines into little small chunks. I hit a wall with the BPS loader since it had a fixed maximum 255-entry symbol table ... and all the fiddling had pushed the number of external entry symbols over 255.
As a result I had to redo the fiddling to stay within the 255 limit. As I was doing that, I found out that when the bps loader passed control to the loaded program ... it passed a pointer to its internal symbol table and a count of valid entries in registers. The standard cp process for dealing with kernel debugging was getting character output (real or virtual printer) from the load process and working with it manually. I thought wouldn't it be handy to include the full symbol table with the kernel boot image. So I revised the code that wrote the boot image to disk, to copy the symbol table entries and fake out the system as if the symbol table was explicitly loaded at the end of the pageable kernel.
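the two mechanisms being described (a fixed 255-entry symbol table filled during loading, and appending that table to the boot image) can be sketched; everything here is an invented toy, not the BPS loader itself:

```python
# Toy loader: batch the module outputs together, resolve entry symbols into
# a fixed-capacity table while building a memory image, then write a "boot
# image" with the symbol table appended at the end so the symbols survive
# into the running system.
MAX_SYMBOLS = 255

def load(modules):
    """modules: list of (entry_symbol, code_bytes) -> (memory_image, symtab)."""
    image, symtab = bytearray(), {}
    for name, code in modules:
        if len(symtab) >= MAX_SYMBOLS:
            raise OverflowError("symbol table full (255-entry limit)")
        symtab[name] = len(image)            # entry point = current offset
        image += code
    return bytes(image), symtab

def write_boot_image(image, symtab):
    # fake out the system as if the symbol table were loaded at the end of
    # the kernel image
    trailer = b"".join(f"{name}@{off};".encode() for name, off in symtab.items())
    return image + trailer

img, tab = load([("DMKPSA", b"\x00" * 16), ("DMKDSP", b"\x00" * 32)])
boot = write_boot_image(img, tab)
```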
This was never shipped in the cp/67 product. However, in the morphing
to VM/370 ... the BPS symbol table size once again became a
problem. Rummaging around in the attic/storeroom of 545-tech sq (top
floor of the bldg), I ran across an old CSC card cabinet that had the
card assembly source for the modified BPS loader being used. I was
able to hack that to extend the maximum symbol table size ... in part
because there was a whole lot more stuff being put into vm/370
... besides the symbol table additions necessary to support the
programming paradigm for pageable kernel. However, the feature that
appended the BPS loader symbol table to the end of the pageable kernel
got dropped ... somewhat akin to a lot of the stuff i had done for
dynamic adaptive scheduling and resource manager getting dropped:
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/2003f.html#17 History of project maintenance tools
however, later when i was doing this problem determination project
for vm/370 ... i reintroduced appending all the loader symbol table entries
to the end of the pageable kernel area:
https://www.garlic.com/~lynn/submain.html#dumprx Problem determination, zombies, dump readers
misc. past bps loader postings:
https://www.garlic.com/~lynn/98.html#9 Old Vintage Operating Systems
https://www.garlic.com/~lynn/99.html#135 sysprog shortage - what questions would you ask?
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#26 HELP
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002h.html#35 Computers in Science Fiction
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
misc. past esd & symbol table postings:
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
https://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ibm's disasters in the 70's
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 22:41:01 GMT

"George R. Gonzalez" writes:
some people have commented in the past .... that if any of the other computing companies had incurred the expense of a project of the magnitude of FS ... and then simply canceled it ... they would have had to declare bankruptcy and go out of business.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 22:55:11 GMT

Anne & Lynn Wheeler writes:
go to:
https://www.garlic.com/~lynn/rfcietff.htm
either 1) click on "Term (term->RFC#)" (in the RFCs listed by section) and scroll down to "April1", or 2) in the lower frame click on "3514" and then click on "April1"
which gives you:
April1
3514 3252 3251 3093 3092 3091 2795 2551 2550 2549 2325 2324 2323 2322
2321 2100 1927 1926 1925 1924 1776 1607 1606 1605 1437 1313 1217 1149
1097 852 748
clicking on any of the RFC numbers in the above, gives you the RFC
summary. clicking on the ".txt=" field (in the summary) retrieves the
actual RFC.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Natl. Crypto Museum was: reviving Multics -- Computer Museum
Newsgroups: alt.folklore.computers
Date: Thu, 03 Apr 2003 15:36:13 GMT

eugene@Durgon.Stanford.EDU (Eugene Miya) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 16:05:48 GMT

Jan C. Vorbrüggen writes:
an automated benchmarking process was developed and 2000 benchmarks
were run over a period of three months elapsed time to calibrate and
verify the operation (before release as a product):
https://www.garlic.com/~lynn/submain.html#bench
three people from csc had brought out a copy of cp/67 to the
university the last week in jan. 1968 where i was an
undergraduate. between then and the fall '68 share meeting in Atlantic City
(inbetween there was the spring '68 share meeting in houston where
they publicly announced cp/67) ... i rewrote a lot of code for
optimized pathlength (part of presentation i made at the boston fall
'68 share meeting):
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
one of the rewrites was reducing the overhead of the svc call/return by over 70 percent. some other pathlengths i improved by a factor of one hundred.
The selective use of BALR linkages .... as mentioned previously for
routines not requiring dynamic save areas ... I did the summer of '69
(boeing had just formed bcs and con'ed me into a summer job helping
set up their dataprocessing facilities and teach dataprocessing to
some of the technical staff; in the spring they had con'ed me into
teaching a one-week dataprocessing class to the technical staff during
spring break) ... along with the initial pass at fiddling the console
function routines for pageable kernel operation. I also created
fairshare scheduling, dynamic adaptive feedback algorithms,
https://www.garlic.com/~lynn/subtopic.html#fairshare
the clock page replacement algorithm (over ten years before the
stanford phd thesis on the same), and a different way of measuring
real storage size requirements for controlling page thrashing
(different from the "working set" stuff that had been recently
published)
https://www.garlic.com/~lynn/subtopic.html#wsclock
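the clock mechanism mentioned above can be sketched in a few lines; this is a toy model (names invented for illustration, the original was assembler):

```python
# Minimal sketch of the clock page-replacement idea: a hand sweeps the
# frames in a circle; a page with its reference bit set gets a second
# chance (the bit is cleared), while an unreferenced page becomes the
# victim.
class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes   # page id held by each frame
        self.ref = [False] * nframes    # modeled hardware reference bits
        self.hand = 0

    def touch(self, page):
        """Reference a page, faulting it in if needed; returns any evicted page."""
        if page in self.pages:
            self.ref[self.pages.index(page)] = True
            return None
        while True:                     # sweep until an unreferenced frame
            i = self.hand
            self.hand = (self.hand + 1) % len(self.pages)
            if self.pages[i] is None or not self.ref[i]:
                victim = self.pages[i]
                self.pages[i], self.ref[i] = page, True
                return victim
            self.ref[i] = False         # second chance: clear bit, move on

c = Clock(3)
for p in ["a", "b", "c"]:
    c.touch(p)                          # fill the three frames
c.ref = [False] * 3                     # pretend the hand has swept once
c.touch("a")                            # "a" gets its reference bit set again
evicted = c.touch("d")                  # hand skips "a" (referenced), evicts "b"
```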
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 16:27:15 GMT

Anne & Lynn Wheeler writes:
re:
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#4 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 20:28:55 GMT

Anne & Lynn Wheeler writes:
"PAM I/O" refers to changes to the cms filesystem to support a memory mapped paradigm.
The original CMS calling convention used supervisor call/interrupt:

   svc   202

or

   svc   202
   dc    al4(error)

This was 24-bit addressing, so the first byte of the AL4 address constant would always be zero. On normal return, if there was no zero following the svc 202, it would return at that address; if there was a zero, it would return at +4, skipping over the address constant. If there was an error during processing, it would check for a zero following the svc 202 instruction. If there was a zero, it would load the (presumed) address constant and branch to that location. If there was no zero byte, it would assume there was no application-supplied error exit, abort the process, and go to system-defined error processing. For embedded constants in read-only, shared segments, the value would be identical regardless of the address position of the loaded segment. To support relocatable/floating shared segments, the standard error-handling constant associated with the standard CMS calling process had to be fiddled.
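the zero-byte check can be modeled directly; a toy sketch of the return logic (the function and return values are invented for illustration):

```python
# Toy model of the CMS SVC 202 return convention described above. "Memory"
# following the SVC instruction is given as bytes; under 24-bit addressing
# the first byte of an AL4 error-exit address constant is always zero,
# which is how CMS tells an adcon apart from an instruction.
def svc202_return(after_svc: bytes, error: bool):
    """Return where control resumes: an offset past the SVC, a
    ('branch', addr) tuple for the error exit, or 'abend' when an error
    occurs with no exit adcon present."""
    has_adcon = after_svc[0] == 0
    if not error:
        return 4 if has_adcon else 0        # skip the adcon on normal return
    if has_adcon:
        addr = int.from_bytes(after_svc[:4], "big")
        return ("branch", addr)             # branch to the error exit
    return "abend"                          # system-defined error processing

# normal return skips the 4-byte adcon when one is present
assert svc202_return(b"\x00\x01\x23\x45", error=False) == 4
# error with an adcon branches to the 24-bit address it contains
assert svc202_return(b"\x00\x01\x23\x45", error=True) == ("branch", 0x012345)
# no zero byte: no application error exit, so the process is aborted
assert svc202_return(b"\x47\xf0\x00\x00", error=True) == "abend"
```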
from long ago and far away ....
Relocatable Shared Segments is part of a large set of shared
segment changes done in the early to middle VM/370 Release 2
time frame (Virtual Memory Management (1)). A subset of the
function under the heading of Discontiguous Shared Segment
was released as part of VM/370 Release 3. The development
group sanitized the CP and CMS code before releasing it.
Fortunately they did not eliminate the NUCON SVC$202 code.
The CMS SVC 202 convention requires a 'DC AL4(address)'
following the SVC for an error exit. The nonshared adcon
isn't required for the Discontiguous Shared code support but
it is mandatory for relocatable, shared code. The SVC$202
allows relocatable shared code to execute SVC 202s and still
specify an error exit. The SVC$202 in page 0 is followed by
an ERR$202 (an adcon) which is followed by a 'BR R14'. The
ERR$202 field can be filled in with a relocated address and
a 'BAL R14,SVC$202' executed. On return from the SVC 202
if there is no error, CMS will branch to the 'BR R14'. If
there is an error, CMS will branch to the address pointed to
by ERR$202. Yorktown Research has also been doing work in
this area attempting to eliminate the adcon error exit
requirement in conjunction with their Subcommand - Freeload
support.
The full shared segment code also allows named shared
segments both inside and outside of the virtual machine
size. Support for shared modules inside (or outside) of the
virtual machine is now running w/o using DMKSNT entries. The
IBM Palo Alto HONE VM/370 systems have been using the shared
module support for APL since early release 2 of VM. The
prototype code was originally written (along with PAM I/O
support) for CP/67 in 1972 and 1973 using SNT entries to
define the shared module.
... snip ...
note in the following ... the sizes of the two files are nearly the same; the number of blocks differs primarily because one is a 4k/page formatted area and the other is an 800-byte formatted area. note that normal CMS processing when loading executables will attempt to read up to 64k bytes in one physical operation (aka 65535 bytes at a time).
... continued ...
Normal FORTHX versus 'fixed' FORTHQ
The following is an excerpt from a terminal session where
FORTHX is in normal format on a normal formatted CMS disk.
FORTHQ (FORTHQ is an enhanced FORTHX) is in fixed page
aligned format on a PAM formatted CMS disk. There is a large
difference in the time to LOADMOD essentially the same sized
module in the different formats.
q search
LYNN01   191  A    R/W
FORTHQ   5AA  P    R/O - PAM         => FORTHQ disk
CMS190   190  S    R/O - PAM
CMS19E   19E  Y/S  R/O - PAM
R;

l ifeaab module (date                => reformatted FORTHQ
FILENAME FILETYPE FM FORMAT  RECS BLOCKS DATE     TIME
IFEAAB   MODULE   P2 F 128   3645    114 10/06/78 15:05
R;

loadmod ifeaab
R; T=0.01/0.14 17:18:32

Now do old formatted module

q search
LYNN01   191  A    R/W
FORTHX   4A8  P    R/O
CMS190   190  S    R/O - PAM
CMS19E   19E  Y/S  R/O - PAM
R; T=0.02/0.07 17:18:50

l ifeaab module (date                => normal formatted FORTHX
FILENAME FILETYPE FM FORMAT  RECS BLOCKS DATE     TIME
IFEAAB   MODULE   P1 V 65535    9    633  3/11/78  2:11
R; T=0.06/0.09 17:18:57

loadmod ifeaab
R; T=0.07/0.57 17:19:09

The CPU times for LOADMOD'ing the two files are .01/.14 for the PAM, fixed, page-aligned module and .07/.57 for the normal one.
... snip ...
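the block-count arithmetic behind the excerpt can be sketched with a rough ceil-division; this ignores per-record headers, directory overhead, and the small size difference between the two copies, so it reproduces the 4k figure exactly but only approximates the 800-byte one:

```python
import math

def blocks(file_bytes: int, block_size: int) -> int:
    """Minimum data blocks needed, ignoring record headers and overhead."""
    return math.ceil(file_bytes / block_size)

size = 3645 * 128          # recs x lrecl from the fixed-format listing
print(blocks(size, 4096))  # 4k/page-formatted (PAM) area: 114, as listed
print(blocks(size, 800))   # 800-byte-formatted area: 584; the listing shows
                           # 633 because variable-format records carry
                           # per-record overhead this sketch ignores
```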
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDP10 and RISC
Newsgroups: alt.folklore.computers
Date: Fri, 04 Apr 2003 15:29:14 GMT

"J. Clarke" writes:
random past refs to 195 dual-istream project:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001c.html#1 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lisp Machines
Newsgroups: comp.lang.scheme,comp.lang.lisp,comp.arch
Date: Fri, 04 Apr 2003 19:53:04 GMT

Marc Spitzer writes:
being able to deploy a (portable) operating system on the processor for a fraction of the hardware development costs (rather than several times the hardware development costs) was significant. being able to pick up people that already had skills in the operating system was an additional characteristic of the emerging portable operating system.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public Encryption Key
Newsgroups: sci.crypt,alt.security,comp.security,comp.security.misc
Date: Fri, 04 Apr 2003 20:28:52 GMT

"eric" writes:
to achieve #1, encrypt the message with the recipient's public key (or more frequently generate a random secret key, encrypt the message with the random secret key, and encrypt the secret key with the recipient's public key). only the recipient with the appropriate private key can decrypt the message.
to achieve #2, encrypt the message with the sender's private key (or more frequently take some trusted/secure hash of the message, and encrypt the hash with the sender's private key). only that sender's public key can decrypt/verify the message. typically it is just the hash that is encrypted with the sender's private key and referred to as a digital signature.
previous discussions of two distinct business processes:
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2002f.html#9 PKI / CA -- Public Key & Private Key
https://www.garlic.com/~lynn/2003b.html#64 Storing digital IDs on token for use with Outlook
the two can be combined by: first do a digital signature of the message and then encrypt the combination of the original message and the digital signature.
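the two processes, and their combination, can be demonstrated with textbook RSA; a toy sketch with insecure demo parameters (all numbers and names are invented for illustration, not a usable implementation):

```python
import hashlib

# Textbook RSA with tiny primes -- cryptographically useless, illustration only.
def keygen(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)     # (public, private) key pair

sender_pub, sender_priv = keygen(61, 53, 17)
recipient_pub, recipient_priv = keygen(89, 97, 25)

def encrypt(m, pub):                        # process #1: confidentiality
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

def sign(msg, priv):                        # process #2: origin/integrity --
    d, n = priv                             # hash, then "encrypt" the hash
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def verify(msg, sig, pub):
    e, n = pub
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

msg = b"wire $100"
sig = sign(msg, sender_priv)                # digital signature over the message
session_key = 1234                          # stand-in for a random secret key
wrapped = encrypt(session_key, recipient_pub)

assert decrypt(wrapped, recipient_priv) == session_key
assert verify(msg, sig, sender_pub)
```

in practice the combination is exactly as described above: sign the message first, then encrypt the message plus signature under the random secret key.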
so the vulnerabilities have to do with
1) the sender really having the recipient's real public key before starting the process

2) the recipient really having the sender's real public key
so the business process typically comes down to each (sender and recipient) having a table of public keys that traditionally have some trust information conveyed by some out-of-band process.
in the pgp web-of-trust ... the parties exchange public keys and use some additional trusted process to really validate that the keys that have been received are really for the parties.
The traditional certification authority (PKI or CADS) model defines things called certificates ... the body of the certificate contains some assertion and a public key; the CA then digitally signs the certificate, certifying the validity of the assertion (ex: an email address or a person's name).
In this scenario .... the sender can create a message, digitally sign it, and then transmit to the recipient: 1) the message, 2) the digital signature and 3) the certificate. The recipient still needs to have a table of public keys (aka like the web-of-trust model) for at least the certification authorities (that have been independently validated by some out-of-band trust process) ... allowing the recipient to validate the digital signature of the CA on the certificate.
This addresses the scenario where the recipient has had no prior contact or interface to the sender .... the sender can transmit a spontaneous message to just about anybody. The recipient then can be sure that the message has originated from an entity that matches the assertion in the appended certificate (assuming the recipient has the CA's public key in their trusted public key table).
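a toy sketch of that spontaneous-message flow, using the same textbook-RSA idea (the names, the certificate format, and the parsing are all invented for illustration):

```python
import ast
import hashlib

# Textbook RSA helpers -- insecure toy parameters, illustration only.
def keygen(p, q, e):
    n = p * q
    return (e, n), (pow(e, -1, (p - 1) * (q - 1)), n)

def sign(data, priv):
    d, n = priv
    return pow(int.from_bytes(hashlib.sha256(data).digest(), "big") % n, d, n)

def verify(data, sig, pub):
    e, n = pub
    return pow(sig, e, n) == int.from_bytes(hashlib.sha256(data).digest(), "big") % n

ca_pub, ca_priv = keygen(61, 53, 17)
sender_pub, sender_priv = keygen(89, 97, 25)

# The CA certifies the binding between an assertion and the sender's key.
cert_body = b"email=lynn@example.com;pub=" + repr(sender_pub).encode()
cert_sig = sign(cert_body, ca_priv)

# The recipient's out-of-band trusted table holds only CA public keys.
trusted_cas = {"DemoCA": ca_pub}

def accept(msg, msg_sig, cert_body, cert_sig, ca_name):
    ca_key = trusted_cas.get(ca_name)
    if ca_key is None or not verify(cert_body, cert_sig, ca_key):
        return False                    # unknown CA or forged certificate
    sender_key = ast.literal_eval(cert_body.split(b";pub=")[1].decode())
    return verify(msg, msg_sig, sender_key)

msg = b"spontaneous message"
assert accept(msg, sign(msg, sender_priv), cert_body, cert_sig, "DemoCA")
assert not accept(msg, sign(msg, sender_priv), cert_body, cert_sig, "OtherCA")
```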
However, the CA, spontaneous communication paradigm doesn't address the privacy issue. In order for the sender to encrypt the message with the recipient's public key, that recipient's public key needs to have been previously stored in some table kept at the sender. That means the sender and recipient have had to make some previous contact and exchange information.
The AADS scenario assertion is that for all serious business process
communication, the sender and recipient have established some sort of
previous business relationship .... making the CA, spontaneous
communication model redundant and superfluous.
https://www.garlic.com/~lynn/aadsover.htm
misc. redundant and superfluous postings:
https://www.garlic.com/~lynn/aadsm10.htm#limit Q: Where should do I put a max amount in a X.509v3 certificat e?
https://www.garlic.com/~lynn/aadsm10.htm#limit2 Q: Where should do I put a max amount in a X.509v3 certificate?
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm11.htm#40 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm12.htm#22 draft-ietf-pkix-warranty-ext-01
https://www.garlic.com/~lynn/aadsm12.htm#26 I-D ACTION:draft-ietf-pkix-usergroup-01.txt
https://www.garlic.com/~lynn/aadsm12.htm#27 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#29 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#39 Identification = Payment Transaction?
https://www.garlic.com/~lynn/aadsm12.htm#41 I-D ACTION:draft-ietf-pkix-sim-00.txt
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#53 TTPs & AADS Was: First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm13.htm#0 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#2 OCSP value proposition
https://www.garlic.com/~lynn/aadsm13.htm#3 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#4 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#5 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#6 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#14 A challenge (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#19 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
https://www.garlic.com/~lynn/aadsm13.htm#25 Certificate Policies (addenda)
https://www.garlic.com/~lynn/aepay10.htm#46 x9.73 Cryptographic Message Syntax
https://www.garlic.com/~lynn/aepay10.htm#73 Invisible Ink, E-signatures slow to broadly catch on
https://www.garlic.com/~lynn/aepay10.htm#74 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#78 ssl certs
https://www.garlic.com/~lynn/98.html#0 Account Authority Digital Signature model
https://www.garlic.com/~lynn/99.html#228 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2000b.html#92 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#47 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#15 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#24 Why trust root CAs ?
https://www.garlic.com/~lynn/2001.html#67 future trends in asymmetric cryptography
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001c.html#9 Server authentication
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#79 Q: ANSI X9.68 certificate format standard
https://www.garlic.com/~lynn/2001d.html#3 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#77 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#65 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#68 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#3 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#16 Net banking, is it safe???
https://www.garlic.com/~lynn/2002c.html#35 TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
https://www.garlic.com/~lynn/2002d.html#39 PKI Implementation
https://www.garlic.com/~lynn/2002e.html#49 PKI and Relying Parties
https://www.garlic.com/~lynn/2002e.html#56 PKI and Relying Parties
https://www.garlic.com/~lynn/2002e.html#72 Digital certificate varification
https://www.garlic.com/~lynn/2002m.html#16 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#17 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#55 Beware, Intel to embed digital certificates in Banias
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#30 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#56 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002o.html#57 Certificate Authority: Industry vs. Government
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Super Anti War Computers
Newsgroups: alt.folklore.computers
Date: Fri, 04 Apr 2003 22:03:02 GMT

Morten Reistad writes:
we had done separate utilities, emergency power, separate internet feeds into different places in the internet backbone ... with different ISPs .... co-lo in different major central exchanges (48v, battery-backed, emergency power, etc). turns out the different telco paths had routing under approximately the same railroad area ... which had some construction one weekend that put both telco paths out of action for that weekend.
we had a little previous background in no-single-point-of-failure
having done ha/cmp project/product
https://www.garlic.com/~lynn/subtopic.html#hacmp
and had also coined the terms disaster survivability and
geographic survivability ... random past disaster/geographic
survivability posts:
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#145 Q: S/390 on PowerPC?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/aadsm2.htm#availability A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12 A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002i.html#24 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002m.html#5 Dumb Question - Hardend Site ?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2003.html#38 Calculating expected reliability for designed system
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: unix Newsgroups: alt.folklore.computers Date: Sat, 05 Apr 2003 17:07:35 GMT"Jonadab the Unsightly One" writes:
misc. ref. to merged security glossary & taxonomy:
https://www.garlic.com/~lynn/2003e.html#73 Security Certifications?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: unix Newsgroups: alt.folklore.computers Date: Sat, 05 Apr 2003 17:12:13 GMTSteve O'Hara-Smith writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 1130 Games WAS Re: Any DEC 340 Display System Doco ? Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sat, 05 Apr 2003 17:24:06 GMT"David Wade" writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: inter-block gaps on DASD tracks Newsgroups: bit.listserv.ibm-main,alt.folklore.computer Date: Sat, 05 Apr 2003 17:54:40 GMTIBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
early on ... in order to meet the performance requirement of +/- 10 percent of 3830 ... they did some tricks in the 3880 ... like signalling ce/de to the channel early and doing some of the clean-up after the end of the interrupt. The official "acceptance" test was done with a 2-pack VS1 system.
now since I had the systems in bldg. 14 (engineering) and bldg 15
(product test):
https://www.garlic.com/~lynn/subtopic.html#disk
I got the blame when one monday morning the thruput on the bldg. 15 internal machine went into the crapper. They swore up and down there were absolutely no changes. Well, it turned out that over the weekend they had replaced the 3830 on a string of 16 3330s with an engineering 3880. The problem was that with some modest amount of concurrent and asynchronous activity ... there would be pending requests for the controller. When ce/de came in, the system would immediately redrive the controller with a pending request (the 2-pack VS1 "test" didn't have concurrent activity & pending requests). So just about every SIO was getting CC=1, SM+BUSY, and then had to be redriven again when CUE came in (in effect, every SIO had to be done twice).
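a minimal sketch (python, purely illustrative ... the counts are made up, only the mechanism is from the description above) of why the early ce/de signalling doubled SIOs under concurrent load: with a pending request queued, the OS redrives while the 3880 is still doing post-interrupt clean-up, gets control-unit busy, and must reissue the SIO at CUE.

```python
# Toy model: controller signals CE/DE before finishing internal clean-up.
# With a pending request queued, the OS redrives immediately, hits
# control-unit busy (CC=1, SM+BUSY), and must reissue the SIO when CUE
# arrives -- so every I/O costs two SIOs instead of one.

def count_sios(requests, controller_busy_after_interrupt):
    sios = 0
    for _ in range(requests):
        sios += 1                      # initial SIO at CE/DE
        if controller_busy_after_interrupt:
            sios += 1                  # CC=1, SM+BUSY: redrive at CUE
    return sios

# 2-pack VS1 acceptance test: no concurrent activity, no pending queue
assert count_sios(1000, controller_busy_after_interrupt=False) == 1000
# bldg. 15 workload: pending requests redriven during the clean-up window
assert count_sios(1000, controller_busy_after_interrupt=True) == 2000
```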
Fortunately, this was six months before FCS (first-customer-ship) and there was time to do some stuff before it hit customer shops.
The other problem was that I had rewritten multiple channel pathing ... and claimed that the 370 code could almost match the dedicated processors that were handling multi-pathing in the later machines. The problem was a 3880 could have four channel paths .... but if it got hit on a channel path that was different than the path for the most recent I/O .... it went off into la-la land for on the order of a millisecond ... defeating a lot of the dynamics of multiple path load balancing (you were better off with a primary/alternate strategy ... than a dynamic load balancing strategy). Of course, you were up the creek if it was a shared-disk environment, since you didn't have a lot of control of different processors hitting the same controller.
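a toy comparison of the two strategies: if the controller stalls ~1ms whenever an I/O arrives on a different channel path than the previous one, rotating across paths pays the penalty on nearly every I/O while primary/alternate mostly sticks to one path. The service time and penalty here are made-up round numbers for illustration, not 3880 measurements.

```python
# Assumed (illustrative) numbers: 0.5ms per-I/O controller time, 1ms
# stall whenever the path differs from the previous I/O's path.
SERVICE_MS = 0.5
SWITCH_MS = 1.0

def total_time(path_sequence):
    t, last = 0.0, None
    for path in path_sequence:
        if last is not None and path != last:
            t += SWITCH_MS            # controller re-syncs to new path
        t += SERVICE_MS
        last = path
    return t

n = 1000
round_robin = [i % 4 for i in range(n)]   # "balance" across 4 paths
primary_alt = [0] * n                     # stay on the primary path
assert total_time(round_robin) > total_time(primary_alt)
```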
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SLAC 370 Pascal compiler found Newsgroups: alt.folklore.computers Date: Sat, 05 Apr 2003 19:55:53 GMTPeter Flass writes:
360d-5.1-004, share contributed program library, submitted 5/26/66
strong rumor was that it was also used as the core on which Univ. of Mich built MTS ... although LLMPS was just straight vanilla 360 with no support for virtual memory (however lincoln did have a two-processor, SMP 360/67 and was the first installation of CP/67 outside of cambridge).
systems built for 360/67 virtual memory .... official tss/360 product, cp/67 from cambridge science center, and michigan terminal system. Boeing (with some? participation from somebody at Brown U? ... vague recollection) ... also modified a version of release 13 MVT to use 67 virtual memory hardware, but not for paging. They had long-running interactive jobs under MVT driving 2250s. MVT had a design issue that storage allocation had to be contiguous ... and there were severe storage fragmentation problems with long-running applications. The MVT13 hack used the '67 virtual memory hardware to provide the appearance of contiguous storage.
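the idea behind the MVT13 hack can be sketched in a few lines (python, purely illustrative ... the frame numbers and page size are stand-ins): real storage is fragmented, but a page table can present scattered real frames as one contiguous virtual region, so the contiguous-allocation requirement is met without moving anything.

```python
# Toy sketch: map virtual pages 0..n-1 onto whatever real frames happen
# to be free, so the application sees contiguous virtual addresses over
# fragmented real storage.

PAGE = 4096

def build_map(free_frames, pages_needed):
    assert len(free_frames) >= pages_needed, "not enough real storage"
    return {vpage: free_frames[vpage] for vpage in range(pages_needed)}

def translate(page_table, vaddr):
    return page_table[vaddr // PAGE] * PAGE + (vaddr % PAGE)

# real frames 7, 2, 9 are free but not adjacent
table = build_map([7, 2, 9], 3)
# virtual addresses 0..3*4096 appear contiguous to the application
assert translate(table, 0) == 7 * PAGE
assert translate(table, PAGE + 16) == 2 * PAGE + 16
```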
random other ref:
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: SLAC 370 Pascal compiler found Newsgroups: alt.folklore.computers Date: Sat, 05 Apr 2003 23:59:26 GMTPeter Flass writes:
and some drift:
https://www.share.org/share/website/shareweb.nsf/1d99cd65f2badf1e86256b78006ccb15/925f02a327e482ad86256b84006e3518?OpenDocument
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ECPS:VM DISPx instructions.. Newsgroups: alt.folklore.computers Date: Sun, 06 Apr 2003 03:45:33 GMTIvan writes:
it should be something like ... if the initial entry can fast restart the same virtual machine w/o doing anything else, it does; otherwise it drops out to do something like call the scheduler and/or handle a pending cpexblok. from vague recollection you would then have something like pick a different virtual machine (since it wasn't able to restart the previous one) and dispatch it. they can have the same data and exit lists .... even tho one function wouldn't have used all of the same exits. remember .... each of the functions was the migration of specific sequences of 370 code directly into m'code ... and it was possible that the initial m'code function would actually flow (internally) into a subsequent m'code function (and whether it was directly invoked from a new E6, or flowed into from a previous E6, it would expect the same passed structure).
at boot, dmkcpi would have had an adcon list of all easy-sixers and did a preliminary test to see if the assist was correctly installed ... if not, it would change all the easy-sixers into no-ops.
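that boot-time check can be sketched like this (python standing in for kernel storage; the opcode values are illustrative, though 0x47 is the usual BC-based 370 no-op): probe the assist once, and if it isn't there, overwrite every recorded E6 instruction site with a no-op so the straight-370 fallback code runs.

```python
# Sketch of the dmkcpi-style patching described above. A bytearray
# stands in for kernel storage; the adcon list records each E6 site.
E6_OP, NOOP = 0xE6, 0x47   # 0x47 = BC with mask 0 (a 370 no-op)

def patch_assists(storage, adcon_list, assist_ok):
    if assist_ok:
        return                          # assist installed: leave E6 ops
    for addr in adcon_list:             # otherwise no-op every E6 site
        assert storage[addr] == E6_OP
        storage[addr] = NOOP

mem = bytearray(16)
sites = [3, 9, 12]
for a in sites:
    mem[a] = E6_OP
patch_assists(mem, sites, assist_ok=False)
assert all(mem[a] == NOOP for a in sites)
```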
in general for instructions dropped into m'code there was about a 10:1
performance improvement over straight 370 (aka the native m'code
engine tended to execute an avg. of ten native instructions to emulate
each 370 instruction, ecps got almost a one-for-one translation from
370 to native)
https://www.garlic.com/~lynn/94.html#21
the above table has:
dsp+4   to dsp+c84   15105   374   2.18   asysvm entry until enter prob state
dsp+4   to dsp+214   84674   110   3.61   main entry to start of 'unstio'
dsp+214 to dsp+8d2   70058    45   1.21   'unstio' with no calls
dsp+8d2 to dsp+c84   67488   374   9.75   from 'unstio' end to enter problem state
dsp+93a to dsp+c84   11170   374   1.62   sch call to entry problem mode

the following table wasn't a measure of the effect of direct translation ... but the overall change for some function with and w/o assist; where the function would have only had a portion actually executing in m'code.
                    |         |       % Supervisor State Time     |
                    |         |-----------------------------------|
                    |Number of|  VS1 under VM   | DOS/VS and CMS  |
 Function Areas     |Functions|                 |    under VM     |
                    |         |Unassist.|Assist.|Unassist.|Assist.|
--------------------|---------|---------|-------|---------|-------|
 CCW / CSW Trans.   |    4    |  12.6   |  3.5  |   9.5   |  2.6  |
 Dispatching        |    3    |  13.9   |  4.1  |  19.2   |  5.6  |
 Free Storage Mgmt  |    2    |   4.4   |  1.0  |   5.3   |  1.2  |
 Call / Return      |    2    |   4.7   |   .9  |   5.2   |  1.0  |
 Virtual Storage    |    4    |   6.8   |  2.2  |   5.3   |  1.7  |
   Mgmt and Locking |         |         |       |         |       |
 Control Block      |    2    |   2.2   |   .7  |   2.6   |   .8  |
   Scanning         |         |         |       |         |       |
 Trans. Table Mgmt  |    2    |   1.3   |   .2  |   2.7   |   .5  |
--------------------|---------|---------|-------|---------|-------|
 Total              |   19    |  45.9   | 12.6  |  49.8   | 13.4  |
-----------------------------------------------------------------
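reading the totals in the benchmark table above: the supervisor-state time spent in the 19 assisted function areas drops 45.9% to 12.6% (VS1) and 49.8% to 13.4% (DOS/VS & CMS) ... roughly a 3.6:1 reduction for that portion of the workload.

```python
# Ratios computed straight from the table's Total row.
vs1 = (45.9, 12.6)
dos_cms = (49.8, 13.4)

for unassisted, assisted in (vs1, dos_cms):
    ratio = unassisted / assisted
    assert 3.5 < ratio < 3.8      # both workloads: roughly 3.6:1
```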
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: unix Newsgroups: alt.folklore.computers Date: Sun, 06 Apr 2003 04:32:19 GMTCBFalconer writes:
the issue of network vis-a-vis relational is somewhat orthogonal to network dbms using physical pointers (requiring database manager maintenance) vis-a-vis relational with indexes which tended to hide some amount of the physical management infrastructure.
the issue of memory-mapped files and large virtual memories wasn't a
whole lot of help for production systems since cache hit management
and non-blocking operation is critical (not so much so for demo &
academic exercises). Over ten-plus years ago there were pseudo
storage-resident databases that did pointer swizzling. If the elements
were in real storage ... the pointers could be addresses ....
otherwise the pointers were things that caused the dbms to move things
in/out of storage.
http://citeseer.nj.nec.com/moss92working.html
http://citeseer.nj.nec.com/white92performance.html
http://www.informatik.uni-trier.de/~ley/db/journals/vldb/KemperK95.html
http://redbook.cs.berkeley.edu/lec26.html
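the swizzling mechanism described above can be sketched in a few lines (python, purely illustrative ... the OID, store, and field names are made up): a reference slot holds either an unswizzled object identifier, which makes the dbms fault the object in, or a direct in-memory reference once it's resident.

```python
# Toy pointer-swizzling sketch.
store_on_disk = {101: {"name": "widget"}}   # stand-in persistent store
resident = {}                               # objects faulted into memory

class Ref:
    def __init__(self, oid):
        self.oid = oid          # unswizzled state: just an identifier
        self.target = None      # swizzled state: real reference

    def deref(self):
        if self.target is None:                 # "pointer fault"
            if self.oid not in resident:
                resident[self.oid] = store_on_disk[self.oid]  # fault in
            self.target = resident[self.oid]    # swizzle: cache reference
        return self.target

r = Ref(101)
assert r.deref()["name"] == "widget"   # first use faults + swizzles
assert r.target is resident[101]       # later uses are direct
```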
recently i ran across a comparison of a major production DBMS configured
so that the whole database was physically resident in the DBMS cache
vis-a-vis a design for being physically resident .... showing something
like 10:1 performance improvement. I can't find it at the moment but
... misc. other refs via search engine:
http://portal.acm.org/citation.cfm?id=266925&dl=ACM&coll=portal
http://citeseer.nj.nec.com/cha95objectoriented.html
http://citeseer.nj.nec.com/447405.html
misc past
https://www.garlic.com/~lynn/submain.html#systemr
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Timesharing TOPS-10 vs. VAX/VMS "task based timesharing" Newsgroups: alt.folklore.computers Date: Sun, 06 Apr 2003 14:10:33 GMTSteve O'Hara-Smith writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Any DEC 340 Display System Doco ? Newsgroups: alt.folklore.computers,alt.sys.pdp10 Date: Sun, 06 Apr 2003 13:50:26 GMTals@usenet.thangorodrim.de (Alexander Schreiber) writes:
I made the executable available on the internal network, and people that made 300 could ask for the source. This was before the 100-move limitation during 1st shift was put in (and even later versions wouldn't move at all during 1st shift)
there were a couple of internal go-arounds .... since some labs found 1/3 to 1/2 their processing time going to playing adventure.
1) claim was that adventure was good example of configurable interactive software
2) total eradication on the system would drive it underground with people having private copies under various pseudonyms.
some labs did have to declare amnesty ... everybody had 24hrs ... and then they had to get back to spending the majority of their time actually working.
past refs:
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#83 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#84 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#169 Crowther (pre-Woods) "Colossal Cave"
https://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
https://www.garlic.com/~lynn/2000d.html#33 Adventure Games (Was: Navy orders supercomputer)
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2002d.html#12 Mainframers: Take back the light (spotlight, that is)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ECPS:VM DISPx instructions.. Newsgroups: alt.folklore.computers Date: Sun, 06 Apr 2003 14:01:14 GMT"Glen Herrmannsfeldt" writes:
the guy from PASC and I had done the original measurements ... and then I had worked with the manager of endicott assist microprogramming and his two microcode engineers. then the manager and I spent a period of a year or so, off & on, running around the world doing the product dog & pony show to various product managers and market forecasters.
random refs:
https://www.garlic.com/~lynn/submain.html#mcode
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Sun, 06 Apr 2003 23:16:08 GMTjfc@mit.edu (John F. Carr) writes:
i had rewritten and optimized the cp/67 code so the pathlength to take a fault, select a replacement page and do the pagein ... was around 500 instructions ... at least 1/4th to 1/5th the pathlength of the next best implementation that i knew of (this included page fault, page replacement algorithm, prorated portion of performing page write on fraction of pages selected for replacement that needed writing, schedule page read, task switch, page read complete, task switch). The next best implementation quoted numbers for an I/O trick with the fixed head drum that kept a continuous i/o operation going ... and the page supervisor just needed to update pointers in the continuous i/o operation ... as opposed to actually scheduling an independent asynchronous i/o operation.
There was also the sleight-of-hand thing with the page replacement algorithm that was a variation of clock (although it preceded the clock phd thesis by 10 years or so) ... where full instruction simulation showed it to be better than true LRU.
Turning 150 pages/sec ... 100 page-reads/sec plus 50 page-writes/sec
... was 50,000 instructions/sec ... about 15 percent of a 1/3rd mip
processor ('60s). 4341 was about the same mip rate as the 11/780 in
the '80 time-frame
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
with a processor three times faster means that the 15 percent could drop to 5 percent of processor to turn 150 page i/o sec (it was actually slightly higher than that because the morph of cp/67 to vm/370 introduced some inefficiencies).
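the arithmetic above checks out as a back-of-envelope: ~333 instructions per page I/O (the ~500-instruction read pathlength with the prorated write overhead averaged across all page I/Os) at 150 page I/Os/sec is 50,000 instructions/sec ... 15 percent of a 1/3-mip machine, and ~5 percent of one three times faster.

```python
# Quantities taken from the text; 1/3 MIP = 333,000 instructions/sec.
page_ios_per_sec = 150
instr_per_sec_for_paging = 50_000
avg_instr_per_page_io = instr_per_sec_for_paging / page_ios_per_sec
assert 330 < avg_instr_per_page_io < 340      # ~333 per page I/O

cpu_60s = 333_000                             # 1/3 MIP processor
assert round(100 * instr_per_sec_for_paging / cpu_60s) == 15
assert round(100 * instr_per_sec_for_paging / (3 * cpu_60s)) == 5
```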
most of the dynamic adaptive stuff .... had to do with determining when memory was overcommitted and you needed to suspend somebody .... or memory was undercommitted and more tasks could be run concurrently.
the other thing .... was that tasks could be suspended for things other than memory overcommitment. some of the brain dead implementations would do a global sweep of a task's pages at the moment of suspension ... even if it wasn't a memory overcommitment. the dynamic adaptive work left the pages around ... if there seemed to be a high probability that the task would resume .... before somebody else might need the real memory locations (this was the reference to both tss/360 and mvs having a deterministic sweep of all pages on task suspend ... it was always done .... even in lots of situations where it wasn't necessary).
before the big pages implementation .... there used to be a joke about how could you tell MVS was heavily paging? ... it was cpu bound.
there is the straight-line stuff ... and then there is the more global stuff ... as an aside ... the amortized overhead of the more global stuff was included in that 300-500 instruction avg.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: unix Newsgroups: comp.os.vms,alt.folklore.computers Date: Mon, 07 Apr 2003 14:02:18 GMTSami S. Sihvonen writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Mon, 07 Apr 2003 22:43:41 GMTAnne & Lynn Wheeler writes:
i tweaked the noses of the disk product people by claiming that the relative system performance of disk technology had declined by an order of magnitude during the 15 year period. they assigned their performance organization to prove me wrong .... however after a bit of study, they came back and basically said that i had somewhat understated the situation.
as an aside ... when i started on cp/67 ... it would peak out/saturate at around 80 page i/os per second consuming on the order of 40 percent of the processor related to virtual page stuff. As previously mentioned .... i significantly optimized the pathlength (more like 150 page i/os/sec avg, 15 percent processor) ... but also restructured various pieces to peak out at 300 page i/os/sec. this allowed it to achieve something like 80 concurrent users, with mixed-mode workload with subsecond interactive response.
some of the people at grenoble science center had taken system with
lots of my pathlength changes in it and implemented a "traditional"
working set dispatcher. with something like 50 percent more non-fixed
real storage (154 available 4k pages vis-a-vis 104 available 4k pages)
they got about the same thruput and interactive response with 35 users
as I was getting with 75-80 users:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
lots of repeat of the 3081k discussion:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2003.html#21 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: inter-block gaps on DASD tracks Newsgroups: bit.listserv.ibm-main Date: Mon, 07 Apr 2003 22:22:27 GMTpa3efu@YAHOO.COM (Jan Jaeger) writes:
vtoc & pds multi-track searches still had to be done. the interesting thing was that they were so horrendous ... that few people really realized it unless running in an environment where they nominally didn't occur.
i've related the tale at san jose research where mvs was on a 168 and vm was on a 158. although the dasd farm was fully interconnected ... there was a strict edict that no MVS (3330) pack could ever be mounted on a VM drive. The few times that it accidentally happened ... the operators would immediately get irate phone calls from the cms users about what had happened to make cms interactive performance go into the crapper (one indication about how inured TSO users are to really terrible performance .... was that you never heard them complaining when there were MVS packs mounted on MVS systems with TSO running ... presumably they just believed that was the normal state of affairs, that you couldn't run TSO w/o MVS ... and you couldn't run MVS w/o MVS packs).
the one incident where the MVS operators refused to react to all the complaints from CMS users about the MVS mis-mounted pack .... we brought up a VS1 (heavily optimized with VM handshaking) and put one of its packs on an MVS string ... and started doing some multi-track searches. Even VS1 on an extremely heavily loaded VM/158 system .... could cause enuf pain to the MVS/168 system that the MVS operators reconsidered their decision about not moving the MVS pack.
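rough arithmetic on why multi-track search hurts so much, using the usual 3330 figures (3600 rpm, 19 tracks per cylinder): a SEARCH keeps the drive, controller, and channel busy for up to a full revolution per track searched, so a worst-case full-cylinder search ties everything up for roughly a third of a second.

```python
# Worst-case busy time for a full-cylinder multi-track search on 3330.
rev_ms = 60_000 / 3600        # 3600 rpm -> ~16.67 ms per revolution
tracks_per_cyl = 19
worst_case_ms = tracks_per_cyl * rev_ms
assert 310 < worst_case_ms < 320   # ~317 ms with channel+controller busy
```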
past telling of the multi-track search tales:
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#19 OT?
https://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#60 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002d.html#22 DASD response times
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ECPS:VM DISPx instructions.. Newsgroups: alt.folklore.computers Date: Tue, 08 Apr 2003 16:11:44 GMTararghNOSPAM writes:
the ECPS instructions having never even been in the principles of op,
aka, esa/390 POP:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CONTENTS?SHELF=
or Z/ POP:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/DZ9ZR000/CONTENTS?DT=20010102160855&SHELF=
the architecture redbook for 360&370 was a cms script file ... which could either print the whole thing (and it would be the architecture redbook) or, with a conditional set, just print the subset that was the 360 (& then 370) principles of operation (w/o all the architecture notes, engineering issues, justifications, trade-offs, unannounced instructions, etc).
redbook comes from the fact that it was distributed in dark red three
ring binder ... no relationship to redbooks for customers:
http://www.redbooks.ibm.com/
random past refs to architecture red-book:
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/2000.html#2 Computer of the century
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2001b.html#55 IBM 705 computer manual
https://www.garlic.com/~lynn/2001m.html#39 serialization from the 370 architecture "red-book"
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#59 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2003f.html#44 unix
misc. past ecps post/refs:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/2000.html#12 I'm overwhelmed
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002i.html#80 HONE
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#62 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#7 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#16 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#61 MIDAS
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 08 Apr 2003 16:42:07 GMT"Don Chiasson" writes:
the issue with clock ... was that it was doubly, naturally,
dynamically adaptive .... the interval between resets was related to
how fast the hand cycled all pages; how fast it cycled was related to
how fast pages were being used and the demand for pages. It cycled
faster/slower based on demand for pages. It also cycled faster/slower
based on how many pages were being used in a cycle. Therefore it was
naturally adjusting feedback within a broad range of normal operating
environments. The two extremes .... demand for pages is so low that
all bits are set ... and page thrashing needed additional help.
https://www.garlic.com/~lynn/subtopic.html#wsclock
As an aside, standard clock (&LRU) pathologically degenerates to FIFO. My sleight-of-hand two-bit ... looked exactly like a normal clock algorithm (the instructions looked exactly the same and the pathlength effectively was exactly the same) ... but had the interesting side-effect that it degenerated to random rather than FIFO (which a straight LRU does) ... w/o actually having any explicit pathlength or instructions that invoked any randomization.
in the 71/72 time-frame when CSC was capturing page traces ... and for something like vs/repack ... full instruction traces .... and feeding it into replacement algorithm simulator .... the clock-hack that degenerated to random .... would always outperform straight clock, and with slight tweaks ... also, always outperform simulated, true LRU.
This is somewhat the claim from my youth where I liked to do extreme pathlength optimizations .... and the most extreme was to have something done in zero instructions .... typically some peculiar side-effect. It had downside that it was frequently totally opaque to anybody else that might be still maintaining the code 10-15 years later.
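for reference, the basic clock algorithm being discussed can be sketched like this (python, illustrative): the hand sweeps page frames, clearing reference bits; a frame found with its bit already clear is the victim. The self-adjusting behavior falls out: heavy demand sweeps the hand faster, light demand lets bits accumulate. (This is plain clock, not the two-bit variant that degenerates to random.)

```python
# Minimal second-chance / clock page replacement.
class Clock:
    def __init__(self, nframes):
        self.ref = [False] * nframes
        self.hand = 0

    def touch(self, frame):              # hardware sets bit on use
        self.ref[frame] = True

    def select_victim(self):
        while True:
            if self.ref[self.hand]:
                self.ref[self.hand] = False          # second chance
                self.hand = (self.hand + 1) % len(self.ref)
            else:
                victim = self.hand                   # bit clear: evict
                self.hand = (self.hand + 1) % len(self.ref)
                return victim

c = Clock(4)
for f in (0, 1, 3):
    c.touch(f)
assert c.select_victim() == 2      # only frame 2 was unreferenced
```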
as an aside ... some number of the vm/370 group migrated to dec/vax development ... after the burlington mall site was shutdown (as opposed to moving to POK) ... but that didn't happen until late '76.
some past discussions of making LRU clock degenerate to random rather
than fifo:
https://www.garlic.com/~lynn/2000f.html#9 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#32 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001f.html#55 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002j.html#31 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#32 Latency benchmark (was HP Itanium2 benchmarks)
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#10 lru, clock, random & dynamic adaptive
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
vs/repack topic drift:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
and a whole lot of drift with burlington mall:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/98.html#7 DOS is Stolen!
https://www.garlic.com/~lynn/99.html#179 S/360 history
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ECPS:VM DISPx instructions.. Newsgroups: alt.folklore.computers Date: Tue, 08 Apr 2003 17:03:36 GMT
"Glen Herrmannsfeldt" writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Alpha performance, why? Newsgroups: comp.arch,alt.folklore.computers Date: Tue, 08 Apr 2003 19:44:50 GMT
"Don Chiasson" writes:
In some sense, within a broad operating range, it was a naturally self-correcting system. If page faults were happening too fast and the hand started to sweep too fast, then the interval between sweeps would decrease, pages would have less chance of being referenced, the hand would find a higher percentage of pages to replace, and it would therefore slow down.
page-thrashing was a characteristic of spending the majority of the time waiting for page i/os to complete ... not directly a characteristic of the hand sweep interval. if there were infinitely fast page i/o operations and a zero-overhead page replacement infrastructure ... then there might not be any system page thrashing .... regardless of the interval of the hand sweep.
as an aside .... these clock algorithms were global LRU replacement ... i.e. swept all real pages.
the traditional working set paper, first published at about the same time I was doing the original clock work .... used local LRU and a fixed wall-clock timer ... and was not a natively, dynamically self-correcting system.
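the thrashing criterion above (the majority of time spent waiting on page i/o) can be put as simple arithmetic; every number here is made up for illustration, not a measurement from any real system:

```python
# Thrashing as described above: the system spends the majority of
# its time waiting for page I/O. Numbers are invented for
# illustration, not measurements from any real system.

def page_wait_fraction(fault_rate_per_sec, page_io_ms, overlap=0.0):
    """Fraction of wall-clock time lost to page waits, assuming
    faults are serialized except for `overlap` (0..1) hidden by
    running other work; capped at 1.0 (saturation)."""
    busy = fault_rate_per_sec * (page_io_ms / 1000.0) * (1.0 - overlap)
    return min(busy, 1.0)

# 100 faults/sec against 25ms page i/o with no overlap:
# 100 * 0.025 = 2.5 -> capped at 1.0, i.e. hopelessly thrashing,
# regardless of what the replacement hand is doing
```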
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: ECPS:VM DISPx instructions.. Newsgroups: alt.folklore.computers Date: Tue, 08 Apr 2003 19:31:07 GMT
"Glen Herrmannsfeldt" writes:
remember the low-end machines were vertical m'code and were doing
something like ten native micro-engine instructions per 370
instruction (something akin to the current generation of 370
simulators running on intel platforms). the high-end machines were
horizontal m'coded machines ... and were typically measured in avg.
machine cycles per 370 instruction; ... aka the 370/165 averaged 2.1
machine cycles per 370 instruction, the 370/168 got it down to 1.6
machine cycles per 370 instruction ... and the 3033 (which started out
simply as the 168 wiring diagram remapped to faster chip technology)
got it down to 1 machine cycle per 370 instruction.
https://www.garlic.com/~lynn/submain.html#mcode
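for a rough sense of what those cycles-per-instruction figures mean: instruction rate is just 1/(CPI x cycle time). the 80ns cycle time below is an assumed round number for illustration, not a quoted spec for any of these machines; the relative speedups depend only on the CPI figures from the text:

```python
# rate = 1 / (cycles_per_instruction * cycle_time)
# CPI figures (2.1, 1.6, 1.0) are from the text; the 80ns cycle
# time is an assumed round number for illustration, not a quoted
# machine spec.

def mips(cpi, cycle_ns):
    """Million instructions per second for a given CPI and cycle time."""
    return 1000.0 / (cpi * cycle_ns)

# at the same (assumed) cycle time, CPI going 2.1 -> 1.6 -> 1.0
# is roughly a 1x, 1.3x, 2.1x relative speedup from the
# microcode/CPI improvement alone
base = mips(2.1, 80)
ratio_168 = mips(1.6, 80) / base   # ~1.3
ratio_3033 = mips(1.0, 80) / base  # ~2.1
```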
the fort knox stuff was targeted at replacing the low- & mid-range
microengines (370, controllers, rochester products, etc) with 801.
part of what aborted fort knox was that the mid-range was starting
to implement 370 directly in silicon; the 4341 follow-on (4381) had
just about all of the 370 instructions directly in silicon.
https://www.garlic.com/~lynn/subtopic.html#801
... digression warning ... for VM (the virtual machine simulator) there were two types of "overhead":
1) traditional kernel overhead for managing machine resources, and 2) simulation of privileged instructions whose virtual machine definition differed slightly from their real machine definition.
Most of ECPS was of type #1 ... although there were also some 370 privileged instructions that were modified to be virtual machine sensitive ... i.e. the native hardware implementation of the 370 instruction behaved differently in real machine mode than in virtual machine mode. In any case, as later generations of mainframe hardware had more instructions implemented directly in hardware, there was much less benefit to moving 370 instructions into microcode (since there was little or no difference in execution time).
The first instance of type #2 was implemented on the 370/158 as microcode changes for certain privileged instructions (and predates ECPS). Basically VM would load a control block pointer into CR6. If real CR6 was zero, the hardware microcode would execute real machine operations as defined by the POP. If real CR6 contained a VM control block pointer, the microcode would implement the operations as per the virtual machine restrictions. This continued to be of significant benefit, even on later generations of machines .... since it was a slight bump on the native m'code implementation (compared to interrupting into the kernel, saving state, simulating the instruction from scratch, and restoring state).
The ultimate version of this was the SIE instruction ... which sort of swapped the situation ... rather than specific privileged instructions checking to see whether real CR6 had a value in it, the SIE instruction basically put the real hardware into virtual machine mode (with a pointer to a virtual machine control block containing all the information necessary to operate in virtual machine mode). Basically, SIE was a hardware-architected feature, somewhat analogous to the way virtual memory and virtual memory control blocks are hardware-architected features. Furthermore, in theory, SIE can be used by any operating system (in the same way that different operating systems can utilize the same virtual memory hardware), whereas the easy-six ECPS instructions were very kernel specific.
This was further extended with LPARs (logical partitions) .... which effectively put a restrictive subset of the vm kernel function into the native microcode of the machine ... so that normal machine operation was pseudo virtual machine mode (or logical partitions).
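the CR6 convention amounts to a mode test at the top of each affected instruction's microcode. a rough sketch of that control flow (structure and names invented for illustration; this is not actual 158 microcode):

```python
# Sketch of the CR6 convention described above: the microcode for
# a VM-sensitive privileged instruction checks real CR6 and either
# performs the real-machine operation (per the POP) or applies the
# virtual-machine rules from the control block CR6 points at.
# Structure and names are invented for illustration only.

def priv_instruction(cr6, real_op, virtual_op):
    if cr6 == 0:
        return real_op()        # real machine semantics, per POP
    return virtual_op(cr6)      # VM-sensitive path: a small bump on
                                # the native microcode, instead of
                                # trapping into the kernel, saving
                                # state, simulating, and restoring

# usage: cr6 == 0 takes the real path; a nonzero cr6 (pointer to a
# hypothetical VM control block) takes the virtual-machine path
```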
... warning about different digression ...
the follow-on to xt/at/370 (vm/pc) was a74. and for some reason, i
did find a set of my updates for a74 ... including misc. changes to
my page-mapped filesystem/mmap api for running on a74 ... from some
unknown time warp, long ago and far away:
 405 dmkcfi.updta74    405 dmkdsd.updta74    405 dmkium.updta74
1134 dmkmov.updta74    405 dmkser.updta74   2025 dmkpam.updta74
1377 dmkmch.updta74   3564 dmkcpi.updta74   1215 dmkpgv.updta74
 162 dmkpgr.updta74   4374 dmkpgt.updtdbg  4455 dmkpgt.savea74
past pc/xt/at/370, washington, & a74 posts.
past lpar, sie postings:
https://www.garlic.com/~lynn/94.html#37 SIE instruction (S/390)
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002p.html#55 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#7 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#41 How much overhead is "running another MVS LPAR" ?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm