From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Nov 2004 09:18:52 -0700
bblack@ibm-main.lst (Bruce Black) writes:
this was a trade-off between a one-level index (pds directory or vtoc) being constantly scanned by the I/O subsystem and keeping the index in memory. With the real storage constraints of the mid-60s, the constant I/O scanning appeared to be a reasonable IO-resource/memory-resource trade-off. By the mid to late 70s ... the situation had exactly reversed.
It wasn't just that ISAM CKD channel programs were long and complicated ... they were effectively implementing multi-level index searches with expensive i/o scanning ... where earlier reads in the channel program provided the seek&search CCHHR arguments for subsequent CCWs.
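for a rough idea of what that i/o scanning cost, a minimal python sketch ... the rpm and tracks/cylinder here are assumed 3330-class numbers, purely illustrative (not from the post):

    # cost of a CKD multi-track search: keys are examined on the fly, so
    # every track scanned costs about one full disk revolution, and the
    # channel, controller and device are all busy the whole time
    RPM = 3600                        # assumed rotation speed
    MS_PER_REV = 60_000 / RPM         # ~16.7 ms per revolution
    TRACKS_PER_CYL = 19               # assumed data tracks per cylinder

    def search_ms(tracks_scanned):
        return tracks_scanned * MS_PER_REV

    # worst case: scan a full cylinder of a one-level index/directory
    print("full-cylinder search: %.0f ms" % search_ms(TRACKS_PER_CYL))  # ~317 ms
    # versus an in-memory index: microseconds of cpu, no i/o resources held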
recent post about having to spend a week (back in 1970) at a customer
site working on an ISAM issue
https://www.garlic.com/~lynn/2004m.html#16 computer industry scenarios before the invention of the PC?
misc. past ckd posts
https://www.garlic.com/~lynn/submain.html#dasd
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch
Date: Tue, 02 Nov 2004 10:12:19 -0700
Benny Amorsen <benny+usenet@amorsen.dk> writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 13:48:34 -0700
"mike" writes:
an issue is that totally abstracting away some concepts like locality of reference (as was done in tss/360), while resulting in some productivity increases for the casual programmer ... could lead to enormous thruput difficulties for production systems. GM claimed enormous productivity increases for their programmers developing 32bit address mode applications on tss/360 for the 360/67 ... but there wasn't much mention of things like system thruput.
the issue is trying to abstract concepts for programming productivity while at the same time not totally sacrificing system operational efficiency.
a large number of cp67/cms and vm370 installations were mixed mode
environments running significant batch operations concurrently with
loads of interactive activity. i originally did the fairshare resource
manager as an undergraduate (actually a generalized policy resource
manager with fairshare as the default)
https://www.garlic.com/~lynn/subtopic.html#fairshare
one of the issues in the background was scheduling to the bottleneck
... aka attempting to identify resources that represented significant
thruput bottlenecks and dynamically adapting strategies to deal with
those bottlenecks. the remapping of the cms filesystem into a memory
mapping paradigm ...
https://www.garlic.com/~lynn/submain.html#mmap
was not only dynamically recognizing large logical requests ... but also being able to contiguously allocate on disk ... and bring in large contiguous sections (as appropriate) if there was sufficient real memory for the operation. in situations where real memory was more constrained, either because there wasn't a lot ... or there was a large amount of contention for real memory ... the service requests were dynamically adapted to the operational environment.
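a minimal sketch of that kind of dynamic adaptation ... the policy and thresholds here are invented for illustration, not the vm370 code:

    def transfer_pages(requested, free_frames, contention):
        # contention: assumed 0..1 measure of recent page-replacement activity
        if contention > 0.9 or free_frames <= requested:
            return 1              # constrained: fall back to a page at a time
        return min(requested, max(1, free_frames // 2))   # plenty of memory: go big

    print(transfer_pages(16, 200, 0.1))    # 16 ... bring in the whole contiguous span
    print(transfer_pages(16, 10, 0.95))    # 1 ... memory constrained, single page i/o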
note that for quite a long time ... a large amount of the work that
went on in rochester actually was done on vm370 systems ... for minor
reference, this internal network update from 1983 lists several
rochester vm systems
https://www.garlic.com/~lynn/internet.htm#22
for total topic drift circa 1990 ... there was significant contention
between rochester and austin with regard to 64bit chip design ...
rochester kept insisting on having 65bit rather than 64bit.
https://www.garlic.com/~lynn/subtopic.html#801
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 14:07:23 -0700
part of the issue was that during FS, i frequently commented that i thot what i had deployed in production systems was more advanced than some of the stuff that was being specified in FS; there was this cult film that had been playing for a long time down in central sq ... and i sometimes drew analogies between what was going on in FS and the inmates being in charge of the institution. after FS was killed, it was some number of these FS'ers that went off to rochester to do the s/38 ... random future system references:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 15:54:07 -0700
"mike" writes:
at the time, i was asked to write a section for the corporate
continuous availability strategy document .... however, both pok and
rochester non-concurred
https://www.garlic.com/~lynn/submain.html#available
however, file i/o can benefit a lot from both contiguous allocation and large block transfers (regardless of the file mapping).
note however (i've been told that) the 400 seems to have fairly heavy-weight file open/close overhead.
we were doing this thing called electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
and got involved a couple of years ago looking at doing a fairly large deployment of servers for financial transactions. there was a 400 review with a consulting house that specializes in cold fusion on 400s. they claimed that doing mandatory access control on file opens/closes increased overhead by something like ten times and would make it impractical in a 400 environment.
i had shown contiguous allocation and block transfers when mapping
the cms filesystem to a page mapped infrastructure in the '70s. lots
of the *ixes frequently default to a relatively scattered allocation
strategy (basic cms filesystem allocation tended to be much more like
the *ixes with scattered allocation). even tho cms (& cp) used ckd
disks ... their original strategy from the mid-60s was to treat ckd
disks as logical FBA having fixed records ... and tended toward using
a scatter record allocation strategy. When i did the remap to a page
mapped infrastructure ... i also put in contiguous allocation support
and (large) block transfer support. note the page mapped stuff never
shipped to customers as part of standard product ... although it was
used extensively at internal sites like hone
https://www.garlic.com/~lynn/subtopic.html#hone
there has been a recent thread running in another n.g. on ckd, fba,
etc.
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
lots of other ckd posts
https://www.garlic.com/~lynn/submain.html#dasd
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 19:13:38 -0700
"mike" writes:
with respect to watching files for re-org, ... i think even windows/xp now has something similar. however, bunches of this stuff is orthogonal to whether there is a one level store paradigm or not ... just simple operational stuff that has been done (in some cases for decades) for operational systems.
the cold fusion/financial example was specifically a web server scenario where huge numbers of file open/closes were happening. i did get the impression that some amount of the file open/close overhead ... was in fact related to the way the 400 handled one level store objects. the issue (compared to some locked-down unix web servers) was that you couldn't be running a cics-like operation with light-weight threads (frequently in the same address space) with startup pre-opened files (long ago and far away, when i was an undergraduate, the university got to be a beta-test site for the original cics on a project for library card catalog automation with a grant from onr ... i got the privilege of shooting several of the cics bugs).
the mention about continuous availability strategy document wasn't so
much about high availability ... although we started out calling the
project we were doing ha/cmp ... but along the way we coined the terms
disaster survivability and geographic survivability .... in fact
along the lines of what some large financial transaction systems have
done with IMS hot-standby (geographically triple redundant). misc. stuff
from the past ... semi-related to IMS hot-standby
https://www.garlic.com/~lynn/submain.html#shareddata
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Nov 2004 21:43:30 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 09:00:30 -0700
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> writes:
two issues for the windowing paradigm associated with the page-mapped
filesystem ... were that it gave the system efficient hints for both
1) loading the new stuff and 2) flushing the stuff no longer being
used (even for stuff that isn't being sequentially accessed ... where
nominal read-ahead strategies aren't triggered). this is somewhat
analogous to some of the cache preload instructions that give hints
about storage to be used:
https://www.garlic.com/~lynn/submain.html#mmap
of course this would be enhanced by something like vs/repack
... attempting to group program & data being used together into a more
condensed memory collection. recent post referencing vs/repack from
the 70s:
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
basically vs/repack and disk re-org technologies are very similar ... analysis of access patterns and attempting to re-org to maximize system thruput ... where vs/repack applied to things like working sets (the granularity of the pattern analysis used by vs/repack was 32 bytes, which would also make it applicable to cache lines).
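a minimal python sketch of the general idea ... the greedy pairing policy here is illustrative, not the actual vs/repack algorithm:

    from collections import Counter

    def repack(trace, items_per_page):
        # count how often two items are referenced back-to-back
        affinity = Counter()
        for a, b in zip(trace, trace[1:]):
            if a != b:
                affinity[frozenset((a, b))] += 1
        # greedily co-locate the highest-affinity items on the same page
        pages, placed = [], set()
        for pair, _ in affinity.most_common():
            for item in pair:
                if item in placed:
                    continue
                if not pages or len(pages[-1]) == items_per_page:
                    pages.append([])
                pages[-1].append(item)
                placed.add(item)
        return pages

    print(repack("ababcdcdxy", 2))   # a&b land together, as do c&d and x&y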
at sjr in 1979, i did a special modification to the vm/370 kernel to capture the record addresses of all disk accesses (fba and ckd; it was extended in the fall of 1980 to support extended-ckd, aka Calypso). this was deployed on internal installations in the san jose area capturing typical cms usage ... as well as a lot of mvs usage patterns (run under vm) from operations at stl.
the trace information was originally used in a detailed file cache model
... which investigated various combinations of disk drive, disk
controller, channel, groups of channels (like 303x channel director)
and system cache strategies. one of the results of the cache study was
that for any total amount of cache storage ... the most efficient use
of that cache storage was a system-level cache ... aka 20mbytes of
system-level cache was more efficient than 1mbyte caches on 20
different disks. this corresponds with the theory of global LRU
replacement algorithms ... work that i did in the 60s when i was an
undergraduate
https://www.garlic.com/~lynn/subtopic.html#wsclock
the published literature at the time (in the 60s) was very oriented towards local LRU replacement strategies ... and I showed that global LRU replacement strategies always outperformed local LRU (except in a few, fringe, pathological cases). some ten years later, this work became involved in somebody's phd at stanford.
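a toy simulation sketch of that global-vs-local comparison ... the traces and frame counts are invented for illustration:

    from collections import OrderedDict

    def lru_faults(trace, frames):
        cache, faults = OrderedDict(), 0
        for page in trace:
            if page in cache:
                cache.move_to_end(page)          # most recently used
            else:
                faults += 1
                if len(cache) == frames:
                    cache.popitem(last=False)    # evict least recently used
                cache[page] = True
        return faults

    # task 1 cycles over 6 pages, task 2 re-touches just 2 pages
    t1 = ["a%d" % (i % 6) for i in range(60)]
    t2 = ["b%d" % (i % 2) for i in range(60)]
    interleaved = [p for pair in zip(t1, t2) for p in pair]

    print(lru_faults(interleaved, 8))            # global: 8 shared frames -> 8 faults
    print(lru_faults(t1, 4) + lru_faults(t2, 4)) # local: 4+4 frames -> 62 faults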
The next stage of using the disk record level trace information was showing how it could be used for arm load balancing as well as clustering of information that was frequently used together. There was an issue regarding the volume of such trace data for standard production systems, ... but a methodology was developed for being able to reduce the information in real time.
Note that at the time ... the standard vm system had a "MONITOR" facility that would record all sorts of performance & thruput related data ... but it went to disk in raw format. What was developed was a much more efficient implementation that was targeted at being of low enuf overhead that it could potentially always be active in all standard production systems ... and be able to support file load balancing and file clustering as part of normal system operation.
misc. other posts about activities done in conjunction with disk
engineering and product test labs
https://www.garlic.com/~lynn/subtopic.html#disk
as distinct from posts about ckd disk
https://www.garlic.com/~lynn/submain.html#dasd
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 11:19:05 -0700
oh, and a couple of random past posts about the disk activity analysis, file/disk cache modeling work, etc. in 79-80:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 17:55:42 -0700
"David Wade" writes:
the original cms (cambridge monitor system) under cp/67 ran as if it was on the real machine ... and in fact, it could run on a real 360.
i was doing a bunch of cp/67 pathlength optimizations as an undergraduate ... as well as developing fair share scheduling, global lru page replacement, etc. for some of the stuff involved in running os/360 under cp/67 ... i reduced some of the cp/67 pathlengths by up to two orders of magnitude (for running os/360 in a cp/67 virtual machine) ... compared to the original version that had been installed at the university.
one of the issues then became cp/67 support of cms operations. since each cms (virtual machine) was pretty much single thread ... the faithful SIO/LPSW-wait/interrupt/resume virtual machine simulation was pretty superfluous. I originally created a special disk I/O operation (cms disk operations were extremely uniform, so their translation could be special cased/fastpathed) that was treated as a CC=1, csw-stored operation ... aka by the time control returned to the cms virtual machine after the SIO instruction, the operation had been totally completed. Along with some other operations ... this cut the typical cp67 pathlength supporting cms virtual machines by 2/3rds or more.
The people controlling cp/67 architecture claimed that this exposed a violation of the 360 principles of operation ... since there were no defined disk i/o transfer commands that resulted in cc=1/csw-stored. However, they observed that the 360 principles of operation defines the hardware *diagnose* instruction as model specific implementation. They took this as an opportunity to create the abstraction of a cp67 virtual machine 360 model machine ... which defined some number of *diagnose* instruction operations specific to operations in a cp67 virtual machine. the cc=1/csw-stored simulation for sio was redone as a variation of a special *diagnose* instruction.
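a minimal sketch of the two semantics ... illustrative python, obviously not cp/67 kernel code:

    class Op:
        def execute(self):
            pass                   # stand-in for the translated channel program

    def sio_faithful(pending, op):
        # faithful simulation: start the i/o and return cc=0; the guest LPSWs
        # into a wait state and is re-dispatched when the i/o interrupt arrives
        pending.append(op)
        return 0

    def diagnose_fastpath(op):
        # fastpath: perform the entire disk operation before returning; to the
        # guest it looks like cc=1 with the csw already stored (i.e. complete)
        op.execute()
        return 1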
when i was redoing the page replacement algorithm, the disk device driver (including adding ordered seek queueing to what had been FIFO, and chained requests for the 2301 fixed-head drum), interrupt handler, task switching, dispatching, etc .... i got the total round-trip pathlength for taking a page fault, executing the page replacement algorithm, scheduling the page read (and write if necessary), starting the i/o, doing a task switch, handling the subsequent page i/o interrupt, and the task switch back to the original task ... down to a total of nearly 500 instructions for everything. this is compared to a typical bare minimum of 50k instructions pathlength (and frequently significantly more) for most any other operating system. note that later vm/370 versions possibly bloated this by a factor of five to six (maybe 3k instructions).
the ordered seek queueing allowed the 2314 to peak out at over 30 i/o requests per second ... rather than the 20 or so with fifo queueing. the 2301 fixed-head drum originally had single-request page transfers and would max out at about 80 transfers per second .... with chained requests ... and any sort of queue ... it could easily hit 150 transfers per second and peak at nearly 300/second under worst case scenarios.
an old posting on some of this
https://www.garlic.com/~lynn/93.html#31 Big I/O or kicking the Mainframe out the Door
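a minimal sketch of the ordered seek queueing mentioned above ... an elevator-style sweep by cylinder; the data structure and policy details are illustrative, not the cp/67 implementation:

    import bisect

    class SeekQueue:
        def __init__(self):
            self.cyls = []               # pending requests, kept sorted by cylinder
            self.arm = 0                 # current arm position
            self.direction = 1           # +1 sweeping up, -1 sweeping down

        def add(self, cyl):
            bisect.insort(self.cyls, cyl)

        def next(self):
            if not self.cyls:
                return None
            if self.direction == 1:
                i = bisect.bisect_left(self.cyls, self.arm)
                if i == len(self.cyls):            # nothing above: reverse sweep
                    self.direction = -1
                    return self.next()
            else:
                i = bisect.bisect_right(self.cyls, self.arm) - 1
                if i < 0:                          # nothing below: reverse sweep
                    self.direction = 1
                    return self.next()
            self.arm = self.cyls.pop(i)
            return self.arm

    q = SeekQueue()
    for c in (87, 3, 120, 40, 5):
        q.add(c)
    print([q.next() for _ in range(5)])   # [3, 5, 40, 87, 120] ... one sweep, not fifo order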
for a little drift ... appropriately configured vs/1 operating systems with handshaking would run faster in a vm/370 virtual machine than on the bare hardware w/o vm/370. part of this was because the handshaking interface allowed responsibility for all vs/1 page i/o operations to be handled by vm/370 (rather than by vs/1 native code which had a significantly longer pathlength).
the original cp/67 could boot and do useful work in 256kbyte real 360/67.
the machine at the university was a 768kbyte real 360/67 ... but there was an issue of the fixed kernel starting to grow over 80kbytes (and this was all code in the cp/67 kernel, including all the console functions and operator commands). if you added the fixed data storage for each virtual machine, you could easily double the fixed storage requirements. various development was then starting to add features/functions & commands, so the fixed kernel requirements were growing. to address some of this at the university, i did the original support for "paging" selective portions of the cp/67 kernel ... to help cut down on the fixed kernel requirements. This pageable kernel was never released to customers (although lots of the other stuff i had done as an undergraduate was regularly integrated into the standard cp/67 release). The pageable kernel stuff did finally ship with vm/370.
later, near the tail-end of cp/67 cycle ... about when vm/370 was
ready to come out ... I did the translation of the cms filesystem
support to a new feature in cp/67 supporting paged mapped disk
operations ... along with a whole bunch of stuff extending the concept
of shared segments.
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon
I finally got around to porting all of this to early vm/370 release 2
and making it available to a large number of internal installations ...
for instance it was used extensively at hone for a number of things
https://www.garlic.com/~lynn/subtopic.html#hone
a small subset of the shared segment stuff was incorporated into vm/370 release 3 and released as something called discontiguous shared segments. one of the reasons that the discontiguous shared segment stuff is such a simple and small subset of the full stuff ... was that the original code heavily leveraged the cms page mapped filesystem work as part of the shared segment implementation.
the page mapped version of the cms filesystem was never shipped in a standard vm/cms product (except for a custom version for the xt/at/370). and since the page mapped cms filesystem stuff didn't ship ... none of the fancy bells & whistles that were part of the original shared segment support shipped either.
while the *diagnose* i/o significantly cut the pathlength for supporting cms disk i/o ... it retained the traditional 360 i/o semantics ... which, when mapped to a virtual address space environment ... required all the virtual pages in the operation to be fixed/pinned in real storage before the operation was initiated ... and then subsequently released ... which still represents some amount of pathlength overhead.
going to paged mapped semantics for cms filesystem in virtual address space environment allows a much more efficient implementation (in part because the paradigms are synergistic as opposed to being in conflict).
a few recent descriptions of the overhead involved in mapping
the 360 channel i/o semantics into a virtual memory environment
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#0 IBM 360 memory
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004m.html#16 computer industry scenario before the invention of the PC?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
random past posts mentioning pageable kernel work:
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004b.html#26 determining memory size
https://www.garlic.com/~lynn/2004f.html#46 Finites State Machine (OT?)
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 06:27:37 -0700
jmfbahciv writes:
one of the jokes leading up to releasing the resource manager ... was that an enormous amount of work went into dynamic adaptive system operation ... and somebody in the corporation ruled that the "state of the art" for resource managers was lots of performance tuning knobs for use by the system tuning witch doctors ... and that the resource manager couldn't be released w/o performance tuning knobs.
So i added some number of resource tuning knobs, documented the formulas for how the tuning knobs worked, and gave classes on them ... and all the source was shipped with the product. With all of that, as far as I know, nobody caught the joke. The issue was that the base implementation operated with a lot of dynamic feedback as part of its dynamic adaptive operation. The hint is from traditional operations research .... compare the degrees of freedom given the base implementation with the degrees of freedom given the add-on performance tuning knobs (and whether the base implementation could dynamically compensate for any change in any tuning knob).
part of the driving factor (leading up to the joke) was that the big batch TLA system had hundreds of parameters, and there were tons of studies reported at share about effectively random walks thru parameter changes ... attempting to discover some magic combination of tuning parameter changes that represented something meaningful.
part of the issue is that over the course of a day or a week or a month ... there can be a large variation in workload and thruput characteristics ... and the common, "static", tuning parameter methodology (from the period) wasn't adaptable to things like natural workload variation over time. Specific, fixed parameters might be better during some parts of the day and worse at other parts of the day ... and the reverse might be true of other specific, fixed parameters .... so there might be a large number of different combinatorial parameter settings all with similar, avg. overall results.
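a minimal sketch of the dynamic-feedback alternative ... a fairshare-flavored policy where the numbers and names are invented for illustration:

    def dispatch_order(tasks):
        # tasks: name -> (cpu consumed, entitled share); run the task that is
        # furthest below its entitlement first ... the "knob" is the policy
        # (the share), and the feedback loop re-ranks continuously as
        # consumption changes, rather than relying on static tuning values
        return sorted(tasks, key=lambda t: tasks[t][0] / tasks[t][1])

    tasks = {"batch": (40.0, 0.25), "interactive": (5.0, 0.50), "trivial": (1.0, 0.25)}
    print(dispatch_order(tasks))   # ['trivial', 'interactive', 'batch']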
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 08:57:02 -0700
"David Wade" writes:
the basic vm/370 structure had something called DMKSNT that had tables of named systems ... and pointed to reserved page slots on disk. A privileged user could issue a "savesys" command referencing a DMKSNT named system entry. The named system entry specified a set of virtual memory page addresses ... in the savesys command, the current contents of those specified virtual pages (belonging to the entity issuing the command) would be written to the specified reserved disk slots.
the "ipl" command simulated the front panel hardware IPL (initial program load, aka boot) button. a form of the ipl command could specify a named system ... in which case the issuers virtual memory would be cleared and initialized with pointers to the corresponding reserved disk locations. The named system could also specify spans on virtual page addresses (segments) that were to be be read/only shared across all users of that named system.
the problem was that this was primarily useful for sharing (virtual) kernel type software (like much of the cms kernel) but didn't work well with application code ... since the IPL command also completely reset the user's virtual machine.
as part of redoing the cms filesystem to a paged mapped infrastructure, the result effectively subsumed the DMKSNT named system function ... it could be applied to any set of virtual pages ... and could be used by all users (not just the limited set of things in the DMKSNT table under privileged administrative control). The page mapped semantics included the ability to specify segments as read/only shared ... as part of the page mapping operation.
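a minimal model of the contrast ... illustrative python only; the names and structures are invented, not vm/370 interfaces:

    named_systems = {}                     # DMKSNT-style: one privileged, system-wide table

    def savesys(name, pages, privileged):
        if not privileged:
            raise PermissionError("named systems are administratively controlled")
        named_systems[name] = dict(pages)  # snapshot current pages to reserved slots

    def ipl_named(name):
        # disruptive: resets the whole virtual machine, then maps the saved pages
        return {"reset": True, "mapped": named_systems[name]}

    def map_from_filesystem(fs, user, fname):
        # page-mapped filesystem: any file image is mappable, non-disruptively,
        # and access control is just ordinary file access control
        if user not in fs[fname]["readers"]:
            raise PermissionError("no access to file")
        return {"reset": False, "mapped": fs[fname]["pages"]}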
one of the most extensive early adopters of this was HONE
https://www.garlic.com/~lynn/subtopic.html#hone
their environment was primarily an extremely large APL application that provided a very constrained environment for the sales and marketing people world-wide. It provided almost all of the interactive environment characteristics ... and many users weren't even aware that CMS (and/or vm/370) existed.
The issue was that they wanted 1) a "cms" named system with shared segments, 2) a custom "apl" application named system ... a custom APL interpreter with much of the APL code for the customized environment integrated into the interpreter ... and almost all of it defined as shared segments, 3) over time it was realized that some number of the applications that had been written in APL ... could benefit from 10:1 to 100:1 performance improvements if they were recoded in fortran, and 4) they then needed to be able to gracefully transition back and forth between the APL environment and the Fortran environment ... completely transparent to the end-user.
In the IPL paradigm supporting shared-segments ... there was no way of gracefully transitioning back & forth between the apl environment (with lots of shared segments) and the fortran environment.
Also, because of the ease with which shared-segments could be defined in the page-mapped filesystem environment ... it was easy to adopt a number of additional cms facilities to the read-only shared segment paradigm.
For vm/370 release 3, it was decided to release the enhanced, non-disruptive (non-IPL command) implementation of mapping memory sections. A subset of the CP code was picked up (but w/o the cms filesystem page mapped capability), and the non-disruptive mapping had to be kludged into the DMKSNT named system paradigm. Some number of additional "name tables" were added to DMKSNT ... and a LOADSYS function was introduced ... which performed the memory mapping function of the IPL command w/o the disruptive reset of the virtual machine. Some amount of the CMS application code that had been reworked for the read-only, shared-segment environment was also picked up (and mapped into the DMKSNT named system paradigm). This extremely small subset function was released as Discontiguous Shared Segments in vm/370 release 3.
there were still all the restrictions of having a single set of system-wide named systems that required special administrative privileges to manage ... and it only applied to introducing the mapping of new (and for shared segments, only read-only) pages into the virtual address space.
in the original filesystem paged mapped implementation ... any image in the filesystem was available for mapping into the virtual address space ... and the access semantics were provided by the filesystem infrastructure (aka trivial things like if you didn't have access to the specific filesystem components ... then you obviously couldn't map them into your address space).
misc. past posts on the subject:
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: some recent archeological threads in ibm-main, comp.arch, & alt.folklore.computers ... fyi
Newsgroups: bit.listserv.vmesa-l
Date: Thu, 04 Nov 2004 09:09:23 -0700
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 17:43:14 -0700
"David Wade" writes:
the ratio of cp/67 kernel "overhead" execution to virtual machine "problem state" execution tended to be associated with the ratio of supervisor state instructions to problem state instructions in the code running in the virtual machine. for whatever reason, the ratio of supervisor state instructions to problem state instructions significantly increased in the transition from os/360 MFT to os/360 MVT.
The transition from os/360 MVT to VS2/SVS was even more dramatic, in part because the cp kernel was now faced with emulating the hardware TLB with a lot of software (since the SVS kernel was now using virtual address space architecture ... while MVT had been running as if it was in a real address space).
the 158 & 168 first got (limited) microcode virtual machine "assists" (vma); the vm/370 kernel would set a (privilege) control register ... and when various privilege instructions were encountered by the "hardware", ... rather than interrupting into the cp kernel, the native machine microcode would execute the instruction according to virtual machine rules (rather than "real machine" rules).
The amount and completeness of the virtual machine assists increased over time ... until you come to PR/SM and LPARs ... where a logical partition can be defined and the microcode completely handles everything according to virtual machine rules (w/o having to resort to a cp kernel at all).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 07:05:58 -0700
jmfbahciv writes:
since all the code shipped ... they could actually do anything they wanted to it. there were actually knobs ... but they governed administrative resource policy issues ... as opposed to performance tuning issues. note that even experienced operators weren't able to change performance tuning knobs in real-time as workload and requirements changed over the course of the day.
as always
https://www.garlic.com/~lynn/subtopic.html#fairshare
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 07:51:22 -0700
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
they had been doing all their testing using stand-alone machine time ... that had to be serialized/scheduled among all the testers and different testcells. they had tried running concurrently under MVS but the MTBF for the operating system was on the order of 15 minutes.
the objective was a bullet proof i/o subsystem so that all disk engineering activities could go on simultaneously & concurrently sharing the same machine.
of course it had other side effects ... since the machines under heavy testcell load ran at possibly 1 percent cpu utilization ... which meant that we could siphon off a lot of extraneous & otherwise unaccounted-for cpu. the engineering & product test labs tended to be the 2nd to get the newest processors out of POK (typically something like serial 003, still engineering models ... but the processor engineers had the first two ... and then disk engineering and product test got the next one).
at one point, one of the projects that was needing lots of cpu and having trouble getting it allocated from the normal computing center machines was the air bearing simulation work ... for designing the flying disk (3380) heads. dropped it on a brand new 3033 engineering model in bldg. 15 ... and let it rip for all the time it needed.
a recent posting about dealing with 3880 issue when it was first
deployed in the bldg. 15 ... for standard string of 16 3330 drives as
part of interactive use by the engineers ... fortunately it was still
six months prior to first customer ship ... so there was time to
improve some of the issues:
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware
there was a problem raised from a somewhat unexpected source. there was an internal-only corporate report, and the MVS RAS manager in POK strenuously objected to the mention of the MVS 15 minute MTBF. There was some rumor that the objection was so strenuous that it squelched any possibility of any award for significantly contributing to the productivity of the disk engineering and product test labs.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch
Date: Fri, 05 Nov 2004 07:59:19 -0700
torbenm@diku.dk (Torben Ægidius Mogensen) writes:
more recently there was some issue that as the EU consolidation proceeds, shouldn't various international bodies replace the individual EU member country memberships with a single EU membership.
my first trip to japan was to do the HONE
https://www.garlic.com/~lynn/subtopic.html#hone
installation for IBM Japan in Tokyo in the early 70s. At that time, the yen exchange was greater than 300/dollar. what is it now?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 09:54:43 -0700
oh, and one reason that the disk engineering and product test labs tolerated me ... was that i wasn't in the gpd organizations. i had a day job in research (bldg 28) ... the stuff in bldg. 14&15 was just for the fun of it. something similar for the hone complex further up the peninsula or for stl/bldg-90 or for the vlsi group out in lsg/bldg.29. i would just show up and fix problems and go away. one of the harder problems was keeping track of all the security authorizations for the different data centers. I didn't exist in any of those organizations.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 13:10:29 -0700
"David Wade" writes:
The simple case is to have a single set of shadow tables and every time the virtual machine switches virtual address space, wipe it clean and start all over. This was the original implementation up until release 5/HPO, which would support keeping around a few shadow tables (per virtual machine) ... and hopefully when the virtual machine switched virtual address spaces, it was to one that had been cached.
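a minimal sketch of that release-5/HPO-style shadow-table caching ... the structure and lru policy here are illustrative assumptions, not the actual code:

    from collections import OrderedDict

    class ShadowTables:
        def __init__(self, max_tables=4):    # pre-release-5 behavior: effectively 1
            self.max_tables = max_tables
            self.cache = OrderedDict()       # guest segment-table origin -> shadow table

        def switch(self, guest_sto):
            if guest_sto in self.cache:      # cached: no wipe/rebuild on this switch
                self.cache.move_to_end(guest_sto)
            else:
                if len(self.cache) >= self.max_tables:
                    self.cache.popitem(last=False)   # scavenge least recently used
                self.cache[guest_sto] = {}   # starts empty, refilled via guest page faults
            return self.cache[guest_sto]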
not published, but dual-address space introduced on the 3033 had a
somewhat similar effect on the real hardware TLB. while dual-address
space introduction brought common segment relief (so you didn't
totally run out of room in the address space for application
execution) ... it frequently increased the number of concurrent
distinct address spaces needed past the 3033 TLB limit ... in which
case it needed to scavenge a cached address space and all the
associated TLB entries (which resulted in a measurable performance
decrease compared to non-dual-space operation on the same hardware).
recent posting discussing some dual-address space issues
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
one of the first production uses of virtual machines supporting
virtual address spaces was the 370 hardware emulation project (it was
also something of a stress case for the relatively new multi-level
source management support). the 370 hardware architecture book was
out but there was no actual hardware. cambridge was operating
something of an open time-sharing service on their cp/67 system
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/submain.html#timeshare
with some number of mit, bu, harvard, etc students as well as others.
the issue was that we couldn't just start modifying the production cp/67 system providing 370 virtual machines ... because there was some possibility that some non-employee would trip over the new capabilities.
so the base production update level came to be called the "l" system ... lots of standard operational and performance enhancements to a normal cp/67 system.
a new level of kernel source updates was started ... referred to as the "h" updates; they were the stuff whereby, when the option was specified, the kernel provided a virtual 370 machine (with virtual 370 relocate architecture) ... rather than a 360/67 virtual machine.
an "h" level kernel would be run in a virtual 360/67 machine on the production system (but isolated from all the other normal time-sharing users).
a third level of kernel source updates was then created, the "i" level ... which were the modifications to the kernel to operate on real 370 hardware ... rather than on real 360/67 hardware. the "i" level kernel was in regular operation a year before the first real 370 hardware with virtual memory support existed (in fact it was a 370/145 engineering machine in endicott that used a knife switch as an ipl-button ... and the "i" system kernel was booted to validate the real hardware).
in any case, the operation in cambridge then could be:
real 360/67 running a "l" level cp/67 kernel
virtual 360/67 machine running a "h" level cp/67 kernel
virtual 370 machine running an "i" level cp/67 kernel
virtual 370 machine running cms
this was something of a joint project with engineers in endicott and
one of the first uses of the "internal" network for distributed source
development (link between cambridge and endicott)
https://www.garlic.com/~lynn/subnetwork.html#internalnet
later there was a "q" level source updates (or maybe the project was called "q" and the update level was "g", somewhat lost in the fog of time) ... that included some 195 people. the project was to provide cp/67 support for virtual 4-way 370 smp. this was slightly related to the later effort to add dual i-stream hardware to an otherwise standard 195 (something akin to the current multi-threading hardware stuff). for most intents the kernel programming was as if it was a normal 2-way smp ... but it was standard 195 pipeline with one-bit flags tagging which i-stream an instruction belonged to.
a few random past l, h, & i kernel posts:
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2001k.html#29 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 16:03:19 -0700
Jon Forrest writes:
it was trivial to do using cms filesystem semantics
also, if you knew the os/360 services magic words ... you could fake it out ... that was how the various compilers and assemblers that were brought over from os/360 worked ... they had some interface routine glue where you could specify the filename to the command ... and then inside the cms glue routine it made the magic os/360 services incantations.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 08:25:15 -0700
Gene Wirchenko writes:
random past posts mentioning llmps ... (i have hardcopy of the old
share contribution library document for llmps):
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001e.html#13 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001h.html#24 "Hollerith" card code to EBCDIC conversion
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002d.html#49 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#28 simple architecture machine instruction set
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2003i.html#8 A Dark Day
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#31 someone looking to donate IBM magazines and stuff
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 08:44:52 -0700
"Stephen Fuld" writes:
the interactive paradigms tend to do much more allocation on the fly ... frequently even a record at a time as needed. they tend to have filesystems that keep an allocation bit-map with a bit per record, and files have indexes listing all the records in the file, which are updated as additional records are allocated. cms started this way with the original filesystem built starting in '65.
i did a morphing of this filesystem to a page map paradigm where the
records were pages ... and did some infrastructure smarts to promote
contiguous record allocation ... rather than the straight-forward
record at a time ... which could be quite scattered. previous page
map refs:
https://www.garlic.com/~lynn/submain.html#mmap
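a minimal sketch of the two allocation strategies ... the bitmap allocator here is invented for illustration:

    def alloc_scattered(bitmap, n):
        # record-at-a-time: take the first n free records, wherever they are
        got = [i for i, free in enumerate(bitmap) if free][:n]
        for i in got:
            bitmap[i] = False
        return got

    def alloc_contiguous(bitmap, n):
        # first-fit search for a free run of n records; fall back to scattered
        run = 0
        for i, free in enumerate(bitmap):
            run = run + 1 if free else 0
            if run == n:
                start = i - n + 1
                for j in range(start, i + 1):
                    bitmap[j] = False
                return list(range(start, i + 1))
        return alloc_scattered(bitmap, n)

    disk = [True] * 8
    disk[1] = disk[3] = False            # some records already in use
    print(alloc_contiguous(disk, 3))     # [4, 5, 6] ... one seek, one big transfer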
cms also imported some number of applications from the os/360 world (as did MTS), where the code did open/closes on DCBs ... and the traditional DCBs were expecting the DD-oriented file specification. The basics for this were provided by a bit of os/360 simulation code ... that was about 64kbytes in size. This included a filedef command that would do a straight-forward simulation of the DD specification. However, for some number of the imported os/360 applications (commonly used compilers and assemblers) there was sometimes magic glue code written that handled the mapping of cms filesystem conventions to os/360 filesystem conventions at a much lower and more granular level ... which tended to hide much more of the os/360 gorp in a cms interactive oriented environment.
the os/360/batch paradigm has evolved over the years ... providing more and more sophisticated facilities in support of running applications and programs where the default assumption is that the party responsible for the application is not present (as opposed to the interactive paradigm, which assumes that the person responsible for the execution of the application is in fact present).
the batch paradigm has somewhat found a resurgence ... even in the online and internet environment ... where there are deployed server applications that may have hundreds of thousands of clients ... but there is some expectation that the server has much more the characteristics of the batch paradigm ... the person responsible for the server operation isn't necessarily present when it is running.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 09:14:16 -0700
the other characteristic of the os/360 extent-based, non-record allocation was the trade-off between i/o capacity and real storage requirements in a filesystem design built around multi-track search.
basically os/360 had a very simple, one-level index filesystem orientation. the file information, with the pointer to the start of the file, was located in the vtoc (which contained a list of all the files). to open/find a file, the system would do a multi-track search on the vtoc to find (and read) the record with the specific file information. this traded off the seemingly significant real-storage requirements needed for record-level bit-maps of various kinds against a simple disk-based structure that could be searched with i/o commands.
"library" files could have members ... and for these kind of files, called partition directory datasets (or PDS) ... the PDS directory was simple and could also be searched&read with i/o command (rather than having index/directory storage that could tie-up real memory). members in directories were contiguous allocation ... and PDS were contiguous ... and deleting/replacing members just left gaps in the PDS database. At some point, PDS datasets had to be "compressed" to recover the gaps.
as i've pointed out previously, sometime in the 70s the io/memory trade-off for disk-based structures had shifted ... and it became more efficient to have memory-based index structures and attempt to conserve/optimize the i/o infrastructure.
minor ckd threads
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#6 CKD Disks?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Demo: Things in Hierarchies (w/o RM/SQL)
Newsgroups: comp.databases,comp.databases.object,comp.databases.theory
Date: Sat, 06 Nov 2004 11:08:41 -0700
"Laconic2" writes:
there are actually (at least) four different databases (for res)
PNR database
flight segment/seat database
routes database (i once got to rewrite routes from scratch)
fares database
random past posts re: working on routes and/or amadeus
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001d.html#74 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001k.html#26 microsoft going poof [was: HP Compaq merger, here we go again.]
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#38 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#40 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2002l.html#39 Moore law
https://www.garlic.com/~lynn/2003b.html#12 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003c.html#52 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Demo: Things in Hierarchies (w/o RM/SQL)
Newsgroups: comp.databases,comp.databases.object,comp.databases.theory
Date: Sat, 06 Nov 2004 14:55:30 -0700
a few years ago, oag for all commercial flight segments in the world had a little over 4000 airports (with commercial scheduled flights) and about half a million "flight segments" (i.e. take-offs/landings). the number of flight segments didn't exactly correspond to flights/day ... since some flight segments didn't fly every day. there was also an issue that each individual flight segment was a flight ... plus combinations of flight segments were also flights (i.e. say a flight from west coast to east coast with two stops could represent 3+2+1 = 6 different "flights"). the longest such flight that i found had 15 flight segments (it wasn't in the US) ... taking off first thing in the morning and eventually arriving back at the same airport that night ... after making the rounds of a lot of intervening airports.
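the 3+2+1 arithmetic as a quick sketch (the routing here is invented): n consecutive segments yield n + (n-1) + ... + 1 = n*(n+1)/2 bookable flights:

    def flights(stops):
        # stops: the ordered list of airports on one routing
        segs = len(stops) - 1
        return [(stops[i], stops[j]) for i in range(segs)
                                     for j in range(i + 1, segs + 1)]

    legs = flights(["SJC", "DEN", "ORD", "JFK"])   # 3 segments (two stops)
    print(len(legs), legs)                         # 6 == 3 + 2 + 1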
there is also some practice of the same flight segment having multiple different flight numbers. the first instance of this (that i know of) was in the very early 70s ... the first twa flight out of sjc in the morning ... flew both to seatac and kennedy. it turns out that the people going to kennedy had a change of equipment at sfo (not a connection).
this is an airline "gimmick" ... traditionally the non-stops and directs are listed before the flights with connections. if you had a dual flight number ... with a change of equipment ... the flights would show up in the first "direct" section ... not in the following "connections" section.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 06 Nov 2004 15:22:38 -0700
tedmacneil@bell.blackberry.net (Ted MacNEIL) writes:
other posts about the period
https://www.garlic.com/~lynn/subtopic.html#disk
and mention of the air bearing simulations doing the calculations for (physical) design of the floating/thin-film heads.
so the heads fly closer ... enabling significant increases in density
and transfer rate ... resulting in the manufacturing cost per bit
being reduced dramatically ... i try and stay out of pricing issues ...
although i got dragged into them for the resource manager ... being the
guinea pig for the first pricing of kernel software
https://www.garlic.com/~lynn/subtopic.html#fairshare
the other part of the thin-film head design was a chip that had both digital and analog circuits. a lot of this work was done over in the los gatos vlsi lab (bldg. 29, which has since been torn down; it was originally built in the 60s as the advanced system development division hdqtrs). the group also did jib-prime ... which was the microprocessor in the 3880 controller.
in any case, they built the high performance logic simulation engine, LSM (originally named the los gatos simulation machine, but renamed logic simulation machine for external publication) ... and it included support for timing. it ran chip logic simulation about 50,000 times faster than could be done with conventional programming on a 3033. the subsequent logic simulators may have improved on that thruput ... but they dropped support for timing ... effectively assuming synchronous-clock chip designs. One of the things that the LSM gave them with timing support was the ability to run a chip design that had non-synchronous timing and/or a chip design that had a combination of digital (potentially synchronous or asynchronous) circuits and analog circuits (like the design for the thin-film heads).
random past references to lsm & other logic simulators:
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#77 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#82 Future architecture
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2003.html#31 asynchronous CPUs
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: pop secure connection Newsgroups: netscape.public.mozilla.general Date: Sun, 07 Nov 2004 18:41:44 -0700
i've been trying to get mozilla mail &/or thunderbird to read pop mail on port 110 using a secure connection.
i've got all unnecessary incoming/outgoing ports blocked using iptables.
evolution reads mail fine using SSL on port 110
i set mozilla mail (& thunderbird) to ssl on port 110 and try and read mail ... and it just hangs and eventually times out.
i can see it has established a connection to port 110 ... but iptables is also showing it is blocking attempts to port 995.
this is with older thunderbirds, older mozillas, brand new thunderbird, brand new mozilla, and brand new mozilla nightly build.
mozilla & thunderbird both handle outgoing/sent mail using SSL just fine.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Mon, 08 Nov 2004 12:58:49 -0700
"Stephen Fuld" writes:
the two scenarios mentioned for the paradigm were
1) original batch-oriented (rather than interactive, which could assume a responsible human was present)
2) original io/real-storage tradeoff ... where real storage was conserved by having a relatively trivial filesystem structure that was totally disk resident and could be searched/used by relatively trivial ckd multi-track search commands.
some cross-over from recent ckd thread
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#6 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
low-end target for os/360 was (at least) the 32kbyte to 64kbyte 360/30; I did assemblies on a 360/30 with something like os/360 pcp release 5 or 6(?).
later releases added some additional incremental, dynamic capability ... but the os/360 genre was never really targeted at the interactive computing environment ... and so didn't have a lot of (market?) motivation to support features for the interactive environment ... aka could they have done better? very probably; did they think that better interactive facilities would provide sufficient ROI to justify it? i don't think they believed so.
Furthermore they continued to be saddled with the original filesystem design with its io/real-storage trade-off ... aka effectively all disk storage allocation information was disk resident, with no memory-based structures ... a simple enuf structure that it could be managed with ckd disk i/o commands. dynamic allocation in the abstract is possible ... but the amount of work to perform any allocation at all (static or dynamic) is extremely heavyweight ... and requires a lot of disk i/o activity.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: pop secure connection Newsgroups: netscape.public.mozilla.general Date: Mon, 08 Nov 2004 13:07:57 -0700
Jay Garcia writes:
my isp supports ssl on port 110 (not 995).
evolution does it fine. even windows/laptop w/eudora does it fine. eudora even has a window that shows the last certificate & session ... which also shows it at port 110.
All versions of mozilla/thunderbird that i've tried with ssl/110 have failed to work (even when overriding the default 995 in the configuration menu).
one issue is that i have used iptables to lock down all ports that aren't absolutely necessary (both incoming and outgoing).
when i try mozilla (&/or thunderbird) with ssl/110 ... i can see that a session has been established on port 110 ... but i also see in the iptables log that there are attempts at port 995 being discarded.
it almost appears that both mozilla and thunderbird, even when 110 is specified for secure connection (ssl) operation, have vestiges of code that still tries port 995 (even tho there is at least some code that honors the 110 configuration specification).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Mon, 08 Nov 2004 15:36:12 -0700
now (vm, cms, apl, etc based) US HONE
in the late '70s put together something different for large-scale cluster operation (eight large mainframe SMPs all sharing a large disk farm and the workload for all the branch/field service, sales, and marketing people in the US). it was the largest single-system-image cluster that i know of from that period (with something approaching 40,000 defined userids).
they used a different on-disk structure ... not for allocation ... but for locking disk space use across all processors in the complex. there was a lock/in-use map on each disk (read-only, write-exclusive, etc, by processor). a processor that wanted to enable access to an area on disk would read the disk's lock/in-use map ... check to see if the area was available, update the lock-type for that area, and use a ckd ccw sequence that emulated smp compare&swap semantics (aka search equal ... and then rewrite/update if the search still matched).
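purely for illustration ... a sketch of the read/check/conditional-rewrite cycle (python; invented record layout, with a mutex standing in for the atomicity that the actual search-equal/rewrite channel program provided):

import threading

class LockMapDisk:
    # simulates the disk-resident lock/in-use map; the mutex stands in
    # for the atomicity of the search-equal/write channel program
    def __init__(self):
        self._map = {}                  # area -> (processor, lock type)
        self._mutex = threading.Lock()

    def read(self):
        with self._mutex:
            return dict(self._map)

    def rewrite_if_equal(self, expected, new):
        # "search equal ... then rewrite": only succeeds if the map on
        # disk still matches what this processor previously read
        with self._mutex:
            if self._map == expected:
                self._map = dict(new)
                return True
            return False

def acquire(disk, area, processor, mode="write-exclusive"):
    old = disk.read()                   # read the lock/in-use map
    if area in old:
        return False                    # some other processor holds it
    new = dict(old)
    new[area] = (processor, mode)       # update the lock-type entry
    return disk.rewrite_if_equal(old, new)   # compare&swap-style commit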
misc. ckd dasd posts
https://www.garlic.com/~lynn/submain.html#dasd
it used standard disk controllers for the operation ... not the
"airline control program" (acp, later renamed tpf) disk locking
RPQ. the acp controller rpq added a little memory to the disk
controller and allowed symbolic locks that were used for various kinds
of cluster (aka loosely-coupled) disk access serialization &
coordination ... w/o resorting to whole-device reserve/release
locking or a disk-based structure (like hone used). i know the
rpq was available in the early 70s on the 3830 disk controller. 3330/3830
page:
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3330.html
total aside ... possible typo in the above:
Features • 30 milliseconds was average access time; minimum was 55 milliseconds.
========
and an ACP/TPF history page
http://www.blackbeard.com/tpf/tpfhist.htm
some drift from the above ... one of the airline res efforts in the 80s was amadeus, and my wife served for a brief time as chief architect for amadeus
random posts mentioning amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004o.html#24 Demo: Things in Hierarchies (w/o RM/SQL)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: z/OS UNIX Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Mon, 08 Nov 2004 16:06:03 -0700
WGShirey@ibm-main.lst (Greg Shirey) writes:
old reference that has buried references to both iceberg and seastar
(from 2/96):
http://www.informationweek.com/565/65mtrob.htm
gone 404
https://web.archive.org/web/20080608164743/http://www.informationweek.com/565/65mtrob.htm
and another here
http://www.stkhi.com/nearline.htm
gone 404
https://web.archive.org/web/20060328034324/http://www.stkhi.com/nearline.htm
so if it was seastar ... might there be some topic drift to seascape and then netscape? ... so for some real topic drift and a trivia question ... when mosaic was told that they couldn't use the name and had to change their corporate name ... who owned the term netscape and donated it to mosaic?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: NEC drives Newsgroups: alt.folklore.computers Date: Tue, 09 Nov 2004 14:01:17 -0700
Morten Reistad writes:
around the time EMEA hdqtrs moved from the states to Paris (new bldgs.
at La Defense, on the outskirts of paris) ... i got to help clone a copy
of HONE for them
https://www.garlic.com/~lynn/subtopic.html#hone
i lost some archives from the early & mid '70s when there was a datacenter glitch in the mid-80s that managed to clobber all three tape copies.
does having a dial-up home terminal since march 1970 count?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: What system Release do you use... OS390? z/os? I'm a Vendor S Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Tue, 09 Nov 2004 21:23:20 -0700
rhalpern@ibm-main.lst (Bob Halpern) writes:
basically a 30+ year old software version of LPARs.
totally unrelated ... my wife was a catcher in the gburg JES group when ASP was transferred to gburg for JES3 ... one of her tasks was reading the ASP listings and writing JES3 documents.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Wed, 10 Nov 2004 06:51:13 -0700
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> writes:
later, i got to do a microcoded smp project called VAMPS
https://www.garlic.com/~lynn/submain.html#bounce
where i migrated lots of the vm/370 kernel to microcode ... sort of an
enhanced version of the various vm microcode performance assists (i was
working on ecps for the 138/148 about the same time)
https://www.garlic.com/~lynn/submain.html#mcode
what remained of the vm kernel was essentially single-threaded, but it would use compare&swap semantics to place work on the dispatch queue. the microcode dispatcher pulled stuff off the dispatch queue for the different processors. if a processor had something for the kernel and no other processor was currently executing in the kernel (global kernel lock), it would enter the kernel. however, if another processor was executing in the kernel ... a "kernel" interrupt was queued against the kernel ... and the microcoded dispatcher would go off and look for other work (a "bounce" lock, rather than spinning on a global kernel lock, which was common at the time). when the microcode smp project was killed, there was an activity to adapt the design to a software-only implementation. the equivalent kernel software (that had been migrated to microcode) was modified to support fine-grain locking and a super-lightweight thread queuing mechanism (as opposed to the hardware kernel interrupt).
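a very rough software analogy of the bounce-lock idea (python threads standing in for processors; all names invented ... the real thing was 370 microcode): try the kernel lock once, and if somebody else is in the kernel, leave the work queued and go find other work instead of spinning:

import queue
import threading

kernel_lock = threading.Lock()     # the single global kernel lock
kernel_work = queue.Queue()        # queued "kernel interrupts"

def request_kernel_service(work):
    kernel_work.put(work)          # compare&swap enqueue in the real thing
    # keep going while work is visible; if the lock is held, "bounce"
    # back to the dispatcher rather than spinning
    while not kernel_work.empty():
        if not kernel_lock.acquire(blocking=False):
            return                 # another processor is in the kernel
        try:
            while True:            # drain work queued by any processor
                try:
                    item = kernel_work.get_nowait()
                except queue.Empty:
                    break
                item()             # run kernel service single-threaded
        finally:
            kernel_lock.release()
        # loop: re-check for work enqueued while we were releasing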
this software-only version was shipped to customers as the standard vm/370 product ... where customers now had two different kernels ... one with the inline fine-grain locking and one w/o. however, all source was shipped, and it was common for a large number of customers to completely rebuild from source. the fine-grain locking was a combination of inline logic with conditional assembly and a "lock" macro that also had conditional assembly statements. part of the issue was that there was actual inline code in the dispatcher and other places for the smp queue/dequeue operations (instead of the possibly more straightforward kernel spin-lock logic).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: The Sling and the Stone & Certain to Win Newsgroups: alt.folklore.computers Date: Wed, 10 Nov 2004 07:04:19 -0700
some Boyd and OODA-loop drift ....
The Sling And The Stone ... off an infosec mailing list yesterday
http://www.washingtondispatch.com/article_10508.shtml
and related article, 4th Generation Warfare & the Changing Face of War
http://d-n-i.net/fcs/comments/c528.htm
Certain to Win, The Strategy of John Boyd, Applied to Business
http://d-n-i.net/richards/ctw.htm
Advance Reviews of Certain to Win
http://d-n-i.net/richards/advance_reviews.htm
and of course, lots of my other Boyd references
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Scanning old manuals Newsgroups: alt.folklore.computers Date: Wed, 10 Nov 2004 15:19:59 -0700
"Charlie Gibbs" writes:
however, we just unearthed a bunch of old handwritten letters from the 40s ... that i would also be interested in scanning(?).
when i looked at some of this stuff nearly 10 years ago ... it all seemed to be scaffolded off fax scanning, tiff format and ocr of tiff/fax softcopy (current scanners appear to have much higher resolution as well as color capability compared to the older fax oriented stuff).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Wed, 10 Nov 2004 15:53:08 -0700
Anne & Lynn Wheeler writes:
in a couple hrs we had written a RED edit exec (edimac; RED was one of the fullscreen editors that somewhat predated xedit ... but never got past the internal-use-only stage ... it had its own exec language with a lot of the characteristics of rexx ... but red-editor specific ... while rexx can run in either the xedit environment or most any other environment) and some glue execs.
the cms source update process used updates (as opposed to the "down dates" that you find with things like RCS); each change was a separate cms file ... and the source update procedure would sequentially apply all applicable updates to the base source file and then re-assemble the result (which was viewed as a temporary file).
the RED edit exec made whatever change was necessary and generated a unique source update file for the change. the process then reran the build for all kernel modules, applying all the source updates (including the brand-new generated file), re-assembled each module, and rebuilt the executable kernel.
instead of several person-weeks ... it was a couple hrs to write all the necessary procedural code ... and then turn it loose; the total rebuild process (including finding and making the necessary source code changes, generating the new source update file, re-assembling the resulting temporary file, etc) took something like 22 minutes elapsed time (i have no idea why i remember how long it took to run ... and don't remember what the change was).
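a toy version of the apply-updates-then-rebuild flow (python; the real mechanism was the cms UPDATE command driven by control files keyed off the sequence numbers in cols. 73-80 ... the record format here is simplified):

def apply_update(lines, records):
    # records refer to line numbers of the input source:
    #   ("d", n)        delete line n
    #   ("i", n, text)  insert text after line n (0 = at the top)
    deletes = {r[1] for r in records if r[0] == "d"}
    inserts = {}
    for r in records:
        if r[0] == "i":
            inserts.setdefault(r[1], []).append(r[2])
    out = list(inserts.get(0, []))
    for n, line in enumerate(lines, start=1):
        if n not in deletes:
            out.append(line)
        out.extend(inserts.get(n, []))
    return out

def build_module(base_source, update_files):
    src = list(base_source)
    for records in update_files:   # sequentially apply every update
        src = apply_update(src, records)
    return src                     # temporary file, fed to the assembler

print(build_module(["a", "b old", "c"],
                   [[("d", 2), ("i", 2, "b new")]]))   # ['a', 'b new', 'c']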
at one time there was an analysis ... that the total number of lines of source code modifications (to the kernel) on the waterloo/share tape was greater than the total lines of source code in the base product.
random past descriptions of the cms source update process:
https://www.garlic.com/~lynn/2000b.html#80 write rings
https://www.garlic.com/~lynn/2001e.html#57 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#67 history of CMS
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003e.html#38 editors/termcap
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003k.html#47 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#17 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004g.html#44 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: pop secure connection Newsgroups: netscape.public.mozilla.general Date: Wed, 10 Nov 2004 18:48:04 -0700
Nelson B writes:
the glossaries
https://www.garlic.com/~lynn/index.html#glosnotes
etc. and so i use .forward, etc.
as i stated originally ... the isp (that i currently use) ... works using SSL to port 110 ... and that is how both eudora and evolution are working.
i have iptables on the (internet) interface machine with all ports locked down ... other than what is absolutely necessary ... aka outgoing 25 & 110 are enabled; outgoing 465 and 995 are not enabled. both eudora and evolution are set up for SSL on port 110. eudora has a feature that shows the last session, port number, and server certificate used for the ssl session. it shows port 110 used with the server certificate for the ssl session.
even if i go outside the iptables boundary and try port 995 with this specific isp ... it doesn't work. it only works with port 110.
as i mentioned in the previous post, inside iptables boundary ... with outgoing port 110 enabled and outgoing port 995 disabled ... eudora and evolution both work with SSL specified using port 110 (and you can query eudora for the last pop session and it will show the ssl certificate sent by the server and the port used ... aka port 110).
also, as per previous posts,
https://www.garlic.com/~lynn/2004o.html#26 pop secure connection
https://www.garlic.com/~lynn/2004o.html#28 pop secure connection
inside the iptables "boundary", using numerous different versions of mozilla and thunderbird ... i set things up for ssl pop on port 110 and they all hang and then eventually time out w/o transferring any email. i can see that there is a session initiated for port 110 ... but i also see in the iptables log that mozilla/thunderbird, while having initiated a session on port 110, are also trying to do something on port 995 ... which is being thrown away by the iptables rules.
again, both evolution (fc2 & evolution 1.4, and fc3 & evolution 2.0) and eudora (6.0) work with ssl on port 110 going to the internet thru the same iptables rules.
sending mail with ssl thru port 25 works for all ... evolution, eudora, mozilla, and thunderbird. pop receiving mail with ssl thru port 110 works for evolution and eudora but not with mozilla and thunderbird.
based on seeing the port 110 session active when trying to use mozilla and thunderbird ... but also seeing port 995 packets being discarded (by iptables), i suspect that there is some common code someplace that is ignoring the port 110 specification.
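fwiw ... a quick way to check which flavor of "ssl on port 110" a pop server actually speaks (python sketch, placeholder hostname): ssl from the first byte (traditionally port 995) versus a plain pop3 greeting on 110 upgraded in-band with STLS (the rfc 2595 mechanism, summary referenced below):

import poplib
import ssl

HOST = "pop.example.com"    # placeholder for the isp's pop server

# style 1: ssl from the first byte, traditionally on port 995
try:
    p = poplib.POP3_SSL(HOST, 995, timeout=10)
    print("995 ssl-on-connect:", p.getwelcome())
    p.quit()
except (OSError, poplib.error_proto) as e:
    print("995 failed:", e)

# style 2: plain pop3 greeting on 110, then upgrade with STLS
# before authenticating (rfc 2595)
try:
    p = poplib.POP3(HOST, 110, timeout=10)
    print("110 greeting:", p.getwelcome())
    p.stls(context=ssl.create_default_context())
    print("110 upgraded to tls ok")
    p.quit()
except (OSError, poplib.error_proto) as e:
    print("110/stls failed:", e)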
for some topic drift ... misc. other refs at garlic.com:
posts about domain name ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts
and a little about electronic commerce (from the early days of ssl,
even before the group moved to mountain view and changed their name):
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
summary for rfc 2595
https://www.garlic.com/~lynn/rfcidx8.htm#2595
in the summary fields, clicking on the ".txt=nnn" field retrieves the actual rfc. clicking on the rfc number brings up the term classification display for that rfc. clicking on any term classification ... brings up a list of all RFCs that have been classified with that term. clicking on any of the updated, obsoleted, refs, refed by, etc RFC numbers ... switches to the summary for that RFC.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Facilities "owned" by MVS Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 11 Nov 2004 11:25:01 -0700
wdriscoll@ibm-main.lst (Wayne Driscoll) writes:
for instance there was technology transfer from sjr of system/r to
endicott for sql/ds ... and what person in this mentioned meeting
claims primary responsibility for technology transfer of sql/ds
from endicott back to stl to become db2?
https://www.garlic.com/~lynn/95.html#13
note that stl and sjr were only about 10 miles apart but the technology needed to make a coast-to-coast round-trip to go from system/r to db2.
misc. system/r
https://www.garlic.com/~lynn/submain.html#systemr
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Facilities "owned" by MVS Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 11 Nov 2004 16:08:50 -0700
"Jim Mehl" writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Facilities "owned" by MVS Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 11 Nov 2004 16:58:30 -0700
... from long ago and far away (and even more drift) ....
a quick search of the vmshare archive:
http://vm.marist.edu/~vmshare/browse.cgi?fn=ORACLE&ft=MEMO
.... a short extract from above:
Created on 03/25/80 17:13:20 by MUC
ORACLE Data Base System
Anyone have any info about a data base system called ORACLE?
It was rumoured to be a version of IBMs famed System R.
Alan
*** CREATED 03/25/80 17:13:20 BY MUC ***
Appended on 07/24/80 03:05:17 by WMM
Oracle is available from Relational Systems Inc in Menlo Park
for PDP systems (VAX-11/780 and others). It is supposed to be
ready for IBM systems (only under VM/CMS) around DEC 80.
About $100,000 for the whole system which is based
on IBM's SEQUEL language.
*** APPENDED 07/24/80 03:05:17 BY WMM ***
Appended on 12/15/82 17:37:26 by FFA
Oracle Relational Database for VM
I was wondering if anyone had actually installed the Oracle product.
We were considering it and, based on the discussions at the most
recent Oracle users group, we decided to postpone it until we could
find users who would say that it was a relativly trouble-free,
well running product.
I would appreciate any comments about the product from people who either
installed it or evaluated it. Our major concerns when we decided not
to install were inter-user security and reliability.
-
Nick Simicich - FFA
*** APPENDED 12/15/82 17:37:26 BY FFA ***
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: osi bits Newsgroups: alt.folklore.computers Date: Thu, 11 Nov 2004 17:22:12 -0700
ok, i just couldn't resist. the "Purchase" (NY) location was going to have been the brand-new Nestle bldg, which they sold in the 80s (and which was later resold to Mastercard ... and is now Mastercard hdqtrs).
a slight problem that i've frequently pointed out was that OSI didn't support heterogeneous networks (aka it lacked any sort of internetworking support). ISO compounded the problem with a directive that only networking protocols that met the OSI model could be considered for standardization by ISO and ISO-chartered standards bodies. note this includes protocols that would talk to LAN MAC interfaces ... since the LAN MAC interface violates OSI, sitting in the middle of OSI networking layer 3.
minor related posts regarding Interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop
and misc. posts referencing some of the ISO/OSI history ... especially with
respect to introducing the "high speed protocol" to iso/ansi ...
unfortunately hsp was defined as directly interfacing to the lan/mac
interface:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
.....
January 11, 1988
SUBJECT: OSI FORUM - Purchase, NY, March 15-18, 1988
Communications Products Division is sponsoring an OSI Forum in
Purchase, NY on March 15-18, 1988.
As you know, OSI is the universally accepted model by which
interconnectivity and interoperability for heterogeneous systems will
be accomplished. A number of standards bodies, user groups,
governments and special interest groups are focusing on OSI and
vigorously advocating the adoption of the emerging standards. The
pressure to support OSI is increasing and the stimuli influencing OSI
acceptance by users and vendors are numerous.
The purpose of this forum is to exchange information related to OSI so
that you will better understand the OSI market requirements, the
Corporate strategy for OSI and the OSI product directions. At the
same time, this forum will be used as a vehicle to collect additional
requirements and product observations, identify potential problem
areas, and assure synergism between the many OSI activities across
organizations.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: how it works, the computer, 1971 Newsgroups: alt.folklore.computers Date: Fri, 12 Nov 2004 06:44:44 -0700
How it works, the computer, 1971
/.'ed late yesterday
http://slashdot.org/articles/04/11/12/131204.shtml?tid=133&tid=1
tab cards date from the 60s and earlier .... of course, (at least) the area
around MIT was starting to change in the 60s, with CTSS and then
multics and cp67 in 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech
with keyboards and various online & interactive computing.
lincoln labs got a version of cp67 in 1967 and cp67 was installed at the university i was at in jan. 1968.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360 longevity, was RISCs too close to hardware? Newsgroups: comp.arch,alt.folklore.computers Date: Fri, 12 Nov 2004 07:37:33 -0700
"Del Cecchi" writes:
after a business trip to japan ... i came back with the statement that i could get better technology out of a $300 cdrom player than out of some $20k (maybe only a little exaggeration) fiber-optic computer telecom gear ... and would have a couple of fine servo-motors left over.
random past mentions of cyclotomics
https://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2002p.html#53 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003e.html#27 shirts
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: 360 longevity, was RISCs too close to hardware? Date: Fri, 12 Nov 2004 16:05:49 -0800
Anne & Lynn Wheeler wrote:
and yes, the communications products division referenced in
the above post is the same one referenced in this
post from thurs:
https://www.garlic.com/~lynn/2004o.html#41 osi bits
another contrast ... the initial mainframe tcp/ip product got about
43kbytes/sec while consuming nearly a full (100 percent) 3090
engine. i added rfc1044 support to the base product, and in tuning
tests at cray research between a cray and a 4341-clone was getting
1mbyte/sec sustained (nearly 25 times more thruput, the hardware limit
of the 4341-attachment box) using a very modest amount of 4341 engine
(and they actually shipped the code not too long after the rfc was
published). random past rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
and from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm
the rfc1044 summary entry
https://www.garlic.com/~lynn/rfcidx3.htm#1044
as always, clicking on the ".txt=nnn" field retrieves the actual rfc
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: Integer types for 128-bit addressing Date: Sat, 13 Nov 2004 13:35:55 -0800
Morten Reistad wrote:
another view of that period can be found in melinda's paper
https://www.leeandmelindavarian.com/Melinda#VMHist
there are even some 360/67 references at the multics "site"
https://www.multicians.org/thvv/360-67.html
one of the comments that i've periodically made is that cp67/cms and
then vm/370 had a significant number of installed customers ... more
than many of the other (non-ibm) timesharing systems that might come
to mind. however, when people think about ibm, there is an almost
kneejerk reaction to think about the batch systems .... since those
numbers tended to dwarf the other systems (significantly dwarfed the
number of vm/370 systems ... which in a number of cases dwarfed the
non-ibm timesharing systems). some random past comments about time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare
over the years, one of my periodic hobbies was to build, ship, and
support customized systems (independent of the development of product
features that were shipped by the official product group in the
standard product) ... i was very active in building and supporting the
hone system for going on 15 years (purely as a hobby)
https://www.garlic.com/~lynn/subtopic.html#hone
in addition, at one point, I believe i may have been building and shipping production customized operating systems to more "internal only" datacenters than there were total multics customers in the whole life of the multics product (again it was purely a hobby that i did for the fun of it).
one of the reasons that i developed an advanced (for the time) problem
determination tool
https://www.garlic.com/~lynn/submain.html#dumprx
was to help cut down on the time i spent directly supporting
datacenters that were participating in my hobby. another thing that i
originally did and deployed at hone was a backup/archive system that
eventually evolved into adsm and now tsm
https://www.garlic.com/~lynn/submain.html#backup
while i did a lot of stuff that was somewhat more research-like
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/submain.html#bounce
https://www.garlic.com/~lynn/subnetwork.html#3tier
i also have enjoyed deploying real, live production systems ... like
the work to make a bulletproof system for the disk engineering and
product test labs
https://www.garlic.com/~lynn/subtopic.html#disk
and along the way getting to play in disk engineering.
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: Integer types for 128-bit addressing Date: Sat, 13 Nov 2004 14:05:09 -0800
Morten Reistad wrote in message news:<sj35nc.8i32.ln@via.reistad.priv.no>...
for an ibm glossary/jargon ... try ...
http://www.212.net/business/jargon.htm
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: Integer types for 128-bit addressing Date: Sun, 14 Nov 2004 09:57:34 -0800
"del cecchi" wrote:
after joining the science center ... i was still called on to go to
user group meetings, call on customers, and do misc. and sundry other
stuff. it wasn't so much that i was at the science center ...
https://www.garlic.com/~lynn/subtopic.html#545tech
it was because i had been doing that sort of stuff for a number of years before getting hired.
i've claimed that having extensive contact with the rubber meeting the
road ... was one of the reasons that kept me from being susceptible to
the lure of the future system project (and explains some number of my
jaundiced comments about it) ....
https://www.garlic.com/~lynn/submain.html#futuresys
i had possibly way too much perspective of why it couldn't be done ... which people wouldn't have if they were several levels removed from real live operation.
of course there is some thread that one of the projects i worked on as
an undergraduate ... a 360 controller clone
https://www.garlic.com/~lynn/submain.html#360pcm
was (at least one of the) factors in creating the clone controller business .... which in turn is claimed to be (possibly *the*) motivating business factor spawning the future system project.
when i moved to sjr ... i was allowed to continue to interact with real live customers (including, but not limited to academic oriented events) on a regular basis.
minor side-note, sjr put up the original corporate gateway to csnet
https://www.garlic.com/~lynn/internet.htm#0
and one of the sjr people registered the corporate class-a net in the
interop 88 time-frame
https://www.garlic.com/~lynn/subnetwork.html#interop88
... which i haven't paid any attention to recently as to whether it is still being used.
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: Integer types for 128-bit addressing Date: Mon, 15 Nov 2004 15:15:31 -0800
nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:
toolsrun was developed and deployed in the 80s ... in response to the
growing use of the internal network for information sharing. it
basically had a dual personality ... being able to act in a
maillist-like mode and in a usenet-like mode (local shared repository
and conferencing-like mode implemented for various editors). i was
somewhat more prolific in my younger days and there were sometimes
jokes about groups being wheeler'ized (where I posted well over half
of all bits). some subset flavor of toolsrun was communicated to
bitnet/earn (w/o the usenet-like characteristic)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
which eventually morphed into listserv (and a clone, majordomo); a
listserv "history"
http://www.lsoft.com/products/listserv-history.asp
in the 84/85 timeframe, it also led to a researcher being assigned to
me for something like 9 months; they sat in the back of my office and
took notes on how i communicated, got copies of all my posts, incoming
and outgoing email ... and logs of all my instant messages. the
research and analysis also resulted in a stanford phd thesis (joint
between language and computer ai) and some follow-on books. random
related posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
From: lynn@garlic.com Newsgroups: comp.arch,alt.folklore.computers Subject: Re: Integer types for 128-bit addressing Date: Mon, 15 Nov 2004 16:15:49 -0800
ref:
from long ago and far away ... userids munged to protect the guilty. this required getting legal sign-off before making it available on internal machines. somebody was also starting to look at making machine-readable material available to customers for a fee ... and there was some investigation regarding the complications of including the (internal) r/o shadow of the vmshare conferencing material as part of that machine-readable distribution (with disclaimers that any fee would not be applicable to any included vmshare material).
note that the consolidated US HONE datacenter, Tymshare and SJR were
all within 20 miles.
https://www.garlic.com/~lynn/submain.html#timeshare
Date: 3/18/80 20:00:40
From: wheeler
CC: xxxxx at AMSVM1, xxxxx at BTVLAB, xxxxx at CAMBRIDG, xxxxx at DEMOWEST, xxxxx at FRKVM1, xxxxx at GBURGDP2, xxxxx at GDLPD, xxxxx at GDLPD, xxxxx at GDLS7, xxxxx at GDLS7, xxxxx at GDLS7, xxxxx at GDLSX, xxxxx at GFORD1, xxxxx at HONE1, xxxxx at HONE2, xxxxx at LOSANGEL, xxxxx at NEPCA, xxxxx at OWGVM1, xxxxx at PALOALTO, xxxxx at PARIS, xxxxx at PAVMS, xxxxx at PAVMS, xxxxx at PLKSA, xxxxx at PLPSB, xxxxx at PLPSB, xxxxx at RCHVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SNJCF2, xxxxx at SNJCF2, xxxxx at SNJTL1, xxxxx at SNJTL1, xxxxx at STFFE1, xxxxx at STLVM2, xxxxx at STLVM2, xxxxx at TDCSYS3, xxxxx at TOROVM, xxxxx at TUCVM2, xxxxx at UITHON1, xxxxx at VANVM1, xxxxx at WINH5, xxxxx at YKTVMV, xxxxx at YKTVMV, xxxxx at YKTVMV, xxxxx at YKTVMV, xxxxx at YKTVMV

re: VMSHARE; initial VMSHARE tape has been cut and is in the mail. Will be a couple of days before it is up in research. Will take several more before it is up at HONE.

... snip ...
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch Date: Wed, 17 Nov 2004 07:25:54 -0700
old_systems_guy@yahoo.com (John Mashey) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Wed, 17 Nov 2004 09:58:30 -0700
"Ken Hagan" writes:
original development is frequently inspiration, and possibly may represent some sort of disruptive technology. in addition to all the (enormous amounts of) grunt work to go from idea to business-quality solution ... there may also be large amounts of grunt work countering existing, entrenched interests.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360 longevity, was RISCs too close to hardware? Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 18 Nov 2004 06:31:16 -0700
jmfbahciv writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 360 longevity, was RISCs too close to hardware? Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 18 Nov 2004 06:59:43 -0700
for some (more) drift, recently in a presentation there was some comment about "see, the current internet infrastructure resiliency is the result of the original arpanet design for highly available networks".
my observation is that it is more the result of the long(er) history of
telco provisioning than anything to do with the original arpanet
design. i have some recollection, circa 1979, of somebody joking that
the IMPs were able to nearly saturate the 56kbit links with inter-IMP
administrative chatter about packet routing (bits & pieces about the
packet traffic, as opposed to the actual packet traffic) ... aka the
administrative overhead was turning into a black hole; in effect, the
design didn't scale ...
https://www.garlic.com/~lynn/2004o.html#52 360 longevity, was RISCs too close to hardware?
the original arpanet was a homogeneous networking infrastructure using IMPs, and any resiliency was provided by these front-end IMPs and a lot of background (& out-of-band) administrative chatter. in the conversion to heterogeneous internetworking .... various patches were made to the infrastructure to try and get around the scaling issues.
in the mid-90s, about the time e-commerce was being deployed ... the internet infrastructure finally officially "switched" to hierarchical routing (beginning to look more & more like the telco infrastructure), in part because of the severe scaling issue with (effectively) any sort of anarchic, random routing. this was the period when i got to have a lot of multiple A-record discussions with various people (several of whom made statements that the basic, simple TCP protocol totally handled reliability, resiliency, availability, etc ... and nothing more was needed ... including multiple A-record support).
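as a sketch of what multiple A-record support actually means for a client (python, hypothetical host) ... the client itself has to iterate over all the addresses and fail over; plain tcp does none of that for you:

import socket

def connect_any(host, port, timeout=5):
    # try every address the name resolves to (all the A-records),
    # in order, until one accepts the connection
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)      # fall thru to the next address on failure
            return s
        except OSError as e:
            last_err = e
            s.close()
    raise last_err if last_err else OSError("no addresses for " + host)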
At this point, i can insert the comment about: in theory, there is no difference between theory and practice, but in practice there is.
however, it is much more applicable to comment about some residual
entry-level texts equating the original arpanet implementation
with existing implementations .... and there being some dearth of real
live experience with production, business-strength, deployable
systems. some amount of this we investigated in detail when we
were doing the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
one might even claim that the lingering myths about the arpanet
implementation and the internet implementation being the same are
similar to the lingering stuff about OSI being related in any way to
internetworking
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
random past mentions of multiple a-record:
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm13.htm#37 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm15.htm#13 Resolving an identifier into a meaning
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2003.html#30 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003.html#33 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003c.html#8 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#12 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#24 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#25 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment
https://www.garlic.com/~lynn/2004k.html#32 Frontiernet insists on being my firewall
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: evolution (1.4) and ssl (FC2) update Newsgroups: alt.os.linux.redhat Date: Thu, 18 Nov 2004 07:09:37 -0700
i've got FC2 and evolution 1.4 running on a machine. evolution receives (pop) and sends (smtp) mail using SSL ... and everything has worked since evolution went up on the machine with FC1. yesterday, yum applied the latest ssl changes for FC2. ever since then, evolution aborts every time it attempts to read mail (which makes me suspicious that there is some tie-in with the ssl changes). i've even wiped all my existing email and evolution definitions and recreated them from scratch ... and evolution still aborts on any attempt to read mail.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 18 Nov 2004 08:59:25 -0700
jmfbahciv writes:
spring of 69 ... shortly after boeing formed bcs ... ibm conned me into skipping spring break and teaching a one-week computer class to the bcs technical staff. bcs then hired me as a full-time employee ... while i was still a student (i was even given a supervisor parking permit at boeing field). there was this weird semester where i was a full-time student, a full-time bcs employee (on educational leave of absence), and doing time-slip work for IBM (mostly supporting the cics beta-test that was part of an onr-funded university library project).
while i did a lot of research and shipped products over the years ... for much of the time ... i was also involved in directly building and supporting production systems for day-to-day datacenter use; frequently it wasn't even in my job description ... more like a hobby ...
example i've frequently used is hone
https://www.garlic.com/~lynn/subtopic.html#hone
which world-wide field, sales, and marketing all ran on. i was never part of the HONE structure or business operation.
another example was disk engineering and product test in bldgs. 14 &
15
https://www.garlic.com/~lynn/subtopic.html#disk
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 18 Nov 2004 09:18:28 -0700
for even more topic drift ... i had worked construction in high school, and the first summer in college ... got a job as foreman on a construction job. about a year after i joined ibm ... they gave me this song and dance about ibm rapidly growing and THE career path being manager ... having lots of people reporting to you, etc. i asked to read the manager's manual (about 3in thick, 3-ring binder). i then told them that how i had learned (in construction) to deal with difficult workers was quite incompatible with what was presented in the manager's manual. i was never asked to be a manager again.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Integer types for 128-bit addressing Newsgroups: comp.arch,alt.folklore.computers Date: Thu, 18 Nov 2004 13:12:06 -0700
Edward Wolfgram writes:
370s had 24-bit addressing ... i've posted before about how cpus were getting faster much more rapidly than disks (resulting in a decline in disk relative system performance). as a result, real storage was being relied on more & more to compensate for the declining relative disk performance.
3033 had a 4.5 mip processor, 16 channels, and 24-bit addressing (both real and virtual), so it was limited to 16mbytes real storage ... and was about the same price as a cluster of six 4341s, each with about one mip, six channels, and 16mbytes real storage (6 mips, 36 channels, and 96mbytes real storage aggregate). as a result, the 3033 was suffering from the 16mbyte real storage limit .... and a two-processor 3033 smp suffered even worse, since the configuration was still limited to 16mbytes real storage (you were now trying to cram 8+ mips worth of work thru a 16mbyte-limited system ... one that was frequently real storage constrained even with the 4.5mip single processor).
they came up with a gimmick to allow up to 64mbytes of real storage in a configuration ... even tho the architecture was still limited to 24-bit addressing. the page table entry was 16 bits: a 12-bit (4k) page number, 2 flag bits, and two unused bits. what they did was allow concatenating the two unused bits onto the 12-bit page number ... allowing addressing of up to 16384 real pages. instructions were still limited to 24-bit addressing (real and virtual) ... but the TLB could resolve to a 26-bit (real) address.
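the arithmetic, as a sketch (python; just the field widths from the description above ... the actual 370 pte bit positions are glossed over):

PAGE = 4096                      # 4k pages -> 12-bit byte offset

def real_address(page12, extra2, offset):
    # concatenate the 2 formerly-unused pte bits onto the 12-bit page
    # number, giving a 14-bit real page number
    page14 = (extra2 << 12) | page12
    return (page14 << 12) | offset

# 2**14 pages of 4k = 64mbytes, i.e. a 26-bit real address:
print((1 << 14) * PAGE)                      # 67108864 = 64mbytes
print(real_address(0xFFF, 0b11, PAGE - 1))   # 67108863 = 2**26 - 1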
the issue wasn't specific to smp; it was that the system thruput balance could use 8-10 mbytes per mip ... and a single-processor 3033 was frequently under-configured with real storage (and a 3033 smp further aggravated the mbytes/mip starvation). note that the 4341 clustering solution got around the real storage addressing constraint by adding another 16mbytes of real storage for every mip added.
MVS in the time-frame had a different sort of addressing problem. the original platform was a real storage operating system that was completely based on a pointer-passing paradigm. the initial migration of mvt from real storage to virtual memory (svs) did just enuf so that the kernel (still pretty much mvt) appeared to be running in 16mbytes of real storage .... using a single 16mbyte virtual address space ... which still contained all the kernel and applications (as found in the mvt real storage model) ... and continued to rely on the pointer-passing paradigm.
the migration to mvs created a single 16mbyte address space per application ... but with the mvs kernel occupying 8mbytes of every virtual address space. also, since there were some number of non-kernel system functions that were now in their own virtual address spaces ... a one-mbyte "common area" was created. applications needing to call a subsystem function would place stuff in the common area, set up a pointer, and make a call. the call passed thru the kernel to switch address spaces, and the subsystem used the passed pointer to access the data in the common area. this left up to 7mbytes for applications. the problem was that as various installations added subsystem functions, they had to grow the common area. in the 3033 time-frame, it wasn't unusual to find installations with 4mbyte common areas ... leaving a maximum of 4mbytes (in each virtual address space) for application execution.
note that the TLB hardware was address-space associative ... and for various reasons ... there were two address spaces defined for each application, with most of the contents identical (a kernel-mode address space and an application-mode address space). furthermore, the kernel stuff and the "common area" were pretty "common" across all address spaces ... however, the implementation tended to eat up TLB entries.
to "address" the looming virtual address space constraint .... "dual-address space" was introduced on the 3033. subsystems now had primary address space (the normal kind) and secondary address space. The secondary address space was actually the address space of the calling application that had past a pointer. Subsystems now had instructions that could fetch/store data out of secondary address spaces (calling applications) using passed pointers (instead of creating a message passing paradigm).
a little digression on the MVT->SVS->MVS evolution. in the initial morphing of MVT->SVS, POK relied on cambridge and cp67 technology (for instance the initial SVS prototype was built by crafting CP67 "CCWTRANS" onto the side of an MVT infrastructure .... to do the mapping of virtual channel programs to real channel programs). there was also quite a bit of discussion about page replacement algorithms. the POK group was adamant that they were going to do an LRU approximation algorithm ... but modify it so that it favored selecting non-changed pages before changed pages (the overhead and latency of writing out a changed page wasn't required). there was no amount of argument that i could do to convince them otherwise; it was somewhat a situation of not being able to see the forest for the trees.
so, around the MVS/3033 time-frame ... somebody in POK finally came to the conclusion that they were biasing the page replacement algorithm towards keeping private, application-specific, changed data pages, at the expense of high-use, shared system pages (i.e. high-use, shared system instruction/execution pages would be selected for replacement before private, application-specific, modified data pages).
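the bias is easy to see in a sketch (python; invented page structure, and selection simplified to a single pass over the not-recently-referenced pages):

from dataclasses import dataclass

@dataclass
class Page:
    name: str
    referenced: bool   # hardware reference bit since the last sweep
    changed: bool      # hardware change bit

def select_victim(pages):
    # the modified lru approximation: among not-recently-referenced
    # pages, always take a non-changed page before any changed page
    candidates = [p for p in pages if not p.referenced]
    clean = [p for p in candidates if not p.changed]
    return (clean or candidates)[0]

pages = [Page("shared system code", referenced=False, changed=False),
         Page("idle private data", referenced=False, changed=True)]
# the clean (read-only, hence never "changed") code page loses every time:
print(select_victim(pages).name)        # shared system code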
come 3090, they had a different sort of problem with real storage and physical packaging. this somewhat harked back to 360 "LCS", or local/remote memory in various SMP schemes like those supported by SCI. they wanted more real memory than could be physically packaged within the latency distances for standard memory. frequently you would find a 3090 with the maximum amount of real storage ... and then possibly as much again in "expanded storage" (effectively identical technology to that used for the standard storage). expanded storage was at the end of a longer-distance and wider memory bus.
a 360 *LCS* configuration could have, say, 1mbyte of 750nsec storage and 8mbytes of 8mic *LCS* storage. *LCS* supported direct instruction execution (but slower than if data/instructions were in the 750nsec storage). various software strategies attempted to trade off copying from *LCS* into *local* storage for execution against executing directly out of *LCS*.
in the 3090 expanded stor case, only copying was supported (standard instruction & data fetch was not supported, and neither was I/O). there was a special, synchronous expanded stor copy instruction ... and expanded stor was purely software managed ... slightly analogous to an electronic paging device ... except that it used synchronous instructions instead of I/O.
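a sketch of the software-managed two-level arrangement (python, invented interfaces ... "disk" here is just any object with read/write): page-outs go first to expanded stor via the synchronous copy, overflowing to real paging i/o only when it's full:

class ExpandedStore:
    def __init__(self, nframes):
        self.nframes = nframes
        self.slots = {}                  # page id -> page contents

    def page_out(self, page_id, data, disk):
        if len(self.slots) < self.nframes:
            self.slots[page_id] = data   # synchronous copy: no i/o wait
        else:
            disk.write(page_id, data)    # overflow: real paging i/o

    def page_in(self, page_id, disk):
        if page_id in self.slots:
            return self.slots.pop(page_id)   # synchronous copy back
        return disk.read(page_id)            # miss: asynchronous disk i/o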
misc 3033, dual-address space, and/or mvs postings (this year):
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#9 Dyadic
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#17 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#19 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#35 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#49 new to mainframe asm
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004b.html#60 Paging
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#2 Microsoft source leak
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#19 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#20 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#41 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2004e.html#6 What is the truth ?
https://www.garlic.com/~lynn/2004e.html#35 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#11 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#60 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004g.html#29 [IBM-MAIN] HERCULES
https://www.garlic.com/~lynn/2004g.html#35 network history (repeat, google may have gotten confused?)
https://www.garlic.com/~lynn/2004g.html#38 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#55 The WIZ Processor
https://www.garlic.com/~lynn/2004h.html#10 Possibly stupid question for you IBM mainframers... :-)
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004l.html#22 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#54 No visible activity
https://www.garlic.com/~lynn/2004l.html#67 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#68 Lock-free algorithms
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
https://www.garlic.com/~lynn/2004m.html#36 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#42 Auditors and systems programmers
https://www.garlic.com/~lynn/2004m.html#49 EAL5
https://www.garlic.com/~lynn/2004m.html#50 EAL5
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2004m.html#63 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004n.html#39 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#5 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#19 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#38 Facilities "owned" by MVS
https://www.garlic.com/~lynn/2004o.html#39 Facilities "owned" by MVS
https://www.garlic.com/~lynn/2004o.html#40 Facilities "owned" by MVS
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 13:24:41 -0700
"Tom Linden" writes:
the person assigned to head up BCS was from corporate hdqtrs (up at boeing field) and there were some possible turf issues ... since BCS was to absorb the significantly larger renton data center operation (as well as some number of other data center operations).
there was a story i was told about a certain famous salesman for whom boeing was generating computer orders as fast as he could write them up ... whether or not he actually knew what was being ordered. supposedly this is what prompted the switch from straight commission to the sales quota system (since the short-term straight commission on the boeing sales was, by itself, quite large). the salesman supposedly left shortly after the sales quota system was put in place and founded a large computer service company.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 13:36:02 -0700
Janne Blomqvist writes:
the kernels running on 3033 ... set up a special page table pointing at pages above the 16mbyte line ... and either 1) directly accessed the data thru that mapping and/or 2) copied the contents of the 4k page to a 4k slot in the <16mbyte area.
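a minimal sketch of option 2 above, the "bounce" copy of a 4k page from a frame above the 16mbyte line to a slot below the line where 24-bit addressing can reach it (in c, purely illustrative ... the real code was 370 assembler, and the layout here is made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define PAGE      4096u
    #define LINE_16MB (16u * 1024u * 1024u)  /* limit of 24-bit addressing */

    int main(void)
    {
        /* stand-in for 32mbytes of 3033 real storage */
        uint8_t *real = malloc(2u * LINE_16MB);
        if (real == NULL) return 1;

        uint8_t *high_page = real + LINE_16MB + 5u * PAGE; /* frame above the line */
        uint8_t *low_slot  = real + 3u * PAGE;             /* 4k slot below the line */

        memset(high_page, 0xA5, PAGE);      /* data living above 16mbytes */
        memcpy(low_slot, high_page, PAGE);  /* copy it below the line */

        printf("staged at offset %#lx (below 16mb: %s)\n",
               (unsigned long)(low_slot - real),
               (unsigned long)(low_slot - real) < LINE_16MB ? "yes" : "no");
        free(real);
        return 0;
    }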
as an aside, i have a relatively new dimension 8300 with 4gbytes of real storage ... on which so far i've loaded fedora FC1 and FC2 ... and both only report 3.5gbytes of real storage (while they report the full 2gbytes on a machine with 2gbytes of real storage).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JES2 NJE setup
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 18 Nov 2004 15:51:02 -0700
edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
somewhat earlier .... there was an effort to deploy an STL/Hursley offload project (hursley using stl computers offshift and stl using hursley computers offshift) using a high-speed(?) double-hop satellite link. they first brought up the link with native vnet drivers (note for customers ... they eventually managed to eliminate all the native vnet drivers ... leaving customers with only nje drivers on vnet).
then the networking manager in stl insisted on switching the double-hop satellite link to sna/nje. no successful connection was ever made. the stl networking manager had the link swapped back&forth a number of times between native vnet drivers and sna/nje drivers .... with the native vnet drivers always working w/o a problem and the sna/nje drivers never once making a connection.
his eventual conclusion was that there was a high error rate on the link and the native vnet driver simply wasn't sophisticated enuf to realize it. it turned out the actual problem was that sna/nje had a hard-coded keep-alive check whose interval was shorter than a complete round trip over a double-hop satellite link (stl up to geo-sync orbit over the us, down to an east coast earth station, up to geo-sync orbit over the atlantic and down to hursley ... and then the return). note, however, that was an unacceptable conclusion ... it was much more politically correct to conclude that the transmission was malfunctioning.
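a tiny sketch in c of the failure mode (the timeout value and names are hypothetical ... the actual sna/nje internals weren't published): if the keep-alive interval is hard coded below the link's round-trip time, the earliest possible response always arrives after the session has already been declared dead, so the connection can never come up:

    #include <stdio.h>

    #define KEEPALIVE_TIMEOUT_MS 500.0  /* hypothetical hard-coded interval */

    /* the earliest a keep-alive response can arrive is one full round
       trip after it was sent; if that exceeds the timeout, the session
       is torn down before the response can possibly get back */
    static int session_comes_up(double rtt_ms)
    {
        return rtt_ms <= KEEPALIVE_TIMEOUT_MS;
    }

    int main(void)
    {
        printf("terrestrial link (60ms rtt): %s\n",
               session_comes_up(60.0) ? "connects" : "never connects");
        printf("double-hop satellite (~940ms rtt): %s\n",
               session_comes_up(940.0) ? "connects" : "never connects");
        return 0;
    }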
for some additional/total topic drift, i got to be in the stands for
http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/archives/sts-41D.html
because hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt
was going to make use of one of the payloads.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 16:31:30 -0700
glen herrmannsfeldt writes:
hsdt used geo-sync satellites for some links (it also had fiber and microwave links).
single hop round trip is about 88k miles, double hop is twice that.
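back-of-the-envelope, that distance puts a hard floor under the latency (quick check in c; speed-of-light figure only, ignoring any equipment delay):

    #include <stdio.h>

    int main(void)
    {
        const double c = 186282.0;       /* speed of light, miles/sec */
        double single = 88000.0 / c;     /* ~0.47 sec round trip */
        double dbl    = 2.0 * single;    /* ~0.94 sec round trip */
        printf("single hop: %.2f sec, double hop: %.2f sec\n", single, dbl);
        return 0;
    }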
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 18:21:46 -0700
"del cecchi" writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 08:04:59 -0700
Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 08:56:07 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 09:27:15 -0700
... the use of one of the (hsdt) links between austin and san jose is claimed to have helped bring the rios chip set in a year early; .... (large) chip designs were being shipped out to run on the EVE and LSM logic simulators in san jose
random recent posts mentioning the logic simulators
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 20 Nov 2004 07:15:17 -0700
glen herrmannsfeldt writes:
was going to replace it about midway thru 370, possibly w/o even having to announce virtual memory for 370s.
in the early 70s, Amdahl gave a talk at mit about starting his new company and the business case he had used with the investors. there was a line about customers having already invested something like $200b in software for the architecture (this only 7-8 years after the original 360 announcement) and that even if IBM were to completely walk away from it (which could be read as a veiled reference to FS), there would still be customers running that software at least until 2000 (which also showed up as the big y2k remediation projects of the late 90s ... driving demand for programming skills on top of all the other frenzy of the late 90s).
specific blurb/reference on fs
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System
the above references the plug-compatible and clone-controller
competition that was showing up ... at the time there was some
write-up blaming a project that i had worked on as an undergraduate
https://www.garlic.com/~lynn/submain.html#360pcm
the folklore is that after FS was killed, some of the participants retreated to rochester and produced the s/38 (which morphed into the cisc as/400 and then into the current risc as/400).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relational vs network vs hierarchic databases
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sat, 20 Nov 2004 09:15:11 -0700
"Dan" writes:
the big argument that i remember going on between stl/ims (bldg. 90)
and sjr/systemr (bldg. 28, just a couple miles away):
https://www.garlic.com/~lynn/submain.html#systemr
was that there was a trade-off between the disk space consumed by the direct (physical) pointers of the 60s-era database implementations (ims; hierarchical, network, etc) and the disk space consumed by relational's indexes. the comparisons in the late '70s were that a typical systemr implementation doubled the physical disk space (to hold the indexes), which was traded off against the reduction in dba/sysadm people-time spent managing the pointers (the trade-off of physical pointers vis-a-vis physical disk space for indexes was common across the various '60s database implementations, regardless of information organization/model ... hierarchical, network, etc).
note that the target of relational at the time was a fairly straightforward "flat", single-table bank account database .... with the account number as the primary index ... not really taking into consideration infrastructures involving multiple tables, joins, etc.
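a toy sketch in c of the space trade-off (made-up record layouts, not ims or system r internals): the 60s-style record embeds a direct physical pointer that a dba has to maintain by hand, while the relational row carries no pointers but pays for a separate, system-maintained index entry per row ... for a narrow row like the bank-account example, the index roughly doubles the disk space:

    #include <stdio.h>
    #include <stdint.h>

    /* 60s style: record embeds a direct physical disk address (cchhr)
       that the dba must fix up whenever the related record moves */
    struct pointer_style_rec {
        char     acct[8];       /* account number */
        int32_t  balance_cents;
        uint32_t next_cchhr;    /* direct physical pointer */
    };

    /* relational style: plain row, no embedded pointers ... */
    struct rel_row {
        char    acct[8];
        int32_t balance_cents;
    };
    /* ... plus one index entry per row, maintained by the system
       (crudely modeled; a real b-tree adds further overhead) */
    struct index_entry {
        char     key[8];
        uint32_t row_addr;
    };

    int main(void)
    {
        size_t n = 1000000;
        size_t ptr_style  = n * sizeof(struct pointer_style_rec);
        size_t relational = n * (sizeof(struct rel_row) + sizeof(struct index_entry));
        printf("pointer style:      %zu bytes\n", ptr_style);
        printf("relational + index: %zu bytes (%.1fx the plain rows)\n",
               relational, (double)relational / (n * sizeof(struct rel_row)));
        return 0;
    }

with wider rows the per-row index overhead is proportionally smaller; presumably the late-'70s doubling figure also reflected multi-level index structure and free space on top of narrow keys.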
as an undergraduate, i had worked on an onr-funded university library
project that utilized bdam (& cics) ... which turned out to be
similar to a project going on at the same time at NIH's NLM ... using
the same technology (but at much larger scale). I had an opportunity to
look at NLM's implementation in much more detail a couple years ago
in connection with some work on UMLS ... a couple recent posts
mentioning UMLS:
https://www.garlic.com/~lynn/2004f.html#7 The Network Data Model, foundation for Relational Model
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data
There had been some work on mapping UMLS into an RDBMS implementation ... the problem was that much of UMLS is non-uniform and frequently quite anomalous ... making anything but a very gross, high-level schema extremely difficult and people-intensive .... for really complex, real-world data, the savings in operational sysadm/dba time was being traded off against the heavily front-loaded people time associated with normalization .... with the difference in physical disk-space requirements (for indexes) being ignored.
In any case, at that time, NLM was still using the original BDAM implementation (from the late '60s).
random bdam &/or cics posts:
https://www.garlic.com/~lynn/submain.html#bdam
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/