List of Archived Posts

2004 Newsgroup Postings (04/22 - 05/14)

were dumb terminals actually so dumb???
A POX on you, Dennis Ritchie!!!
Expanded Storage
Expanded Storage
Expanded Storage
A POX on you, Dennis Ritchie!!!
What is the truth ?
OT Global warming
were dumb terminals actually so dumb???
Authentification classifications
What is the truth ?
Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!
Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
were dumb terminals actually so dumb???
were dumb terminals actually so dumb???
Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
Paging query - progress
were dumb terminals actually so dumb???
RFC 3766 Determining Strengths For Public Keys Used For Exchanging Symmetric Keys
Message To America's Students: The War, The Draft, Your Future
Soft signatures
A POX on you, Dennis Ritchie!!!
Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
Relational Model and Search Engines?
Relational Model and Search Engines?
Relational Model and Search Engines?
The attack of the killer mainframes
NSF interest in Multics security
The attack of the killer mainframes
[OT] Faces of terrorism
The attack of the killer mainframes
[OT] Faces of terrorism
The attack of the killer mainframes
The attack of the killer mainframes
The attack of the killer mainframes
The attack of the killer mainframes
NSF interest in Multics security
command line switches [Re: [REALLY OT!] Overuse of symbolic
[REALLY OT!] Overuse of symbolic constants
Candle support from Los Delhi
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
security taxonomy and CVE
Infiniband - practicalities for small clusters
going w/o sleep
going w/o sleep
going w/o sleep
Data Display & Modeling
c.d.theory glossary (repost)
going w/o sleep
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
c.d.theory glossary (repost)
Is there a way to configure your web browser to use multiple

were dumb terminals actually so dumb???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: were dumb terminals actually so dumb???
Newsgroups: alt.folklore.computers
Date: Thu, 22 Apr 2004 10:11:53 -0600
mwojcik@newsguy.com (Michael Wojcik) writes:
There was (and still is) a healthy market for 3270 "screen-scraping" products, which would present 3270 and/or TN3270 terminal data in a fancier fashion or wrap an API around it for programmatic processing. IBM had provided a standard API, HLLAPI (High Level Language API), later enhanced as EHLLAPI (usually pronounced "ee-hell-appy"), for this purpose.

there was the VTAM lu2.1 for "remote" 3270s. there were also lots of local 3270s ... where the controllers were directly channel attached to the machine. you could have vtam lu2.1 327x controllers and direct channel attach 327x controllers. There were also a huge number of bisync (BSC) 327x controllers (precursor to SNA) ... and I believe there is still some amount of bisync stuff floating around.

the original was 3272 for 3277 terminals. there was then the protocol on the coax between the 327x controller and the 327x head. The original protocol on the 3272/3277 coax was called ANR. That left the keyboard control in the head of the terminal.

the later controllers were 3274 and a variety of terminals, 3278, 3279, 3290. The protocol on this coax had a lot more features (& called DFT) ... as well as moving some amount of the control logic that had been in the 3277 head back to the controller.

the 3277 had a slow cursor repeat delay and slow cursor repeat rate. it also had an annoying habit, if you were used to full-duplex and/or at least fast typing, of locking the keyboard if you happened to hit a key at the same time the system was (re)writing/updating the screen. you then had to stop and hit reset to unlock the keyboard.

there was a hardware patch inside the 3277 keyboard that allowed you to adjust the repeat delay and the repeat rate. there was also a FIFO box available for the keyboard lock problem. Inside the 3277 head, you unplugged the keyboard, plugged in the FIFO box, and plugged the keyboard into the FIFO box (closed up the head). The FIFO box would hold/delay keystrokes if it monitored that the screen was being updated.

With the 3274 DCA protocol ... all of that logic moved to the controller and it was no longer possible to modify a 3278/3279 for those human factors issues (using common shared logic in the controller reduced the cost of the individual terminal ... aka they got even dumber ... even tho they appeared to support more features).

misc. past ANR/DFT posts
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2002j.html#77 IBM 327x terminals and controllers (was Re: Itanium2 power
https://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power

when I was an undergraduate I worked on a project that built a plug-compatible terminal controller ... and have since been accused of helping start the PCM controller business:
https://www.garlic.com/~lynn/submain.html#360pcm

supposedly the whole FS project was in large part based on IBM's response/reaction to the PCM controller business:
https://www.garlic.com/~lynn/submain.html#futuresys

there were several comments that after FS was canceled some efforts still continued to make complex integration between the processor and the outlying boxes. there have been some references that this might account for the extremely complex nature of SNA and the PU4/PU5 interface (although note, there are a number of references to SNA not being a system, not being a network, and not being an architecture). random related references along those lines:
https://www.garlic.com/~lynn/subnetwork.html#3tier

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A POX on you, Dennis Ritchie!!!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A POX on you, Dennis Ritchie!!!
Newsgroups: alt.folklore.computers
Date: Thu, 22 Apr 2004 20:13:12 -0600
Rupert Pigott writes:
Old hat in the grand scheme of things. IBM did a coarse version of it with the NorthStar, but that wasn't the first time they did it. Lynn has mentioned at least one piece of big iron that did it way back when. :)

dual i-stream on 370/195 circa 74, 30 years ago. post to comp.arch earlier this year:
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?

i thot i had run across a reference to a red/black instruction bit, possibly in an ACS reference from the '60s, but can't find the specific quote at the moment. acs (IBM 1960s supercomputer project):
https://people.computing.clemson.edu/~mark/acs.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Expanded Storage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Expanded Storage
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 22 Apr 2004 20:42:50 -0600
WilliamEk@ibm-main.lst (William Ek) writes:
Expanded storage is never as good as central. Don't configure any expanded.

the story i heard was that they wanted to get larger amounts of memory attached to the 3090 than packaging would allow within the latency limits for random fetch/store accesses. nominally, expanded storage was done with normal electronic storage technology but packaged farther away, at longer latency. The longer latency to access the more physically distant expanded storage was somewhat compensated for by a wider bus that could transfer larger amounts of data per transfer. This effectively becomes a software-managed cache with 4kbyte-wide cache lines.

the downside of the implementation was that data couldn't be pushed/pulled directly to/from expanded memory and dasd backing store.

the upside for the 3090 was that expanded stor bus provided a wide-enuf data path for attaching HiPPI I/O ... recent thread mentioning HiPPI, FCS, SCI, misc.
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
and very slightly related posting on MVS-based SANs (from the early 80s):
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future

although the 3090 HiPPI implementation had to be somewhat hokey ... effectively an analogy to PEEK/POKE, with the HiPPI i/o commands going to reserved expanded storage address locations. the standard 3090 I/O interface wasn't fast enuf to support the HiPPI transfer rates, which is the reason that HiPPI found its way to the expanded storage bus.

from long ago and far way
Re: Extended vs. expanded memory just to "refresh your memory"...

"Extended memory" refers to RAM at addresses 100000-FFFFFF. Although the PCAT only permits 100000-EFFFFF.

"Expanded memory" refers to the special Intel/Lotus memory paging scheme that maps up to 8 megabytes of RAM into a single 64K window beginning at absolute address 0D0000.

"Expended memory" refers to RAM that you can't use anymore. It is the opposite of Expanded Memory.

"Intended memory" refers to RAM that you were meant to use. It is the opposite of Extended Memory.

"Appended memory" refers to RAM you've got to add to make your application run.

"Upended memory" refers to RAM chips improperly inserted.

"Depended memory" refers to ROM that you cannot live without.

"Deep-ended memory" refers to RAM that you wish you had, but don't.

"Well-tended memory" is a line from the movie "Body Heat" and is beyond the scope of this glossary.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Expanded Storage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Expanded Storage
Newsgroups: alt.folklore.computers
Date: Fri, 23 Apr 2004 09:13:57 -0600
Rupert Pigott writes:
Sounds a bit like Cray's SSD (solid state disk). Would you configure that as a kind of swap device perhaps ?

vulcan would have been an SSD ... but was canceled. 1655 was from another vendor but a special production run for internal corporate installations ... see 3350 fixed-head reference:
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future

expanded stor ... acted somewhat more like a software-managed cache with 4kbyte cache lines using a really wide bus. a synchronous instruction was used to move 4k blocks to/from expanded storage. while the instruction was synchronous, it executed in less time than a typical first-level interrupt handler in an asynchronous i/o paradigm ... and drastically less time than it takes to set up an i/o operation, perform the i/o operation, take the interrupt, perform all the asynchronous bookkeeping, etc.

an issue was that when expanded stor filled up ... there could be a need to migrate pages from expanded stor to more traditional spinning storage. this had the downside that the 4k blocks had to be dragged back thru main memory and written from there. the normal i/o bus had no way of accessing expanded stor.
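
as a rough illustration of the "software managed cache" view (my own sketch, not actual CP/VM code; the names and structure below are made up), the behavior is roughly:

    # rough sketch of expanded storage as a software-managed cache of 4k
    # pages (my own illustration, not actual code).  a synchronous page
    # move copies a 4k block between main and expanded storage; evicting
    # to DASD requires dragging the block back through main storage
    # first, since the normal i/o path can't address expanded storage.

    PAGE = 4096

    class ExpandedStorage:
        def __init__(self, frames):
            self.frames = frames          # number of 4k frames configured
            self.pages = {}               # virtual page number -> 4k block

        def page_out(self, vpn, block, dasd):
            if len(self.pages) >= self.frames:
                self._migrate_one(dasd)   # make room first
            self.pages[vpn] = block       # synchronous wide-bus move

        def page_in(self, vpn):
            return self.pages.pop(vpn)    # move back to main storage

        def _migrate_one(self, dasd):
            # eviction to disk: back through main storage, then a normal
            # (asynchronous) i/o to the paging device
            vpn, block = next(iter(self.pages.items()))
            del self.pages[vpn]
            main_storage_buffer = block   # the extra hop noted above
            dasd[vpn] = main_storage_buffer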

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

Expanded Storage

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Expanded Storage
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 23 Apr 2004 16:45:06 -0600
Peter Flass writes:
So what's the difference between expanded storage and LCS on the 360's?

LCS had longer latency and instructions ran slower ... but the bus width and use was the same as the rest of storage, just slower.

expanded storage had longer latency but a much wider bus ... and only the move page instruction worked (basically storage to storage operation). the idea was to help compensate for the longer latency by having much wider move.

expanded storage
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/2.2?SHELF=EZ2HW125&DT=19970613131822

move page instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9ar004/7.5.59?SHELF=EZ2HW125&DT=19970613131822&CASE=
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9ar004/10.20?SHELF=EZ2HW125&DT=19970613131822&CASE=

HiPPI crafted off the side of the expanded storage bus wasn't a normal installation and the programming was hokey ... reserved addresses in expanded storage were used to place i/o commands and retrieve status (i.e. the peek/poke analogy).

misc 3090 hippi refs from search engine:
http://www.scd.ucar.edu/docs/SCD_Newsletter/News_summer93/04g.big.gulp.html
http://hsi.web.cern.ch/HSI/hippi/applic/otherapp/hippilab.htm
http://www.lanl.gov/lanp/hippi-101/tsld024.htm
http://www.lanl.gov/lanp/hiphis.html
http://www.slac.stanford.edu/grp/scs/trip/European-computing-Dec1993/store.html
http://www.cacr.caltech.edu/Publications/techpubs/PAPERS/ccsf003/project.html
http://www.auggy.mlnet.com/ibm/3376c53.html

in the last ref above, there are some references to IBM HIPPI restrictions (for the 3090, i.e. i/o in multiples of 4k bytes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A POX on you, Dennis Ritchie!!!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A POX on you, Dennis Ritchie!!!
Newsgroups: alt.folklore.computers
Date: Sat, 24 Apr 2004 18:42:23 -0600
Brian Inglis writes:
Neither IBM nor the government thought anything of it. There were probably a few 100K addon interfaces as well. NASA, FAA (still running US ATC on them).

I ran into somebody in the late 90s who said he made a very excellent living selling ibm mainframe channel-attached perkin-elmer boxes to NASA in the early 80s. I commented about the lineage going back to when I was an undergraduate in the late 60s and worked on a project that reverse engineered the ibm channel interface and built a channel attach card for an Interdata3 ... as part of building a terminal controller replacement for the 2702 (which included adding support for dynamic baud rate determination ... the 2702 had preset baud rates with a specific oscillator hardwired to each line/port).

he commented that the channel attached cards on the perkin-elmer boxes in the early 80s looked as if they could have very easily been designed and originally built in the late 60s. note that the lineage involves perkin-elmer having bought interdata.

random other refs:
https://www.garlic.com/~lynn/submain.html#360pcm

About that timeframe (late 90s), I happened to visit a very large commercial datacenter that still had perkin-elmer boxes in service for handling a large number of incoming lines.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What is the truth ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is the truth ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 26 Apr 2004 14:25:47 -0600
tlast@ibm-main.lst (Todd Last) writes:
I have considered myself a MVS Systems Programmer for 15 years (I'm 39 years old) and I don't write code (however, I can read it and modify it). Most companies do not need true system programmers anymore. Most companies want a lot more out of their people. My job still involves installing the operating system (eg pre-package software installer), but I also am the person of last resort when application folks are having problems, or when security doesn't work just right, or when there are problems in the network. Companies want people that can see the big picture and solve specific business problems on the mainframe. I consider myself a mainframe architect, a project manager, a consultant, and trouble shooter as well as being a technical analyst. I think I provide value for my organization and hopefully the companies I previously worked for saw that too. A title shouldn't define yourself because you can be much more but I'm still proud to call myself a Systems Programmer.

programmers are people that tend to program ... this specific job description is more analogous to the old-time ibm system engineer ... but that was also used as a job title ... so it was somewhat pre-empted from being used as a job description. system engineers could possibly program but for the most part rarely did ... they helped the customer do all the system installation, maint. things, problem analysis & resolution, etc.

with the june 23rd, 1969, unbundling announcement, ibm started charging for some of the System Engineer time ... and shops started taking more & more of the system engineering duties in house.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

OT Global warming

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT Global warming
Newsgroups: comp.arch
Date: Mon, 26 Apr 2004 14:14:09 -0600
"Stephen Fuld" writes:
Well, I am sure there are a few, occasional examples like this, but they are "the exceptions that prove the rule". The big, overland freight runs cross country are not generally used by any commuters. The only way anything but commuter rail works economically is by the government subsidy of Amtrack. In fact, Amtrack was formed because the rail companies wanted out of long range passenger service and it would have disappeared absent government subsidy.

i remember (long ago and far away) as a kid (out west) seeing track maint crews coming thru every other year on the long stretches of the long haul lines ... doing things like pulling & replacing ties.

my uncle would pick up surplus equipment at auction ... things like perfectly good ties and jacks for his house moving business. There were two (railroad) jacks ... used for lifting rails and ties. One was cast aluminum, about 3ft high, and weighed about 65 lbs. The other was 4ft or so and weighed about 115lbs. He would have a flat solid plate welded to the jack toe ... which would bring the weight up to about 125lbs (the toe of a jack designed for lifting rails was a little small for getting good traction on large beams). I remember going out on some jobs with him and having my own 125lb jack to lug around; which weighed more than i did at the time.

for a jack handle he would use a 6ft long solid steel wrecking bar ... which sometimes I had to hang on the end of to get any decent leverage. It was slightly pointed on one end and the other end was a flat round plate that could be used for tamping.

years later at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

I would commute into north station on the b&m. several of the ties on the line were so far gone that you could stick your finger into them ... and the track speed limit was down to 10mph in places. There was also a section out near acton that was referred to as the boxcar graveyard because there were so many derailments ... even at 5mph.

I was told that east coast railroad consolidation during the 50s and 60s had characteristics not unlike the current enron situation; effectively there were large executive bonuses and stock dividends being paid in lieu of doing track maintenance. It is possible to defer track maintenance for a couple years to help make your bottom line look good ... but after a couple years the street becomes accustomed to the numbers ... and then after 20 years or more of deferred maintenance ... things start failing. 30-40 years of deferred maintenance going to executive bonuses and stock dividends builds up a significant accumulated exposure that isn't easily rectified ... effectively the infrastructure has been almost totally raided of much of its value.

Note that from an infrastructure standpoint the current road system is almost a total subsidy for the heavy trucking industry; while consumers use the roads ... the cost of designing, building, and maintaining the roads is almost totally driven by heavy trucking usage. While fuel taxes are levied across everybody ... the majority of the fuel use & total fuel taxes effectively come from consumers ... while the majority of the total costs are driven by heavy trucking usage, not consumer usage.

misc. past threads about how cars & small trucks have nearly no impact at all on road design & maint. .... it is almost totally based on the expected heavy truck usage

specific references to the cal state DOT highway road construction manual:
https://www.garlic.com/~lynn/2002j.html#41 Transportation
https://www.garlic.com/~lynn/2002j.html#42 Transportation

misc. other refs:
https://www.garlic.com/~lynn/99.html#23 Roads as Runways Was: Re: BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/2002n.html#41 Home mainframes
https://www.garlic.com/~lynn/2003i.html#17 Spam Bomb
https://www.garlic.com/~lynn/2003i.html#21 Spam Bomb
https://www.garlic.com/~lynn/2003i.html#57 TGV in the USA?
https://www.garlic.com/~lynn/2004c.html#20 Parallel programming again (Re: Intel announces "CT" aka

from the above CAL-DOT reference, one could see that besides alleviating unwanted neighborhood traffic ... the various signs (typically in residential areas) about no thru-trucks over some weight limit could be due to the streets not being built to take a significant number of 80 kN ESALs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

were dumb terminals actually so dumb???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: were dumb terminals actually so dumb???
Newsgroups: alt.folklore.computers
Date: Tue, 27 Apr 2004 08:43:41 -0600
"Bill Leary" writes:
My earliest recollection of a terminal being called "dumb" was when we started getting addressable cursors. The terminals which could not address the cursor started being called "dumb." I do not, however, recall that we called the addressable ones "intelligent." As time went on and terminals became more "feature rich" (glup) the term "intelligent" emerged and the bar on what was "dumb" started to rise. Again, using my personal experience, "dumb" got stabilized somewhere above where the terminal could address a cursor and, *perhaps*, do an "erase to end of line" but below where it could do a scroll backwards, insert line and so forth. "truly dumb" was below the addressable cursor level.

at least the local channel attach 3270s were supposedly so fast that the issue of dumb/smart didn't really enter into it (except for some niggling human factors issues). the corporate home terminal program started taking off with 300 baud and 3101 ascii terminals (although i had a home terminal starting in spring of 1970 with a 2741 and later an "upgrade" to a ti silent 700). the 3101 period was also somewhat the era of the transition from 300 to 1200 baud.

the support on the host side for 3270 emulation with 3101 block mode expanded to include a host-side representation of what was on the screen of the 3101 (and in the local 3101 memory), plus logic so that if the new screen contained significant portions of the old screen, it would move the old characters to their new position(s) and then only have to paint in the new characters (potentially significantly reducing the number of characters that needed to be transmitted).

this was significantly expanded when 3101s were upgraded to PCs ... and with some transition from 1200 baud to 2400 baud. The PC side got something like a 64kbyte cache of previously transmitted characters ... and the host could specify portions of the screen to be painted by indexing previously transmitted information in the PC's cache; and of course the actual transmission in both directions was subject to heavy compression. an operation like scrolling (up) to the previous page became really fast.
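
a minimal sketch of the host-side compare (my own illustration of the idea, not the actual support code): keep an image of what is already on the terminal and only transmit the positions that actually change.

    # keep an image of what is already on the terminal's screen and
    # transmit only the character positions that differ in the new
    # screen image (illustration only; the real support also reused
    # characters already on the screen / in the PC's cache).

    def screen_update(old_screen, new_screen):
        """return (position, character) pairs that need to be sent."""
        return [(pos, new_ch)
                for pos, (old_ch, new_ch) in enumerate(zip(old_screen, new_screen))
                if old_ch != new_ch]

    old = list("READY                ")
    new = list("READY   EDIT PROFILE ")
    print(screen_update(old, new))   # only the newly painted characters go out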

the next big step was moving from 2400 baud modems to telebit modems running at 14k ... and then with even faster baud rates as well as compression technology moving into the modems.

some topic drift ... there were big battles with the early 3270 full-screen editors over what UP and DOWN meant. they all tended to use the metaphor that the screen represented a window on a continuous "scroll" of text. The issue was somewhat whether

1) the up/down used a program centric orientation where the scroll of text was moved up or down, i.e. scrolling (the text) UP resulted in moving towards the end of the scroll/file

2) the up/down used a person-centric orientation where the window (&/or person's eye) was moved up/down, i.e. scrolling (the window) UP resulted in moving towards the start of the scroll/file.

I've had a TV settop box that exhibited some problem in this area. When viewing a program, hitting "up" moves to a larger number channel and hitting "down" moves to a smaller number channel .... however if viewing the menu of channels (& programs), hitting "up" moves to smaller number channel on the menu (top of screen) and hitting "down" moves towards larger number channel (towards bottom of the screen) ... the order of the buttons are exactly reversed ... depending on whether a program is being viewed or the menu of channels is being viewed.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Authentification classifications

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Authentification classifications
Newsgroups: alt.computer.security
Date: Tue, 27 Apr 2004 08:59:56 -0600
marc.jaeger@laposte.net (Marc Jaeger) writes:
Hi everybody,

I try currently to make a classification of every authentication methods that exist.

I have noted as authentication methods : /etc/passwd, Windows SAM file, Active Directory, NIS, PAM, Kerberos, PAP, CHAP, EAP, RADIUS, NTLM, SASL, SSL, TLS, NDS, TACACS, IPsec, ISAkmp, pki, ..., MD5, 3DES, LDAP,...


another classification/taxonomy for authentication is what does the authentication really represent ... i.e. 3-factor authentication:
1) something you know
2) something you have
3) something you are

furthermore most of the factors can either be implicit or explicit and can either utilize shared-secrets or non-shared-secrets.

this is "authentication method" with respect to the meaning of the authentication as opposed to the implementation authentication product/mechanism.

passwords then tend to be
a) something you know and b) shared-secret

it is possible to have a hardware-token implementation that only operates in a specific way when the owner inputs the correct PIN into the token. the infrastructure then infers from responses by the token

a) something you have (i.e. inferred because only the token could provide the correct response)

b) something you know (i.e. inferred because the token only works with the correct pin input)

c) non-shared-secret (i.e. what is known is only inferred from the operation of the hardware token; the server side doesn't actually have to verify what is known, only infer that it is known).
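
as a small illustration of classifying by meaning rather than by product (the field names and examples below are just mine, not from any standard):

    # classify authentication mechanisms by what they really represent
    # (3-factor taxonomy) rather than by product name -- illustration only.

    from dataclasses import dataclass

    @dataclass
    class AuthMethod:
        name: str
        factors: tuple        # subset of ("have", "know", "are")
        shared_secret: bool   # does the verifier hold the same secret?
        implicit: bool        # factors inferred from token behavior vs. presented directly

    methods = [
        AuthMethod("password",            ("know",),        True,  False),
        AuthMethod("pin-activated token", ("have", "know"), False, True),
    ]

    for m in methods:
        print(m.name, m.factors,
              "shared-secret" if m.shared_secret else "non-shared-secret")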

lots of posts about what does a server domain name certificate in SSL really represent:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

misc. stuff on radius & kerberos:
https://www.garlic.com/~lynn/subpubkey.html#radius
https://www.garlic.com/~lynn/subpubkey.html#kerberos

some stuff about identity, authentication, and privacy:
https://www.garlic.com/~lynn/subpubkey.html#privacy

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What is the truth ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What is the truth ?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 27 Apr 2004 13:36:13 -0600
many of the smaller & desktop systems lump all the maint activity in with sysadm ... rather than sysprog; but they tend not to have a large enuf operation to justify large staffs with specialized job descriptions.

besides the unbundling of 6/23/69 ... transferring a lot of system engineering duties (typically maint & support activities, handled by sysadms on other kinds of platforms) to in-house staff ... there was OCO (object code only) starting to become a hot topic (something like ten years after unbundling). a lot of the source that a mainframe sysprog might be expected to deal with ... just dried up.

sysadms are expected to perform some amount of script programming (or in some instances JCL programming) ... script as in an interpreted language ... as opposed to the original program by that name that supported markup language

random past posts mentioning OCO.
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2002c.html#4 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2002p.html#3 IBM OS source code
https://www.garlic.com/~lynn/2002p.html#7 myths about Multics
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004d.html#19 REXX still going strong after 25 years

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!
Newsgroups: netscape.public.mozilla.general
Date: Tue, 27 Apr 2004 13:53:22 -0600
Will Dormann writes:
No problems here. Did you remove any previous version of Mozilla first, and make sure that the target directory was empty before installing the new version?

I really like the tab feature .... being able to queue up 50-100 windows and then go thru reading them w/o network delays. prior to 1.7 ... such activity could chew up 500mbytes of virtual memory.

starting with the first 1.7 release candidate, I found that with the linux version the memory hog problem has cleared up. I now have a bookmark folder with 50+ websites that I can tab open ... and then read at my leisure ... w/o severe virtual memory consumption.

however, whatever the problem is, it still seems to be there with the windows version. I open the same bookmark folder on windows ... and it chews up nearly 200mbytes. these are all daily news related web sites with only modest graphics usage ... which means it is consuming an avg of 4mbytes per tab ... for stuff that I would expect to avg. less than 100k/tab.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre-relational, post-relational, 1968 CODASYL "Survey of Data
 Base Systems"
Newsgroups: comp.databases,comp.databases.theory,comp.lang.mumps
Date: Tue, 27 Apr 2004 14:38:57 -0600
"Laconic2" writes:
Given that MUMPS and UNIX were both originally developed on DEC gear, and that DEC-10s were heavily represented in the early ARPANET, the influence of DEC on computing is larger than many people realize.

part of the issue was that the ibm mainline business somewhat walked away from the educational & interactive market ... cutting way back on their traditional deep educational discounts as well as adopting a very strong commercial batch orientation. during the latter half of the 60s and the 70s it was a very successful business approach.

some past posts mentioning not winning project mac & bell labs in the 60s (specific mention in the 2nd to last paragraph of this post):
https://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb. 1964

the science center:
https://www.garlic.com/~lynn/subtopic.html#545tech

did build a virtual memory, interactive oriented system that saw quite a bit of success, even at some number of universities, and had some arpanet nodes. however, it was also used extensively inside the corporation (as well as at a large number of other corporations).

the internal network ... also done at the cambridge science center ... was larger than the arpanet/internet thruout most of its lifetime, until sometime in mid-85. This was in part because the arpanet was a relatively traditional homogeneous network concept (although packet based rather than circuit based). It wasn't until the great switchover of 1/1/83 that the arpanet/internet got internetworking protocol and gateways. By contrast, the internal network basically had gateway-like function built into every node. As a result, the internal network was much more an internetworking implementation all thru the 70s (which simplified the task of adding additional nodes). somewhat related posting on the subject:
https://www.garlic.com/~lynn/aadsm17.htm#17

reference to the 1000th node on the internal network the summer of 83:
https://www.garlic.com/~lynn/internet.htm#22

not long after the arpanet conversion to internetworking protocol, which enabled it to go past 255 nodes (i.e. at the time the arpanet reached 255 nodes, the internal network was around 800 nodes).

the other thing that enabled the internet to exceed the number of nodes on the internal network by the summer of '85 was the proliferation of workstations and PCs as network nodes (while the internal network continued to be predominantly mainframes).

To bring it slightly back on topic: this platform developed at the science center, in addition to providing the basis for most of the internal corporate business and network platforms (as well as being a fairly successful product offering to customers), was also the platform used at SJR to develop the original relational database and SQL. misc posts on the subject:
https://www.garlic.com/~lynn/submain.html#systemr

... and it eventually became a product offering with the system/r tech transfer from SJR to Endicott for SQL/DS. there was then tech transfer from Endicott back to STL for what became DB2 (note that SJR and STL were only something like ten miles apart).

SJR was in bldg. 28 on the main plant site ... now Hitachi. However, by the mid-80s, research had already moved up the hill to what is now referred to as ARC.

STL was originally going to be called the Coyote Lab .... following some convention of naming after the nearest post office. However, the week before the STL opening ceremonies, there was a group of ladies (from san francisco) that demonstrated on the steps of congress. This prompted changing the lab's name.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

were dumb terminals actually so dumb???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: were dumb terminals actually so dumb???
Newsgroups: alt.folklore.computers
Date: Wed, 28 Apr 2004 08:37:03 -0600
mwojcik@newsguy.com (Michael Wojcik) writes:
I agree. I had those listed out of chronological order in my previous post, and while I described "3270 on a card" as mainly a 1980s phenomenon (as I remember it), I didn't give any date for SNA-and- Token-Ring in PCs - I don't recall seeing that before the '90s.

my wife is one of the co-inventors on a patent for a token/loop lan architecture (Loop Configured Data Transmission Systems; '78 in the US and '79 in europe) and (at least) both the Series 1 chat ring and token ring came after that ('80s).

a quick pass with a search engine comes up with
http://www.cs.bham.ac.uk/~gkt/Teaching/SEM335/token/history.html

which has Zurich research publishing a specification for token-ring in '81 and the IEEE 802.5 token ring standard published in '83

course it also mentions the OSI model beginning in '77. recent post about ISO and ANSI in the late '80s not allowing standards that violate OSI; aka

1) internet violates OSI because the internetworking layer doesn't exist in OSI (sitting in a mythical place between the bottom of layer 4/transport and the top of layer 3/networking)

2) LANs violate OSI because the LAN MAC interface is actually someplace in the middle of layer 3/networking (aka the LAN MAC is higher than the top of layer 2, because it performs some functions that are part of networking/layer 3)

3) the HSP (high speed protocol) work item was rejected by ANSI X3S3.3 (the US group responsible for OSI layer 3 & layer 4 standards work) because it violated the rule that standards must not violate OSI. it

a) violated the rule because it went directly from the top of layer 4/transport to the LAN MAC interface, bypassing the layer 3/4 interface

b) violated the rule because it went directly to the LAN MAC interface (something that doesn't exist in OSI).

misc. comments about OSI, GOSIP (as late as '92, a federal gov. mandate that all networks be converted to OSI), and other things:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
a very recent posting in comp.database.theory:
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
and a crypto mailing list
https://www.garlic.com/~lynn/aadsm17.htm#17 PKI International Consortium

the following references token-ring "since its introduction in 1985"
http://www.reuters.com/article/pressRelease/idUS152235+12-Feb-2009+MW20090212
which would have been 4mbit token-ring.

we got a lot of heat for a presentation we did where we first introduced the invention of 3-tier architecture using ethernet as the example (from both the SAA crowd and the token-ring crowd). The SAA crowd was somewhat trying to put the client/server genie back in the bottle (or at least a straight jacket) and some in the token-ring crowd were claiming 1mbit thruput for typical ethernet (which may have been based on early 3mbit enet w/o listen-before-transmit) compared to 2-3mbit thruput for typical 4mbit token-ring. various refs:
https://www.garlic.com/~lynn/subnetwork.html#3tier

There was a paper in the '86(?) ACM sigcomm proceedings (somewhere in boxes) showing effective thruput of 80-95 percent of physical media (8mbit-9.5mbit) for typical enet deployments. the new ARC building had been totally wired with CAT4 ... for token-ring ... but by 1990, the people in the ARC comp center had shown higher thruput over CAT4 for 10mbit enet than for 16mbit token-ring in typical configurations.

a real quick pass with a search engine isn't turning up any dates for the bisync 3272 controller or sdlc. remember the protocol between the mainframe and the controller (either channel attach or via some networking interface with bisync or sdlc) was different than the protocol over the coax between the controller and the 327x terminal. I did turn up some page that said that the history of EDI (electronic data interchange) standards was closely related to bisync.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

were dumb terminals actually so dumb???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: were dumb terminals actually so dumb???
Newsgroups: alt.folklore.computers
Date: Wed, 28 Apr 2004 11:57:18 -0600
Brian Inglis writes:
ISTR the desktop Silent 700s being termed intelligent. They had dual data cassette tape program and data storage that could be up-/downloaded under host control and run key-to-tape programs with validation offline. I don't remember smart or intelligent being applied to video terminals with cursor addressing or any level of control character or escape sequence support.

the two vague recollections I have of some reference to "dumb terminal" basically had to do with various forms of screen-scraping & terminal scripting

1) my brother was an apple-ii salesman and then became a regional apple marketing rep. i was getting a monthly archived tape dump from tymshare of all the vmshare conference files and making them available on various internal machines as well as on the HONE complex. I talked to him about using an apple-ii to log in, emulate a terminal, "download" updated/changed vmshare files to the apple-ii, and then upload them to an ibm mainframe. vmshare archives:
http://vm.marist.edu/~vmshare/
misc. HONE refs:
https://www.garlic.com/~lynn/subtopic.html
note that the same share organization
http://www.share.org/

that sponsored vmshare on tymshare ... later sponsored a parallel pcshare conference ... which then would also get dumped to my monthly tape.

2) HSL's parasite/story. the internal network was typically referred to as VNET. There was an additional facility, called PVM, that emulated virtual 3270s over the network. You could have a 3270 terminal that a) talked directly to the local PVM ... which would then provide simulated, direct channel attach 3270 support to a local or remote machine, or b) ran a program under CMS that talked full-screen to your 3270 and interacted with PVM to talk to either the local system or remote systems. parasite/story used the programmed interface to PVM to provide various terminal scripting facilities .... akin to the later HLLAPI stuff that ran on a PC. misc. past parasite posts:
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2003i.html#73 Computer resources, past, present, and future
https://www.garlic.com/~lynn/2003j.html#24 Red Phosphor Terminal?

it wasn't so much a case of a dumb real terminal vis-a-vis a smart real terminal ... but a (dumb) real terminal vis-a-vis a (smart, programmable) emulated terminal

total topic drift into commercial time-sharing service bureaus (including tymshare):
https://www.garlic.com/~lynn/submain.html#timeshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
Newsgroups: comp.databases,comp.databases.theory,comp.lang.mumps
Date: Wed, 28 Apr 2004 12:17:56 -0600
and slight drift from pre-rdbms to pre-sql ... the whole ramis/nomad/focus 4th gen genre ... specific tale:
http://www.decosta.com/Nomad/tales/history.htm

note that the system/r & sql stuff was done on the platform developed at the science center ... previous post
https://www.garlic.com/~lynn/2004e.html#12

the whole ramis/nomad/focus genre was done on the same platform ... but at various time-sharing service bureaus (that happened to be using the same platform for their service delivery). various past ramis/nomad/focus related posts:
https://www.garlic.com/~lynn/2002i.html#64 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#69 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2003d.html#15 CA-RAMIS
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003m.html#33 MAD Programming Language
https://www.garlic.com/~lynn/2003n.html#12 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL

for total topic drift ... the science center had also originated the whole markup language thing ... originally called GML (the letters are the initials of three of the people at the science center). Both "G" and "L" transferred out to san jose ... and I remember "L" working on early BLOBs on R* (r-star, follow-on to system/r).

one of the big arguments that i remember from the pre-relational dbms people in stl ... was the physical disk space requirements. The earlier databases had physical pointers ... which relational replaced with indexes. The indexes tended to double the physical space requirements (as well as increasing processing overhead) vis-a-vis the databases with direct physical pointers. The indexes, however, reduced the manual maint. involved in maintaining the physical pointers. The issue then was somewhat the disk space & processing overhead vis-a-vis the manual maint overhead. As disk space & processing became cheaper, the trade-off tended to shift towards optimizing people effort as opposed to optimizing hardware.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Paging query - progress

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Paging query - progress
Newsgroups: bit.listserv.ibm-main
Date: Wed, 28 Apr 2004 14:53:51 -0600
rhawkins@ibm-main.lst (Ron and Jenny Hawkins) writes:
Just down the road from Chris's place, we had a 3084Q with 64MB and 700 TSO users doing DPR around 1000/sec, and Swap-in rate of 1400/sec. Don't recall the total page rate, but we were running around 8-9 swap volumes and 10 page volumes on 3380 to get this rate. TSO trivial was 0.3 seconds (85% of transactions) which was pretty good.

swap wasn't swap in the traditional sense ... it was big pages. the issue was that over a 15 year period the relative system performance of disks had declined by a factor of 5-10 (aka processor and memory had increased by a factor of 50, but disks had only increased by a factor of 5-10). However, with the 3380s, the transfer rate had increased by a larger factor than the disk arm thruput ... the objective was therefore to trade off data rate (aka possibly doing unnecessary transfers) against the number of arm accesses.

"swap" was formated for 10-4k "big pages" that filled a 3380 track. a group of 10 pages at a time were collected for page out ... and whenever there was a page fault for any page in a big-page happened, the whole big-page/track was brought in. Compared to doing 4k page at a time .... there was a tendency to do unnecessary transfers ... and it somewhat increased the real-storage usage. However, it was basically trading off real memory resources and transfer rate resources to decrease the total number of arm accesses.

The other feature was that big pages used a moving-cursor type of algorithm. Space was typically allocated at ten times the expected usage (again trading off significantly increased disk space against arm motion). The arm would sweep across the allocated area, with big pages always being written to the next available cursor position ... which tended to be sparsely populated, so there was little or no motion typically needed for a write (same cylinder or possibly the next cylinder). Reads tended to be from the trailing edge of the cursor ... and any time a big-page was read, it was de-allocated. This also tended to drive up the page-write data rate ... since a traditional paging infrastructure might leave a page allocated even after it was read ... so that, on the off-chance it was selected for replacement but hadn't been changed, it wouldn't require a page-write (since an identical copy was still on disk). In the big page scenario, a replaced page always had to be written, whether or not it had been changed during its most recent stay in memory.

Big-pages might possibly increase the real memory requirements by 20-30 percent because, out of the group of ten pages being transferred, not all of them might actually be needed by the program. The trade-off was transferring the 5-8 pages that would possibly be needed in one arm motion ... against unnecessarily transferring the 2-5 pages that might not be needed ... aka a possible 30 percent increase in real storage and data transferred against a possible 5:1 reduction in the number of 3380 arm accesses.

So big pages using 3380s might have an actually higher paging rate compared to the same configuration using a single-page implementation on a hundred or so fixed-head 2305s (which wasn't really economically or physically practical) because:

1) it might unnecessarily be bringing in some pages in a ten-page group that weren't actually needed

2) there would be more real storage contention, since the bringing in of unnecessary pages would also be forcing out other pages

3) there was a strict deallocation algorithm for all pages read, forcing all replaced pages to be (re)written .... even if they hadn't been changed during their most recent stay in memory.
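
a rough sketch of the big-page mechanics described above (my own reconstruction, not the actual MVS code; details simplified):

    # 4k pages are collected into 10-page "big pages" that fill a 3380
    # track; a moving cursor sweeps a sparsely allocated area so writes
    # need almost no arm motion, and any big page that is read back in
    # is immediately deallocated (so it always has to be rewritten).

    PAGES_PER_TRACK = 10

    class BigPageArea:
        def __init__(self, tracks):
            self.tracks = [None] * tracks   # allocated ~10x expected usage
            self.cursor = 0                 # next write position
            self.outgoing = []              # pages collected for page-out

        def page_out(self, vpn, block):
            self.outgoing.append((vpn, block))
            if len(self.outgoing) == PAGES_PER_TRACK:
                self._write_track()

        def _write_track(self):
            # cursor only sweeps forward; the area is sparse, so the next
            # free track is normally on the same or the next cylinder
            while self.tracks[self.cursor] is not None:
                self.cursor = (self.cursor + 1) % len(self.tracks)
            self.tracks[self.cursor] = self.outgoing
            self.outgoing = []

        def page_in(self, vpn):
            # a fault on any member drags in the whole track, then frees it
            for i, track in enumerate(self.tracks):
                if track and any(v == vpn for v, _ in track):
                    self.tracks[i] = None   # deallocate on read
                    return dict(track)      # all ten pages come back in
            raise KeyError(vpn)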

when I first pointed out the enormous decline in relative system disk thruput, (san jose) GPD assigned the disk performance group to refute the assertion. after a couple months of study ... they came back and said that I had actually somewhat understated the problem because of the performance degradation caused by RPS-miss in heavily loaded configurations. The study eventually turned into a SHARE (63) presentation on things to do to help disk thruput. specific past post with a couple quotes from the presentation:
https://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)

random other past postings on the subject:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

were dumb terminals actually so dumb???

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: were dumb terminals actually so dumb???
Newsgroups: alt.folklore.computers
Date: Wed, 28 Apr 2004 16:07:18 -0600
Giles Todd writes:
Is this the one?

http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-88-4.pdf


yep, "Measured Capacity of Ethernet: Myths and Reality" in proceedings of ACM SIGCOMM, 8/16-19, 1988, V18N4

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

RFC 3766 Determining Strengths For Public Keys Used For Exchanging Symmetric Keys

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject:  RFC 3766 Determining Strengths For Public Keys Used For Exchanging Symmetric Keys
Newsgroups: sci.crypt
Date: Thu, 29 Apr 2004 15:23:03 -0600
finally out as rfc3766 and a best current practice;

summary entry at
https://www.garlic.com/~lynn/rfcidx12.htm#3766
clicking on ".txt=nnn" field in the summary retrieves the actual RFC


        BCP 86
        RFC 3766
        Title:      Determining Strengths For Public Keys Used
                    For Exchanging Symmetric Keys
        Author(s):  H. Orman, P. Hoffman
        Status:     Best Current Practice
        Date:       April 2004
        Mailbox:    hilarie at purplestreak.com, paul.hoffman at vpnc.org
        Pages:      23
        Characters: 55939
        Updates/Obsoletes/SeeAlso:    None
        I-D Tag:    draft-orman-public-key-lengths-08.txt
        URL:        ftp://ftp.rfc-editor.org/in-notes/rfc3766.txt

Implementors of systems that use public key cryptography to exchange
symmetric keys need to make the public keys resistant to some
predetermined level of attack.  That level of attack resistance is the
strength of the system, and the symmetric keys that are exchanged must
be at least as strong as the system strength requirements.  The three
quantities, system strength, symmetric key strength, and public key
strength, must be consistently matched for any network protocol usage.
While it is fairly easy to express the system strength requirements in
terms of a symmetric key length and to choose a cipher that has a key
length equal to or exceeding that requirement, it is harder to choose
a public key that has a cryptographic strength meeting a symmetric key
strength requirement.  This document explains how to determine the
length of an asymmetric key as a function of a symmetric key strength
requirement.  Some rules of thumb for estimating equivalent resistance
to large-scale attacks on various algorithms are given.  The document
also addresses how changing the sizes of the underlying large integers
(moduli, group sizes, exponents, and so on) changes the time to use
the algorithms for key exchange.
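
a back-of-the-envelope version of the kind of estimate the RFC formalizes (this is just the asymptotic number field sieve cost with the o(1) term ignored; the RFC works the constants more carefully, so treat these numbers as rough):

    # symmetric-equivalent strength of an RSA/DH modulus from the
    # asymptotic NFS cost, exp((64/9)**(1/3) * (ln n)**(1/3) *
    # (ln ln n)**(2/3)), with the o(1) term ignored -- rough only.

    import math

    def nfs_equivalent_bits(modulus_bits):
        ln_n = modulus_bits * math.log(2)
        work = math.exp((64 / 9) ** (1 / 3)
                        * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))
        return math.log2(work)

    for k in (1024, 2048, 3072):
        print(k, round(nfs_equivalent_bits(k)))   # roughly 87, 117, 139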

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Message To America's Students: The War, The Draft, Your Future

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Message To America's Students: The War, The Draft, Your Future
Newsgroups: alt.folklore.computers
Date: Fri, 30 Apr 2004 12:09:05 -0600
"William" writes:
Estimated federal taxes: Before tax cuts: $97 After tax cuts: $97

for some topic drift:
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

mentions the variable mortgage rates almost doing in citibank and the FIRREA solution to the savings & loan crisis, which also carries an obligation of $100k for every person ... quote from the above reference:
To date the last reported dollars I have seen for each one of us to perform our refunding of the banks and S&L's exceeds 100K per person. Whether you like it or not, in a rather benign interest rate environment you will pay over 100K in your lifetime of taxpayer dollars to pay for this bailout. The dollars are so high that they are carried as an off balance sheet number so as not to capsize the US budget or cause attention. At one point they exceeded $ 1 trillion. This is what I mean when I say that ALL of the moneys gained by individuals in the asset appreciation (real estate) of the 70' & the 80's went in one pocket and the pay-out of the costs for the S&L industry came out of the other. The result - a zero, if not negative, sum game. The horrifying part of all of this was that it happened over a very benign interest rate cycle. Institutions were toast overnight because of a short term rate spike. Today prevention and anticipation are the order of the day and the keys to good regulations.

and for even a lot more drift ...

my wife's father was in an engineers combat group ... which went ahead of the tanks, clearing the way. towards the end, he frequently was the highest ranking officer into enemy territory and managed to accumulate a collection of very high ranking officers' daggers (as part of surrenders). not as pleasant, he also managed to liberate some camps. as a reward(?), he got assigned to nanking ... where the family had to be airlifted out when the city was ringed (my wife's mother did have stories of the three sisters and dinners with missy-mo):
http://www.wesleyancollege.edu/firstforwomen/soong/
http://chinese-school.netfirms.com/Madame_Chiang_KS.html
http://www.time.com/time/poy2000/archive/1937.html?cnn=yes

... one loved money, one loved power, one loved china

during desert storm, my daughter's husband was in the engineers and was behind enemy lines 3 days before the tanks came thru ... also clearing the way. he commented that they were roaring around in a bradley with only a 50cal ... and numerous times would spot russian tanks in the distance.

the line is something like first are special forces (or marine recon, depending on who you talk to), then the engineers, and finally the tanks.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Soft signatures

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Soft signatures
Newsgroups: sci.crypt
Date: Sun, 02 May 2004 21:58:24 -0600
Nomen Nescio writes:
I had an idea the other day for "soft" signatures. These are digital signatures which aren't completely binding. Maybe you'd like to make a signed statement with a large degree of certainty about who you were, but to leave open a small possibility that it might have been forged.

part of the problem is that there is some semantic ambiguity between the concept of signatures and the term "digital signatures".

a typical digital signature starts out with computing a secure hash of the message, "encrypting" the secure hash, and transmitting the message along with the encrypted hash. The recipient then recomputes the secure hash of the message, decrypts the transmitted version and compares the two values. If there is any bit difference, then either 1) the message has been tampered with in some way or 2) there was some problem in encrypt/decrypt.

the definition of secure hash typically is such that it is hard to tell the amount of mismatch ... just that there is some mismatch.
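
... a minimal sketch of that flow (python; the choice of the "cryptography" package, RSA keys and sha256 here is purely illustrative and not from the above ... just to show hash-then-sign on one side and recompute-then-compare on the other):

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding, utils

message = b"example message"

# sender: compute the secure hash of the message, then sign ("encrypt") that hash
digest = hashlib.sha256(message).digest()
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(digest, padding.PKCS1v15(),
                             utils.Prehashed(hashes.SHA256()))

# recipient: recompute the secure hash and compare it against the transmitted,
# "decrypted" value; any bit difference means tampering or an encrypt/decrypt problem
recomputed = hashlib.sha256(message).digest()
try:
    private_key.public_key().verify(signature, recomputed, padding.PKCS1v15(),
                                    utils.Prehashed(hashes.SHA256()))
    print("hash values match ... message integrity and something-you-have verified")
except InvalidSignature:
    print("mismatch ... tampered message or encrypt/decrypt problem")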

The encrypt/decrypt part (not the message tampering angle) is a part of three-factor authentication ... aka
1) something you have
2) something you know
3) something you are


... now, a digital signature typically just represents that you (in some way) "have" the corresponding private key. There is some hope that the private key hasn't been replicated and you are actually the only person with that private key (aka something you have typically presumes that you uniquely possess the object).

now, this "digital signature" still just represents message integrity and something you have authentication .... not really anything about things like non-repudiation and/or forging agreement.

Getting from this technology (that just happens to have a label containing the character string "signature") ... to things like a real Signature ... carries with it things like intent and non-repudiation. For some time there were arguments that if a certificate contained the bit flag called non-repudiation ... any message validated with the certificate's public key couldn't be repudiated. A relying party just needed to find some certificate authority willing to issue a certificate with your public key and the non-repudiation flag in it. There have been lots of past threads about what extra operations are needed at the time a signature is created to even begin to address non-repudiation issues. various are referenced in the following collection on client and radius authentication:
https://www.garlic.com/~lynn/subpubkey.html#radius
and identity, authentication, and privacy:
https://www.garlic.com/~lynn/subpubkey.html#privacy

So one of the issues that I raised at:
http://www.nist.gov/public_affairs/confpage/new040412.htm

was the issue of using private keys both for authentication as well as for agreeing to some business process (like authorizing some operation, demonstrating agreement and/or intent like in real signatures). the conflict is that there can be challenge/response authentication mechanisms that send you random data; you sign the random data and send it back to authenticate who you are. The problem is that you aren't "signing" the data (in the sense that people sign documents) ... you are just establishing something you have "authentication". The issue is that if you ever use a private key as part of "signing" random data for something you have authentication purposes, the random data might actually not be random ... it could actually be some valid transaction. the solution might be that before you use your private key to ever sign anything, you first combine the message (before doing the secure hash) with a couple kilobytes of legal text explicitly stating what kind of signature operation you believe you are performing. Part of the issue is that there may not just be one way to forge something. Note that this kind of forging is akin to phishing ... convincing you to do something w/o you really realizing what it was you were doing.
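
... a minimal sketch of that combine-with-explicit-purpose-text idea (python; the prefix strings and function names are invented for illustration):

import hashlib

# two distinct, explicit "purpose" prefixes so that bits signed for
# authentication can never be confused with bits signed as agreement
AUTH_PREFIX = b"AUTHENTICATION CHALLENGE ONLY - NOT AN AGREEMENT: "
INTENT_PREFIX = b"I HAVE READ AND AGREE TO THE FOLLOWING DOCUMENT: "

def auth_digest(challenge: bytes) -> bytes:
    # hash that gets signed for something-you-have authentication
    return hashlib.sha256(AUTH_PREFIX + challenge).digest()

def agreement_digest(document: bytes) -> bytes:
    # hash that gets signed when human intent/agreement is actually meant
    return hashlib.sha256(INTENT_PREFIX + document).digest()

since the two cases hash different prefixed inputs, bits signed in a challenge/response authentication exchange can't later be passed off as the hash of some transaction that was supposedly agreed to.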

The real issue is that "digital signature" technology is being used for a number of different business purposes. There is simple something you have authentication. However, attempting to move upstream to higher value operations like human signatures that carry with them the implication of intent, agrees, approves, and/or authorizes can cause significant problems.

Using the same technology for simple authentication as well as intent, agrees, approves, and/or authorizes can create a fundamentally flawed environment. Basically, the use of digital signatures in pure authentication operations already implies the use of signatures for operations that are specifically nonbinding; aka the digital signature is purely to prove who i am and doesn't include any signature concept that i'm agreeing to anything.

... at the conference, there were also a number of references to "naked public keys" ... i.e. another way of describing aads.
https://www.garlic.com/~lynn/x959.html#aads

the issue is that most people observe that public keys are running around armored in something called (x.509) certificates. The certificates aren't actually to armor the public keys but to armor the business relationship between the public keys and some amount of other information.

For a long time, businesses have used fairly armored processes, where the information and actual operations of any value were carried on in an armored environment ... the armoring designed to protect both the information and the validity of the operations.

It was observed that there was the possibility of performing some number of low/no value operations in the wild ... outside of traditional business controls and in an offline environment w/o direct recourse to the real business operation. The result was somewhat the creation of armored certificates that contain a subset of information (that might be found in an armored business process environment) ... where some amount of the information and a public key were armored in a "certificate". These armored certificates would allow some number of low/no value operations to proceed in the wild outside of normal prudent business controls.

One of the issues that cropped up in the mid-90s was the severe privacy and liability problems with the overabundance of information in an x.509 identity certificate ... and the retrenching to a severely truncated relying-party-only certificate ... misc. past references:
https://www.garlic.com/~lynn/subpubkey.html#rpo

the truncated armored binding between information and a public key in a relying-party-only certificate might be as little as a public key and an account number.

so the scenario goes that an armored business operation creates an armored relying-party-only certificate (a drastic subset of the information present inside the armored business operation) that is allowed to float around in the wild. this relying-party-only certificate (containing a drastic subset of information that the relying-party already has) is periodically sent back to the relying-party as part of some business operation that will occur within the confines of a traditional armored business environment (containing a superset of the information contained in the armored relying-party-only certificate).

the repeated assertion is that in such scenarios the existence of an armored relying-party-only certificate floating around in the wild, AND being transmitted back to the relying-party, is redundant and superfluous to the relying-party's business operation.

the observation is that the AADS scenario ... where the public key is contained in the armored business operation along with lots of other information ... can be done without the protective certificate armoring that is necessary when the public key and its related information are allowed to float around in the wild.

the actual issue is that such a repository inside normal, prudent business operations isn't actually a naked environment ... it is typically protected as required not only for the actual information but also for the actual business processes that go on in such an environment.

It is only when such information is allowed out in the wild ... that the protective armoring (say analogous to space suits) is required to provide integrity of the public key bound to specific information.

One of the issues exemplified by the relying-party-only certificate case is that operations of any value also need to occur within an armored/protected environment. Given that such an environment is used for business operations, having separate armoring for a small restricted subset of the business information is redundant and superfluous (like it typically isn't necessary to wear a space suit on the surface of the earth).

The certificate armoring is specifically targeted at providing support for low/no value operations occurring in the wild, outside the bounds of normal business process operations.

So the AADS public keys aren't really naked inside normal business operations .... they just don't need to have the space suit protective armoring that is necessary for existing in a wild and hostile environment. To some extent this is akin to some science fiction movie where people who have only familiarity with airless planets are dismayed at the thought of earth inhabitants going around in the open on the surface w/o space suits. It isn't that they are naked, it is just that they are operating in a less hostile environment. Furthermore, since there is some presumption that a valid business process will be occurring in that environment, there should be other safeguards in place making certificate armoring redundant and superfluous.

so to slightly wander back in the direction of the original topic ... the current armoring of a public key and information in a certificate doesn't even include the demonstration that you are in agreement with what is in the certificate. current certificate-based infrastructures don't have you signing the contents of the certificate ... and then the certification authority signing the composite including your signature of agreement.

this issue has to do with whether or not the contents of a certificate can create any legal obligations for you. If I'm agreeing to some sort of legal obligation, i'm typically required to sign &/or initial every document. In the existing certificate-based paradigm, there is a

1) message
2) digital signature
3) certificate

the only thing that I've actually signed is the message, I haven't signed the certificate ... so it isn't impossible that a different certificate (with my public key) could be substituted. The current environment doesn't have the key owner signing either the certificate and/or the choice of certificate that is included with any message.

and as already discussed, with the possibility of private keys being used to sign both random data (in purely authentication scenarios) and possibly legal documents (as in real signatures demonstrating intent, agrees, approves, and/or authorizes), there is significant ambiguity about the meaning of any digital signature.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A POX on you, Dennis Ritchie!!!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A POX on you, Dennis Ritchie!!!
Newsgroups: alt.folklore.computers
Date: Mon, 03 May 2004 07:57:12 -0600
Morten Reistad writes:
For someone doing office work I understand that a PC is a necessary tool. But that is only around 30% of all jobs. The example of bartender came up. That is a "wet space" in terms of environmental hazards. A wonderful place for an industrial-control front end. It could run KDE/gnome etc to look nice; and have a browser for the boring moments.

if you look at any serving establishment ... a large percentage have PCs with touch screens for entering the order and generating the bill, even the bar. there is some application ... i think called squirrel ... that seems to be the dominant player in the market (i think i remember seeing "squirrel" and the squirrel logo on a number of otherwise idle screens for taking&entering orders). given that the squirrel logo looks to be done with old-fashioned letters ... it is possible that it is still DOS-based (w/o windows).

they seem to have a number of authentication mechanisms ... some large number seem to have cardkeys ... like credit card sized door badges and/or a PIN-code ... that the bar/wait/serve person uses for entering the order. at least the majority of eating establishments that i've been in recently have had them ... including the attached bar ... and the bartender uses it just like everybody else. depending on the situation, it may also double as the cash register. for large bar/eating establishments, there seem to be a number of dedicated order entry stations ... but in the bars, it may also be combined with a cash register.

part of the issue is that with the volumes, it becomes a commodity-priced item that is a standardized unit & price choice for a lot of other applications. I believe the card-swipe point of sale terminals ... from a programming standpoint ... look like a pc/xt with a 1200 (or 2400) baud modem (and some old version of dos). they've just been physically packaged in a different form factor.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
Newsgroups: comp.databases,comp.databases.theory
Date: Mon, 03 May 2004 08:16:52 -0600
"Ken North" writes:
At a database conference in 1998, I heard that IBM was actually experiencing an increase in the number of IMS licensees. Companies were bringing in consultants and veteran developers for Y2K. Perhaps they were also using them to work through an application development backlog on some legacy hardware.

IMS may prove to be as venerable as the C-47 (DC3) aircraft.


back to the previous post ... the physical (some number of IMS developers in stl/bldg 90) vs. rdbms (systemr in sjr/bldg 28) trade-off argument was admin/support overhead for changes to structure vis-a-vis disk space requirements and overhead for maintaining indexes (although rdbms still have significant manual care&feeding). if you've matured to a relatively stable application feature environment ... there may not be a whole lot of re-org required ... significantly mitigating the admin and manual effort issue.

the c47/dc3 analogy somewhat suggests out-of-the-way, low-volume operations. some number of the IMS &/or VSAM (non-rdbms) operations just keep getting moved to larger and larger machines. some of the largest infrastructures have large numbers of the largest configured, maxed-out mainframes using it for critical business operations. these continue to represent significant revenue flow ... especially with all the stuff about processing-unit based pricing.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relational Model and Search Engines?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relational Model and Search Engines?
Newsgroups: comp.databases,comp.databases.theory
Date: Mon, 03 May 2004 09:56:57 -0600
Leandro Guimarães Faria Corsetti Dutra writes:
Main memory is just a hype word for big cache, little reliability. It is just an implementation trick that's orthogonal to the database model.

I've seen some numbers for main memory databases being up to 10-100 times faster than a fully cached RDBMS. the issue is that these main memory databases have gone (back) to direct pointers ... but instead of being physical disk pointers ... they are direct memory addresses ... while the RDBMS ... even with the database completely cached in memory ... is still threading thru indexes.

there was possibly an intermediate step in the 90s when some object databases played games with direct pointers that were swizzled into direct memory pointers when things were resident in memory.
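
... a tiny sketch of the difference (python; the record layout and names are invented for illustration and aren't from any of the products mentioned):

class Record:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.next_rec = None        # swizzled: a direct in-memory reference

index = {}                          # the rdbms-style lookup: key -> record

a = Record(1, "first")
b = Record(2, "second")
a.next_rec = b                      # direct pointer, no index traversal
index[1], index[2] = a, b

via_pointer = a.next_rec.value      # one dereference per hop
via_index = index[2].value          # an index/hash lookup per hop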

the issue for reliability more has to do with the transaction and commit boundaries and how they handle updates (being main memory doesn't preclude various journalling of changes to disk). somewhat to paraphrase, reliability/integrity can be an implementation trick that is orthogonal to the database model. the main memory can be initialized at start up from a combination of files on disk (say large scale striped array) and a journal. The issue of journal recovery time then is a performance issue ... but if you can perform updates 100 times faster, then possibly you can recover a journal a 100 times faster.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relational Model and Search Engines?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relational Model and Search Engines?
Newsgroups: comp.databases,comp.databases.theory
Date: Mon, 03 May 2004 13:32:50 -0600
Nick Landsberg writes:
For a 30 GB database, this is still on the order of 10+ minutes to initialize even if there were no journal files to read (graceful shutdown as opposed to crash.) This is limited by the throughput of the array because, even with an array, there is a physical limit on how fast you can get the data off the disks. (1-2 ms. clock-time per logical disk read - measured on a large array).

lets say i initialize from some checkpointed image ... then recovery is replaying the journal from the checkpointed image's cursor up to the current entry. the size of the journal is proportional to the update rate since the last checkpoint (and could be independent of database size; for some databases it could be trivial over an extended period of time).

lets say we build a striped array that, when read sequentially, saturates a 30mbyte/sec i/o interface. 20+ years ago, i demonstrated single-disk sequential recovery sequences that would do 15 tracks in 15 revolutions (effectively achieving very near the disk media transfer rate of 3mbyte/sec; in that situation the i/o bus rate and the disk transfer rate were the same) ... and be able to do multiple in parallel on different channels. so say a single striped array @30mbytes/sec recovers 30gbytes in approx. 1000 seconds or 17 minutes. spread across two such i/o interfaces that would be cut to about 8.5 minutes, and spread across four such i/o interfaces it drops to a little over four minutes.
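
... the arithmetic as a quick check (python; the 30gbyte and 30mbyte/sec figures are just the ones assumed above):

db_bytes = 30 * 10**9              # 30gbyte memory image
interface_rate = 30 * 10**6        # one striped array saturating 30mbytes/sec

for interfaces in (1, 2, 4):
    seconds = db_bytes / (interface_rate * interfaces)
    print(f"{interfaces} interface(s): {seconds:.0f} seconds, {seconds / 60:.1f} minutes")

# 1 interface(s): 1000 seconds, 16.7 minutes
# 2 interface(s): 500 seconds, 8.3 minutes
# 4 interface(s): 250 seconds, 4.2 minutes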

The problem now is that you are getting into operating system restart times. Given that you are doing 30gbyte image recovery ... it is too bad that you can't go a little further and do something like the laptop suspend operations that write all of memory to a protected disk location for instant restart. With a trivial amount more I/O, checkpoint all of physical memory for "instant on" system recovery.

The backup image can be done like some of the hot disk database backups. Since you aren't otherwise doing a lot of disk i/o ... and you probably have at least ten times the disk space in order to get enuf disk arms ... you could checkpoint versions to disk with journal cursors for fuzzy image states. Frequency could possibly be dictated by the trade-off between the overhead of doing more frequent checkpoints vis-a-vis having to process more journal records ... as well as projected MTBF. Five-nines availability allows 5 minutes downtime per year. At four minutes recovery ... that says you get one outage per year. For really high availability ... you go to replicated operations. Recovery of a failed node is then slightly more complicated since it has to recover the memory image, the journal and then the journal entries done by the other processor.

With replicated systems, there is then some issue of whether you can get by with two 30mbyte/sec transfer arrays per system for an 8min system recovery time ... since the other system would mask the downtime. Each system would have a two-array 30mbyte/sec transfer configuration rather than a single system having four 30mbyte/sec transfer arrays.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relational Model and Search Engines?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relational Model and Search Engines?
Newsgroups: comp.databases,comp.databases.theory
Date: Mon, 03 May 2004 18:37:53 -0600
Leandro Guimarães Faria Corsetti Dutra writes:
So, it is basically for throwaway data. I was hoping for something more interesting.

the loss of the last 10 transactions isn't any different than other DBMS ... if it hasn't been written to the log/journal as part of commit ... and the system goes down ... then the transactions haven't been done.

i would guess that almost any high end system in existence would likely have ten or more received transactions that are actively in the process of execution & uncommitted at any point of time ... and are conceivably lost if the system crashes at any particular moment.

these days it isn't actually quite that bad ... there are frequently other components in the infrastructure that will redrive transactions that time-out w/o acking (i.e. indicating complete & committed).

the log/journal/commit scenario isn't any different for disk-based DBMS than it is for real-memory DBMS.

all the DBMS tend to have a backup, they all tend to have home-record location (except for some versioning DBMS), and they all tend to have commits that involve recovery with log/journal.

the difference between disk-based and memory-based DBMS is that disk-based have the home record location on disk and memory-based DBMS have the home record location in memory (totally eliminating the caching process).

in the hot-backup scenario ... disk-based DBMS can do hot-backup with slightly fuzzy image corrected with appropriate journal entries. this typically involves doing disk-to-disk copy while the system is running live.

the memory-based DBMS similarly can do hot-backup ... except it is directly from memory to disk ... and may have some of the same fuzzy copy issues that a disk-based hot-backup has ... and is also made consistent with journal entries.

disk-based DBMS may expect to only infrequently resort to recovery from a hot-backup ... however the memory-based DBMS is expecting to do recovery from its hot-backup every time the system is rebooted and/or the dbms restarted.

since the memory-based DBMS is 1) expecting to make more frequent use of its hot-backup (vis-a-vis a disk-based DBMS), 2) the disks are otherwise idle, and 3) hot-backup doesn't represent contention with transactions accessing the disks ... it is likely to perform more frequent hot-backups.

some of the technology issues are that memories have gotten large enuf to contain what used to be large databases ... and raid/striping disk technology has gotten fast enuf that hot-backups and recovery are a reasonably containable process.
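
... a minimal sketch of that restart path (python; the checkpoint/journal file formats and field names are invented for illustration): load the most recent checkpointed memory image and then replay the committed journal records written since the checkpoint cursor:

import json

def restart(checkpoint_path: str, journal_path: str) -> dict:
    # load the most recent checkpointed memory image (possibly fuzzy)
    with open(checkpoint_path) as f:
        table = json.load(f)
    # replay committed journal records written since the checkpoint cursor
    with open(journal_path) as f:
        for line in f:
            rec = json.loads(line)
            if rec["committed"]:
                table[rec["key"]] = rec["value"]
    return table          # home-record location is now back in memory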

the memory size increases also represent other types of paradigm shift opportunities. a long-time, extremely large, widely used, query-only database implementation has been around for 40-some years (there are updates that happen every couple of weeks, but those changes are batch processed).

about ten years ago, i looked at it and realized that there was a paradigm shift available. the core information could now fit in real memory and the query results could be calculated in real time. over the previous 40 years, the core information couldn't be contained in real storage ... so the process was to precalculate all possible answers and store them in a disk database. The resulting database of all possible answers was extremely large and couldn't fit in even today's real memory. However, I realized that with technology changes, it was now possible to load the core information into real memory and it would take less CPU time to calculate the answer in real time than the CPU time to read the disk record with the precalculated answer from a (disk) database (or even look it up in a cache).

As it happens, they couldn't actually practically precalculate all possible answers, they had to leave out a large number of cases ... however, it was actually possible to calculate in real-time the answer for every possible query (given the core data was available in memory). The issue of not being able to answer every possible query was a long-term recognized limitation of the original solution.

I used a fast, sequential memory load ... which for this particular operation could be optimized and startup/recovery could be done in a few seconds. Answers that didn't come back within very short prescribed time were automagically redriven ... masking failure/recovery scenarios.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Tue, 04 May 2004 21:23:45 -0600
rmyers@rustuck.com (Robert Myers) writes:
On the face of it, virtualization would only seem to make things worse, since it adds another layer in which things can go wrong. The saving grace, as far as I know, is that the extra layer can also be a backstop. I don't see how a virtualization layer can prevent problems from happening, but I do see how it might be helpful or even decisive in keeping the system from being brought down.

in the '60s the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

did cp/67 as a virtual machine operating system which did a relatively faithful implementation of the hardware ... so that operating systems believing they were running on the real hardware ... could actually run in a virtual machine under cp/67.

over the years, various hardware assists (microcode) were added to the mainframes to assist in thruput when running in a virtual machine. the descendant of cp/67 is still around today as z/vm ... and a couple of years ago was clocked at running >40,000 linux virtual machines on a single hardware platform. many of the virtual machine hardware performance assists are now also used for something called LPARs (logical partitions) ... where a drastic subset of the virtual machine facility implemented totally in the microcode provides virtual machines (ala logical partitions) w/o any VM software kernel requirement. A large percentage of customer mainframe installations now operate with LPARs for the traditional batch operating system work ... even if they don't have the (software) vm operating system installed.

another effort in the '80s for another type of virtual machine was the PC/RT VRM. The original ROMP/801 was targeted as an office products replacement for the displaywriter. When that got canceled, they looked at repositioning it in the unix workstation market segment. They hired the company that had done the AT&T port for PC/IX to do one for what was to be the PC/RT. However, they had all these PL.8 programmers that had been on the displaywriter project. These were retargeted to implementing an abstract virtual machine interface (the VRM), and the outside company was told to build the unix port to the abstract machine definition (as opposed to the real machine definition). This resulted in a long term problem with device drivers ... since special, non-(unix)-standard device drivers had to be built to the VRM interface ... and then VRM device drivers also had to be built.

in general a virtualization layer can provide partitioning for both failure isolation as well as security isolation. part of this is KISS ... in that the virtualization layer can be simpler and therefore less prone to integrity and security problems ... while the more complex stuff is kept partitioned. the concept of smaller/simpler partitions is somewhat analogous to dual core chips, hyperthreading, or even windowing; aka take a simpler paradigm and replicate it to get scale-up as opposed to trying to build a single, much more complex, all-encompassing whole.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

NSF interest in Multics security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NSF interest in Multics security
Newsgroups: alt.os.multics
Date: Tue, 04 May 2004 22:07:30 -0600
Christopher Browne writes:
It seems to me that the last ten years of CPU work has been pretty much at odds with this, where RISCy designs pretty much eliminate the use of rings or segments.

i've claimed that the original 801/risc (early to mid-70s; i was at a presentation on 801, CP.r and PL.8 in the mid-70s) was, in part, something of a backlash/reaction to the failed/canceled FS project
https://www.garlic.com/~lynn/submain.html#futuresys
which was at the opposite extreme of hardware complexity.

the original 801 hardware had stated goals of trading software complexity off for hardware simplicity. there were absolutely no security/protection domains in the hardware ... protection/security was specifically going to be provided by a closed, proprietary software infrastructure implemented in PL.8.

the design points of the 801 hardware represent that design philosophy. The virtual memory infrastructure had 32-bit addressing with 28-bit (256mbyte) segments and a 4-bit segment number (16 segments). The 16 segments were implemented in 16 segment registers that carried 12bit "identifiers". The hardware supported inverted table virtual memory ... i.e. the 12bit segment identifier plus the 16bit page number (with a 12bit byte offset within 4096-byte pages) were used to find a specific real page in real memory.
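
... a minimal sketch of that translation (python; the data structures are invented for illustration and aren't the actual romp hardware or table formats): 4-bit segment register select, 12-bit segment id, 16-bit page number, 12-bit byte offset, with an inverted table mapping (segment id, page) to a real page frame:

segment_regs = [0] * 16                  # each holds a 12-bit segment identifier
inverted_table = {}                      # (segment_id, virtual_page) -> real frame

def translate(vaddr: int) -> int:
    seg_index = (vaddr >> 28) & 0xF      # which of the 16 segment registers
    page      = (vaddr >> 12) & 0xFFFF   # 16-bit page number within the segment
    offset    =  vaddr        & 0xFFF    # byte offset within a 4096-byte page
    seg_id    = segment_regs[seg_index]  # 12-bit identifier from the register
    frame     = inverted_table[(seg_id, page)]   # page fault if absent
    return (frame << 12) | offset

# example: segment register 1 maps segment id 0x123; page 5 lives in frame 42
segment_regs[1] = 0x123
inverted_table[(0x123, 5)] = 42
real = translate((1 << 28) | (5 << 12) | 0x10)   # -> frame 42, offset 0x10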

An issue was that common virtual memory object environments tend to have a lot larger number of different virtual memory objects simultaneously in the same address space. The compromise was that all security and integrity were attributes of the compiler and the loader/binder. After the loader had determined that the loaded code was valid ... it was allowed to run in an environment that had no other protection domains. The statement was that inline code could as easily change segment register values as application code could change address registers or any general purpose register value. All code that passed the loader/binder was by definition trusted code.

The standard model does high-overhead protection checking at the time a virtual memory object is introduced into the address space ... so such introductions tend to be infrequent ... and once introduced the objects tend to stay around for extended periods of time. The 801 trade-off was to move all the protection checking to the responsibility of the compiler, the runtime libraries (effectively also the compiler) and the loader/binder that validated the code being loaded.

When the proprietary, closed operating system effort was canceled and the 801/romp chip was retargeted to a unix paradigm ... a more traditional kernel/application boundary protection paradigm had to be adopted and access to the virtual memory segment registers made privileged. a very recent post in comp.arch about some of the 801 (original risc) & romp issues:
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
and lots of other posts on the subject:
https://www.garlic.com/~lynn/subtopic.html#801

for something that was done more along the lines of partitioning, security and integrity ... there was the gnosis/keykos stuff. recent posting with respect to gnosis/keykos
https://www.garlic.com/~lynn/aadsm17.htm#31 Payment system and security conference
and the derivative "extremely reliable" eros operating system

i wasn't on the 5th floor of 545tech sq from 70-77, i was on the 4th floor during those years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Wed, 05 May 2004 14:04:49 -0600
Eric writes:
I have difficulty believing claims that this is somehow easier, cheaper, more reliable or maintainable. This just strikes me as the nice sounding wishful thinking that developers often convince themselves of before a project. (I'm not calling it outright bunk, but I would want to see the actual code first.)

there are two parts of virtualization, partitioning and sharing;

for the most part, the LPAR implementation provides straight partitioning (for devices); different virtual machines/LPARs are provided "dedicated devices". The basic I/O channel (aka bus) architecture is extremely regular and the only thing that the LPAR needs to do is virtualize the I/O channel architecture ... it doesn't have to actually directly concern itself with device-specific characteristics since they are handled by the operating system running in the virtual machine ... driving the dedicated device.

this isn't too hard to believe since a lot of the I/O channel architecture is implemented in the processor microcode in any case. The LPAR support is also processor microcode in effectively the same microcode layer. So rather than viewing the LPAR support as duplicated effort for implementing (this level of) virtualized I/O ... it is more like an add-on feature to the basic machine implementation.

The VM software operating system provides some amount of device driver virtualization and support for most commonly used devices ... allowing sharing and/or partitioning within a device. In the case of disks, it is somewhat analogous to logical volume support ... but done on the other side of a virtual machine layer rather than as a bump on the side of a kernel device driver. IBM had huge RAS requirements for disks with a lot of controller microcode that could recover from all sorts of soft errors and report all kinds of hard error characteristics. The software had to have lots & lots of error recovery as well as error recording and recovery characteristics. The error recording and reporting infrastructure includes a bunch of stuff about predictive hard failures and pre-emptive maintenance scheduling.

During the 80s & 90s ... most of the vendor mainframe ports of Unix were deployed at customer shops running under VM; not so much that the Unix software couldn't drive the devices ... but nobody wanted to write the enormous amount of RAS code for unix that was already in VM. The hardware vendor field support wouldn't support the hardware unless the software RAS support was running. The straight-forward port of Unix to do device drivers and running programs was one thing ... but there was an enormous amount of RAS stuff that just didn't exist in Unix implementations.

One example I give of the difference in cultures ... was that I had done this implementation for virtualized channel extenders over telco links. Part of the design was reporting certain kinds of telco transmission errors as a specific kind of channel I/O error in order to provoke specific kinds of I/O retry operations. Now there is this commercial service that gathers all the mainframe error reporting logs from the majority of the commercial customers and creates composite monthly & trend reports for processors, devices, etc.

The 3090 processor was designed to have 3-5 errors of a certain kind per year across all customers (not 3-5 errors per annum per 3090, but 3-5 errors per annum across all 3090s). After a year of 3090s being in customer shops ... the 3090 product managers got really upset because there were almost 20 such errors reported across all 3090s for the year (instead of a grand total of only 3-5). This was a serious concern and the subject of a detailed, expensive investigation. Eventually it was localized to customers running the channel extender software and unrecoverable telco transmission errors being faked as a specific kind of channel error. I was contacted ... and figured out that I could choose a different kind of faked I/O error that would essentially invoke a similar sequence of I/O recovery operations (but not mess up the 3090 processor reliability standings in this commercial service's reports).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[OT] Faces of terrorism

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Faces of terrorism
Newsgroups: alt.folklore.computers
Date: Wed, 05 May 2004 14:17:34 -0600
jmfbahciv writes:
You're wrong [emoticon thinking of Texan attitudes]. In the state I grew up (Michigan), there are regions based on The Netherlands. I can remember my folks' attitudes about people were based on the town they lived. For some reason, Drenthe was no good; there were other pecking rungs. I never figured out why. People whose roots are in Maine, appear to consider the rest of the US as a carbuncle. I would suspect that a moving work force would undermine this attitude of "ownership".

not only that, but lexis/nexis has an interesting problem when lawyers are looking up legal precedent; in effect local state laws go back to the country of the primary original immigrants ... not only may a specific state be based on some part of scandinavia ... but the state legal system will derive its precedents from scandinavian common law ... while other states have their legal systems deriving precedents from english, french, etc, common law.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Thu, 06 May 2004 13:01:16 -0600
Eric writes:
I wasn't questioning the doability (because it obviously does work), or the usefulness, under certain circumstance such as OS development.

I did want to challenge the conventional assumption that such a software project is easier, cheaper, more reliable and maintainable. I'm not just considering 'easy' devices like disks, but all devices. It looks to me intrinsically more complex, therefore higher cost, more error prone, and more difficult to maintain and enhance to achieve the same level of functionality.


in the case of CP/67 and CMS in the '60s, i would claim that CMS was made simpler and easier to understand because it only worried about user interface stuff ... and relied on a fairly structured interface to provide resource management. Furthermore, CP/67 was made simpler and easier to understand from a resource management standpoint since it didn't have to worry about a lot of user interface stuff.

Since both parts were designed for the same environment ... a lot of duplication was eliminated. However, the KISS & focus allowed each to do a better design and implementation than if the implementation had been a consolidated whole.

It isn't that it is impossible to do a better overall implementation if you do a consolidated implementation ... it's just that having two simpler parts with a firm interface increased the probability that it could be done well. Some of the differentiation from other projects that attempted to achieve KISS with partitioning was that the project didn't also have to come up with the partitioning specification ... they just used the 360 principles of operation.

For the impossible devices ... the tendency was to start out using them as a dedicated device and limit the virtualization to the I/O bus, with the really complicated device virtualization stuff in the virtual machine. There is no magic here ... just some practical benefits from simplifying and partitioning ... the place where partitioning tends to go wrong is when the partitioning specification itself is not concrete. The KISS/simplicity partitioning advantages break down when the interfaces aren't well defined & understood ... and the project still has to fall back on interproject coordination ... instead of free-wheeling independent, asynchronous operation.

Specifically with complex devices, the reason for moving their support into a virtual machine ... and keeping the KISS/simplicity bus virtualization in the lower level ... is that complex devices have a tendency to have various failure modes related to the complexity. Having that support in an isolated virtual machine has the advantage of isolating/partitioning the failures.

CP/67 support for more complex networking devices and networking in general was done that way. All of the networking support was developed in a virtual machine ... and the other (CMS) virtual machines in the complex communicated with the networking virtual machine using interface definitions that were again taken out of hardware manuals.

There have been some assertions that because of this KISS and partitioning, the resulting networking support ... was able to concentrate on pure network issues. The claim is that it then had effectively gateway support (because of the clarity of issues from KISS and partitioning) in every node from the start ... and this could, in large part, account for the fact that the internal network was larger than the internet from just about the start until sometime mid-85 ... the internet not getting gateway support until the great switchover on 1/1/83 ... misc references:
https://www.garlic.com/~lynn/internet.htm

CP/67, CMS, networking support, GML, compare&swap instruction, and numerous other things originated at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

which averaged around 35-40 people over the years. The claim could be made that the science center was so prolific because it was able to clearly partition the problems and issues.

misc. & random other refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/submain.html#360mcode

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[OT] Faces of terrorism

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Faces of terrorism
Newsgroups: alt.folklore.computers
Date: Thu, 06 May 2004 16:38:27 -0600
hawk@slytherin.ds.psu.edu (Dr. Richard E. Hawkins) writes:
I'm *very* skeptical. I've never heard this in, uh, 15 years as a lawyer. Even in the southwest, where the smaller states still rely primarily upon common law rather than legislation, it's all british common law--even though they were originally settled from Mexico.

this was from a series of meetings with mead data central's top technical people some 10+ years ago ... and the different common law legal precedent was an example issue discussed (if a lawyer was searching precedent ... you might have to know which locality he was searching from/for).

... they had hired the former vp of aix software development (austin) to be MDC vp of development and operations (there was something about the then MDC president possibly having formerly been head of the austin site) ... minor reference from the era:
http://www.att.com/news/0593/930504.ncb.html
they were also a big HYPERchannel and IBM mainframe shop.

this was before they were sold off to Reed/Elsevier

NOTE: while the following URLs are gone, the wayback machine is still your friend:
https://web.archive.org/

a quick use of a search engine (mead, lexis/nexis, common law, etc) didn't turn up a specific reference to the common law issue ... but did turn up this series titled "Online Before The Internet"
http://www.infotoday.com/searcher/jun03/ardito_bjorner.shtml
http://www.infotoday.com/searcher/oct03/CuadraWeb.shtml
http://www.infotoday.com/searcher/oct03/SummitWeb.shtml part 5 is an interview with giering & mead data central
http://www.infotoday.com/searcher/jan04/ardito_bjorner.shtml more mead data central
http://www.infotoday.com/searcher/apr04/ardito_bjorner.shtml

slightly related history; 20 years of headline news:
http://www.onlinemag.net/JanOL97/headlines.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Fri, 07 May 2004 10:29:05 -0600
Eric writes:
As I understand it, in VM the device io is virtualized by trapping the StartChannel instruction, and the VM takes over from there. However the channel program itself is not virtualized, and that is where the device control (control register reads and writes) takes place. So the drivers really are not being virtualized. To try and find an analog in a VMS/WNT async driver, it would be like trapping at the QueueIO routine call, rather than in the low level device driver, and rerouting the command block onto a different device. This would be a much easier task.

if it is just a dedicated device ... then the (i/o) channel programs have to be shadowed ... i.e. copied into real storage and the storage addresses translated from virtual to real (and the corresponding real pages pinned/locked until the real i/o completes). the shadow CCWs are then executed in place of the virtual ones. when the i/o completes, the real pages are unpinned ... and the real CCW I/O status is converted to reflect the virtual CCWs. at this level, little or no device characteristic sensitivity is needed.

if it is a simulated device ... then there is a varying degree of simulation that may be performed. Many virtual disks consist of subsets of dedicated areas on real disks. Support then is the standard shadow CCW process enhanced to convert disk record location specifications from virtual to real.
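
... a minimal sketch of the shadow CCW idea (python; the structures and helper names are invented for illustration, not the actual cp/vm control blocks):

from dataclasses import dataclass

@dataclass
class CCW:                  # one channel command word in a channel program
    opcode: int
    data_addr: int          # address as seen by the virtual machine
    count: int

def shadow_channel_program(virtual_ccws, virt_to_real, pin_page):
    shadows = []
    for ccw in virtual_ccws:
        real_addr = virt_to_real(ccw.data_addr)    # virtual -> real translation
        pin_page(real_addr & ~0xFFF)               # keep the real page fixed until i/o completes
        shadows.append(CCW(ccw.opcode, real_addr, ccw.count))
    return shadows          # these get executed in place of the virtual ones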

In the mid-70s, the science center had done a custom VM kernel for AT&T longlines ... bunch of my stuff
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
etc and some other stuff.

AT&T longlines enhanced the support with "virtual tape" ... where the kernel intercepted tape drive requests and forwarded over telco links to another system which then performed the actual physical tape operation and returned the necessary status/info.

The extreme case ... is somewhat analogous to VMware ... except more difficult. VMware is virtualizing i86 architecture on i86 real machines. There are a number of offerings that execute on i86 real machines and provide virtual mainframes ... including some of them with fairly comprehensive mainframe i/o device emulation.

some misc. URLs to various of these mainframe activities:
http://groups.yahoo.com/group/hercules-390/
http://groups.yahoo.com/group/hercules-390/archive
http://www.conmicro.cx/
https://web.archive.org/web/20240130182226/https://www.funsoft.com/
http://www.umxtech.com/index0.html

note that these might be considered somewhat more analogous to the original 360 mainframe machines from the 1960s (and numerous of the follow-on 370 mainframe machines from the 1970s), which were "microcode" implementations on a different-architecture microcode engine. For the "vertical" microcode machines (i.e. not VLIW microcode) the microcode looked somewhat like a normal machine language program and tended to avg. ten micro-instructions for every 360/370 instruction (comparable to what the current generation of i86-based mainframe emulators are running). misc. topic drift about m'code:
https://www.garlic.com/~lynn/submain.html#360mcode
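
... for flavor, a minimal sketch of the fetch/decode/dispatch loop that this style of software emulation boils down to (python; the two opcodes are invented for illustration and aren't a real 360 subset), with each emulated instruction costing some handful of host operations:

def run(memory, regs, pc):
    # memory is a flat list of ints, regs a list of register values
    while True:
        op = memory[pc]
        if op == 0x00:                      # HALT (illustrative opcode)
            return regs
        elif op == 0x01:                    # ADD r1,r2 (illustrative opcode)
            r1, r2 = memory[pc + 1], memory[pc + 2]
            regs[r1] = (regs[r1] + regs[r2]) & 0xFFFFFFFF
            pc += 3
        else:
            raise ValueError(f"unknown opcode {op:#x} at {pc}")

# run([0x01, 0, 1, 0x00], [5, 7], 0) -> [12, 7]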

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Fri, 07 May 2004 10:52:31 -0600
Eric writes:
As I understand it, in VM the device io is virtualized by trapping the StartChannel instruction, and the VM takes over from there. However the channel program itself is not virtualized, and that is where the device control (control register reads and writes) takes place. So the drivers really are not being virtualized. To try and find an analog in a VMS/WNT async driver, it would be like trapping at the QueueIO routine call, rather than in the low level device driver, and rerouting the command block onto a different device. This would be a much easier task.

... oops part II.

various HYPERchannel implementations did that. HYPERchannel had a series of adapters. The A22x adapter sat on a mainframe channel and appeared like a control unit.

There was also an A51x adapter that emulated a mainframe channel and allowed real mainframe control units to be attached to it.

The "remote device" support would intercept I/O requests fairly high up in the operating system device driver ... create a copy of the I/O channel program ... and download the translated channel program to the memory of a A51x. The A51x remote adapter would then execute the channel program emulating a real channel ... and drive the real devices.

The A51x would reflect the status back to the mainframe HYPERchannel driver, which would then translate it and emulate an i/o interrupt to the low level interrupt routine as if it was coming from a device directly attached to the local machine. This was what was involved in the channel extender activity & the 3090 RAS numbers that I mentioned in an earlier post.
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes

HYPERchannel had both local area network as well as telco WAN support. In the above, I was translating certain types of telco transmission errors into local attach channel errors. The problem was that the channels had BER objectives of 10**-21 ... while the telco fiber stuff tended to have BER in the range of only 10**-9.

Note that both LANL and NCAR had SAN type implementations starting in the early to mid 80s. They had an IBM mainframe that managed the tapes and disks. A cray on the hyperchannel network would send a network message to the IBM mainframe requesting access to some data. The IBM mainframe would stage it from tape to disk (if necessary), download some CCW commands into the memory of the A51x (which the disks were attached to) and return a network message to the cray. The cray would then send a message to the A51x requesting that the specific CCW command set be executed.

This split apart the control path from the data path ... and used the feature that one machine could load the CCW commands into the memory of the A51x for execution on behalf of another machine. Both the network activity for control requests and the disk data flow were flowing over the same HYPERchannel network/lan.

The standards activity in the 90s for HiPPI/IPI3 attempted to get equivalent feature/function specified ... loosely under the heading "3rd party" transfer ... aka the control machine could set up the I/O control commands ... but enable the I/O data flow to actually be between the device and some client machine. This is somewhat the genesis of the SAN stuff that is starting to take hold these days.

random past stuff on high speed networking, hyperchannel, (and other stuff) etc:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Fri, 07 May 2004 11:26:22 -0600
Eric writes:
As I understand it, in VM the device io is virtualized by trapping the StartChannel instruction, and the VM takes over from there. However the channel program itself is not virtualized, and that is where the device control (control register reads and writes) takes place. So the drivers really are not being virtualized. To try and find an analog in a VMS/WNT async driver, it would be like trapping at the QueueIO routine call, rather than in the low level device driver, and rerouting the command block onto a different device. This would be a much easier task.

oops, part 3 ... i believe somebody pointed out in the a.f.c. newsgroup that a virtual machine VMS implementation was done in the late 80s(?) called SecureVMS ... I think it had some sort of low level VMS that provided virtual vax machines for VMS operating systems running at a higher level. I believe it was to see if SecureVMS could get a B3(?) rating ... i.e. the virtual machine implementation provided partitioning for security.

note that cp/67 and then vm/370 provided such partitioning starting in the '60s.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The attack of the killer mainframes

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The attack of the killer mainframes
Newsgroups: comp.arch
Date: Fri, 07 May 2004 12:04:42 -0600
Joe Seigh writes:
VTAM used to simulate 3705 network controllers and simulated SNA network by trapping the SIO instruction. For DOS/VSE, SVS, MVS the simulator ran as an application with various hooks in the OS supervisor code. When VTAM was ported to VM, rather than port the simulator to a drastically different environment than the aforementioned OSes, I ported it to CMS. Fortunately, with CMS "OS" simulation a simple round robin scheduler sufficed for the MVS multi-tasking. Command handling was set up to allow use of the REXX subcommand environment so you could write scripts. There was some CMS nucon hackery to allow saving of the simulated network at a particular state.

working on the hsp protocol during the late '80s, there was some concern that a standard tcp stack out the wire was between 5k-40k instructions (depending on whose numbers you were working with) and five data copies. somebody reported that the equivalent lu6.2/vtam path was something like 140k instructions and 14 data copies ... and for large transfers the 14 data copies might possibly involve more machine cycles than the 140k instructions (along with secondary effects of flushing stuff in cache that was actively being used).

hsp was looking at doing stuff to get it down to zero data copies ... and whatever scatter/gather I/O specification might be necessary. random refs:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
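
... just to illustrate the scatter/gather direction (python, using the posix writev gather-write; purely illustrative and nothing to do with the actual hsp/xtp specifications): hand the kernel a list of buffers in one call instead of copying them into a single contiguous send buffer:

import os

header = b"HDR: len=11;"
payload = b"hello world"

fd = os.open("/tmp/xfer.out", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
try:
    # gather-write (unix-only call): the kernel picks up both buffers in one
    # operation, with no user-space copy into a single staging buffer
    written = os.writev(fd, [header, payload])
    print(f"wrote {written} bytes from 2 buffers")
finally:
    os.close(fd)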

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

NSF interest in Multics security

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NSF interest in Multics security
Newsgroups: alt.os.multics
Date: Fri, 07 May 2004 15:09:30 -0600
"Paul Green" writes:
We know how to engineer significantly more secure systems than commonly exist today. But if the experience of the last twenty years of advances in computing products teaches us anything, it is that the commercial marketplace places little value on security. Vendors and customers alike must share the responsibility for this situation. Vendors are driven to create new products that are cheaper, faster, more functional, and easier to use, because those are the attributes that customers demand. I'd even argue that the relative order I just stated is the order in which most customers would rank these attributes. Given the economic incentive to produce millions of computers that are identical, even people who are willing to pay extra for secure systems find there are none they can buy, or none they can afford, or none that interoperate with existing, insecure, systems.

there is something of an orthogonal issue ... most of the really insecure systems were effectively designed to be insecure table top, stand-alone, disconnected systems where applications had free rein to do everything that they possibly wanted to. they somewhat grew into local LAN connected, homogeneous, non-hostile environments.

they were never designed to handle the multiple objective, possibly conflicting hostile environments that a lot of the time-sharing systems were designed to handle (the system running on the 2nd floor for the science center had a combination of local MIT & BU students along with armonk/hdqtrs people doing business modeling in APL with the most sensitive and protected data in the corporation).

one could claim that the stand-alone, table-top targeted systems are only insecure from the standpoint of being connected into a far-flung network ... something that they were never originally designed to do. A lot of the home-gaming, table-top applications would likely stop working if they had to contend with an industrial strength operating system. It somewhat creates enormous ambiguity and conflict trying to use something that has a design point of allowing anything to do everything in a standalone environment while also using it for an open, potentially extremely hostile online networking environment.

I would bump it up into a meta-level about how to resolve the conflict of addressing two essentially diametrically opposing design points in a single box.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

command line switches [Re: [REALLY OT!] Overuse of symbolic

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: command line switches [Re: [REALLY OT!] Overuse of symbolic
constants]
Newsgroups: alt.folklore.computers
Date: Sat, 08 May 2004 14:13:12 -0600
Peter Flass writes:
Not quite. TSO uses keywords with arguments in parens:
ALLOC FILE(XYZ) DATASET(USER.PARMLIB) SHR

CMS has all "switches" (not so-called) following a single '(':
COPY INFILE DATA A = = B (REP OLDDATE


lots of stuff cms inherited from ctss ... some number of the ctss people went to multics on the 5th floor and others went to the science center on the 4th floor (and did cp/67 and cms).

copyfile command is (in)famous (for all its options) ... having been done new for (vm/370) cms by one of the people that had worked on CPS ... when the VM/370 development group absorbed the Boston Programming center that was on the 3rd floor (part of the effort converting cp/67 for the 360/67 to vm/370 for the new line of 370 mainframes).

the rapidly expanding vm/370 development group quickly outgrew the 3rd floor and moved out to the old SBC bldg in burlington mall (which had been vacated with the settlement of some issue with CDC).

minor past references to conversational programming system (cps) and boston programming center:
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#19 ITF on IBM 360
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
https://www.garlic.com/~lynn/2004.html#32 BASIC Language History?
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[REALLY OT!] Overuse of symbolic constants

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [REALLY OT!] Overuse of symbolic constants
Newsgroups: alt.folklore.computers
Date: Sat, 08 May 2004 19:15:11 -0600
Té Rowan writes:
It came from a guy with an Intel system and one of those newfangled 8in floppy drives. The first versions were written in PL/M. Try googling for cpm1975s.zip which contains BDOS and CCP sources dated June 1975.

see this reference, the origin of cp/m's name, 1973-
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html

from the above:
And, page 61, you can find the following sentence: "The particular example shown in Figure IV-6 resulted from execution of PLM1 on an IBM System/360 under the CP/CMS time-sharing system, using a 2741 console."

Conclusion

I can therefore affirm that the name CP/M is coming from the CP/CMS Operating System used on the IBM System/360 used at the Naval Postgraduate School of Monterey, California, in 1972, when Gary Kildall was writing the PL/M compiler to program the Intel 8008 CPU, which led him to write the Disk Operating System known as CP/M (that MS-DOS copied) (that was patterned after the commands of the Operating System of the DECsystem-10 used inside Intel), in order to have a resident DOS under which to run PL/M on one Intel MCS-8 computer system.


....

misc. past related posts:
https://www.garlic.com/~lynn/2004b.html#0 Is DOS unix?
https://www.garlic.com/~lynn/2004b.html#4 Comments wanted on an authentication protocol
https://www.garlic.com/~lynn/2004b.html#5 small bit of cp/m & cp/67 trivia from alt.folklore.computers n.g. (thread)
https://www.garlic.com/~lynn/2004b.html#33 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Candle support from Los Delhi

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Candle support from Los Delhi
Newsgroups: bit.listserv.ibm-main
Date: Sun, 09 May 2004 18:58:46 -0600
edgould@ibm-main.lst (Ed Gould) writes:
BTW, I thought that PID distributed stuff in IEBUPDTE format, not IEBCOPY unloaded format. At least, whenever I got anything from them it was in that format.

A somewhat tangential story was that someone (here in the US) wrote a PLS compiler and offered it to the public at GUIDE (in Anaheim) in the early 90's (late 80's???) and IBM shut it down really quickly. I almost got my hands on it but was 5 minutes late.


I seem to remember anaheim share being spring '92 (or fall '91?) because i did an ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp
presentation there in an aix/6000 session.

when I did the resource manager (& guinea pig for first priced SCP/kernel offering)
https://www.garlic.com/~lynn/subtopic.html#fairshare

the source & PLC distribution was in standard CMS vmfplc tape format ... but I had to do a "PID" listing tape in IEBUPDTE format. I believe that if anybody ordered microfiche source from PID ... it was actually the assembly listings ... and the process that "PID" used to generate microfiche listings required an IEBUPDTE format.

... aka "PID" could duplicate CMS vmfplc tapes for tape distribution (as easily as they could duplicate any other tape format). However, all "listing" tapes had to be in IEBUPDTE format because that was also what the microfiche listings were built from (and the microfiche generation process required IEBUPDTE format).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Wed, 12 May 2004 12:46:27 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
There is actually one way to eliminate the kernel call with no change needed to the application as such, which is to use the facility to shift most of the management from the kernel to the language library. Come back the System/360 Access Methods - and, in particular, chained scheduling using PCIs - all is forgiven :-)

I don't think that was what he meant!

In any case, all that does is remove the need to redesign the application as such - the library needs even more gutting than is needed to provide direct support for DMA. And the kernel changes are pretty pervasive, too. Conceptually, it isn't new, but IBM dropped support for chained scheduling in MVT 21.7 (if I recall), as they found it too fiendish to support. The I/O efficiency of System/370 went downhill all the way from there ....


virtual memory was on its way ... ludlow was possibly already doing the mvt hack with ccwtrans from cp/67 on the 360/67 ... getting ready for vs2/svs.

the os/360 standard was that the application (or application library) built the real channel command programs (CCWs) and did an excp/svc0 call to the kernel. the kernel did a little sanity checking on the CCWs and possibly prefixed them with a set file mask CCW (aka wouldn't allow disk commands to move the arm).

PCI (program-controlled interrupt) was an I/O interrupt flag sometimes used in long-running channel programs; it would generate a hardware interrupt indicating that the channel had executed a specific command in the sequence. this could be reflected to the application in a PCI appendage. Frequently the application PCI appendage would interpret various things and dynamically modify the running channel program ... counting on the PCI appendage getting control and being able to change the channel program before the channel execution got to the channel commands that were being modified.

so we come to mvt->svs & virtual memory ... all the application-side CCWs now had addresses with respect to a virtual address space. the EXCP/svc0 code now had to make a copy of the (virtual) ccws and substitute real addresses for all the virtual addresses (as well as pin/fix the affected pages until the i/o completed). In the PCI appendage world ... the application code would now be modifying the "virtual" ccws ... not the CCWs that were really executing.

So there were some operating system functions that ran in virtual space that still needed to do these "on-the-fly" channel program modifications ... like VTAM. One of the solutions was to run the subsystem V=R (virtual equals real) ... so that the application program CCWs could still be directly executed. Another, in the CP/67 & VM/370 world, was to define a new virtual machine signal (diagnose function) that indicated that there had been a modification to the virtual CCWs and that the copied (real) CCWs had to be modified to reflect the virtual channel program changes.

a really big problem in the MVT real memory to SVS virtual memory transition was the whole design point of the application space being allowed to build the channel programs ... in the new virtual memory environment, channel programs still ran with "real memory" addresses ... while standard processor instructions could now run with virtual addresses. Under SVS, applications still continued to build channel programs ... but they no longer could be the "real" channel programs. The EXCP/SVC0 routine had to build a copy of the virtual channel program commands, substituting real addresses for any virtual addresses.
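
a toy sketch of that copy/translate step (python; the CCW layout and helper names here are invented for illustration, this is not the actual EXCP/SVC0 code): walk the application's "virtual" CCWs, pin the pages they touch, and build shadow CCWs whose data addresses are real. the shadow copy is also exactly why a PCI appendage that modifies its own (virtual) CCWs no longer affects what the channel is actually executing.

# illustrative only -- simplified CCW layout, made-up translation function
from dataclasses import dataclass

PAGE = 4096

@dataclass
class CCW:
    command: int      # channel command code (read, write, seek, ...)
    data_addr: int    # data address (virtual in the app copy, real in the shadow)
    flags: int        # chaining/PCI/IDA flags (ignored in this sketch)
    count: int        # byte count

def translate_channel_program(vccws, pin_and_translate):
    """Return shadow CCWs with real data addresses.

    pin_and_translate(virtual_addr) -> real_addr is assumed to fix the page
    in real storage until I/O completion and return its real address.
    """
    shadow = []
    for ccw in vccws:
        real = pin_and_translate(ccw.data_addr)
        shadow.append(CCW(ccw.command, real, ccw.flags, ccw.count))
    return shadow

# trivial fake translation: pretend virtual page N maps to real page N+16
fake_pin = lambda va: (va // PAGE + 16) * PAGE + (va % PAGE)
prog = [CCW(0x07, 0x1000, 0, 6), CCW(0x06, 0x2010, 0, 80)]   # seek, read
print(translate_channel_program(prog, fake_pin))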

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Thu, 13 May 2004 09:49:16 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Grrk. Not really. All it did was allow the user to bypass the Access Methods. The restrictions on what could be done were pretty comparable for it and BSAM/BDAM/BPAM, for example.

little explanation: (warning: long, wandering post)

EXCP/SVC0 was the (os/360 I/O) call from application space to the supervisor/kernel ... whether it was actual user code or access-method/library code. essentially the access methods were library code that ran in user space, generated CCWs (with real addresses) and performed an EXCP/SVC0. as a result, application programs were pretty much free to also do anything that the access-methods did. At this level, almost everything used a pointer passing convention ("by reference" rather than "by value").
CCW -- channel command word; sequences of CCWs were channel programs
EXCP -- EXecute Channel Program


A typical disk channel program used to be:


seek BBCCHH   position disk head
search        match record information
tic *-8       branch back to search
read/write    physical address

....

BBCCHH -- B: two bytes BIN, C: two bytes cylinder, H: two bytes head.
       BIN was typically zero; it was a carry-over from the 1960s 2321 datacell,
       a device that was something like a large washing machine with bins
       positioned inside the cylinder. the bins rotated under the read/write
       head. bins had long strips of magnetic material, which the read/write
       head extracted and re-inserted from/to the bins.

search -- if the criteria was successful, it skipped the next CCW; otherwise it
       fell thru to the immediately following CCW. a typical search was for
       "identifier equal", looping, examining each record until it found a match

Processing for disks in an EXCP/SVC0 environment would have the supervisor generate a CCW sequence of
seek BBCCHH
set file mask
tic <address of user space ccws>


so rather than starting the channel program with the first user space CCW, it positioned the head and then used the "set file mask" command to limit which following commands were valid/invalid; i.e. read or write allowed, head switching allowed, diagnostic commands allowed.

normally channel programs ran asynchronously to the processor and generated an I/O interrupt when complete. It was possible to turn on the PCI-flag (program-controlled interrupt) in a CCW, which would queue a kind of soft I/O interrupt while the channel program was still running.

scatter/gather could be accomplished by having a whole sequence of search/read/write commands chained together.

within a read/write command it was possible to do scatter/gather with "data chaining" ... a sequence of two or more CCWs where it only took the command from the first CCW and the remaining CCWs were only used for address and length fields.

in the move from 360 to 370 there was starting to be a timing problem. channel programs are defined as being exactly serialized; the channel can't fetch the next CCW until the previous CCW has completed (aka no prefetching). There were starting to be scatter/gather timing problems, especially with operations that had previously been a single real read/write CCW and now had to be copied/translated into a scatter/gather sequence with non-contiguous virtual pages. The problem was that in some cases the channel wasn't able to fetch the next (data chained) CCW and extract the address before the transferring data had overrun what limited buffering there was and/or the disk head had rotated past position. IDALs (indirect data address lists) were introduced ... a flag in the CCW changed the address field from pointing directly at the target real data address to pointing at a list of data addresses. It preserved the requirement that CCWs could not be prefetched ... but allowed the channel to prefetch IDALs.
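
a small sketch of the IDAL idea (python; the constants and layout are simplified guesses for illustration, not the architected format): one CCW keeps a single count, but its address field, with the indirect-addressing flag set, points at a list of real addresses, one per page boundary crossed, so the data can land in non-contiguous real pages.

# illustrative only -- build the list of real addresses an IDAL would carry
PAGE = 4096

def build_idal(virt_start, count, v2r):
    """v2r(virtual_addr) -> real_addr translates (and is assumed to pin) a page.
    First entry starts at the exact byte offset; later entries are page-aligned."""
    idal = [v2r(virt_start)]
    next_page = (virt_start // PAGE + 1) * PAGE
    while next_page < virt_start + count:
        idal.append(v2r(next_page))
        next_page += PAGE
    return idal

# e.g. a 10000-byte transfer starting mid-page crosses several page boundaries
fake_v2r = lambda va: 0x80000 + (va % (PAGE * 64))       # made-up mapping
print([hex(a) for a in build_idal(0x2F00, 10000, fake_v2r)])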

here is google HTML translation of Share PDF file giving intro to channel programming:
http://216.239.57.104/search?q=cache:ilwHKHohAMUJ:www.share.org/proceedings/sh95/data/S2874A.PDF

it covers other features introduced over the years, things like rotational position sensing and fixed-head architecture. it has some sample disk programs ... including "EXCP to simulate BPAM".

note that later, IDALs solved another problem. The address field in the CCW is 24bits, while IDAL entries were a full 32bits. The 3033 was a 24bit addressing machine but offered an option for more than 16mbytes of real memory. The page table entry format specified a 12bit page number for 4k pages (giving 24bit addressing). However, there were two stray, unused bits in the PTE which could be scavenged and concatenated with the page number to address up to 64mbytes of real pages. This allowed multiple 24-bit virtual address spaces ... that could have pages resident in more than 16mbytes of real memory. IDALs then provided the addressing for doing I/O into/out-of real memory above the 16mbyte line. The actual CCWs and IDALs were limited to being below the 16mbyte line ... but the target data addresses could be above the 16mbyte line. This carried forward to 370-XA and the introduction of 31-bit virtual addressing. It was now possible to have 2gbyte real memories and 2gbyte virtual memories ... but (the real) CCWs and IDALs were still limited to being below the 16mbyte line.
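
quick back-of-envelope for the scavenged-PTE-bit arithmetic above (python, just the numbers from the paragraph):

# 4k pages with a 12-bit real page number cover 16mbytes; scavenging 2 more
# PTE bits gives a 14-bit real page number, i.e. 64mbytes of real storage
page = 4096
print((2**12 * page) // 2**20, "MB with 12-bit real page numbers")   # 16
print((2**14 * page) // 2**20, "MB with 12+2 bit real page numbers") # 64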

All library code running in application space, coupled with everything oriented towards pointer passing, somewhat gave rise to the later access register architecture. For various reasons, there was motivation to move various code out of the application address space (in 24bit virtual days it was beginning to consume large parts of the address space; at some installations the space available to applications was down to as little as 4mbytes out of 16mbytes; also raised was the integrity benefit of having library code out of the user address space). However, they didn't want to give up the pointer passing convention and the efficiency of direct branch calls (w/o having to go thru a kernel call). So new tables were built, control registers were set up, and a "program call" instruction invented. Program call effectively emulated the old branch&link subroutine call ... but specified an address in a different virtual address space ... under control of a protected table. So now library code is running in a different address space and it is being passed pointers from the user application address space. There also now have to be instructions (cross-memory services) where the library code can differentiate between addresses in the current virtual address space and addresses in the calling application's address space.

discussion of address types: absolute, real, virtual, primary virtual, secondary virtual, AR-specified, home virtual, logical, instruction, and effective:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.2.1?SHELF=EZ2HW125&DT=19970613131822&CASE=

translation control (and various different virtual address spaces):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.11.1?SHELF=EZ2HW125&DT=19970613131822

a little multiple virtual address space overview
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.8?SHELF=EZ2HW125&DT=19970613131822

changing address spaces
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.8.1?SHELF=EZ2HW125&DT=19970613131822

set address space control instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.33?SHELF=EZ2HW125&DT=19970613131822

program call instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

program return instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.27?SHELF=EZ2HW125&DT=19970613131822

program transfer instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.28?SHELF=EZ2HW125&DT=19970613131822

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Thu, 13 May 2004 10:32:39 -0600
glen herrmannsfeldt writes:
Can you explain the VM command SET ISAM, and the self modifying channel programs used by ISAM?

ISAM had long, wandering CCW sequences ... and all the structure of everything was out on disk (more than you really want to know).

These days, you have lots of in-core tables ... they tell you where you want to go and an i/o command is generated to read/write that record.

in ckd "dasd", there was an early convention of lots of the structure being kept on disk (in part because of really limited real storage). "search" commands could search for id equal, unequal, greater than, less than, etc. in the vtoc/PDS convention ... you might know the "identifier" of the record you want but only the general area that it might be located in. A multi-track search of "identifier-equal" is generated and turned loose to scan the whole cylinder for the specific record.

ISAM got even more complicated ... the structure on disk could hold record identifiers of the actual record you were looking for ... so you have these long wandering CCW sequences ... that search for a record with a specific match (high, low, equal) and read record identifiers for other records which are then searched for (high, low, equal).

so here is hypothetical channel program example:


seek  BBCCHH
search condition (hi, low, equal) identifier
tic *-8
read   BBCCHH-1
read   identifier-1
seek   BBCCHH-1
search (hi,low,equal)  identifier-1
read   BBCCHH-2
read   identifier-2
seek   BBCCHH-2
search (hi,low,equal) identifier-2
read/write data

so process is somewhat:

1) seek to known location
2) search for a known identifier record
3) read new location-1
4) read new identifier-1
5) seek to new location-1
6) search for new identifier-1 record
7) read new location-2
8) read new identifier-2
9) seek to new location-2
10) search for new identifier-2 record
11) read/write data

now all of the "read" operations for BBCCHHs and identifiers are addresses in the virtual address space.

for CP67/VM370 ... the CCWs as well as the BBCCHH fields are copied to real storage and the seek CCWs updated to point at the copied BBCCHH fields. The example channel program starts and eventually reads the new BBCCHH-1 into the real page backing the virtual page. The channel program then gets to the seek CCW that references the BBCCHH-1 field. However, the real seek is pointing at the copy of the original contents of the BBCCHH-1 location ... not at the virtual address location where the new BBCCHH-1 value has just been read.

So the SET ISAM option turns on additional scanning of CCW sequences (during the copy/translation process) to specifically recognize the case of a previous CCW in the chain reading a value needed as an argument by a subsequent CCW. There are also some restrictions: these have to be disks where there is a one-to-one mapping between virtual disk location and real disk location ... and there must be no integrity issues with one virtual machine being able to seek into a non-authorized area of the disk.

The example channel program could even have something at the end that read a new value into the original BBCCHH field and a "tic" instruction that branched back to the first CCW and looped the whole process all over again.

think of it as something like the register dependency checking done for out-of-order instruction execution and prefetching
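
here is a toy sketch of that kind of dependency scan (python; invented structures and names, not the real CP/67-VM/370 code): while copying/translating a channel program, flag any CCW whose argument area (e.g. the 6-byte BBCCHH a seek points at) overlaps the data area written by an earlier read CCW in the same chain. those shadow arguments can't simply be copied once up front ... they have to be (re)fetched from the virtual copy after the earlier read completes.

# illustrative only -- simplified CCW records, made-up example program
from dataclasses import dataclass

@dataclass
class CCW:
    op: str          # 'seek', 'search', 'read', 'write', 'tic'
    addr: int        # virtual address of the argument/data area
    count: int       # length of that area

def find_self_modifying(ccws):
    """Return indexes of CCWs whose argument is produced by an earlier read."""
    written = []                      # areas filled by earlier reads
    flagged = []
    for i, ccw in enumerate(ccws):
        lo, hi = ccw.addr, ccw.addr + ccw.count
        if any(lo < w_hi and w_lo < hi for (w_lo, w_hi) in written):
            flagged.append(i)
        if ccw.op == 'read':
            written.append((lo, hi))
    return flagged

prog = [
    CCW('seek',   0x1000, 6),   # seek BBCCHH
    CCW('search', 0x1010, 5),   # search identifier
    CCW('read',   0x1100, 6),   # read BBCCHH-1
    CCW('read',   0x1110, 5),   # read identifier-1
    CCW('seek',   0x1100, 6),   # seek BBCCHH-1  <- argument comes from earlier read
    CCW('search', 0x1110, 5),   # search identifier-1 <- ditto
]
print(find_self_modifying(prog))    # -> [4, 5]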

a couple months out of the university (summer of '70), i got selected to do onsite support for a customer that was trying to bring up large ISAM production under cp/67 ... and this was before any ISAM support whatsoever. This was still in the days when corporations put their datacenter on display behind glass on the first floor of tall corporate buildings. This place ran normal operation during the day and I got the machine from midnight to 7am or so. You are sitting there doing some all-night debugging and the early birds are starting to walk by on their way to work and stare at you.

then there is my soap-box that this is all from the days when there were significant constraints on memory and significant extra I/O capacity ... so there was a design point trade-off of I/O for memory (significant I/O resources were consumed to save having tables of stuff in memory). by the mid-70s, the technology had shifted to where memory was starting to be more abundant and disk I/O was the constrained resource.

random past posts related to IO/memory trade-off design points:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?

random past CKD posts:
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#19 OT?
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001.html#55 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001f.html#21 Theo Alkema
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002d.html#19 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#32 Secure Device Drivers
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002g.html#84 Questions on IBM Model 1630
https://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002q.html#25 Beyond 8+3
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003c.html#66 FBA suggestion was Re: "average" DASD Blocksize
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003o.html#47 Funny Micro$oft patent
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004d.html#43 [OT] Microsoft aggressive search plans revealed
https://www.garlic.com/~lynn/2004d.html#63 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

security taxonomy and CVE

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: security taxonomy and CVE
Newsgroups: comp.security.misc
Date: Thu, 13 May 2004 10:53:54 -0600
I was looking at the CVE entries
http://cve.mitre.org/

to see if there was any structure that I might be able to add to my security taxonomy & glossary
https://www.garlic.com/~lynn/index.html#glosnote

i didn't find any institutional vulnerability structure ... the descriptions are pretty free-form. I did do simple word & word-pair frequency counts on the descriptions

Using simple grep on CVE descriptions:


1246 mentioned remote attack or attacker
 570 mentioned denial of service
 520 mentioned buffer overflow
 105 of the buffer overflow were also denial of service
  76 of the buffer overflow were also gain root

buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#buffer

Using awk to do simple word and word-pair counts, the following are the most frequent words & word pairs in 2623 vulnerability descriptions (from a recent download of the CVE database).

the only really obvious semantic content that I observed was that people doing bad things on a local system tend to be described as "local users" and people doing bad things remotely are "remote attackers". There aren't a lot of "local attackers" or "remote users".
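
for what it's worth, a sketch of the shape of that counting (the actual counts below were done with grep/awk; this is an equivalent python sketch, and the input filename "cve-descriptions.txt" is hypothetical ... assume one free-form CVE description per line):

# illustrative only -- adjacent word-pair and single-word frequency counts
import re
from collections import Counter

words, pairs = Counter(), Counter()
with open("cve-descriptions.txt") as f:
    for line in f:
        toks = re.findall(r"[a-z0-9./+-]+", line.lower())
        words.update(toks)
        pairs.update(zip(toks, toks[1:]))          # adjacent word pairs

combined = words + Counter({" ".join(p): c for p, c in pairs.items()})
for term, n in combined.most_common(25):
    print(f"{term:25} {n:6}")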


domain                     40
get request                40
interface                  40
path                       40
administrator              41
connection                 41
connections                41
files by                   41
modifying                  41
cgi program                42
dns                        42
requests                   42
session                    42
directories                43
in an                      43
in freebsd                 43
which is                   43
environmental variable     44
hp-ux                      44
in solaris                 44
proxy                      44
systems allows             44
the default                44
to access                  44
explorer 5                 45
network                    45
servers                    45
temporary                  45
to be                      45
list                       46
memory                     46
programs                   46
use                        47
aix                        48
attack on                  48
http request               48
may                        48
uses                       48
windows 2000               48
netscape                   49
some                       49
specifying                 49
error                      50
certain                    51
get                        51
program allows             51
html                       52
line                       52
packet                     52
restrictions               52
characters                 53
mail                       53
telnet                     53
environmental              54
overwrite arbitrary        54
via shell                  54
iis                        55
string vulnerability       55
large number               56
enabled                    57
in windows                 57
passwords                  57
possibly execute           57
allow local                58
number of                  58
security                   58
sending                    58
sensitive                  58
the file                   58
program in                 60
the web                    60
local user                 61
address                    62
ip                         62
running                    62
traversal vulnerability    63
directory traversal        64
traversal                  64
control                    65
daemon                     65
freebsd                    65
the server                 65
web server                 65
format string              67
message                    67
malicious                  68
causes                     70
privileges by              70
the user                   70
windows nt                 70
package                    71
to obtain                  71
default                    72
php                        72
shell metacharacters       72
metacharacters             73
to modify                  73
packets                    74
solaris                    77
service in                 78
allow remote               79
execute commands           80
obtain                     80
permissions                80
server allows              80
via an                     80
user to                    81
access to                  82
function                   82
allows attackers           83
cisco                      83
to overwrite               85
root access                87
overwrite                  88
shell                      88
ftp                        89
port                       90
internet explorer          92
client                     94
explorer                   96
symlink attack             97
systems                    97
variable                   97
modify                     99
configuration             100
symlink                   100
format                    102
parameter                 102
to bypass                 103
password                  110
script                    111
string                    111
remote attacker           112
information               113
authentication            114
bypass                    115
code via                  116
read arbitrary            128
internet                  129
http                      130
system                    130
could allow               131
cgi                       139
arbitrary code            143
privileges via            145
url                       145
linux                     147
root privileges           151
microsoft                 156
malformed                 165
files via                 176
gain root                 183
gain privileges           187
directory                 193
attack                    195
attacker to               197
not properly              199
request                   204
commands via              212
windows                   214
vulnerability in          215
code                      218
attacker                  220
command                   222
to read                   228
web                       230
read                      243
service via               244
allow                     250
program                   251
arbitrary files           252
arbitrary commands        256
earlier allows            258
user                      273
root                      294
access                    323
vulnerability             347
to gain                   370
commands                  391
privileges                410
execute arbitrary         429
gain                      435
overflow in               494
cause                     495
files                     495
server                    497
overflow                  520
allows local              524
execute                   560
denial                    571
denial of                 571
local users               573
users to                  629
service                   676
local                     727
users                     733
arbitrary                 748
allows remote            1062
remote attackers         1134
attackers to             1228
attackers                1268
remote                   1365
allows                   1990

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Thu, 13 May 2004 15:31:06 -0600
"Stephen Fuld" writes:
IBM was really behind in the area of true, shared memory (AKA tightly coupled) multi-processors. (They were ahead in "loosely coupled" processor complexes). The Univac 1108 had true shared memory multi-processing, with each CPU capable of doing I/O, back in the mid-late 1960s. IBM didn't really address that problem until the "dyadic" processors (early 80s?).

you are possibly thinking of two different things. the standard 360 multiprocessors had shared memory but non-shared I/O channels; they achieved "shared devices" using the same technology they used for loosely-coupled (i.e. non-shared memory); aka device controllers that could attach to multiple channels. A typical tightly-coupled, shared memory configuration tended to have controllers configured so that devices appeared at the same channel address on the different processors.

there was a big distinction/deal made about multiprocessors that they could be divided and run as totally independent uniprocessors.

the exception to non-shared channels was 360/67 multiprocessor (and some of the special stuff for FAA air traffic control system). The standard 360/67 multiprocessor had a channel controller and other RAS features ... which allowed configuring memory boxes and channels ... as shared or non-shared; aka a 360/67 multiprocessor could be configured so that all processors had access to all channels.

370s continued the standard 360 multiprocessor convention of being able to partition the hardware and run as independent uniprocessors, as well as non-shared channels ... typically with controllers configured so that devices appeared at the same i/o addresses on the different processors. Later on, 370 introduced a "cheaper" multiprocessor called an "Attached" processor ... it was a second (shared-memory) processor that didn't have any channels at all.

3081 introduced the dyadic ... it was a two-processor shared memory box that couldn't be partitioned to operate as two independent uniprocessors and the channels could be configured as either shared or non-shared (something that hadn't been seen since the 360/67). The 3081 was never intended to be made available as a uniprocessor. It was however, possible to combine two 3081s into a four-processor 3084 (and 3084 could be partitioned to operate as two 3081s). Somewhere along the way ... I believe primarily for the "TPF" market ... a less expensive, "faster", single processor was made available called a 3083 (part of the issue was it couldn't be a whole lot less expensive than the 3081 ... since the 3081 didn't have a lot of fully replicated infrastructure ... so going to a single processor 3083 was still a lot more than half a two processor 3081).

The two-processor cache-machine 370s ... and carried into the 3081, ran the caches ten percent slower in multiprocessor mode than in uniprocessor mode. This was to accommodate the cross-cache chatter having to do with keeping a strongly consistent memory model. While the 3083 uniprocessor couldn't actually cut the 3081 hardware (& costs) in half ... it could run the cache nearly 15 percent faster (than the 3081 caches).

Note TPF was the follow-on to the airline control program operating system ... originally developed for airline res. systems ... but by the time of the 3081 it was also being used in a number of high transaction financial network applications. While TPF had support for loosely-coupled (non-shared memory multiprocessing ... or clustering), at the time, it didn't yet have support for tightly-coupled, shared-memory multiprocessing ... and many customers were running processors at saturation during peak loads ... and they could use all the processing cycles that they could get ahold of.

Some number of the TPF customers would run VM supporting 3081 multiprocessing and run two copies of TPF in different virtual machines (each getting a 3081 processor) and coordinate their activity with the loosely-coupled protocol support (shared i/o devices and various message passing).

somewhat aside/drift, charlie's work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
on fine-grain (kernel) locking for the cp/67 kernel running on the 360/67 multiprocessor resulted in the compare&swap instruction (CAS are charlie's initials; the name compare&swap was chosen so the mnemonic would match them) ... which first appeared in 370s over thirty years ago ... random smp posts:
https://www.garlic.com/~lynn/subtopic.html#smp
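
for anybody unfamiliar with how compare&swap gets used for fine-grain updates, here is the classic retry-loop pattern (python; this is purely an illustration of the usage pattern ... python has no CAS instruction, so the primitive is simulated with a lock, and none of this is the actual cp/67 locking code):

# illustrative only -- simulated compare-and-swap plus the retry loop built on it
import threading

class Cell:
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()   # stands in for hardware atomicity

    def compare_and_swap(self, expected, new):
        """Atomically set value to `new` iff it currently equals `expected`."""
        with self._guard:
            if self._value == expected:
                self._value = new
                return True
            return False

    def load(self):
        with self._guard:
            return self._value

def atomic_add(cell, delta):
    while True:                          # retry until nobody else interfered
        old = cell.load()
        if cell.compare_and_swap(old, old + delta):
            return old + delta

counter = Cell()
threads = [threading.Thread(target=lambda: [atomic_add(counter, 1) for _ in range(1000)])
           for _ in range(4)]
[t.start() for t in threads]; [t.join() for t in threads]
print(counter.load())                    # 4000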

numerous past postings regarding tpf, acp, &/or 3083:
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#31 Computer of the century
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
https://www.garlic.com/~lynn/2003g.html#37 Lisp Machines
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

going w/o sleep

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: going w/o sleep
Newsgroups: alt.folklore.computers
Date: Thu, 13 May 2004 15:58:15 -0600
i just saw a news thing that depriving people of sleep for up to 70 hrs was one of the tortures. in college, they used to shut down the datacenter at 8am sat. morning ... and not staff it again until 8am mon. morning; that, or they turned it over to me ... it wasn't unusual to pull a 48hr shift w/o sleep on the weekend with the datacenter all to myself and then go to monday classes ... which i guess works out to about 60hrs of torture on a normal basis?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

going w/o sleep

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: going w/o sleep
Newsgroups: alt.folklore.computers
Date: Thu, 13 May 2004 18:20:43 -0600
et472@FreeNet.Carleton.CA (Michael Black) writes:
About ten years ago, they had a movie marathon to mark a big anniversary of an independent theater. It was a marathon, though I don't think it lasted more than a day. They actually did study the participants, and the woman who sat up through all those films won a movie pass for a year.

in the early '80s there was somebody that sat in the back of my office for something like nine months ... taking notes on how i communicated; they also got copies of all my incoming & outgoing email ... and logs of all instant messages. the research report became a stanford phd thesis joint between language and ai. it was also the basis for some subsequent books. there was some statistic that i had email communication with an avg. of two hundred and seventy-some different people per week for the nine months of the study.

slightly back to computers (& movies) ... during the days of FS (future systems)
https://www.garlic.com/~lynn/submain.html#futuresys

there was a theater in central sq that was noted for running a cult film every day for the previous 15? years ... i think called queen of hearts(?) ... it was black & white about WW2 and us soldiers entering a french town that had been vacated ... it was currently populated by inmates that had escaped from the local asylum; i used to make some analogy about FS, the movie, and the inmates being in charge of the institution.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

going w/o sleep

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: going w/o sleep
Newsgroups: alt.folklore.computers
Date: Thu, 13 May 2004 20:08:00 -0600
Roland Hutchinson writes:
Now the ODD thing about MY memory is that seem to I remember it being in color, which a brief googlation reveals is wrongity-wrongity-wrong.

what's the joke about the three signs ... anyway, so i tried using a search engine:
http://www.filmforum.com/films/king.html
the above claims something about being in color? other references:
http://cinematreasures.org/theater/6561/
http://www.imdb.com/title/tt0060908/
http://www.imdb.com/title/tt0060908/plotsummary
http://www.epinions.com/content_20347915908

many mention its central sq. affiliation. a couple search engine refs even found some of my past postings attempting to remember its name & details.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Data Display & Modeling

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Data Display & Modeling
Newsgroups: comp.databases.theory
Date: Thu, 13 May 2004 22:02:07 -0600
"Dawn M. Wolthuis" writes:
I'm not the one who makes the distinction, although I try to work with the distinction when others care about it. A Date in PICK is typically handled as atomic (from my perspective) and that is why pick shops had very little work with Y2K -- they weren't splittin' that puppy apart all the time, but were working with an odd "days since Dec 31, 1967" internally stored number that simply permits the type of arithmetic one does with dates. It can be viewed as MM/DD/YYYY or in any other format, but it is not stored that way. So, the POSSREPs have components, but the type Date doesn't. It does have functions, however, so you can get the month as a logical consequence. So, is Date atomic or compound? I don't care what you call it, but don't handle it the way SQL-DBMS's or at least DBMS developers have done in the past, resulting in the Y2K maintenance costs.

note that a large part of the Y2K issues could be blamed on legacy things carried forward from tab-card-based infrastructures that conserved space in the 80-character cards by representing years as two-digit fields. the convention was carried forward when the 80byte card records were transferred to disk records ... especially with paradigms that effectively mimic the tab-card paradigm with rows and columns ... where rows represent each card and the columns represent the different fields defined in a card. There were pervasive accounting-oriented card-based systems ... with a unique card for each account and various fields associated with the account carried in columns of the 80byte card record. This paradigm has perpetuated up until now in various disguises and is still used extensively.

The drive to conserve space by representing year fields as two digits (or in some cases even a single digit, a carry-over from the tab card days) was somewhat mitigated by the rapid growth in large disk spaces and the drastic reduction in disk price per megabyte. However, it had been institutionalized by hundreds of billions of dollars worth of application software (or at least that was the original development cost) that embedded various space-conserving encoding methods.

I wouldn't say that it is specific to SQL-DBMS or other various implementations ... it is common to all deployments that perpetuate the tab-card paradigm and legacy (the legacy dates back to at least the 50s for relatively wide-spread deployment ... and has its roots with events like the 1890 census).

random reference:
http://www.computerhistory.org/exhibits/highlights/hollerith.shtml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

c.d.theory glossary (repost)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: c.d.theory glossary (repost)
Newsgroups: comp.databases.theory
Date: Thu, 13 May 2004 22:48:52 -0600
mAsterdam writes:
This glossary seeks to limit lengthy misunderstandings in comp.database.theory.

People tend to assume that words mean what they are accustomed to, and take for granted that the other posters have about the same connotations. They don't always.

Some words are particularly suspect: database, object, normalisation. Some just cause minor annoyances, the misunderstanding is cleared and the discussion goes on: domain, type, transaction.

We don't know well-accepted, formal or comprehensive definitions for everything. If you do have a useful reference, please provide it.

If an informal description is all we have, so be it.


i've done merged taxonomies & glossaries in various fields:
https://www.garlic.com/~lynn/index.html#glosnote
i.e. security, payments, financial, privacy, standards, etc.

the issue isn't so much the intercategory relationships ... those could be contorted into an rdbms-type schema ... although they frequently are many-to-many ... (say a word with multiple definitions and/or a common/same definition for different words). somewhat more difficult are the arbitrary many-to-many intracategory relationships ... aka many-to-many relationships between words ... much more of an arbitrary mesh than any sort of structured row/column representation.

there was a story about an attempt to map a relatively straight-forward repository of metadata information (another case of arbitrary many-to-many mesh structures) into an rdbms paradigm ... and it supposedly got to over 900 tables (i think before they quit).

for a little humor ... quote from article today
http://www.financetech.com/utils/www.wallstreetandtech.com/story/enews/showArticle.jhtml;jsessionid=PAYTB5SORIJ1OQSNDBCCKHQ?articleID=20300854
end of first paragraph about data, information, & knowledge.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

going w/o sleep

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: going w/o sleep
Newsgroups: alt.folklore.computers
Date: Fri, 14 May 2004 12:45:47 -0600
Roland Hutchinson writes:
Now the ODD thing about MY memory is that seem to I remember it being in color, which a brief googlation reveals is wrongity-wrongity-wrong.

ok, i've got to admit to never actually having seen the movie; i walked by that cinema a lot and during the height of FS
https://www.garlic.com/~lynn/submain.html#futuresys
remember reading an article about the movie and thinking that it would make a great theme movie for the project.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Fri, 14 May 2004 13:22:24 -0600
"Stephen Fuld" writes:
I had forgotten about the /67 MP, but were there other examples of S/360 tightly coupled MP systems? I don't remember any, but I could certainly be wrong. I guess I don't count the /67 as it was a sort of specialized machine, not part of the "mainline" S/360 line. As a historical note, the site I was at had a 4-plex of /65s running ASP (not HASP), the predecessor of JES3. It didn't work too well. :-(

there were a number of customer 360/65 smp installations ... which had shared memory and non-shared i/o channels ... but used device controllers with multiple channel connectors to "simulate" shared i/o (i.e. the devices were physically connected to multiple processors ... even if the i/o bus/channel was non-shared). os/360 smp shared-memory support had an old-time kernel/supervisor spinlock implementation ... aka multiple applications could run concurrently on the different processors but only one processor could be executing in the kernel at a time. part of the issue with shared/nonshared channels was that the channels on the 65 weren't integrated ... they were external boxes and it would have taken quite a bit more engineering to support shared channels (especially when they could achieve effectively the same results with shared devices using the same technology used for loosely-coupled/cluster operation).

360/67 uniprocessor was basically a 360/65 with a virtual memory address translation box added on. the 360/67 smp did have the engineering changes, with a brand new box referred to as the channel director which had a bunch of configuration switches ... that configured not only channels but also banks of real storage. the 360/67 smp allowed for sensing all the current settings of the channel director ... and at least one machine was built where the processors could change the configuration switches under software control ... having to do with an extremely high-availability situation (and being able to do things like partition &/or fence components).

note also that the 360/67 had both 24bit and 32bit virtual addressing modes. 32bit addressing disappeared in the transition to 370, which only had 24bit virtual addressing. the 3081 dyadic mentioned in the previous posting introduced 31bit (not 32bit) virtual addressing. and as previously noted, charlie's work on fine-grain kernel locking for cp/67 resulted in the compare&swap instruction (CAS being his initials).

immediately following the 370 ... but before the 3081 dyadic ... there was a 303x mainframe generation which had a box called a channel director. The lowend & midrange 370s tended to have integrated channels (aka the real processor engine was shared between executing microcode handling 370 instructions and microcode handling channel programs). The high-end processors had dedicated external boxes that implemented the channel functions. For the 303x line, there was an external box that supported six channels, instead of one box per channel. It was called a channel director, but didn't function like the 360/67 channel director; i.e. it only provided the support for six channels (instead of a single channel) in an external box.

some time in the past (when she was very, very young) my wife was in the JES group and the "catcher" for the transfer of ASP (from the west coast) to the JES group in g'burg. One of the first things she had to do was go thru and read the listings and write "standard" product documentation. she later got con'ed into going to pok and given responsibility for loosely-coupled (cluster) architecture.

some discussion of controls regs & other features from the 360/67 "blue card":
https://www.garlic.com/~lynn/2001.html#69 what is interrupt mask register?
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)

other misc. past postings about 360/67 "blue card"
https://www.garlic.com/~lynn/99.html#11 Old Computers
https://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
https://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2003l.html#25 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#31 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003m.html#35 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004.html#7 Dyadic

lots of past postings on cluster and misc. high availability
https://www.garlic.com/~lynn/subtopic.html#hacmp

my wife authored/invented Peer-Coupled Shared Data when she was in pok responsible for loosely-coupled architecture; misc. random refs:
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002g.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002o.html#68 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#31 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Fri, 14 May 2004 14:13:22 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Only when you are using a synchronous (single-CPU) design. On a genuinely asynchronous threaded design, you can simply queue an action for a kernel thread to execute. But I know of no current general purpose system that works that way. I shall try to see if I can catch Marc Tremblay on that ....

I worked on a 5-way SMP project (called VAMPS, which never shipped) that had the opportunity for a lot of microcode changes. this was something of a skunk-works w/o a lot of resources for redoing the kernel. so basically i took a uniprocessor vm/370 kernel and extended a lot of the microcode performance work that had been done for ecps:
https://www.garlic.com/~lynn/submain.html#mcode
as well as the dispatcher and some low-level device I/O handling.

The idea was getting enuf of the kernel pathlength optimized so that the ratio of virtual machine time to kernel time was better than 4:1 (i.e. get kernel time down to something like ten percent ... with five processors each spending well under 20 percent of their time in a serialized kernel, the aggregate kernel demand stays below the capacity of a single processor ... so a serialized kernel on a five-way wouldn't be a bottleneck).

The dispatch microcode was somewhat analogous to some of the later i432 stuff ... the dispatch mcode was fully SMP enabled ... processors not running kernel code would wait in the dispatcher for the kernel code to put something on the dispatch list. when the kernel had nothing else to do ... it entered the dispatching kernel microcode. If a virtual machine request needed kernel processing, the processor would attempt to enter the kernel; if another processor was already in the kernel, it would just queue the request and go off to the dispatcher looking for another virtual machine to run.

This custom box had some other hardware tweaks; the I/O was fully shared across all processors ... and all processors could execute kernel code ... but only one processor could be in kernel mode at a time. This was somewhat similar to the traditional kernel spinlocks of the period ... limiting the kernel to a single processor at a time ... but with the distinction that there was no spinning. In effect, if a processor couldn't get the kernel lock, rather than spinning, it would queue the request and go look for some other work (although a lot of this was actually under the covers in the microcode, rather than exposed in the kernel software). This approach sufficed as long as the total requirement for kernel processing was less than 100 percent of a single processor.

When VAMPS was killed, there was some effort underway to ship a vm/370 kernel supporting shared-memory multiprocessing. There was a prototype underway that implemented the traditional kernel spinlock methodology. I got involved with adapting the VAMPS microcode approach to kernel software ... and initially called it a kernel bounce lock rather than a spin lock.

Several hundred instructions in the kernel were SMP'ed for concurrent operation ... basically a relatively thin layer across all the interrupt interfaces and the dispatcher. On entry to the kernel, an attempt was made to obtain the kernel lock; if it couldn't be obtained ... rather than spinning, the processor queued an extremely lightweight thread request and went to the dispatcher looking for something else to do (aka rather than spinning on the kernel lock, it bounced off the kernel lock and went looking for non-kernel work to do).
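a rough sketch of the bounce-lock idea in C ... purely illustrative and not the vm/370 code; the pthread trylock, the request queue, and all the names are assumptions made up for the example:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* the single serialized "kernel" lock */
static pthread_mutex_t kernel_lock = PTHREAD_MUTEX_INITIALIZER;

/* queue of deferred lightweight kernel requests */
struct request {
    int work;                     /* stand-in for a real kernel request */
    struct request *next;
};
static struct request *pending = NULL;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void enqueue_request(int work)
{
    struct request *r = malloc(sizeof *r);
    r->work = work;
    pthread_mutex_lock(&queue_lock);
    r->next = pending;
    pending = r;
    pthread_mutex_unlock(&queue_lock);
}

/* the processor holding the kernel lock drains any queued requests
   before leaving -- the effect that tended to keep kernel code and
   data cache-warm on one processor */
static void drain_requests(void)
{
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        struct request *r = pending;
        if (r)
            pending = r->next;
        pthread_mutex_unlock(&queue_lock);
        if (!r)
            break;
        printf("kernel: handling deferred request %d\n", r->work);
        free(r);
    }
}

/* stand-in for the dispatcher finding some other virtual machine to run */
static void run_other_virtual_machine(void)
{
    /* ... non-kernel work ... */
}

/* bounce lock: try the kernel lock; on failure, queue the request and
   go back to the dispatcher instead of spinning */
static void kernel_enter(int work)
{
    if (pthread_mutex_trylock(&kernel_lock) != 0) {
        enqueue_request(work);          /* bounce off the lock ...      */
        run_other_virtual_machine();    /* ... and find non-kernel work */
        return;
    }
    printf("kernel: handling request %d\n", work);
    drain_requests();
    pthread_mutex_unlock(&kernel_lock);
}

static void *cpu(void *arg)
{
    long id = (long)arg;
    for (int i = 0; i < 3; i++)
        kernel_enter((int)(id * 10 + i));
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, cpu, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    /* in the real scheme the dispatcher would notice pending kernel
       requests; in this sketch any leftovers are drained at the end */
    if (pthread_mutex_trylock(&kernel_lock) == 0) {
        drain_requests();
        pthread_mutex_unlock(&kernel_lock);
    }
    return 0;
}

the point is that a processor losing the race for the kernel lock never busy-waits ... it queues the request and goes off to run some other virtual machine; whichever processor does hold the lock drains the queue ... which works as long as aggregate kernel demand stays under one processor's worth.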

I made some claim that this represented the optimal SMP thruput for the fewest number of code changes (in part because the time spent in the kernel was a small percentage of total elapsed time).

The first pass at a later enhancement ... extensive kernel changes implementing fine-grain locking ... actually resulted in much worse performance:

1) spin-locks and kernel serialization increased from negligible to approx. ten percent of each processor (a rough spin-lock sketch follows this list for contrast). in most configurations the bounce lock might introduce slight theoretical latency at nearly negligible processing overhead (having negligible smp kernel overhead more than offset any theoretical increase in processing latency for any specific application/virtual machine).

2) the bounce lock logic had the secondary effect that a processor currently executing in the kernel would be the same processor that dequeued pending kernel requests ... this tended to somewhat conserve cache locality ... increasing the efficiency of executing kernel code. the improvement in kernel cache-hit rates also tended to more than offset any theoretical increase in processing latency for any specific application/virtual machine. again this was dependent on managing total aggregate kernel execution to less than a single processor.
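for contrast, a rough sketch of the conventional spin-lock approach referred to in 1) above ... again purely illustrative, using gcc builtins rather than 370 compare&swap; the losing processor makes no forward progress while it waits, which is the sort of overhead that showed up as the approx. ten percent per processor:

#include <stdio.h>

/* 0 = free, 1 = held */
static int lock = 0;

static void spin_lock(int *l)
{
    /* keep retrying the atomic exchange until we observe "free";
       the losing processor just burns cycles in this loop */
    while (__atomic_exchange_n(l, 1, __ATOMIC_ACQUIRE) != 0)
        ;  /* spin */
}

static void spin_unlock(int *l)
{
    __atomic_store_n(l, 0, __ATOMIC_RELEASE);
}

int main(void)
{
    spin_lock(&lock);
    printf("in the serialized kernel section\n");
    spin_unlock(&lock);
    return 0;
}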

lots of past SMP posts ... including some number of past references to VAMPS:
https://www.garlic.com/~lynn/subtopic.html#smp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

c.d.theory glossary (repost)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: c.d.theory glossary (repost)
Newsgroups: comp.databases.theory
Date: Fri, 14 May 2004 16:47:25 -0600
mAsterdam writes:
:-) To what end? What was the stated purpose?

to what end were they doing it? ... or why did they stop?

i think they were doing it because somebody decided that the metadata should be in some sort of repository and somebody else decided that meant an RDBMS. the problems were with a huge number of arbitrary many-to-many relationships.

this is somewhat similar to nlm using an RDBMS as the repository for UMLS.
http://www.nlm.nih.gov/research/umls/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is there a way to configure your web browser to use multiple

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is there a way to configure your web browser to use multiple connections?
Newsgroups: netscape.public.mozilla.general
Date: Fri, 14 May 2004 17:06:40 -0600
"J.O. Aho" writes:
I don't think there is anything to gain from this, as 97% of all html pages are just hosted on one server, so you would get the same speed or less if you had a "html manager".

To gain speed for your browsing, would be to get a broadband connection and/or change to a less bloated operating system that don't use up CPU power on unnecessary tasks.


i had a problem with a dial-up line and wanting to view/check news items daily from specific sites. i eventually ended up with 50+ news sites in a bookmark folder that I would "tab" select ... and then go off and do something totally different for awhile. I would eventually have all 50+ sites in tabs ... which I would then start checking. for interesting references, I would tab-select them in the background. As I finished a website, I would ctrl-w/close-tab it. By the time I had waded thru all the initial 50+ websites, most of the specific news articles I had been selecting were already loaded. Other than the initial 50+ website fetch, I wasn't subjected to any latency while reading ... everything from then on was effectively immediate.

my issue was being subjected to waiting while pages I wanted to look at were loading. the initial bookmark folder load would take long enuf that it was worthwhile doing something else. From then on, any new things that I was selecting were being loaded in background tabs while i was reading other material.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/


previous, index - home