List of Archived Posts

2005 Newsgroup Postings (02/12 - 03/16)

[Lit.] Buffer overruns
Self restarting property of RTOS-How it works?
360 longevity, was RISCs too close to hardware?
IBM Acronyms
Self restarting property of RTOS-How it works?
360 longevity, was RISCs too close to hardware?
[Lit.] Buffer overruns
Self restarting property of RTOS-How it works?
intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)
intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)
Cerf and Kahn receive Turing award
Cerf and Kahn receive Turing award
Thou shalt have no other gods before the ANSI C standard
Cerf and Kahn receive Turing award
data integrity and logs
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Digital signature with Javascript
Digital signature with Javascript
Digital signature with Javascript
shared memory programming on distributed memory model?
Digital signature with Javascript
Latest news about mainframe
Radical z/OS
Thou shalt have no other gods before the ANSI C standard
The future of the Mainframe
Latest news about mainframe
Adversarial Testing, was Re: Thou shalt have no
Adversarial Testing, was Re: Thou shalt have no
Adversarial Testing, was Re: Thou shalt have no
The Mainframe and its future.. or furniture
The Mainframe and its future.. or furniture
Is a cryptographic monoculture hurting us all?
Thou shalt have no other gods before the ANSI C standard
Adversarial Testing, was Re: Thou shalt have no
Thou shalt have no other gods before the ANSI C standard
backup/archive
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Secure design
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
Secure design
Thou shalt have no other gods before the ANSI C standard
Secure design
Secure design
Secure design
Thou shalt have no other gods before the ANSI C standard
Thou shalt have no other gods before the ANSI C standard
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Convering 2 Bytes of binary to BCD
Thou shalt have no other gods before the ANSI C standard
Virtual Machine Hardware
Misuse of word "microcode"
Misuse of word "microcode"
Virtual Machine Hardware
Misuse of word "microcode"
Cranky old computers still being used
Misuse of word "microcode"
[Lit.] Buffer overruns
Virtual Machine Hardware
[Lit.] Buffer overruns
Misuse of word "microcode"
[Lit.] Buffer overruns
Virtual Machine Hardware
Metcalfe's Law Refuted
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 12 Feb 2005 09:25:45 -0700
Anne & Lynn Wheeler writes:
According to NIST in the past 4 years, 871 buffer overflow vulnerabilities were exploited, comprising about 20 percent of all exploits

... snip ...

which is about the same percentage that I calculated from the CVE database.


ref:
https://www.garlic.com/~lynn/2005b.html#20 [Lit.] Buffer overruns

just for reference ... the nist vulnerability database
https://nvd.nist.gov/

note in the query interface at the nist ref, one of the top supplied vulnerability keywords is buffer overflow

the cve database
http://cve.mitre.org/

for a little drift: "Safety-Critical Systems Computer Language Survey"
http://vl.fmnet.info/safety/lang-survey.html

for a little more drift, the waterfall method (from nasa):
http://www1.jsc.nasa.gov/bu2/PCEHHTML/pceh90.htm

and even more nasa drift, nasa dependable computing conference
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

and even more drift ... the first two keynote speakers in the above ... worked on original sql/relational database (morphed into sql/ds ... and later db2):
https://www.garlic.com/~lynn/submain.html#systemr

buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#buffer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Self restarting property of RTOS-How it works?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Self restarting property of RTOS-How it works?
Newsgroups: comp.arch,comp.arch.embedded
Date: Sat, 12 Feb 2005 09:53:11 -0700
Ed Beroset writes:
It's interesting to learn that no engineers were ever involved in building such flaws.

My background happens to be more in the engineering than the computer science end of things, but I don't share your evident contempt for the field. Here's an example: An embedded communication system receives packet-based messages of varying lengths at an average rate of 100 packets per minute, but asynchronously. The system also checks its timing against the recovered clock from the messages, which it can easily keep synchronized within limits as long as it doesn't go too long without receiving a packet. What is the probability that no packets will arrive in an interval of five seconds?

I can answer that question easily because I've studied a little computer science. Can you? If not, how can you properly engineer the system?


here is example of a configuration and workload analysis
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was System/1?)

with the help of performance predictor and configurators on hone:
https://www.garlic.com/~lynn/subtopic.html#hone

hone was the online system(s) that provided support for world-wide sales, marketing, and field people.

performance predictor was an outgrowth of work at the science center on performance management, workload profiling, and the early technology transition from performance management to capacity planning:
https://www.garlic.com/~lynn/submain.html#bench

that allowed sales people to input customer configuration and operational information (often softcopy extracted from the system itself) and be able to do what-if questions about changes to configuration and workload.

as hardware got more and more complex ... configurators were the applications that allowed a sales person to specify rough product specification ... and the application would make sure that enuf correct information was supplied for ordering the equipment.

now, this particular analysis ... i presented at the SNA architecture review board meeting in raleigh and took lots of arrows for it.

the reference about keeping timing in sync ... is somewhat related to the period when the telcos stopped letting customers have clear-channel T1 links and required them to conform to the ones-density (and 193rd bit) specification.
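
as an aside, the probability question quoted above has a standard closed-form answer if you assume packet arrivals are Poisson (memoryless) -- an assumption real traffic often violates, as discussed elsewhere in this thread. a minimal sketch (function name and numbers are just for illustration):

```python
import math

def p_no_arrivals(rate_per_min, window_sec):
    # for a Poisson process, P(zero arrivals in a window) = exp(-lam),
    # where lam is the expected number of arrivals in that window
    lam = rate_per_min / 60.0 * window_sec
    return math.exp(-lam)

# 100 packets/minute, 5-second window => lam = 25/3 ~ 8.33
p = p_no_arrivals(100, 5)   # ~ 2.4e-4
```

i.e. a 5-second gap is rare but not negligible -- roughly one such gap per several hours of operation, which is why the clock-recovery limits matter.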

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 12 Feb 2005 11:38:35 -0700
Charles Shannon Hendrix writes:
That was the original drive for things like Beowulf. NASA centers were losing supercomputers, still needed them, and they could get the budget for a stack of x86 machines.

Another side effect of this is that businesses or individual offices that would never have considered a supercomputer in the past, are starting to build various kinds of commodity cluster systems.


predates/precursor to beowulf:
https://www.garlic.com/~lynn/95.html#13

i had a running argument at acm sigops (91, the one where they had the evening event at the monterey aquarium) with somebody (who was at dec at the time and involved with dec vax/cluster) about ha/cmp being able to do scaleable computing with commodity cluster systems. random ha/cmp stuff
https://www.garlic.com/~lynn/subtopic.html#hacmp

in an earlier life, my wife had gotten con'ed into doing a stint in pok in charge of loosely-coupled architecture (aka mainframe for clusters). she had done Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

but in that period almost all the attention was on bigger and faster iron (uniprocessors and SMPs) ... and not a lot of attention was given to clustering.

i was involved in supporting and driving the hone complex ... both from the standpoint of writing smp kernel support as well as cluster support. circa '79, '80, hone had possibly deployed the largest single-system image cluster around
https://www.garlic.com/~lynn/subtopic.html#hone

a couple recent posts on scalable dlm:
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory
https://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql

note that GRID is somewhat the evolution of clusters ... and moving into the commercial space ... some of it can be viewed as time-sharing applied to clustering ... misc. old time-sharing postings
https://www.garlic.com/~lynn/submain.html#timeshare

for total topic drift ... in the following list, there is a talk I gave at last summer's global grid forum:
http://forge.ggf.org/sf/docman/do/listDocuments/projects.ogsa-wg/docman.root.meeting_materials_and_minutes.ggf_meetings.ggf11

and select: GGF11-design-security-nakedkey

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM Acronyms

From: lynn@garlic.com
Date: 13 Feb 2005 09:25:01 -0800
Subject: Re: IBM Acronyms
Newsgroups: alt.folklore.computers
Andy Canfield wrote:
1 Big Mutha'

(from the 1960's)


a couple definitions from Mike's glossary that leaked onto the internet:
http://www.212.net/business/jargonb.htm
[BiCapitalization] n. The PracTice of PutTing CapiTal LetTers in the MidDle of WorDs. This was originally used to refer to microcomputer software trademarks such as VisiCalc, EasyWriter, and FileCommand but has since spread even to products totally unrelated to computing, and to many more than two capitals. The mainframe world, however, is still mostly devoid of BiCapitalization - in that environment the use of abbreviations is still the PMRR (Preferred Method of Reducing Readability).

[BICARSA GLAPPR] bi-carsa glapper n. The cornerstone applications of commercial computing. An abbreviation for Billing, Inventory Control, Accounts Receivable, Sales Analysis, General Ledger, Accounts Payable, PayRoll; the applications for which IBM's mainframes were built and on which its fame and fortune were founded. Usage: Yeah, it's fast, but how does it do on BICARSA GLAPPR?

[Big Blue] n. IBM (when used by customers and competitors). n. The Data Processing Division (when used by people concerned with processors that do not use the System/370 architecture). The DPD no longer exists, but the phrase big blue boxes is still used to refer to large System/370 installations.

[Big Blue Zoo] n. The manufacturing plant and laboratory complex located at the junction of Highway 52 and 37th Street, Rochester, Minnesota, USA. It really is blue. See also zoo.

[Big Four] n. The original four largest TOOLS (q.v.) disks. These are the IBMPC and IBMVM conference disks, and their corresponding tool (program) collections, PCTOOLS and VMTOOLS.

[big iron] n. Large computers. Also big iron bigot. See also iron.

[Big OS] big ozz n. Operating System/360. This term was popular in the late '60s when OS was the operating system, and it was believed to do and know everything.


Self restarting property of RTOS-How it works?

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 13 Feb 2005 10:01:41 -0800
Subject: Re: Self restarting property of RTOS-How it works?
Bernd Paysan wrote:
Hm, I could make some assumptions about the statistical properties of those "100 packets per minute", but simple assumptions can be quite wrong. If I have a web server, and I get 100 requests per minute, I expect the traffic to be shaped over the day, and that the likelihood of receiving a single packet between 3 and 5am of my target audience is quite low, while there'll be several high-activity spots during the day. That means a simple random distribution of the packets is not sufficient to give you the worst case.

At least on my most popular German page, the access frequency drops down to one tenth of the typical day use between 2 and 6am. The traffic during the day is pretty flat, though, about one hit per minute. During late night, there are only about five per hour.


glossaries/taxonomies at
https://www.garlic.com/~lynn/

account for about 1/3rd of total hits ... with a pretty good sample from around the world at any time of day

various search engines account for possibly 15 percent of total hits (sometimes getting every page every night) ... which aren't driven by people characteristics (i sometimes wonder if the site is being used for testing, possibly because of the extremely high ratio of hrefs to text).

in the 60s & 70s, local time-sharing system use tended to have very strong time-of-day correlation ... a morning peak around 10am and an afternoon peak around 3pm. There could be as much as a 10:1 difference between peak period avg. use and overall 1st shift avg. use. as some of these time-sharing systems offered national service, they started to see rolling peaks from the different time-zones (a pattern somewhat repeated later in things like ATM cash machines).

one of the interesting problems with offering 7x24 time-sharing service in the 60s was the leased/rental cost of the machines ... with the vendor charging the datacenter based on the processor meters. 3rd & 4th shift might be 5 percent of total use ... and couldn't cover the costs of the machine or even an onsite operator.
https://www.garlic.com/~lynn/submain.html#timeshare

so some things done on cp/67 enabling 3rd & 4th shift service:

1) automating some of the simpler common operator functions ... including automatically rebooting the system after failures (helping eliminate requirement for onsite operator 3rd & 4th shift)

2) the meter would run whenever the cpu was running and/or whenever there was active I/O. the system had used an i/o sequence that waited for terminal &/or network input ... which ran the meter. however, a ccw sequence was discovered that could wait for character input w/o having the meter run (when characters weren't actually arriving).

360 longevity, was RISCs too close to hardware?

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 13 Feb 2005 14:02:50 -0800
Subject: Re: 360 longevity, was RISCs too close to hardware?
Charles Shannon Hendrix wrote:
That was the original drive for things like Beowulf. NASA centers were losing supercomputers, still needed them, and they could get the budget for a stack of x86 machines.

Another side effect of this is that businesses or individual offices that would never have considered a supercomputer in the past, are starting to build various kinds of commodity cluster systems.


in conversations with a couple labs ... there were also situations where the people supporting the large machines with proprietary operating systems were retiring and the labs weren't finding people to backfill the slots ... the people coming out of the universities were trained on systems running on commodity processors.

one lab supposedly retired a machine on the same day that the last person responsible for supporting it retired

[Lit.] Buffer overruns

From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 14 Feb 2005 18:32:47 -0800
Subject: Re: [Lit.] Buffer overruns
Morten Reistad wrote:
Or, just having 200 seats for breakfast at that same hotel, because you know half your guests go directly to meetings, and the rest is nicely spread out during the morning.

A failure in capacity will manifest itself as a little queueing, then a little more queueing; and then you had better rent the restaurant next door. But if the restaurant is 1/3 full all morning you know you don't have a problem.


lots of banks have gone to a single queue for multiple servers (bank tellers) ... frequently there are more tellers than concurrent long-running requests ... so the quick requests frequently get processed, bypassing a couple of tellers taking a long time on long-running requests. supermarkets have multiple queues ... but tend to have special servers dedicated to customers with only a few items.
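
the single-queue advantage is easy to demonstrate with a toy simulation (a sketch with made-up arrival/service rates, not anything from a real system): one shared FIFO feeding four tellers versus a separate randomly-chosen line per teller, at the same total load.

```python
import heapq, random

def mean_wait_shared(arrivals, services, n_servers):
    # one FIFO queue feeding all servers: each customer takes
    # whichever server frees up earliest
    free_at = [0.0] * n_servers
    heapq.heapify(free_at)
    total = 0.0
    for t, s in zip(arrivals, services):
        start = max(t, heapq.heappop(free_at))
        total += start - t
        heapq.heappush(free_at, start + s)
    return total / len(arrivals)

def mean_wait_split(arrivals, services, n_servers):
    # a separate queue per server, customers assigned at random
    # (a stand-in for picking a supermarket line on arrival)
    free_at = [0.0] * n_servers
    total = 0.0
    for t, s in zip(arrivals, services):
        k = random.randrange(n_servers)
        start = max(t, free_at[k])
        total += start - t
        free_at[k] = start + s
    return total / len(arrivals)

# Poisson arrivals at 3.2/unit, exponential service at 1/unit,
# 4 tellers => 80% utilization either way
random.seed(1)
n = 20000
arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(3.2)
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(n)]

w_shared = mean_wait_shared(arrivals, services, 4)
w_split = mean_wait_split(arrivals, services, 4)
# the shared queue waits far less at the same utilization
```

the gap is the point of the bank-teller arrangement: with a shared queue a quick request never gets stuck behind one slow customer while other tellers sit idle.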

i had redone page replacement, dispatching and scheduling for cp67 while an undergraduate (and it was shipped in the product). a lot of it was dropped in the morph from cp67 to vm370 ... but i got to reintroduce it all with the resource manager. but this time they decided to use the resource manager as guinea pig for chargeable kernel software (starting with unbundling they were charging for some application software, but kernel software was still free; i got to spend way too much time with business people on software charging policy).

by carefully optimizing a bunch of stuff thruout the kernel ... i could efficiently do various kinds of pre-emptive dispatching ... small light weight requests getting very timely service by pre-emption of heavy weight stuff. in much of that period ... many installations had guidelines of running online/interactive systems at 20-30 percent utilization in order to be able to provide reasonable interactive response. I did a lot of optimization thruout the kernel infrastructure allowing various components to operate at 100 percent utilization and still provide superior interactive response.

the science center had pioneered a lot of performance optimization and management technology.
https://www.garlic.com/~lynn/subtopic.html#545tech

Not uncommon at the time was instruction address sampling ... to identify high usage areas as targets of optimization .... past posting on investigation of kernel components for moving into hardware microcode
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

another was the use of APL for extensive performance modeling ... the technology was used extensively in calibrating the resource manager for product release (performance profiling, configuration characterization, workload profiling, and a lot of the early transition to capacity planning):
https://www.garlic.com/~lynn/submain.html#bench

but it also evolved into the *performance predictor* which was made available on the internal HONE system(s)
https://www.garlic.com/~lynn/subtopic.html#hone

which provided support for worldwide marketing, sales, and field support people. Marketing people could gather softcopy information from customer operations about configuration, workload, and performance and use the performance predictor to answer what-if questions about what happens when changing hardware, workload, etc.

Another technology used in performance management (that complemented the instruction address sampling) was multiple regression analysis. One of the original applications on cp67 was a program that snapshotted system and task performance, workload, and thruput information every 5-15 minutes ... and there was nearly a decade of information (by the time of the resource manager) across cp67 and vm370. There was also possibly a year or more of data from a couple hundred internal machines. A lot of processing went into looking at this data at specific points in time using multiple regression analysis.
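
that kind of regression pass can be sketched in a few lines of ordinary least squares via the normal equations (purely an illustration: the predictor names are made up, and the original work was done in APL, not python):

```python
def ols_fit(rows, y):
    # ordinary least squares: solve (X'X) b = X'y by naive gaussian
    # elimination -- fine for a handful of workload predictors
    n, k = len(rows), len(rows[0])
    a = [[sum(rows[i][p] * rows[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    v = [sum(rows[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = a[r][col] / a[col][col]
            for c in range(col, k):
                a[r][c] -= f * a[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k                             # back substitution
    for r in range(k - 1, -1, -1):
        b[r] = (v[r] - sum(a[r][c] * b[c] for c in range(r + 1, k))) / a[r][r]
    return b

# hypothetical samples: [1 (intercept), cpu utilization, paging rate],
# with response time following 2 + 3*cpu + 0.5*paging exactly
rows = [[1, c, p] for c, p in [(1, 2), (2, 1), (3, 5), (4, 3), (5, 8), (6, 2)]]
resp = [2 + 3 * c + 0.5 * p for _, c, p in rows]
coef = ols_fit(rows, resp)   # recovers [2.0, 3.0, 0.5] up to float error
```

fitting the same model at different points in time is one way to see which predictor a given system's thruput was actually tracking.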

slightly related thread from comp.arch
https://www.garlic.com/~lynn/2005d.html#1 Self restarting property of RTOS
https://www.garlic.com/~lynn/2005d.html#4 Self restarting property of RTOS

partially because of the strong background in scheduling and resource management algorithms ... when we were doing the high-speed data transport project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

i did rate-based pacing at various levels in the network stack. As referenced here ...
https://www.garlic.com/~lynn/internet.htm#0

we weren't allowed to bid NSFNET ... but did get an audit from NSF of the high speed backbone we were operating ... and some statement to the effect that what we were operating was at least five years ahead of all (NSFNET) bid submissions (to build something new).

we got to have a lot of fun(?) with high-speed crypto (for the time) because all transmissions leaving facilities had to be encrypted.

Self restarting property of RTOS-How it works?

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 15 Feb 2005 09:43:08 -0800
Subject: Re: Self restarting property of RTOS-How it works?
Charles Krug wrote:
When we were younger, we stayed in a hotel in San Juan where the elevators went to floors in the order the buttons were pressed, without regard for proximity.

Not sure who came up with that brilliant idea.

You might imagine that a clever 10yo boy made certain that the elevator's path was the most convoluted possible. My sisters weren't much better.


they obviously never heard of the elevator disk arm scheduling algorithm
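
the policy is simple enough to sketch: sweep in one direction serving requests in order, then reverse for the remainder (a hypothetical illustration -- the cylinder numbers below are the classic textbook example, not anything from a real controller):

```python
def elevator_order(requests, head, direction="up"):
    # elevator (SCAN/LOOK-style) scheduling: serve everything
    # at-or-above the head on the way up, then everything below
    # on the way back down
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

def seek_distance(order, head):
    # total arm travel for a given service order
    total = 0
    for r in order:
        total += abs(r - head)
        head = r
    return total

pending = [98, 183, 37, 122, 14, 124, 65, 67]
sweep = elevator_order(pending, head=53)
# sweep == [65, 67, 98, 122, 124, 183, 37, 14];
# seek_distance(sweep, 53) == 299 vs 640 for first-come-first-served
```

same trade-off as the hotel elevator: total travel (and mean service time) drops sharply, at the cost of not serving buttons in the order they were pressed.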

intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 16 Feb 2005 10:21:25 -0800
Subject: Re: intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)
Sander Vesik wrote:
To be brutally honest, they simply had a way incompetently run windows shop in that case (esp the "running around and rebooting" part) and probably hired people they shouldn't have. Taking people who are good at running OS Y and having them run OS Q dependably gives such a result and tells you little about the quality of the OSs in question

for the mainframe batch systems, there is the small matter that for 40 years (or more) they have evolved a "batch" paradigm where the party responsible for executing the program isn't present when the program is executing. as a result they have a long-standing default design point where the system needs to be doing things automagically on behalf of people that aren't present.

this continued with the batch-based "online" paradigm ... the people responsible for the deployed online application aren't present ... even tho the online application is providing various kinds of services for other individuals. the batch-based "online" paradigm has quite a large number of similarities with the client/server web paradigm ... where the server side of the web paradigm is providing various kinds of online services for the client side users.

in contrast ... most of the recognized "easy to use" interactive systems have a long-standing design point that the person responsible for running the application is actually physically present when the application is run (aka rather than having extensive setup so the system can automagically handle various conditions ... it defaults to throwing the condition to the person presumed to be present, having invoked the application).

many of the batch oriented systems have spent more than 40 years evolving their automagic handling methodologies.

intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 16 Feb 2005 21:29:42 -0800
Subject: Re: intel's Vanderpool and virtualization in general (was Re: Cell press release, redacted.)
Alex Colvin wrote:
Although there was also a long-held assumption in some mainframe systems that an operator was present. Hence the semi-manual I/O transports for cards, tape, printers. And the lack of time-of-day clocks.

so in the 90s we were talking to a large financial transaction operation and they claimed that the two factors that enabled them to have 100 percent availability for the previous six years were

1) automated operator

2) ims hot standby

i had done a lot with automated operator in various scenarios starting in the late 60s ... for 7x24 operation (allowing time-sharing service around the clock w/o requiring an onsite person during "slack" hours) ... recent posting
https://www.garlic.com/~lynn/2005d.html#4 Self restarting property of RTOS-How it works?

collected postings on the subject of time-sharing from the period:
https://www.garlic.com/~lynn/submain.html#timeshare

and then did a lot more in the 70s ... at the time automating benchmarks in preparation for the resource manager ... leading up to release of the resource manager there were over 2000 benchmarks that took 3 months elapsed time (which included a system reboot between each benchmark). ... some collected postings on the benchmarking (and other subjects):
https://www.garlic.com/~lynn/submain.html#bench

as to ims hot-standby ... my wife had been con'ed into serving time in POK in charge of loosely-coupled architecture ... and while there originated Peer-Coupled Shared Data
https://www.garlic.com/~lynn/submain.html#shareddata

the ims group was one of the few organizations paying attention since most of the corporation was focused on ever bigger uniprocessors and SMPs (as opposed to cluster solutions and protocols).

we pulled some of that together for ha/cmp .. a specific ha/cmp reference
https://www.garlic.com/~lynn/95.html#13

some collected ha/cmp, clustering, and loosely-coupled postings
https://www.garlic.com/~lynn/subtopic.html#hacmp

Cerf and Kahn receive Turing award

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 16 Feb 2005 21:15:57 -0800
Subject: Re: Cerf and Kahn receive Turing award
glen herrmannsfeldt wrote:
Even without logging in, it says "creating the underpinnings of the Internet", I believe that is TCP/IP.

Al Gore helped supply the funding for "the Internet as we know it", that is, for use by more than academics.

Both are important.


the underpinning of arpanet was a homogeneous packet based network with host protocol. my frequent assertion is that the internal network was larger than the arpanet/internet from just about the beginning until sometime mid-85, because the internal network nodes effectively had a form of gateway ... which the "internet" didn't get until the great cut-over on 1/1/83. misc. past internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

a lot of the internet as we know it was the commercialization that happened ... which you started to see at least by the time of interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

where you saw a lot of vendors selling tcp/ip commercial products to commercial customers.

NSFNET1 and NSFNET2 were high-speed backbone RFPs for T1 (and then T3) interconnect for a select set of educational &/or gov. nodes. pure aside ... we weren't allowed to bid on NSFNET1/NSFNET2 ... although we got a technical audit by NSF of the high speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

we were operating ... the audit included some statement to the effect that what we were operating was at least five years ahead of all NSFNET bid submissions to build something new. minor past references:
https://www.garlic.com/~lynn/internet.htm

misc. other archeological references:
https://www.garlic.com/~lynn/rfcietf.htm#history

some past post/threads on the subject:
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#67 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#76 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#13 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#26 Al Gore, The Father of the Internet (hah!)
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
https://www.garlic.com/~lynn/2000e.html#38 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#45 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#46 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#49 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2002b.html#40 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002g.html#73 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002g.html#74 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#81 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#35 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#36 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002l.html#70 Al Gore and Fidonet [was: 10 choices]

Cerf and Kahn receive Turing award

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 17 Feb 2005 09:50:45 -0800
Subject: Re: Cerf and Kahn receive Turing award
Morten Reistad wrote:
Until around 1990 the construction of routers was an arcane art. Cisco, Proteon and Wellfleet were building boxes on top of weird bus constructions, crufty software, and standard cpu-memory constructions. The first router to "break out" from the cruft was the Cisco 7000.

Did we miss anything significant from IBM in that timeframe? I know that was the time of an intense Token Ring love-in at IBM.


my wife has her name on an (international/pto) token passing (lan) patent from the 70s.

when almaden was built in the mid-80s, it was provisioned w/CAT4 ... but when they actually connected it up ... they found that twisted-pair enet hardware had lower latency and higher thruput than 16mbit t/r.

we found something similar ... and took a lot of heat from the saa/token-ring crowd when we pitched it (along with the original 3-layer architecture & middle layer) in customer executive presentations.
https://www.garlic.com/~lynn/subnetwork.html#3tier

ibm mainframe tcp/ip stack was done in vs/pascal ... it had some thruput issues ... on 3090, it could use a whole 3090 processor getting 44kbytes/sec. I added rfc1044 support to the stack ... and in tests at cray research between a 4341-clone and a cray ... was getting 1mbyte/sec using only a modest amount of the processor
https://www.garlic.com/~lynn/subnetwork.html#1044

however, outside of technology ... while the NSFNET1 RFP provided the venue for the backbone ... the folklore is that the actual provisioning by commercial companies was something like 4-5 times what the gov. funded. the conjecture was that (at least for the telcos) there was a substantial amount of dark fiber and they had never figured out a transition program. telcos have a certain fixed run-rate ... if they dropped the price of all bit transmission (say by a factor of ten) as a strategy to promote new applications ... they could never cover their fixed run-rate during the multi-year transition period. the donation of enormous excess resources for the backbone could help promote the evolution of new bandwidth-use paradigms ... w/o directly impacting their existing revenue.

at supercomputing in austin ('90?, '91?) there were some lesser-known router vendors supporting full long-haul T3 ... with more modern router architectures. There was some conjecture that the more mainstream router vendors weren't very motivated in developing new architectures and technologies ... because they were already selling everything they made. This continued well thru the 90s with things like support for really robust packet filtering capability (various of the lesser-known vendors had substantially more robust implementations than the market leaders).

somewhat side-track was that in the late 80s and early 90s ... many of the world govs. were mandating the elimination of the internet and transition to OSI (including US gov. with GOSIP). interop 88
https://www.garlic.com/~lynn/subnetwork.html#interop88

there were substantial amount of OSI products from commercial vendors (attempting to conform with the gov. mandates).

my oft-repeated story was that ISO compounded the problem by mandating that ISO and ISO-chartered standards organizations could not work on networking standards that violated the OSI model. HSP in ansi x3s3.3 was going directly from transport/layer4 to the LAN/MAC interface.
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

this was rejected because it violated OSI model:

  1. going direct from transport to LAN/MAC bypassed the level3/level4 interface, violating OSI model

  2. HSP would support internetworking (aka IP). IP doesn't exist in the OSI model, so supporting IP violates the OSI model

  3. HSP would support LAN/MAC interface, the LAN/MAC interface sits somewhere in the middle of networking/level3. LAN/MACs violate OSI model, so supporting LAN/MACs also violates OSI.

Thou shalt have no other gods before the ANSI C standard

From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 17 Feb 2005 20:26:44 -0800
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Don Chiasson wrote:
And then when you consider the number of beers consumed at a DECUS....

SHARE had evening SCIDS (Society for Continuous Inebriation During Share) and into the 70s ... it was still open bar (if your badge got you into the ballroom, you poured whatever and as much as you wanted). this definition has since fallen into disfavor.

the folklore was that IBM didn't allow expense reimbursement for alcohol ... so the user group meeting bundled the cost of drinks into the overall registration.

once I was scheduled for a one hr presentation at european share on performance management ... which was actually a full day presentation. i gave the one hr ... and then was scheduled for a birds-of-a-feather session during scids (in a room just off the ballroom) ... starting at 6pm and continuing until past midnight ... but well lubricated with side trips to the ballroom.

misc. past postings mentioning scids
https://www.garlic.com/~lynn/2000d.html#5 Definition of SHARE & SCIDS Requested
https://www.garlic.com/~lynn/2000d.html#6 Definition of SHARE & SCIDS Requested
https://www.garlic.com/~lynn/2001k.html#20 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001l.html#12 mainframe question
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#60 IBM-Main Table at Share
https://www.garlic.com/~lynn/2002o.html#31 Over-the-shoulder effect
https://www.garlic.com/~lynn/2002q.html#11 computers and alcohol
https://www.garlic.com/~lynn/2002q.html#23 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003k.html#62 The Incredible Shrinking Legacy Workforces
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2005b.html#44 The mid-seventies SHARE survey

Cerf and Kahn receive Turing award

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 18 Feb 2005 09:10:14 -0800
Subject: Re: Cerf and Kahn receive Turing award
for the NSFNET2 RFP ... there was a red team and a blue team formed for a bid response. The red team was my wife and I; the blue team was 20 people from seven labs around the world. For the final review, the red team presented first and then the blue team. Ten minutes into the blue team presentation, it was evident to everybody in the room that the red team solution was vastly superior. At this point, the person running the review pounded on the table and exclaimed that he would lie down in front of a garbage truck before he let any but the blue team solution go forward.

an email from the NSFNET1 period ... names have been changed to protect the guilty. note that the referenced meeting was called off before it could be held.

......

Date: 05/01/86 17:35:04
From: wheeler

fyi, re: meeting today with cornell on nsf/super computer system

They went great!!!

Ken King is going to call the key people at the following universities:

MIT, Stanford, Berkeley, Michigan, Minnesota, Illinois U of Texas, Columbia, Deleware, Maryland, TUCC, Princeton, Penn State, Wisconsin, UCLA

and

NCAR, FERMI, and VLA

for a two day meeting in Yorktown in June. After he makes the calls a formal memo (also inviting Erich Bloch) will be sent to these people.

The purpose is to set the stage for the writing of the NSF proposal Some of these universities will participate as BITS recipients, some as DNS hosts, and some as hubs for both.


... snip ...

data integrity and logs

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 18 Feb 2005 08:57:04 -0800
Subject: data integrity and logs
in the privacy & law track yesterday at RSA conference there was a lot of talk about logs being needed to guarantee data integrity, problems of merging logs from distributed systems (in correct time sequence), etc.

it brought to mind an explanation made at an ACM SIGMOD conference in the early 90s about what was all this ISO x.50x stuff; the description was that all this x.50x stuff was a bunch of networking engineers attempting to re-invent 1960s data base technology.

Thou shalt have no other gods before the ANSI C standard

From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 18 Feb 2005 12:29:56 -0800
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Douglas A. Gwyn wrote:
They're not very relevant to sci.crypt either, except that this thread really started in sci.crypt with somebody asserting that buffer overruns were inherent in using C. I don't know who first added a.f.c to the discussion, nor why. Surely by now you must know how to skip postings that don't interest you? Or did I miss something and you're the a.f.c moderator?

i'm guilty on most counts.

my assertion was that there were a number of environments (over the years) which had target buffers that included length semantics and that copy operations in these environments leveraged such length semantics to avoid many of the buffer overflow vulnerabilities common to C programs (many a direct result of the common practice of not having infrastructure length semantics associated with target buffers).

furthermore, in a subsequent subthread regarding automatic bounds checking ... i commented that automatic bounds checking is dependent on infrastructure start/end (and/or start/length) constructs ... w/o which automatic bounds checking has no basis on which to determine the bounds to be checked.

... and as a minor corollary ... for infrastructures that had start/end (and/or start/length) constructs for storage areas ... such constructs could be leveraged by standard library copy operations (in addition to any use by automatic bounds checking facilities).

my original x-posting with sci.crypt & alt.folklore.computers:
https://www.garlic.com/~lynn/2005b.html#16 [Lit.] Buffer overruns

somebody else had previously x-posted to comp.security.unix:
https://www.garlic.com/~lynn/2005b.html#6 [Lit.] Buffer overruns

my earlier participation in the thread:
https://www.garlic.com/~lynn/2004q.html#2 [Lit.] Buffer overruns

also note that there was an a.f.c thread from last fall (that also had quite a bit of drift)
https://www.garlic.com/~lynn/2004p.html#3 History of C

Thou shalt have no other gods before the ANSI C standard

From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 19 Feb 2005 08:36:13 -0800
Subject: Re: Thou shalt have no other gods before the ANSI C standard
jmfbahciv writes:
Did you expect that the post would produce such wonderful fruits?

sometimes people with a range of programming experience not necessarily limited to C ... might offer a wider perspective on the pros & cons of particular C features ... based on comparison with a broad range of features from a variety of backgrounds; also, when this started there had recently been a thread on the history of C in a.f.c.

one analogy is some recent claims that better understanding of current conditions on titan may contribute to better understanding of how earth came to be the way it is.

Digital signature with Javascript

From: lynn@garlic.com
Newsgroups: sci.crypt
Date: 19 Feb 2005 09:30:30 -0800
Subject: Re: Digital signature with Javascript
devnull wrote:
Sorry for not making myself clear: I can't use any server side scripts (and don't really need it for my purpose), I just want to make sure the user is who he claims to be. Computation time is not really a problem. Any suggestions?

fundamentally, you are dealing with 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor
something you have
something you know
something you are


along with the level of security that you might employ with various techniques.

a relying party verifying a digital signature with a public key will reduce to some form of 3-factor authentication ... possibly

  1. private key for performing digital signature is kept in an encrypted file ... when the relying party verifies a digital signature ... the relying party assumes that it originated from somebody that knew the secret key to decrypt the encrypted file containing the private key. therefore it is single-factor, something you know authentication. strength of security and vulnerabilities are related to attacks on the private key encrypted file (kept by the originator, key owner). it may be considered a step up from shared-secret, something you know authentication. In the shared-secret scenario, both the originator and the relying party have knowledge of the shared-secret (pin, password, etc); and therefore security paradigms require a unique shared-secret for every unique security domain (otherwise a relying party in one security domain might impersonate the originator to a different security domain). misc. past postings on secrets and shared-secrets:
    https://www.garlic.com/~lynn/subintegrity.html#secrets

  2. private key for performing the digital signature is kept in a hardware token and is never divulged to anybody (even the key owner). when the relying party verifies a digital signature ... the relying party assumes that it originated from somebody in possession of the hardware token (something you have authentication). if the relying party has proof of the integrity characteristics of the hardware token and possibly also has proof that the particular hardware token requires a correct PIN for operation ... then the relying party might have a higher level of trust in such digital signatures .... being able to assume two-factor authentication ... both something you have, aka the actual hardware token, and something you know ... the hardware token's PIN. Note that this level/kind of trust and integrity is totally orthogonal to anything having to do with the cryptography.

  3. the private key hardware token scenario but instead of (or in addition to) the hardware token requiring a PIN ... the token requires correct biometric for operation. when the relying party verifies a digital signature with a public key, then the relying party might assume two (or three) factor authentication: something you have and something you are (and possibly also something you know).

aka ... the digital signature isn't part of 3-factor authentication ... however, given that the relying party has the appropriate assurances about how the digital signature originated ... the relying party can infer from the digital signature something about some characteristic from 3-factor authentication.

this aspect of digital signature authentication (with regard to how much trust a relying party can place in the verification of a particular digital signature) is totally independent of any cryptography characteristic of the digital signature ... this trust characteristic is purely related to

• the integrity characteristics of keeping the private key really private ... and
• the integrity characteristics of the environment that executes/generates the digital signature.

It also is totally unrelated to infrastructure characteristics of things like digital certificates ... and is applicable whether a digital signature authentication infrastructure is certificate-based or totally free of certificates ... some past posts related to certificate-less infrastructure operation
https://www.garlic.com/~lynn/subpubkey.html#certless

Digital signature with Javascript

From: lynn@garlic.com
Newsgroups: sci.crypt
Date: 19 Feb 2005 09:50:11 -0800
Subject: Re: Digital signature with Javascript
devnull wrote:
I want to write a script that allows to verify the identity of the users. My idea is to take the username and some other data, sign this and give that signature to the user as password. This signature is then checked by my script and the username given on login and the one retrieved from the signature match, the identity has been confirmed and the script continues. This does not need to be extremely secure, since with client-side javascript anybody with some knowlege can circumvent the protection anyway.

Do you know of any existing scripts that I could use or adapt? Alternatively, could you point me to a site where I can find infos on using the RSA algorithm for digital signatures (I have a script to use it for encryption, but couldn't figure out how to use it for signing


webservers tend to have stub code for implementing specific client authentication. frequently this has been used to implement some form of (possibly flat file) userid/password ... where the userid/password is entered possibly over a secure SSL channel.

other implementations have stub code invoking radius code (radius code is also standard for ISPs doing initial customer connection authentication) for authentication. default radius authentication has been userid/password based (dating back to when it originated with the livingston modem pools ... dare i admit to configuring livingston boxes?). some number of radius implementations have other authentication mechanisms ... including registering a public key in lieu of a password (in the radius file) for doing userid/digital-signature verification. misc. past posts of public key enhancements for radius:
https://www.garlic.com/~lynn/subpubkey.html#radius

another scenario is interfacing the webserver stub code to kerberos and implementing kerberos PKINIT with "naked" public keys (actually public keys registered in lieu of passwords with kerberos). misc past kerberos pkinit posts:
https://www.garlic.com/~lynn/subpubkey.html#kerberos

Digital signature with Javascript

From: lynn@garlic.com
Newsgroups: sci.crypt
Date: 19 Feb 2005 15:56:53 -0800
Subject: Re: Digital signature with Javascript
re:
https://www.garlic.com/~lynn/2005d.html#17 Digital signature and Javascript
https://www.garlic.com/~lynn/2005d.html#18 Digital signature and Javascript

aka ... the verification of the digital signature doesn't directly demonstrate any of the components of the 3-factor authentication.

the construction of the infrastructure uses some combination of 3-factor authentication to enable the generation of the digital signature (using a private key). when the relying party verifies the digital signature ... then given the relying party has some certification as to the process that enables use of the private key for producing the digital signature ... a valid digital signature implies that the certified process (for accessing/enabling the private key) occurred.

in general, the integrity of the public/private key technology should be proportional to the integrity of the process that controls access/use of the private key.

some variation of 3-factor authentication is used to control access and use of the private key (for digital signature generation). the relying party then depends on the validation of a digital signature to imply that some combination of 3-factor authentication was used to access the private key (in order to generate the digital signature).

the level and integrity of some combination of 3-factor authentication (used to access and use the private key) is what the relying party is dependent on (the level and integrity of the cryptography used in the public/private key technology is secondary to the level and integrity involved in controlling access and use of the private key).

the level of trust that is implied by verification of a digital signature is dependent on the relying party being guaranteed as to the level and integrity of the process that protects access and use of the associated private key.

w/o the relying party having direct guarantee as to the process used to control access and use of a private key ... then a relying party has no basis for determining what is implied by the verification of a digital signature (i.e. whether or not any component of 3-factor authentication is involved at all in the access and/or use of the associated private key).

shared memory programming on distributed memory model?

From: lynn@garlic.com
Subject: Re: shared memory programming on distributed memory model?
Newsgroups: comp.arch
Date: 20 Feb 2005 09:16:36 -0800
del cecchi wrote:
If you want to avoid the evils of message passing, then ccNUMA is for you. A number of companies manufacture them. The earliest papers I recall reading were DASH from Stanford. See also SCI, IEEE1596 standard. SGI origin systems are ccNUMA, as are some X series from IBM, Superdome from HPQ. I believe DASH was followed by FLASH, also at Stanford. I'm sure the manufacturers have white papers or other stuff about their NUMA systems.

A little google and some luck should get you going.


three SCI implementations were convex exemplar (hp processors), sequent (intel processors), and DG (intel processors). HP bought convex, and IBM bought Sequent; DG (?) ... their disk arrays were sold off and somebody bought their brand new campus complex.

at least convex and sequent did a fair amount on partitioning and locality ... sort of an intermediate stage akin to cache locality and cache miss rates. convex heavily modified MACH; sequent modified their dynix system.

SCI was oriented towards taking synchronous bus protocols and converting them to dual-simplex asynchronous (hardware) message(?) protocol. Standard SCI memory "bus" has 64-ports; convex put dual HP processor boards at each "port" (128 processors). Sequent and DG put quad Intel processor boards at each "port" (256 processors).

SCI has been applied to stuff other than memory bus.

i found it interesting that in the late '80s & early '90s period .... LANL was driving COTS standardization of the cray channel in standards meetings (HiPPI), LLNL was pushing COTS for what became the fiber-channel standard, and SLAC was the driving force behind COTS SCI.

Digital signature with Javascript

From: lynn@garlic.com
Subject: Re: Digital signature with Javascript
Newsgroups: sci.crypt
Date: 20 Feb 2005 09:31:49 -0800
minor postscript observation ... digital certificates and the certification of the process used to access and use the private key are totally unrelated. the certification of the process used to access and use the private key is what provides the relying party with any assurance that the verification of a digital signature is related to any 3-factor authentication characteristics.

digital certificates were invented to address the scenario where the relying party has no prior relationship with the key-owner and has no recourse to any sort of near-time access to information about the key-owner. even before the 70s, business relying parties did possibly 90 percent of their business based on established business relationships (making digital certificates redundant and superfluous). in the modern, online world ... for possibly 99 percent of the remaining 10 percent ... a relying party has recourse to near-time information about the key-owner (further making digital certificates redundant and superfluous)

however, don't confuse the business purposes of digital certificates with any characteristic related to 3-factor authentication aspects involving access and use of the private key.

a trivial example is registration of a public key in lieu of mother's maiden name or PINs (in account records) or in lieu of passwords in RADIUS and KERBEROS databases.

slightly related recent observation (comment from recent rsa conference):
https://www.garlic.com/~lynn/2005d.html#14 data integrity and logs

Latest news about mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Latest news about mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 20 Feb 2005 19:40:29 -0700
SchiradinR@ibm-main.lst (Schiradin,Roland HG-Dir itb-db/dc) writes:
In Europe we call it GSE but we never setup meetings like share. Each working group is separated from the other for CICS we have a meeting twice a year including some guys from austria and switzerland (almost 85 attend the sessions). The language is a big issue.

Remember beside the big states we have different languages, different countries, different organization. Asia and austie might be covered by Shane but I guess it's the same like europe.


i gave presentation at fall '86 european share (SEAS) meeting ... was on the isle of jersey.

"share europe" (seas) url:
http://www.daube.ch/share/seas01.html
http://www.daube.ch/share/seas02.html

from above:
SHARE Europe (SEAS)

SHARE Europe was an international (voluntary) association of users of IBM equipment, primarily main frames. The purpose was to exchange information and knowledge by conferences and publications. The scope was scientific technical at the beginning and extended to commercial and administrative data processing.

SHARE Europe was founded 1963 with the name SEAS (Share Europe Association) as an offspring from the SHARE association in the USA. Membership reached about 500 scientific and commercial institutions.

The organisation was clearly International (European), with very little regional work. At the Anniversary Meeting fall 1994 in Vienna it was decided to merge with G.U.I.D.E. to form the new association GSE (Guide Share Europe).

G.U.I.D.E.

G.U.I.D.E. was an international, non-profit-making association of IBM mainframe users. Its name is derived from Guidance for Users of Integrated Data Processing Equipment which summarises the objectives of the association.

G.U.I.D.E. was founded in 1959 from its origins as a division of the GUIDE International Corporation (USA). Until the merger with SHARE Europe, membership of G.U.I.D.E. reached about 2000 companies and institutions. The organisation was based on vivid regional work carried out in local language. Twice a year international conferences were held in English.

At the Anniversary Meeting fall 1994 in Vienna it was decided to merge with SHARE Europe to form the new association GSE (Guide Share Europe).


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Radical z/OS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Radical z/OS
Newsgroups: bit.listserv.ibm-main
Date: Sun, 20 Feb 2005 22:45:56 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
That has not been my experience. My experience has been that the menu structure is counter-intuitive, and that about the time that I learn it they change it again. To say nothing of m$[1] not following its own "standards".

i've seen two different scenarios ... most CLIs were originally developed for line terminals ... expert users could efficiently and quickly accomplish quite a bit in a few keystrokes.

with the advent of display screens ... menus tended to be targeted at the novice user ... people who infrequently did various tasks and had a hard time remembering what features were available, what the commands were, what the arguments were, etc. ... menus tended to be like training wheels on bicycles ... with some people never progressing past that point. these menus tend to be laborious ... because they are intended to hand-hold a casual user thru various complex/convoluted operations (effectively the menu interface as a beginner's training manual that they never progress past).

the counter example is an online environment that has large number of power-users ... they spend a large portion of their day using a very archaic CLI (left over from the sixties). A typical operation requires archaic CLI input, examine response, and then several more CLI inputs ... each requiring response examination before the next input.

A graphical interface was built ... not so much as a menu input crutch ... but to provide responses with more comprehensive information (in effect collapsing several CLI inputs into a single operation, with the user being able to deal with more comprehensive information from a much richer response display). This resulted in a much more effective operational environment.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 21 Feb 2005 09:12:34 -0700
Steve O'Hara-Smith writes:
I am actually suffering a laptop with Windows eXPectorate at the moment which I am not allowed to treat as I normally would. I was allowed to install Cygwin and SFU on it though which eases the pain :)

i felt like i was the only one at RSA with fedora on laptop (wiped clean and fresh install). i have wireless working fine in places like coffee shops, airports, hotels, etc. But for the RSA show, simple wireless (with or w/o encryption) wasn't good enuf. in any case, i didn't get it working.

i guess given all the news stories about attacks on wireless at the show ... i shouldn't be complaining.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The future of the Mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The future of the Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 21 Feb 2005 10:39:05 -0700
Steve Samson writes:
Reminds me of the early MVS objective for continuous operation. As the need for therapeutic IPLs diminished, the need for IPLs to apply updates persisted. IBM then made the wonderful(?) discovery that parallel sysplex seemed to meet the customer need for continuous operation of <something>.

Thus the effort to allow MVS to be refreshed dynamically died, and true continous operation remains as far off as it did in 1980. I still think that IPL should be a single event in the life of a hardware/software combination, and that restarting the system after, say, a power outage should be a simple restore of a snapshotted state.

Maybe it's not so important any more, but is there still an outstanding requirement for continuous operation?


i got to write that up in the corporate continuous availability strategy document .... but it was for ha/cmp ...
https://www.garlic.com/~lynn/subtopic.html#hacmp

and (at the time), both rochester and pok non-concurred (because they couldn't support it) and the section was removed.

a lot of the parallel sysplex stuff came out of the peer-coupled shared data architecture my wife did when she served her stint in POK in charge of loosely-coupled architecture ... misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Latest news about mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Latest news about mainframe
Newsgroups: bit.listserv.ibm-main
Date: Mon, 21 Feb 2005 10:33:22 -0700
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
"Here's Phil Payne to talk about Solid State Disk drives. SAS will be sponsoring the bar from 18:00."

I made it. Fourteen minutes - fastest pitch I've ever given.

I'm probably pitching again this year - IBM Mainframe Futures or some such.


my pitch at fall '86 seas (25th anv. meeting) was a full-day presentation ... but they only scheduled an hour. a bof was then scheduled at 6pm in the room next to the scids ballroom ... and ran until midnight (with lubrication). recent posting:
https://www.garlic.com/~lynn/2005d.html#12

30 year history of european share (seas) ... including dates and location of meetings:
http://www.daube.ch/share/seas01.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adversarial Testing, was Re: Thou shalt have no

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adversarial Testing, was Re: Thou shalt have no
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 21 Feb 2005 18:39:53 -0700
"Trevor L. Jackson, III" writes:
This is a great theory but it fails in practice. The problem with it is that the relationships between the developers and testers evolve to "game the system" with the result that the tension that should exist between them is dissipated. On of the keys to the creation of a market is the independence of hte transactions, which mandates independence of the participants.

For a semi-formal treatement of such iterated transcatios see Axelrod's 1986 book "The Evolution of Cooperation". If the soldiers in the trenches of WWI were able to establish relationships that frustrated the generals what hope is there for mere managers of software development?


there was a process in the disk engineering lab ... where the engineers in the bldg. 14 disk engineering development lab and the engineers in bldg. 15 product test lab didn't report to the same chain of management ... until you got to the lab. director.

supposedly anybody with badge access to one of the bldgs wasn't allowed to have badge access to the other bldg. there was an exception for the processor field engineers that took service calls on the mainframes used for validating the disk hardware (mainframes used by both bldg. 14 and bldg. 15) ... and another exception for me ... partially because i liked to wander around and fiddle with stuff ... both breaking stuff and building stuff that wouldn't break:
https://www.garlic.com/~lynn/subtopic.html#disk

bldg 15 had (has?) this large room-sized environmental chamber that can control air pressure, temperature, and humidity.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adversarial Testing, was Re: Thou shalt have no

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adversarial Testing, was Re: Thou shalt have no
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 21 Feb 2005 22:35:16 -0700
"Trevor L. Jackson, III" writes:
That's a good start. But years later they will all know each other and tit-for-tat backscratching will contaminate the independence. You really need hired guns (they are disposable and not offended thereby).

the ww1 trench cooperation ... if i don't shoot at you, you don't shoot at me ... there is some quid-pro-quo.

the disk engineering lab has extremely detailed error tracking over a period of years ... if the product test engineers let something slip thru ... it reflects back on the product test engineers (they are on the line for not catching it ... and it may involve a large number of people physically visiting customer locations to remedy the problems) ... there is nothing the development engineers can offer to really ameliorate/compensate. one issue is that there is an accountability infrastructure that really holds people accountable. this has been a long term operation spanning decades, producing a long string of products.

a bigger issue in this scenario is to not have the product test engineers contaminated by assumptions made by the development engineers ... so they don't become blind to the same/similar shortcomings in the product ... it isn't a backscratching issue ... it is a viewpoint contamination issue. if the development engineers have overlooked something ... possibly because of the way they are thinking about the subject ... you don't want the same symptoms affecting the product test engineers.

for some topic drift ... a tale about how seriously some of this is taken. there is this industry service that gathers erep (error reporting) data from customer installations about errors and publishes it. long ago and far away i had done this software hack for channel (aka local i/o bus) extension over telco link ... and if there was an unrecoverable error ... i simulated something called channel check to the regular channel error handling code (which would retry the operation in various ways).

some time later, a new processor was coming out which they had designed to have no more than 3-4 channel checks per year across all customers (not 3-4 channel checks per year per machine ... but an aggregate total of 3-4 channel checks per year across all machines). well to their dismay ... the industry reporting service shows up with there having been an aggregate of 15 channel checks across all machines the first year of operation. this kicked up some serious forensics (? the current in-word). They eventually tracked it down to this stuff I had done for channel extender over telco link and asked if I could change it. After some investigation, i determined that if i simulated IFCC (interface control check) instead of CC (channel check) ... the error recovery/retry processes would for all intents and purposes be the same.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adversarial Testing, was Re: Thou shalt have no

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adversarial Testing, was Re: Thou shalt have no
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 21 Feb 2005 22:46:19 -0700
... there was a lot at risk, in the early 80s, stuff coming out of bldg. 14, disk engineering was doing in the range of $20B to $30B per annum revenue (in 1980 dollars) ... for comparison here is intel's total 2004 revenue (in 2004 dollars):
http://press-releases.techwhack.com/50/intel-quaaterly-annual-revenue/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Mainframe and its future.. or furniture

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe and its future.. or furniture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Feb 2005 10:09:22 -0700
dba@ibm-main.lst (David Andrews) writes:
Can't speak for the old timers... but I remember Magnuson. ;-)

What I remember: Carl Amdahl was involved, the machine was rather low-end, and was customer microprogrammable. I was freshly out of college then, and wanted badly to get my hands on one of these (our weekend attempts to hammer microcode into the /145 turned on red lights but accomplished little else). Alas, my employer didn't see the wisdom of purchasing a Magnuson box, so I had to learn microcode on the local university's V-73.


it was a 4341-clone .. i have some recollection of them being over the hill in santa cruz.

the only time i believe i actually ran into one was at cray research.

the standard tcp/ip product was done in vs/pascal and on a 3090 would consume a whole 3090 processor getting 44kbytes/sec.

I had added rfc 1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044

to the product; in testing at cray research, it was getting 1mbyte/sec sustained thruput (the controller channel interface speed) between a cray and a 4341-clone (which i believe was a magnuson) using only a modest amount of the 4341-clone processor.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Mainframe and its future.. or furniture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe and its future.. or furniture
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 22 Feb 2005 10:14:58 -0700
gilmap writes:
.. and a 6603 platter for a coffee table. (A couple of my friends have them.)

i used to spend weekends in the science center machine room
https://www.garlic.com/~lynn/subtopic.html#545tech

one weekend i had been at it over 24hrs and something went wrong and i needed a backup tape. for some reason the room that had most of the tapes in it had its door locked. these doors were heavy, solid fir, and i kicked it once. it decided to split in a clean line from top to bottom where the knob was.

the door was moved to the 4th floor conference area and laid over two (two-drawer) file cabinets to form a desk ... and to remind me not to kick doors.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is a cryptographic monoculture hurting us all?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is a cryptographic monoculture hurting us all?
Newsgroups: sci.crypt
Date: Tue, 22 Feb 2005 12:16:06 -0700
Jean-Luc Cooke writes:
And every "bank" should have the choice of which approved "safe" to use.

that is possibly crypto/technical "safe" ... where "safe" for a financial institution may have additional considerations.

one of the issues with us/ansi X9 standards (and the ISO international equivalents) is that if the financial institution demonstrates that they comply with official standards ... then in any litigation, the burden of proof tends to be on the other party.

If the financial institution chooses implementations (regardless of the reason) that deviate from official standards, then in situations involving litigation ... the burden of proof can shift from the other party (proving the financial institution at fault) to the financial institution (having to prove it wasn't at fault).

then there are things like reg-E ... where there is an assumption of inequality between institutions and individuals ... which places the burden of proof on the institution.

in the mid-90s there were even large merchants approached with the story that if they adopted digital signature authenticated transactions ... then if a consumer certificate could be produced that happened to have the non-repudiation flag set ... then the burden of proof could shift from the merchant to the consumer. an attractive financial prospect for the merchant ... although it could be considered a complete misrepresentation of any certificate non-repudiation flag (which has since been deprecated).

note also ... as outlined
https://www.garlic.com/~lynn/2005d.html#21
https://www.garlic.com/~lynn/2005d.html#19
https://www.garlic.com/~lynn/2005d.html#18
https://www.garlic.com/~lynn/2005d.html#17

even "digital signature authentication" is somewhat a misnomer. a digital signature might be taken as an indirect indication of some aspects of 3-factor authentication having been performed in the access and use of a specific private key ... but w/o the relying party having some knowledge of what authentication processes were used to access and use the private key (for purposes of generating a digital signature), the relying party doesn't actually have any real knowledge about what authentication (if any) a digital signature might imply.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Tue, 22 Feb 2005 12:57:11 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
I'm afraid Olson is right. Testing with the all-zeros file might crash a system, but it is certainly not sufficient for security, not by a long shot. (Make sure you keep clear in your mind the distinction between *necessary* and *sufficient*.)

The all-zeros file is not an example of a malicious input. Security requires going way beyond that kind of simplistic testing. If anyone thinks the all-zeros file is the worst that attackers are going to feed them, they are going to get a big surprise when a serious adversary first tries to attack their software.


when we were working on the original payment gateway and this thing called SSL with this little small startup in silicon valley (that later moved to mountain view)
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

after they had gone thru all their extensive testing ... we produced a fault/failure matrix which listed all possible faults/failures (some amount of this is traditional enumeration of edge conditions) we could come up with, in conjunction with all possible states ... and had them demonstrate that all possible conditions in all possible states were handled.
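
the matrix exercise can be sketched in a few lines of code. this is purely an illustrative sketch in python: the state and fault names (and the decorator-based registration) are invented for the example, not taken from the actual gateway/SSL work. the idea is just to enumerate every (state, fault) cell and report any cell without a demonstrated handler.

```python
# Hypothetical fault/failure coverage matrix: every (state, fault)
# cell must have a demonstrated handler before the matrix is "done".
# All names below are illustrative inventions.

STATES = ["idle", "connecting", "authorizing", "settling"]
FAULTS = ["timeout", "dropped_link", "bad_mac", "dup_message"]

handlers = {}   # (state, fault) -> handler function

def on(state, fault):
    """Decorator registering a handler for one (state, fault) cell."""
    def register(fn):
        handlers[(state, fault)] = fn
        return fn
    return register

@on("connecting", "timeout")
def retry_connect():
    return "retry"

def coverage_gaps():
    """Return every (state, fault) cell with no demonstrated handler."""
    return [(s, f) for s in STATES for f in FAULTS
            if (s, f) not in handlers]

# the matrix is complete only when coverage_gaps() returns an empty list
```

with only the one handler registered, 15 of the 16 cells remain as gaps — the point of the exercise being that every gap is visible and has to be explicitly closed.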

this is somewhat like chip logic checkers (before chips became too complex to do complete coverage) ... one of the earliest was the los gatos state machine (LSM, later renamed logic simulation machine for public consumption) ... which was followed by EVE (endicott verification engine) ... and somewhere in between was the yorktown simulation machine.

as chips became more and more complex ... you started to see, by at least the mid-90s, situations where there might only be a couple percent test coverage ... and the evolution of testing methodologies using things like genetic (adaptive) algorithms.

misc. past posts about lsm, ysm, eve, etc:
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002p.html#38 20th anniversary of the internet (fwd)
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003.html#31 asynchronous CPUs
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004i.html#7 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#65 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns

by comparison in the mid-70s ... when i released the (mainframe) resource manager .... we developed an automated benchmarking methodology and defined something like 1000 different benchmarks to validate and calibrate the resource manager. the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

had done a lot of work on performance monitoring, workload profiling, performance management, performance simulation, and the early inception of capacity planning. part of this was an APL performance model that took in real live data from running systems and was later used as a product called performance predictor on HONE
https://www.garlic.com/~lynn/subtopic.html#hone

where marketing and sales people could ask what-if questions about customer configurations (i.e. what benefit was there to more disks, faster disks, faster cpu, more real storage, etc).

however, for the resource manager validation ... the APL model was fed the results of the first 1000 or so benchmarks and then was allowed to choose configuration, workload, and system parameters for another 1000 benchmarks (examining the result from each benchmark before defining the next one). this set of 2000 benchmarks took something like 3 months elapsed time to run:
https://www.garlic.com/~lynn/submain.html#bench
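
the loop described above — a seed set of benchmarks, then a model that examines each result before picking the next configuration — can be sketched roughly as follows. this is a toy stand-in written in python; the response surface and the `choose_next` heuristic are invented for illustration and bear no relation to the actual APL performance model.

```python
# Toy sketch of model-driven benchmark selection: run seed benchmarks,
# then let a (stand-in) model inspect all results so far before
# choosing each subsequent configuration.
import random

def run_benchmark(config):
    # stand-in for actually running a workload; returns "throughput"
    users, real_storage = config
    return real_storage / (1 + users)          # invented response surface

def choose_next(results):
    # toy "model": probe near the best configuration seen so far
    best_cfg, _ = max(results, key=lambda r: r[1])
    users, storage = best_cfg
    return (max(1, users + random.choice((-1, 1))),
            storage + random.choice((-64, 64)))

random.seed(0)
# seed benchmarks over a small grid of (users, real storage) configs
results = [((u, s), run_benchmark((u, s)))
           for u in (10, 50) for s in (256, 512)]
for _ in range(10):                            # model-chosen follow-ons
    cfg = choose_next(results)
    results.append((cfg, run_benchmark(cfg)))
```

the real exercise differed in scale (2000 benchmarks over 3 months) and in having a calibrated performance model rather than a random probe, but the control flow — result feeds model, model picks next run — is the same shape.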

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adversarial Testing, was Re: Thou shalt have no

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adversarial Testing, was Re: Thou shalt have no
Newsgroups: sci.crypt,alt.folklore.computers
Date: Tue, 22 Feb 2005 20:01:32 -0700
another possible facet comes from boyd ... slightly related paper:

Boyd Cycle Theory in the Context of Non-Cooperative Games: Implications for Libraries:
http://www.webpages.uidaho.edu/~mbolin/bridges2.htm

i sponsored boyd's talks a number of times in the '80s ... various of my boyd related postings:
https://www.garlic.com/~lynn/subboyd.html#boyd

other articles about boyd from around the web:
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Tue, 22 Feb 2005 20:16:56 -0700
Morten Reistad writes:
No, we have made loud complaints about how the academic field handles this subject before. The good results are made by a handful of people. Go look at Multics and OpenBSD. These are the only places I know that has attacked this problem in a systemic manner. Be prepared to read source code.

Or, go read some sites with a different colour on the hat. 2600 magazine is a good place to start.


note that cp67 and vm370 also "attacked" such things in a systematic (and systemic) manner ... there just weren't as many academic papers, even tho multics was on the 5th floor of 545tech sq ... and cp67 and vm370 were done at the science center on the 4th floor of 545tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

there were a number of cp67 and vm370 online time-sharing commercial service bureaus built on the platform (which wasn't true of multics) that required high integrity isolation between the different users:
https://www.garlic.com/~lynn/submain.html#timeshare

minor multics x-over posting
https://www.garlic.com/~lynn/2001m.html#12
https://www.garlic.com/~lynn/2001m.html#15

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

backup/archive

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: backup/archive
Date: Tue, 22 Feb 2005 23:57:30 -0700
Newsgroups: bit.listserv.vmesa-l
on 22feb05 at 13:20:36 -0500, alan altmark wrote:
Alyce, IBM already has a product that provides Linux file-level backup and restore: Tivoli Storage Manager. IBM Backup and Restore Manager for z/VM is for backing up and restoring z/VM objects, not Linux objects.

before it was called TSM ... it was called ADSM, which had evolved out of the workstation datasave facility ... and there is supposedly still compatibility with workstation datasave (or at least workstation datasave clients)

workstation datasave was a facility that ran on vm ... and provided vm as well as various kinds of distributed, pc, workstation, etc. backup.

workstation datasave had evolved out of an internal backup/archive system that i originally wrote for sjr, hone, and the engineering labs ... misc. past postings on backup/archive
https://www.garlic.com/~lynn/submain.html#backup

misc. past postings on hone
https://www.garlic.com/~lynn/subtopic.html#hone

misc. past postings on disk engineering labs
https://www.garlic.com/~lynn/subtopic.html#disk

checking some online TSM documents ... it still references at least supporting workstation datasave clients.

earlier posting on subject:
http://listserv.uark.edu/scripts/wa.exe?A2=ind0312&L=vmesa-l&F=&S=&P=12122

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 23 Feb 2005 08:48:04 -0700
Brian Inglis writes:
Lynn,

Was any of your work on e.g. vs clock, deadline scheduling, dynamic resource management, I/O recovery, etc. ever written up for IBM Sys.J., IBM J.R&D, other pubs, SHARE presentations, or whatever?


there were several cambridge science center reports and a couple of SJR technical reports. then there is my marathon session at SEAS that i've recently referenced.
https://www.garlic.com/~lynn/2005d.html#22 Latest news about mainframe
https://www.garlic.com/~lynn/2005d.html#26 Latest news about mainframe

clock, resource management, etc ... were part of the resource manager. before it was released, i had to write a specific manual (maybe 40-50 pages) for it and give classes (i don't think the manual is still available ... and it carried a copyright ... so you didn't see it leaking into the academic press). this has a transcription of the product announcement letter of the initial resource manager release:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

there is the indirect publication of the stanford phd on clock (and global LRU) in the late 70s; there was an issue of opposition to granting the phd ... because of the disagreement between local LRU and global LRU. i was asked to provide supporting evidence to break the deadlock. there had been an ACM article by the grenoble science center implementing the local LRU strategy on the same operating system and hardware (that i had done global LRU & clock on in the 60s while an undergraduate ... about the same time that the original ACM paper on local LRU was published). i had numbers that showed that my global LRU implementation from the 60s significantly outperformed the grenoble local LRU (on the same operating system and platform) ... misc. past refs:
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
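
for reference, the clock (global LRU) algorithm at the center of that dispute can be sketched in a few lines — a single hand sweeps over all page frames globally (not per-process), giving each referenced page a second chance before eviction. a minimal python sketch for illustration only, not the cp67 implementation:

```python
# Minimal "clock" global LRU page replacement: one hand sweeps all
# frames; a set reference bit earns the page a second chance, an
# unset one makes the frame the eviction victim.

class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes    # page resident in each frame
        self.ref   = [False] * nframes   # per-frame reference bits
        self.hand  = 0

    def touch(self, page):
        """Reference a page; returns True if it had to be faulted in."""
        if page in self.pages:
            self.ref[self.pages.index(page)] = True
            return False                 # hit
        i = self._evict()
        self.pages[i], self.ref[i] = page, True
        return True                      # fault

    def _evict(self):
        while True:
            if self.pages[self.hand] is None or not self.ref[self.hand]:
                victim = self.hand
                self.hand = (self.hand + 1) % len(self.pages)
                return victim            # empty or unreferenced frame
            self.ref[self.hand] = False  # clear bit: second chance
            self.hand = (self.hand + 1) % len(self.pages)
```

the "global" part is what was contested: the hand sweeps every frame in the system regardless of which process owns the page, versus local LRU which replaces only within a process's own partition.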

after i was at the science center ... lots of research resulted in code that was absorbed directly into products ... in one way or another. there was misc. stuff ... like gov. restrictions on pre-announcing products ... if it was being absorbed into a product, talking about it could be considered a violation until after it shipped (and that process could be a year or more). as code became less & less free, you started to see more & more concerns expressed about giving too detailed a description of commercial products. of course, when i was an undergraduate ... there was less concern about possibly proprietary code issues ... even when the code was being absorbed into a distributed commercial product ... like this extract from a '68 share presentation i made as an undergraduate:
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

i guess part of the issue ... was that while i spent a great deal of my career at the cambridge science center and san jose research, for a lot of my research i also wrote product code that was shipped in commercial products. a lot of the time that might have gone into producing academic papers was instead spent on shipping commercial products.

there were a half dozen or so SJR reports drafted that never received corporate approval for publication ... numerous references about the work being too close to commercial (primarily because i would write all the code and drop it into internal production systems ... and so it took on quite a bit of the characteristic of being real and commercial ... as opposed to research).

there were also rumors that i was going to have a harder time receiving publication approval after being blamed for the tandem memos (early online computer conferencing and other stuff). at one time, there was some claim that for specific months, i was somehow responsible for as much as 1/3rd of all bits flowing that month across all of the internal network. note, from just about the beginning until mid-85 or so, the internal network was larger than all of the arpanet/internet ... misc past posts about the internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

random past refs to tandem memos:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 23 Feb 2005 09:12:27 -0700
Peter Flass writes:
Two OSs for two different purposes. VM was built for isolation of users - everybody has their own dedicated machine - and what sharing there is was added later and not too efficiently (e.g.SFS). Multics was built to allow users to share code and data in a controlled manner, and wasn't interested in isolation.

however, there has been a lot written about Multics being deployed in various (gov) environments where it was specifically used for isolation/partitioning of users.

frequently cited is the air force evaluation of multics (for gov. use) and the side note that there was no evidence of buffer overflow situations (either from actual events or from detailed code reviews ... significantly contributing to this is that buffers, and especially target buffers for copy operations, carried explicit max. length values ... the semantics of which were supported by the infrastructure and libraries):
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#81 Multics reference in Letter to Editor
https://www.garlic.com/~lynn/2002h.html#30 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
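
the discipline being credited — every target buffer carries an explicit maximum length that the copy routines enforce — can be illustrated with a small sketch. multics did this in PL/1 with language and library support; the python names below are invented purely for illustration.

```python
# Sketch of the bounded-copy discipline: the maximum length travels
# with the buffer itself, and every copy operation consults it, so
# a copy can truncate (or reject) but never overflow.

class BoundedBuffer:
    def __init__(self, maxlen):
        self.maxlen = maxlen             # explicit max, carried with buffer
        self.data = bytearray()

    def copy_in(self, src: bytes):
        """Copy src into the buffer, truncating rather than overflowing."""
        if len(src) > self.maxlen:
            src = src[:self.maxlen]      # or raise an error, per policy
        self.data[:] = src
        return len(src)                  # bytes actually copied
```

contrast with the classic C idiom where the destination is a bare pointer and the copy routine (e.g. strcpy) has no way to know the target's capacity — the overflow happens precisely because the length is not part of the buffer's semantics.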

cp67 and vm370 in the commercial time-sharing deployments had significant requirements for sharing between (at least) subsets or collections of the online users (i.e. collections of users from the same corporation using the services). There were specific enhancements made by these service bureaus to accomplish such sharing (which didn't show back up in official product ... since they were viewed as commercial advantage).

note also that the original sql/relational database was built on vm/370 ... which had some number of enhancements for sharing, misc. system/r posts:
https://www.garlic.com/~lynn/submain.html#systemr

even tho the first commercial relational database was shipped on multics
http://www.mcjones.org/System_R/mrds.html

i had done a lot at the science center with a paged-mapped filesystem and sharing of memory-mapped file objects ... only a trivially small subset of which actually shipped in products:
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

some of which was used later at sjr in the system/r work. in the tech. transfer of system/r to endicott for sql/ds ... they eventually regressed to wanting a version that didn't require any kernel changes, so it had to be mapped to less fine-grained sharing constructs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 23 Feb 2005 09:19:45 -0700
there was some competition between the code that i was writing on the 4th floor and the code that they were writing (for multics) on the 5th floor. one of my hobbies was doing internal custom production kernel distributions (which had lots of enhancements, only a subset of which made it to the customer product release).

it wasn't fair to compare the work on the 5th floor to the full vm/370 product ... in terms of number of customers ... since the vm/370 group was much larger and had much larger number of customers. It wasn't even fair to compare the work on the 5th floor to just the total number of internal vm/370 installs (since that was still way more than an order of magnitude larger than all multics installs).

so the comparison was between what the 5th floor was doing and what i was doing ... since the total number of multics installations was about the same as the total number of internal corporate installations that i directly supported with highly modified production kernel distribution.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 23 Feb 2005 10:03:23 -0700
ref:
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard

for even more topic drift:
https://www.garlic.com/~lynn/2005d.html#14 data integrity and logs

which is somewhat related to
https://www.garlic.com/~lynn/2005d.html#36 backup/archive

as well as
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 23 Feb 2005 12:52:45 -0700
Brian Inglis writes:
That would be LY20-1996, so available only to licensed users of the PRPQ product. Was that superseded by VM/HPO for VM/SP? I only got involved running VM/SP/HPO(/VSE/CICS/MVS/VTAM/TCP/...) for a few years from about the time the Waterloo etc. tape mods were being reimplemented to include on the vanilla product PUT tape.

that is the story about problems with later shipping SMP support.

up until that time, they were charging for application code ... but not kernel code. the original resource manager got selected to be the guinea pig for charging for kernel code. i got to spend six months off and on with the business people working out the business practices for pricing kernel code. the decision at that time was that if the kernel code wasn't directly necessary for hardware support (stuff like enhanced resource management), then it could be priced; otherwise it was still free.

the problem was that i included in the resource manager a lot of stuff that had been dropped in the transition from cp67->vm370 ... including a lot of structuring for supporting SMP operation. the resource manager was shipped before there was a decision to ship vm370 SMP support ... although I had been also working on a microcode-based SMP implementation:
https://www.garlic.com/~lynn/submain.html#bounce
as well as the microcode ECPS performance assist for 148 (later 4341)
https://www.garlic.com/~lynn/submain.html#mcode

concurrently with working on the resource manager.

In any case, the decision was then made to ship vm370 SMP support ...
https://www.garlic.com/~lynn/subtopic.html#smp

which is obviously directly involved in supporting hardware ... and therefore must be free. however, a lot of the SMP code needed the code in the resource manager. the business rules required that you couldn't have "free" code with dependencies on "non-free" code. the decision was then made to take about 80 percent of the resource manager code and move it into the "free" kernel ... and the remaining code in the resource manager then eventually formed the basis for SEPP (and BSEPP) ... and eventually vm/sp ... where you could now again have a single product ... where all the kernel was now priced.

two months before the resource manager shipped ... VS/REPACK shipped (from the science center). at the time that VS/REPACK shipped, the "science center" (which was a separate entity from the vm/370 development group) was eligible for the non-development product program (people eligible for this product program received 1/12th of each annual lease for the first two years). the month before the resource manager shipped, the "science center" was removed from the list of internal sites that were eligible for the non-development product program. as an aside, the resource manager passed 1000 installed licenses shortly after availability/FCS ... at $850/month ($10k/year). i even offered to forgo my salary to be part of the program.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 08:34:21 -0700
jmfbahciv writes:
It is very rare to have one person be able to do more than one kind of thinking style. Here I'm back to the discussion a.f.c. had with me about my terms "compiler-thinking" and "OS-thinking".

What strikes me as a flaw in security is that the security types are always playing catchup to the crackers. This is an unideal approach and I've been thinking about how to get the security biz out of this rut. Caveat: I've also been accused of believing in fairies, virgins, and inside straights, too. :-)


many situations where they are playing catchup arise because security hasn't been built into the infrastructure ... and has become a re-active post-deployment (after-market) solution (they appear to be constantly playing catchup because they are just reacting to stuff that happens).

when integrity is part of the fundamental infrastructure ... security is less featured ... because there are far fewer events that require post-deployment, band-aid work.

about the time we were doing the original internet payment gateway,
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

i gave a talk at a graduate student seminar at isi/usc about why the internet isn't business critical dataprocessing. you do things differently if you think about situations in terms of threats and countermeasures ... so that it becomes part of the basic landscape.

architects designing buildings are taught about things like wind-loading on exterior surfaces and having to take into account worst-case local wind conditions. there are certain things you do when you design a building to withstand a force 5 hurricane. attempting to retrofit an existing building to withstand a force 5 hurricane isn't normally very successful. If you have situations where the daily wind is able to blow down every building ... eventually you create conventions &/or guidelines so the buildings don't blow down quite so often. i believe most licensed architects are held professionally responsible if they don't take into account well understood hazards when they produce a design.

how is this for a little topic drift (courtesy of search engine):

SHOULD THE PUP BE SPENDING BORROWED MONEY ON 10,000 HOUSES WITH A CUBAN FACTORY, PRODUCING FLAT WALLS, OR BE BUILDING HURRICANE RESISTANT CONCRET DOME HOUSES INSTEAD?
http://belize1.com/BzLibrary/trust235.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure design

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure design
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 10:03:29 -0700
"Tom Linden" writes:
Unlike KeyKOS (Gnosis), hurricane rated design using your metaphor I think about the only thing that was not considered was covert channels.

Changed the subject, tired of seeing it.


when i was undergraduate in the 60s, i was once asked to address a low-bandwidth information leakage situation (a common example in the literature is the number of pizzas delivered after hrs to the white house). i looked at it for a bit ... but i pointed out that the masking activity was probably going to consume more resources than they were willing to devote (a benefit/cost trade-off). some similarity to the differential power attack countermeasures.

it came up again in human factors studies regarding human behavior in predictable response and unpredictable response scenarios ... human productivity tends to be better in predictable response environments (modulo the predictable response not having to be made orders of magnitude worse than just trying to handle the 80th percentile) ... aka making response uniform regardless of the system loading is related to making response predictable.

the opposite example is letting people see response variation under heavy loading ... which can influence behavior ... sometimes resulting in people re-arranging their schedules to take advantage of better response during lightly loaded periods.

rush hour congestion on road systems could be considered an example.
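the "uniform response" idea can be sketched in a few lines of python (illustrative only ... the target value is made up and this is nothing from any actual system): pad every reply out to a fixed target so users see the same response time whether the system is lightly or heavily loaded ... the same masking shape also shows up as a countermeasure for low-bandwidth timing channels.

```python
import time

def uniform_response(work, target=0.25):
    """run work() but always take at least `target` seconds to reply."""
    start = time.monotonic()
    result = work()
    elapsed = time.monotonic() - start
    if elapsed < target:
        time.sleep(target - elapsed)   # hide the load-dependent variation
    return result

answer = uniform_response(lambda: 2 + 2, target=0.05)
```

note the trade-off mentioned earlier: the padding itself consumes elapsed time (resources), and once load pushes the real time past the target, the variation shows through anyway ... so the target has to sit above the load percentile being masked.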

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 10:38:48 -0700
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
There are 100's of thousands of Apache and IIS installations, OS kernels, SMTP and DNS servers, etc., all subject to internet attack. The whole mainframe industry put together never shipped more than a fraction of that much hardware through its whole history. There similarly are hundreds of millions of Windows clients, maybe as many as a billion. And software applications are orders of magnitude more complicated than they were in the old days. Stuff written with the old methods just has no chance now.

the large mainframes of old tended to have large user populations. After standard testing ... deployed systems tended to have uniquely reported bugs proportional to the uniquely different things they were being used for. this tended to result in some increase in uniquely reported bugs up until some threshold of 500-600 of these really large systems (with large user populations) ... but while the total number of bug reports tended to increase in proportion to the number of systems/users ... the number of uniquely reported bugs didn't tend to continue growing (say as the number of systems increased from 500-600 to possibly 20,000).

i would also conjecture for those individuals interested in launching serious internet attacks ... that dockmaster might have represented an extremely attractive target:
https://www.multicians.org/site-dockmaster.html

... you could do a search in various crypto archives covering much of the 90s for "dockmaster" in email addresses.

while the internal network had more nodes than the internet from just about the arpanet origins up thru sometime mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

a large part of the internet node growth after the 1/1/83 switchover to internetworking protocol came from individual (workstation and PC class) node machines ... while the internal network continued to be primarily large mainframe systems.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 14:00:42 -0700
"Trevor L. Jackson, III" writes:
Historical note: Check out the post-WWII development methodology called "Massive Engineering". I think it was first formalized by Lockheed. It is the technique of using, say, 5,000 engineers to design and build an airplane. It is a popular American dysfunction.

note also that lockheed had the skunkworks ... where the really innovative stuff was done.

also ... boyd was a driving factor behind much of the f15, f16, f18 (and, based on lots of his comments, the northrop f20/tigershark which never made it anywhere).
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure design

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure design
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 14:07:38 -0700
Anne & Lynn Wheeler writes:
when i was undergraduate in the 60s, i was once asked to address a low-bandwidth information leakage situation (a common example in the literature is the number of pizzas delivered after hrs to the white house). i looked at it for a bit ... but i pointed out that the masking activity was probably going to consume more resources than they were willing to devote (a benefit/cost trade-off). some similarity to the differential power attack countermeasures.

one might guess that if they got around to asking for countermeasures to a low-bandwidth information leakage vulnerability ... they possibly felt that some of the more serious vulnerabilities had already been addressed.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Thu, 24 Feb 2005 15:44:19 -0700
recent posting found on bit.listserv.ibm-main ... of course i'm a little biased about tcp/ip, having done rfc 1044 support in the '80s
https://www.garlic.com/~lynn/subnetwork.html#1044

....
According to the article, "A VM Renaissance: VM and Linux" by Philip H. Smith III, which was not one of the contributing factors to VM/ESA V2's remarkable stability? (Hint: the article can be found in the Operating Systems department on www.zjournal.com)

a) MTBF rates of more than 50 years for 9672 G5 hardware
b) Mature system management tools
c) Improved TCP/IP connectivity
d) A VM graphical user interface


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure design

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure design
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 25 Feb 2005 08:10:27 -0700
Brian Inglis writes:
Was that done in 3x0 assembler or HLL?

at the time, mostly kernel stuff in assembler. but i may have been somewhat atypical. There were some jokes that, when i was coding, whatever language I was using tended to become my natural language (aka the observation that an indication you are becoming proficient in a natural language is when you start dreaming in it).

at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

besides vs/repack ... there was detailed memory reference tracing, a fortran program that implemented cluster analysis for semi-automated program re-arrangement, and APL for performance, configuration, and workload modeling ... used for selecting automated benchmarks for validating the resource manager
https://www.garlic.com/~lynn/submain.html#bench

and performance predictor (sales & marketing support tool) on world-wide hone system
https://www.garlic.com/~lynn/subtopic.html#hone

there was also a couple people that had implemented an event-driven system model in PLI. I used to joke with them that I would invent some new variation on performance algorithm ... and could then code it in assembler, regression test, and deploy in production operation faster than they could write the PLI code for their model.

One variation on their model could take detailed memory traces (ala vs/repack) and simulate various kinds of page replacement algorithms ... clock global LRU (approximates exact LRU), true/exact global LRU (not very practical in real life, since it keeps the exact reference order of all pages), and various others (like some of the stuff Belady had published from YKT ... like OPT).

So most of the LRU approximations were judged on how close they came to "true" LRU. There was an ACM paper by one of the Multics people showing numbers, using 1, 2, 3, & 4 "history" bits, improving ... but 4 bits didn't improve much more than 3 bits. The PLI model could also do exact LRU (keeping the exact reference ordering of all pages) ... something like the graph shown here:
http://www.cs.mun.ca/~paul/cs3725/material/web/notes/node15.html

so i came up with this sleight of hand variation on clock. In real life, it should be better than standard clock, and in the PLI model it tended to be slightly better than true LRU ... rather than slightly worse than true LRU. various past postings on page replacement algorithms:
https://www.garlic.com/~lynn/subtopic.html#wsclock
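for flavor, a toy comparison of textbook clock against exact LRU on a made-up reference string (this is not the cp/67 code, nor the sleight of hand variation ... just the standard shapes of the two algorithms):

```python
from collections import OrderedDict

def clock_faults(trace, frames):
    """classic clock: per-frame reference bits swept by a rotating hand."""
    slots = [None] * frames          # resident page per frame
    ref = [False] * frames           # hardware reference bits
    hand = faults = 0
    for page in trace:
        if page in slots:
            ref[slots.index(page)] = True      # "hardware" sets the bit
            continue
        faults += 1
        while ref[hand]:             # sweep, clearing bits, until a
            ref[hand] = False        # not-recently-used frame turns up
            hand = (hand + 1) % frames
        slots[hand], ref[hand] = page, True
        hand = (hand + 1) % frames
    return faults

def lru_faults(trace, frames):
    """exact LRU: maintains the full reference ordering of all pages."""
    order = OrderedDict()
    faults = 0
    for page in trace:
        if page in order:
            order.move_to_end(page)  # most recently used goes to the end
            continue
        faults += 1
        if len(order) == frames:
            order.popitem(last=False)   # evict least recently used
        order[page] = True
    return faults

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

exact LRU has to maintain the full ordering on every single reference; clock only touches a per-frame bit and defers the sweep work to fault time ... which is why it was practical where exact LRU wasn't. on any particular short string either one can come out ahead.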

as mentioned before, i had essentially done the original clock as undergraduate in the 60s. and then had done the sleight of hand variation on clock after joining the science center.

something like 10-plus years after doing the original clock, there was a big fuss over granting a stanford PHD on clock. This was basically over clock being a global-LRU approximation rather than a local-LRU approximation. I got to somewhat help resolve the issue because the grenoble science center had published an ACM paper with performance numbers for a local-LRU approximation implementation ... done effectively on the same hardware and operating system being run at the cambridge science center (except the grenoble machine had more real memory, which should have given them an advantage in paging tests). The cambridge machine (with clock, global LRU) had much better performance than the published grenoble numbers (even tho the grenoble machine had 50 percent more real memory for paging).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure design

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure design
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 25 Feb 2005 08:21:02 -0700
Anne & Lynn Wheeler writes:
at the time, mostly kernel stuff in assembler. but i may have been somewhat atypical. There were some jokes that, when i was coding, whatever language I was using tended to become my natural language (aka the observation that an indication you are becoming proficient in a natural language is when you start dreaming in it).

for even more topic drift: in the early 80s there was a researcher who sat in the back of my office, went to meetings with me, etc., and took notes on how i communicated (telephone, face-to-face, etc). They were also provided logs of all my incoming and outgoing email as well as logs of all my instant messages. this went on for nine months.

the research was eventually published as a stanford phd thesis (joint between language and computer ai departments) ... as well as material for subsequent papers and books.

misc. general posts on cmc (some of it about the communication research project)
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Secure design

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure design
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 25 Feb 2005 17:37:28 -0700
"Tom Linden" writes:
The above paragraph which prompted the question was about KeyKOS (aka Gnosis) which had a small kernel written in assembler, otherwise everything else in PL/I. I have the sources.

it is my fault for causing confusion ... i was replying to your post on keykos/gnosis mentioning covert channels ... but brought up a situation from the 60s where i was asked to change the kernel to address low-bandwidth information leakage.

ref:
https://www.garlic.com/~lynn/2005d.html#43 Secure Design
https://www.garlic.com/~lynn/2005d.html#46 Secure Design

later i had visited tymshare numerous times and was somewhat aware of gnosis ... but it was when m/d bought tymshare (80s) and was looking at spinning off various pieces (gnosis, tymnet, etc) that i was brought in to do an evaluation of gnosis.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Fri, 25 Feb 2005 23:25:50 -0700
"Trevor L. Jackson, III" writes:
However, see IBM's Chief Programmer Team concept. It was based on the idea that there really are huge variations in productivity that can be amplified to affect an entire organization. The structure of the team is a Chief Programmer, who is supposed to do all the heavy hauling (the 20% that matters), a team manager (I've forgotten the actual titles) who insulated the team and especially the CP from administrative tasks, a librarian who was supposed to keep track of _everything_, and a large group of assistant programmers who work on derivative code (the 80% that isn't critical).

i was at a conference in early 70s (71?) held at the twin bridge marriott (no longer exists) where harlan mills gave a talk on chief/super programmer. quick use of search engine
http://c2.com/cgi-bin/wiki?ChiefProgrammerTeam

prior post mentioning the conference and also twin bridge marriott (listed as the 1st Marriott hotel, opened in '57)
https://www.garlic.com/~lynn/2004k.html#26 TImeless Classics of Software Engineering

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sat, 26 Feb 2005 07:52:52 -0700
Brian Inglis writes:
IIRC some base data came from TRW and USAF, some results published as: Software Engineering Economics, Barry W. Boehm, PH, 1981 and latterly at USC CSE: Software Cost Estimation with COCOMO II, Barry W. Boehm, PH, 2000 also
http://sunset.usc.edu/research/cocomosuite
and some others have similar interests.


there are other misc. studies at the software consortium
http://www.software.org/

however, i'm partial to

http://www.software.org/quagmire

(just checking software.org website at this moment and for some reason it is timing out)

recently brought to my attention:
http://www.fastcompany.com/magazine/06/dropcode.html

and:

http://www.fastcompany.com/online/06/writestuff.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 27 Feb 2005 08:13:40 -0700
'no execute' flag waves off buffer attacks (in c & c++):
http://www.washingtonpost.com/wp-dyn/articles/A55209-2005Feb26.html

...
They attack programs written in the widely-used C and C++ programming languages. A malicious application will try to bowl them over with a too-large chunk of data that hides some executable code.

... snip ...

prior posts:
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005b.html#39 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#66 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#44 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 27 Feb 2005 09:36:51 -0700
Anne & Lynn Wheeler writes:
'no execute' flag waves off buffer attacks (in c & c++):
http://www.washingtonpost.com/wp-dyn/articles/A55209-2005Feb26.html


note that the above URL requires post registration ... however if you go to one of the news servers (google, msbot) and search on "no execute" ... there will be a URL to click on that bypasses the post registration.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 4 Mar 2005 15:06:46 -0800
Subject: Re: [Lit.] Buffer overruns
Anne & Lynn Wheeler wrote:
'no execute' flag waves off buffer attacks (in c & c++):
http://www.washingtonpost.com/wp-dyn/articles/A55209-2005Feb26.html


Microsoft Researchers Target Worms, Buffer Overruns
http://www.neowin.net/comments.php?id=27321&category=main
Microsoft researchers target worms, buffer overruns
http://www.infoworld.com/article/05/03/03/HNmicrosoftworms_1.html
Microsoft Researchers Target Worms, Buffer Overruns
http://www.pcworld.com/news/article/0,aid,119891,00.asp

Convering 2 Bytes of binary to BCD

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 5 Mar 2005 14:44:46 -0800
Subject: Re: Convering 2 Bytes of binary to BCD
del cecchi wrote:
Doesn't System/360, 370,......z have a convert to decimal/packed decimal opcode?

convert to binary
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.32?SHELF=EZ2HW125&DT=19970613131822

convert to decimal
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.33?SHELF=EZ2HW125&DT=19970613131822

pack
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.68?SHELF=EZ2HW125&DT=19970613131822

unpack
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.98?SHELF=EZ2HW125&DT=19970613131822

also, possibly of some interest

edit
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/8.3.4?SHELF=EZ2HW125&DT=19970613131822

edit and mark
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/8.3.5?SHELF=EZ2HW125&DT=19970613131822

example of edit and mark
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.4.5?SHELF=EZ2HW125&DT=19970613131822&CASE=

partial extract (see above for additional explanation):
The EDIT AND MARK instruction may be used, in addition to the functions of EDIT, to insert a currency symbol, such as a dollar sign, at the appropriate position in the edited result. Assume the same source in storage locations 1200-1203, the same pattern in locations 1000-100C, and the same contents of general register 12 as for the EDIT instruction above. The previous contents of general register 1 (GR1) are not significant; a LOAD ADDRESS instruction is used to set up the first digit position that is forced to print if no significant digits occur to the left.


Pattern
1000                                100C
__ __ __ __ __ __ __ __ __ __ __ __ __
|40|5B|F2|6B|F5|F7|F4|4B|F2|F6|40|40|40|
|__|__|__|__|__|__|__|__|__|__|__|__|__|
  b  $  2  ,  5  7  4  .  2  6  b  b  b

This pattern field prints as:

$2,574.26

Condition code 2 is set to indicate that the number edited was greater
than zero.
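the convert-to-decimal mechanics can be cross-checked with a little python (this sketch is mine, not from the principles of operation; it mimics CVD's 8-byte packed result with a 0xC/0xD sign nibble):

```python
def cvd(value: int) -> bytes:
    """rough analogue of S/360 CVD: signed binary to packed decimal.
    two decimal digits per byte; the sign sits in the low nibble of
    the last byte (0xC positive, 0xD negative); result is 8 bytes."""
    sign = 0xC if value >= 0 else 0xD
    digits = f"{abs(value):015d}"               # 15 digits + sign nibble
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((hi << 4) | lo
                 for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

packed = cvd(257426)      # the extract's 2,574.26, held in cents
```

formatting packed output with a currency pattern ... what ED/EDMK do with the pattern bytes above ... is then just digit selection, a fill byte for leading zeros, and placing the currency symbol at the first significant digit.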

Thou shalt have no other gods before the ANSI C standard

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: sci.crypt, alt.folklore.computers
Date: 8 Mar 2005 12:58:37 -0800
Subject: Re: Thou shalt have no other gods before the ANSI C standard
Douglas A. Gwyn wrote:
That's not a good general principle. In fact, pointers are at their most useful when used to link together nodes of a nonsequential data structure such as a tree.

row/column organization was the paradigm for relational/sql .... numerous posts about relational/sql work from the 70s and early 80s
https://www.garlic.com/~lynn/submain.html#systemr

pointers still find use in organizations that are non-uniform (not homogeneous row/column) ... which can be tree, mesh, hierarchical, network, etc.

UMLS at NIH's NLM (organization of medical knowledge) is an example of an infrastructure that has both hierarchical organization (possibly tree) as well as an interconnected mesh:
http://www.nlm.nih.gov/research/umls/

where the structure is not easy to mangle into an organization conducive for uniform row/column representation.

in the 70s i got to work on the development of both the original relational/sql implementation as well as a network-oriented implementation (both implementations using higher level abstractions to subsume physical pointers).

the network orientation is much easier to use with large amounts of information that are non-uniform, with arbitrary and possibly anomalous organization.
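a toy illustration of the contrast (hypothetical data, nothing to do with System R or UMLS internals): the same tiny vocabulary held as uniform rows versus directly linked nodes:

```python
# relational style: uniform rows, relationships recovered by matching
# on ids (what a join does) rather than by following pointers.
rows = [
    {"id": 1, "term": "heart", "parent": None},
    {"id": 2, "term": "aorta", "parent": 1},
    {"id": 3, "term": "valve", "parent": 1},
]

def children(pid):
    return [r["term"] for r in rows if r["parent"] == pid]

# network style: nodes link directly, with no fixed schema ... so an
# anomalous cross-link (turning the tree into a mesh) costs nothing.
class Node:
    def __init__(self, term):
        self.term, self.links = term, []

heart, aorta, valve = Node("heart"), Node("aorta"), Node("valve")
heart.links += [aorta, valve]
aorta.links.append(valve)      # cross-link: no longer a strict tree
```

the row form needs its schema decided in advance and every record to fit it; the linked form absorbs irregular structure ... which is the trade-off sketched above.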

a small example is the RFC index and the merged glossary and taxonomy work:
https://www.garlic.com/~lynn/index.html

Virtual Machine Hardware

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 12 Mar 2005 05:41:38 -0800
Subject: Re: Virtual Machine Hardware
John Savard wrote:
I have been doing a bit more work on my pages - now in a PDF format - about a computer architecture. I had put in a bit for a 'virtual' mode, but without any further description of the feature.

Now, I have fleshed it out. I have a bit for virtual mode, a bit for multi-level virtual mode, and an eight-bit field identifying the parent process of a process.

It occurred to me that having virtual machine child processes might be something a virtual machine child process might do.

Now, when a virtual machine program running under a real machine program executes an instruction, three things can happen; the instruction can fail; the instruction can be executed immediately; or the parent process can simulate the instruction, because the child process 'thinks' it can do what the instruction does, but it really can't.

But if you allow multiple levels, a *fourth* thing can happen.


one of the original uses of distributed development with the internal network (the internal network had more nodes than the arpanet/internet from just about the beginning to sometime around mid-85)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was a project between science center
https://www.garlic.com/~lynn/subtopic.html#545tech

and endicott for 370 virtual machines with virtual memory support.

cp/67 was a virtual machine operating system running on 360/67. 370 was going to have virtual memory support .... while there was a large commonality between 360 and 370 instructions (although there were some new instructions in 370) ... the control registers and segment/page tables had different formats between 360/67 hardware virtual memory and 370 hardware virtual memory.

cp/67 already supported "non-virtual memory" 370 virtual machines as well as 360/67 virtual machines. it was possible to run a copy of cp/67 in a virtual machine under cp/67 running on a real 360/67 (and run other operating systems under the virtual copy of cp/67).

the cp/67 kernel was modified to provide a special 370 virtual machine .... which provided support for the new 370 instructions (not implemented on 360) as well as translation of 370 virtual memory tables into "shadow" 360 virtual memory tables.
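the shadow-table idea is just composing two address mappings; a minimal sketch (made-up page numbers, ignoring table formats, invalidation, and protection bits):

```python
# guest table: guest-virtual page -> guest-"real" page (what the
# virtual machine's own system built). host table: guest-real page ->
# actual host frame. the shadow table the real hardware walks is
# their composition: guest-virtual -> host frame.

def build_shadow(guest_table, host_table):
    shadow = {}
    for gva, gra in guest_table.items():
        hra = host_table.get(gra)     # guest-real page may be paged out
        if hra is not None:
            shadow[gva] = hra
    return shadow

guest = {0: 5, 1: 7}     # guest maps virtual pages 0,1 to its real 5,7
host = {5: 12, 7: 30}    # hypervisor maps guest-real 5,7 to host frames
```

whenever the guest changes its tables (or the host steals a frame), the affected shadow entries have to be invalidated and rebuilt ... most of the real complexity is in that bookkeeping, not in the composition itself.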

another set of modifications were made to cp/67 so that it assumed that it was running on a 370 real machine ... and it created/used 370 format tables (rather than 360/67 format tables).

there was another problem. the 370 virtual memory support had not been announced. The science center cp/67 system provided time-sharing service to numerous non-corporate employees ... including various BU, MIT, Harvard, and other students in the cambridge area. there was a big concern that any of these non-corporate employees might accidentally stumble across the 370 implementation support. as a result, it was decided that the version of cp/67 that provided 370 virtual machine support (as an option, in addition to regular 360 virtual machines) would only run in a 360/67 virtual machine and not on the real hardware (so it would be isolated from access by non-authorized individuals also using the same machine).

so the operation was:
360/67 real hardware
  cp/67-l on real hardware ... providing 360 and 360/67 virtual machines
    cp/67-h running in 360/67 vm ... providing 360, 360/67, and 370 virtual machines
      cp/67-i running in 370 vm ... providing 370 virtual machines
        cms running in 370 virtual machine

note that the cp/67-i kernel was in regular operation a year before there was the first 370 engineering processor running with hardware virtual memory support.

a story about when the engineers got the first 370 engineering processor with virtual memory support operational: they were interested to see if the cp/67-i kernel would run on the machine. one of the cambridge people went to endicott with a copy of the kernel. they booted the kernel on the machine and it failed. after a little investigation, it was determined that the engineers had made a mistake implementing some of the 370 virtual memory support. introduced with 370 were the "b2" opcodes ... and two of the new "b2" opcodes were RRB (reset reference bit) and PTLB (purge table lookaside buffer). the engineers had reversed the opcode implementations for the two instructions. The cp/67-i kernel was patched to swap the opcodes to correspond with what was implemented in the hardware, and the kernel then booted and ran successfully.

misc. past postings on 370 virtual machine effort
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general

Misuse of word "microcode"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 12 Mar 2005 06:07:57 -0800
Subject: Re: Misuse of word "microcode"
John Savard wrote:
Yes, microcode is the hard-to-use instruction set intended for interpreting other instruction sets in a computer, and it can be programmable (Packard-Bell 440, the Microdata machines). Many computers don't have microcode - not just RISC computers. Thus, the IBM System/360 Model 75 was the first hardwired model in that series, the smaller ones having microcode. A modern Pentium IV does a few instructions with microcode - but it uses a different technique for the rest, known as a 'decoupled microarchitecture'.

I think that this is what Gene Amdahl was thinking about when he started Trilogy and he said he had come up with this ingenious new idea that provided the flexibility of microprogramming and the efficiency of a hardwired architecture; was the Trilogy secret ever revealed?

But I have heard the word 'microprogramming' used in a different sense at an early stage. The PDP-8 operate instructions, which had individual bits controlling operations like shift, increment, and negate, were called 'microinstructions' in the early PDP-8 manuals.


the low-end 370 machines tended to have vertical microcode .... an instruction set that looked very much like regular machine instructions. many of these implementations ran at approximately a 10:1 ratio ... i.e. on avg ten microcode instructions executed for every 370 instruction. misc. past m'code postings:
https://www.garlic.com/~lynn/submain.html#mcode

one of the things done for the ECPS project was to identify the highest-executed kernel paths and translate them into microcode. for the ecps kernel paths, there was about a 1:1 translation from 370 to microcode ... achieving a 10:1 performance improvement. the project was started for the new 370 138/148 machines when it was determined that they had approx. 6kbytes of unused microcode instruction space and were looking for new stuff that could be done to use up that space.
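the selection problem can be sketched as a tiny greedy knapsack (all numbers here are invented for illustration ... the real ECPS analysis used measured kernel profiles):

```python
# pick the kernel paths that buy the most (executed 370 instructions
# eliminated per byte of microcode store) until the ~6kbyte budget of
# spare 138/148 microcode space is used up. with ~1:1 translation,
# each path's microcode size is roughly its 370 path length in bytes.

paths = [  # (name, microcode bytes, executions per second) ... invented
    ("dispatch",   900, 50_000),
    ("free-store", 700, 30_000),
    ("page-fault", 1400, 20_000),
    ("io-intr",    1200, 18_000),
    ("ccw-trans",  2600, 15_000),
]

BUDGET = 6 * 1024
chosen, used = [], 0
for name, size, freq in sorted(paths, key=lambda p: p[2] / p[1],
                               reverse=True):
    if used + size <= BUDGET:       # greedy by payoff per byte
        chosen.append(name)
        used += size
```

with these invented numbers everything fits except the biggest path; each chosen path then runs roughly 10x faster, per the vertical-microcode ratio above.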

misc. past ecps postings
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

the high-end 370s used horizontal microcode which was a much more difficult programming undertaking. introduced at some point by the Amdahl 370 clones was something called "macrocode" .... it was a microcode mode for the Amdahl 370 clones that used a subset of the 370 instruction set (and didn't support self-modifying instructions, a long-term performance issue in standard 370 architecture). 3033 cross memory services, 370-xa, and follow-ons were starting to introduce much more complex processor operational characteristics ... most of them in privileged or supervisor mode and some not very performance sensitive. it was significantly easier for Amdahl to implement many of these features using a subset of the 370 instruction set than in the native machine horizontal microcode.

Amdahl also introduced a hypervisor mode for their processors that was implemented mostly in "macrocode" .... basically a subset of virtual machine operating system. IBM eventually responded to the Amdahl hypervisor support with PR/SM which has since evolved into LPARs (logical partitions).

random past posts mentioning pr/sm, lpars:
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#45 Linux paging
https://www.garlic.com/~lynn/2002p.html#46 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#12 Why are there few viruses for UNIX/Linux systems?
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
https://www.garlic.com/~lynn/2003n.html#29 Architect Mainframe system - books/guidenance
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2004b.html#58 Oldest running code
https://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
https://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
https://www.garlic.com/~lynn/2004e.html#26 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#28 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#47 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004k.html#37 Wars against bad things
https://www.garlic.com/~lynn/2004k.html#43 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#41 EAL5
https://www.garlic.com/~lynn/2004m.html#49 EAL5
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#13 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#32 What system Release do you use... OS390? z/os? I'm a Vendor S
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005b.html#5 Relocating application architecture and compiler support
https://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general

Misuse of word "microcode"

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 12 Mar 2005 17:57:49 -0800
Subject: Re: Misuse of word "microcode"
Peter Flass wrote:
This is becoming more confusing as time goes by. Now IBM has something called "millicode", which, I guess, is supposed to be somewhere between microcode and user-level code, and is written in something like a subset of z/Assembler:
http://www.research.ibm.com/journal/rd/483/heller.html

Then there's the code that drives the channel subsystem, etc., which I would also have called microcode, that's written in PL.8
http://www.findarticles.com/p/articles/mi_qa3751/is_200207/ai_n9095577


the millicode description sounds a lot like the Amdahl macrocode implementation from the early '80s.

lots of the custom microprocessors have migrated to 801 processors ... at one time in the late 70s / early 80s .... there was a project to migrate all of the (smaller) microprocessors to 801s
https://www.garlic.com/~lynn/subtopic.html#801

including low end 370s, controllers, subsystems, etc.

about the time some number of these projects were killed, there was romp (research/office products), which started out being a follow-on for the office products displaywriter (using the CP.r operating system ... everything programmed in PL.8). when the displaywriter project got killed ... it was decided to retarget the hardware to the unix workstation market. the PL.8 effort was retargeted to implementing a kind of 801 hypervisor called VRM (virtual resource manager) providing an abstract virtual machine interface. the organization that had done the AT&T unix port to the PC for pc/ix was hired to do a similar port to the VRM interface. this was announced as the pc/rt and aix.

Virtual Machine Hardware

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 12 Mar 2005 17:45:15 -0800
Subject: Re: Virtual Machine Hardware
Eric P. wrote:
One interesting twist on this is allowing shared memory between separate virtual machines so they can communicate. The wall between VM's doesn't have to be totally solid (it's just software after all so it can do whatever we want).

note that in the 70s ... this was the original relational/sql implementation under vm/370 for system/r:
https://www.garlic.com/~lynn/submain.html#systemr

which had implemented read/write shared segments for sharing across cms virtual machines running pieces of system/r.

later in the early '80s ... similar type of stuff was done for a unix implementation. the model was taken from the tss/370 ssup implementation done for at&t unix in the late '70s.

minor past reference to unix vm/370 ...
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack

Misuse of word "microcode"

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 13 Mar 2005 06:33:09 -0800
Subject: Re: Misuse of word "microcode"
Peter Flass wrote:
Then there's the code that drives the channel subsystem, etc., which I would also have called microcode, that's written in PL.8
http://www.findarticles.com/p/articles/mi_qa3751/is_200207/ai_n9095577


note also in the low-end 360s, not only was the 360 instruction set implemented using the instruction set (microcode) of the native processing engine ... but they also had integrated channels ... which had separate microcode, and the processing engine was time-shared between the 360 processing code and the channel processing code. the 360/65/67 and above had separate channel processing boxes.

the other issue was that the channel program interface was synchronous. there was a language & sequence of instructions called channel programs that could do things like conditionals and looping. the 360 processor had the start i/o (SIO) instruction that told the channel processor to initiate a channel program. the channel would indicate when a channel program was finished with an i/o interrupt to the processor.

a couple of the issues addressed in 370 were

1) introduced siof ... start i/o fast, in 360 the sio instruction went all the way out to the controller and device and came back ... which could be a couple hundred feet away ... which could be tens of microseconds (worst case could be significantly more). siof would handshake with the channel and return ... with the initial contact of the controller and device proceeding asynchronously

2) DASD CKD operation defined a synchronous outboard record search operation that dedicated the channel and controller while the disk revolved. A new "sector" disk operation was defined that would disconnect the device from the controller and channel during rotation until a specific sector location had been reached ... at which time it attempted to reconnect. misc. posts about CKD disk operational characteristics
https://www.garlic.com/~lynn/submain.html#dasd

3) there could be a (relatively) large amount of latency between the time the device signaled completion, the processor accepted the interrupt, handled any interrupt processing and finally got around to redriving the device with any queued channel program. the new 2305 fixed-head disk introduced the concept of multiple device addresses. the 2305 fixed-head disk had eight separate device addresses .... each could have a pending channel program initiated; helping mask the standard device redrive latency delay.
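the search/loop flavor of channel programs mentioned in (2) can be sketched as a toy interpreter. everything here is a hypothetical stand-in (opcode names, chain layout, track representation), not the real CCW encodings; the point is just to show why a search loop kept the channel and controller busy while the disk rotated:

```python
# Toy model of a channel program: a chain of CCW-like operations with
# command chaining and a transfer-in-channel (TIC) used for looping.
# Opcode names and layout are illustrative, not actual 360/370 formats.

SEARCH, TIC, READ = "SEARCH", "TIC", "READ"

def run_channel_program(ccws, track, key):
    """Interpret a toy CCW chain: search the rotating track for a
    record key, looping (via TIC) until it matches, then read it."""
    pc = 0   # index of the current CCW in the chain
    pos = 0  # current rotational position on the (toy) track
    while pc < len(ccws):
        op, arg = ccws[pc]
        if op == SEARCH:
            if track[pos][0] == key:
                # a successful search skips the next CCW in the chain
                # (in the toy, that skips the TIC and falls into READ)
                pc += 2
            else:
                pos = (pos + 1) % len(track)  # disk keeps rotating
                pc += 1                        # fall into the TIC
        elif op == TIC:
            pc = arg        # branch back within the channel program
        elif op == READ:
            return track[pos][1]  # transfer the record data
    return None

track = [("r1", b"alpha"), ("r2", b"beta"), ("r3", b"gamma")]
# SEARCH; TIC back to the SEARCH; READ once the key matches
program = [(SEARCH, None), (TIC, 0), (READ, None)]
data = run_channel_program(program, track, "r3")  # loops until r3 rotates by
```

note that the SEARCH/TIC pair spins for the whole rotation; that busy time is exactly what the 370 "sector" operation avoided by disconnecting the device until the right sector came around.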

the low-end 370s continued the implementation of integrated channels ... i.e. the same native processor having different sets of time-shared native programming ... one implementing channel processing and one implementing 370. for some processors there was even an integrated controller implementation ... where there was additional native processor programming that also implemented device controller function.

for 303x line, the 370/158 was taken and a dedicated processor was configured that only contained the integrated channel processing programming. this was called a channel director. the 3031 was a 370/158 processing engine with only the native processing engine programming support for the 370 instruction set. there was a second native 370/158 processing engine that was dedicated channel processing programming (aka, a default 3031 uniprocessor was actually a pair of 370/158 processors sharing the same memory, one dedicated to executing the channel processing native code and one dedicated to executing the 370 processing native code). a 3032 was a 370/168 repackaged to use a channel director (370/158 native engine with only channel processing native code). A 3033 processor started out being a 370/168 wiring diagram remapped to faster chip technology (and configured to utilize 303x channel director).

the avg. 3031 mip benchmarks showed significantly better 370 processing thruput (than the 158) because the native engine was no longer being time-shared between executing the 370 native programming and the channel native programming.

some past 158 & 3031 comparison postings
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#37 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)

while 303x channel processing was moved outboard to dedicated processors, there was still additional latency involved in handling the SIOF instruction, and there was all sorts of synchronous latency in handling i/o interrupts and redriving queued device i/os (the general case, not just the 2305 fixed-head disk) as well as the effects on cache hit ratios of having asynchronous i/o interrupts.

as part of rewriting the i/o subsystem for the disk engineering lab to significantly improve reliability and availability ... essentially an extremely hostile i/o environment attempting to test devices under development;
https://www.garlic.com/~lynn/subtopic.html#disk

i had also highly optimized the pathlength for device i/o redrive ... however there was still some latency, and there were still the cache hit effects of asynchronous i/o interrupts.

370/xa ... besides introducing 31-bit addressing mode and expanding on the dual-address space architecture, also introduced a queued i/o interface. it was now possible to define a queued interface of pending channel programs and a queued interface of completed i/o operations .... this was an attempt to mitigate the cache effects of having asynchronous i/o interrupts as well as the latency in getting around to redriving a device with a pending channel program. this required a more sophisticated and more powerful channel subsystem. among other things ... since a lot of the low-level processing activity was masked from the main processor ... there was a lot more administrative, timing, and capacity planning information that had to be kept by the channel subsystem processor ... and available for sampling/interrogating by the main operating system.
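the queued interface described above amounts to a pair of producer/consumer queues between the operating system and the channel subsystem. the sketch below is a toy model of that idea only; the class and method names are invented for illustration and are not actual 370/xa structures:

```python
from collections import deque

# Toy sketch of a queued I/O interface: instead of one SIO and one
# interrupt per operation, the OS and the channel subsystem hand work
# back and forth through queues. Names here are hypothetical.

class ChannelSubsystem:
    def __init__(self):
        self.pending = deque()    # channel programs queued by the OS
        self.completed = deque()  # finished operations awaiting the OS

    def enqueue(self, program):
        # OS side: queue work without waiting for the device
        self.pending.append(program)

    def service(self):
        # channel side: drive queued programs back-to-back, so devices
        # are redriven without waiting on OS interrupt handling
        while self.pending:
            prog = self.pending.popleft()
            self.completed.append(("done", prog))

    def drain(self):
        # OS side: pick up many completions per interruption, which
        # softens the cache impact of asynchronous interrupts
        results, self.completed = list(self.completed), deque()
        return results

css = ChannelSubsystem()
for n in range(3):
    css.enqueue(f"ccw-chain-{n}")
css.service()
# a single drain() now returns all three completions
```

the design point matches the post: batching completions both hides device-redrive latency and reduces how often the main processor's cache is disturbed by interrupt processing.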

misc. past posts related to benchmarking, performance tuning and the early days of capacity planning:
https://www.garlic.com/~lynn/submain.html#bench

misc. past posts on multi-address space addressing
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001d.html#28 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#53 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005c.html#63 intel's Vanderpool and virtualization in general
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns

Cranky old computers still being used

From: lynn@garlic.com
Subject: Re: Cranky old computers still being used
Newsgroups: alt.religion.kibology,alt.folklore.computers
Date: 13 Mar 2005 07:31:12 -0800
dogsnus wrote:
USENET? I'd no idea that's what it was called in France. Here, we call it the INTERNET. Al Gore invented it.

Terri


as an aside, the internal network had more nodes than the arpanet/internet from just about the beginning to sometime mid-85:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

collection of some posts related to transition from arpanet to internet in the great 1/1/83 switch-over
https://www.garlic.com/~lynn/internet.htm

there are some misc. internet archeological references at my rfc index:
https://www.garlic.com/~lynn/rfcietff.htm
i.e.
https://www.garlic.com/~lynn/rfcietf.htm#history

i was starting some computer conferencing related stuff in the late '70s on the internal network ... about the same time uucp & usenet were evolving. there was still significant usenet flowing over uucp & non-internet paths into the mid-90s.

a few related past post/threads
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000d.html#67 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#13 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#26 Al Gore, The Father of the Internet (hah!)
https://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#38 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000e.html#39 I'll Be! Al Gore DID Invent the Internet After All ! NOT
https://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#45 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#46 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#49 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#17 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#74 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#49 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#10 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#51 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#52 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#19 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#6 LISTSERV(r) on mainframes
https://www.garlic.com/~lynn/2002e.html#61 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#21 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#74 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#58 history of CMS
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#81 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
https://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
https://www.garlic.com/~lynn/2002.html#16 index searching
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#35 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#36 pop density was: trains was: Al Gore and the Internet

Misuse of word "microcode"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Misuse of word "microcode"
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 13 Mar 2005 15:49:58 -0700
lynn writes:
the low-end 370 machines tended to have vertical microcode .... instruction set that looked very much like regular instructions. many of these implementations ran at approximately 10:1 ration ... i.e. avg ten microcode instructions executed for every 370 instruction. misc. past m'code postings:
https://www.garlic.com/~lynn/submain.html#mcode


the 370 115/125 were another pair of interesting microcode machines.

the basic machine had a shared memory bus with ports for up to nine processors. a typical 370 115 configuration might have 4-6 identical processors installed; one with microprogramming implementing the 370 instruction set and the rest with microprogramming implementing various device controller functions (disk, tape, communications, etc). the native processor engine was about 800kips, which with the 370 microprogram load delivered about 80kips 370 (approx. 10:1 ratio).

the 125 was identical to the 115 except that a faster native processing engine was used for the 370 function. the native 125 370 processor engine was about 1mips, yielding approx. 100kips 370. otherwise the rest of the hardware configuration was the same as the 115 (possibly 2-8 800kip processor engines loaded with various kinds of device controller microcode).

in the 70s, i worked on VAMPS design;
https://www.garlic.com/~lynn/submain.html#bounce

a special 125 smp implementation where up to five of the shared memory ports would be occupied by 370 processor engines.
https://www.garlic.com/~lynn/subtopic.html#smp

i migrated much of the dispatching logic into the microcode of the processor engine ... the kernel software would queue and dequeue tasks ... but the machine microcode actually handled dispatching of tasks on specific processors (somewhat analogous to the i432 migrating multiprocessing dispatching into the silicon of the machine).

i also did a higher level queue/dequeue for disk i/o operations ... somewhat akin to what was found in later 370/xa that appeared with the 3081 ... discussed in previous post:
https://www.garlic.com/~lynn/2005d.html#62 Misuse of word "microcode"

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 14 Mar 2005 08:14:00 -0700
jmfbahciv writes:
People are still dying in automobile crashes. No matter how much crash prevention is installed, there will still be auto crashes as long as people use them. Similarly, there will always be buffer management problems. The key is to not allow the rogue code to make a mess; I include certain OS distributors as rogue code.

I still think there can be a way for an owner of computer gear to vet all bits before they're written in core; I'm assuming there isn't any path to disk without going through memory. This gives control of bits to the owner/user of the boxes.


note however, that in the case of fatalities and automobile crashes, there is active study of the most common causes and efforts to institute corrective actions for the most common. the no execute countermeasure for a particular class of buffer overflows is a simple analogous example.

one of the others raised in the side thread about automatic bounds checking (ABC) ... was that in common deployment today ... string copy operations don't have an infrastructure-defined length for the target buffer ... and a claim that this represents a potentially fruitful area for human mistakes. furthermore, normal ABC operation is dependent on the infrastructure providing indications as to the bounds (length and/or end) of the areas involved .... and if the infrastructure has no indication as to the bounds ... it is difficult for ABC operations to be performed at all. the side point was that if the infrastructure was enhanced to provide bounds information for target buffers, then not only could ABC operations make use of the bounds information, but standard string library functions could also be enhanced to make use of the infrastructure-provided bounds information.
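the point about infrastructure-carried bounds can be made concrete with a small sketch. the `BoundedBuffer` type below is hypothetical, invented just to show the idea: once the bound travels with the buffer, a strcpy-style operation can honor it instead of overrunning:

```python
# Sketch of bounds carried by the "infrastructure": the buffer records
# its own capacity, so copy routines (and any automatic bounds
# checking) have a target length to check. Hypothetical type, not any
# real library interface.

class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity          # bound kept with the buffer
        self.data = bytearray(capacity)   # fixed-size storage
        self.length = 0                   # bytes currently in use

    def str_copy(self, src: bytes) -> int:
        # a bare strcpy has no target length available to check; here
        # the recorded bound lets the copy truncate rather than overrun
        n = min(len(src), self.capacity)
        self.data[:n] = src[:n]
        self.length = n
        return n                          # bytes actually copied

buf = BoundedBuffer(8)
copied = buf.str_copy(b"0123456789abcdef")  # 16 bytes offered, 8 fit
# copied == 8; the last 8 source bytes never touch adjacent storage
```

whether the copy truncates, raises an error, or reallocates is a policy choice; the prerequisite in every case is that the bound exists somewhere the copy routine can find it, which is exactly the infrastructure change the paragraph above argues for.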

misc. past archived posts regarding buffer overflows
https://www.garlic.com/~lynn/subtopic.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual Machine Hardware

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual Machine Hardware
Newsgroups: comp.arch
Date: Mon, 14 Mar 2005 08:47:30 -0700
"Eric P." writes:
Actually this is not quite enough. If VM_2 can write arbitrary values to its own page table, it could deposit a physical address in a PTE that would allow it write access to protected parts of VM_1. So the VM_2 page tables must be supervisor read-only (something the x86 cannot do - it only could do NoAccess) so write accesses trap to VM_1 so it can verify that any frame numbers VM_2 is trying to deposit do in fact belong to that virtual machine. That would slow things down considerably.

note that the virtual machine supervisor will be emulating the hardware lookaside buffer semantics with the shadow tables (in the case of cp/67 on the 360/67, the hardware lookaside buffer was called the associative array) ... creating localized copies of the page tables.

from the standpoint of the operating system running in a virtual machine ... the virtual machine supervisor IS the hardware for its machine. the fact that virtual machine supervisor is using defined page table format for its emulation of the hardware look-aside buffer is an artifact of the emulation.

note however, in the previous post about cp67-h providing 370 virtual machines .... the 370 tables were somewhat different than the definition of the 360/67 tables.

in much the same way that the hardware lookaside buffer doesn't have to match any format of the page table definitions ... the cp67-h supervisor was free to translate the 370 page tables into 360/67 "shadow" page tables.
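
a minimal sketch of the two-level resolution behind a shadow table fill (all names here are hypothetical, and a real implementation would also deal with protection bits, invalidation, page-in, etc.): the guest's table maps guest-virtual to guest-"real", the supervisor's table maps guest-real to host-real, and the shadow entry caches the composition.

```c
#include <assert.h>
#include <stdint.h>

#define PAGES   16
#define INVALID UINT32_MAX

/* toy flat page table: frame[vpage] gives the next-level page number */
typedef struct { uint32_t frame[PAGES]; } pagetable;

/* on a shadow-table miss, emulate the hardware walk: resolve through both
   levels and cache the combined mapping in the shadow entry */
uint32_t shadow_fill(pagetable *shadow, const pagetable *guest,
                     const pagetable *host, uint32_t vpage)
{
    uint32_t greal = guest->frame[vpage];       /* guest OS's mapping */
    if (greal == INVALID || greal >= PAGES)
        return INVALID;   /* invalid in the guest: reflect the fault to it */
    uint32_t hreal = host->frame[greal];        /* VM supervisor's mapping */
    if (hreal == INVALID)
        return INVALID;   /* not resident: the VM supervisor pages it in */
    shadow->frame[vpage] = hreal;  /* shadow maps guest-virtual -> host-real */
    return hreal;
}
```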

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 14 Mar 2005 10:00:59 -0700
Brian Inglis writes:
According to the National Institute of Standards and Technology, in the past 4 years, 871 buffer overflow vulnerabilities were exploited, comprising about 20 percent of all exploits. The world of computers has developed astronomically since the 1960s, yet buffer overflow vulnerabilities have persisted. It's 2005 now -- perhaps it's about time for general programming practices to catch up.

earlier in the thread ... i had pointed out that the NIST numbers from the linux article were approx. the same that i had come up with last spring looking at the CVE numbers. prior postings mentioning the nist numbers:
https://www.garlic.com/~lynn/2005b.html#20 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#0 [Lit.] Buffer overruns

as an aside, did anybody stop at the mitre cve booth at rsa?

we had one of the people from the cve booth come give a talk at the x9f security/crypto standards meeting in san antonio two weeks ago.

x9 standards site:
http://www.x9.org/

for a little topic drift ... merged x9f glossary and taxonomy:
https://www.garlic.com/~lynn/index.html#glosnote

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Misuse of word "microcode"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Misuse of word "microcode"
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 14 Mar 2005 11:12:56 -0700
pechter@shell.monmouth.com (Bill Pechter Carolyn Pechter) writes:
Actually, IIRC Reuters ran an emulated PDP8 on the 11/60's Writable Control Store...

Said to be the fastest #$(*&^% PDP8 ever -- but I think my Simh emulated 8 on a P4 probably beats it (or the Doug Jones emulator with the blinkin lights may be quicker if I didn't tune it down for reasonable emulation)...


there are a couple of packages (commercial as well as open source) that provide 360/370/etc virtual machines on current intel platforms.

many people have commented that they are significantly faster than most of the 360s or 370s that they dealt with in the 60s and 70s.

there are even old operating systems (vm/cms and mvs) available for running on these emulated machine architectures.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Tue, 15 Mar 2005 07:07:20 -0700
jmfbahciv writes:
Sure. Another note would be that there are only a few auto manufacturers and the quality of their distribution can be controlled (to the point of too much control) by governments. OTOH, there are millions of people producing code and gazillions of computers producing and distributing code...and data. I predict that, if we don't start to solve these problems in-house,

there is also a large difference in the number of c compiler writers and the number of c coders. one of the early thread postings was that most c-environment string copy operations are to buffer areas that have no infrastructure defined length. this led to some observations

1) some other environments (like PLI), where both source and target areas have explicit infrastructure-defined lengths, have had significantly fewer buffer overflow issues (analogous to the reduction in traffic fatalities when various safety-related features were introduced).

2) automatic bounds checking is dependent on infrastructure determinable bounds (like start/end or start/length) ... it would appear to be difficult to implement automatic bounds checking for storage areas that have no infrastructure determinable bounds.

the corollary was that if storage areas had infrastructure-determinable bounds ... say in order that an automatic bounds checking implementation were possible (aka #2) ... then the C environment libraries might also be able to take advantage of those bounds ... which might result in C-implemented applications having a frequency of buffer overflow events much more akin to other application environments that had infrastructure-determinable bounds as part of their basic environment (aka #1).
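
to make #2 concrete (a hypothetical sketch; checked_array and checked_get are invented for illustration): once the bound is recorded with the storage area, an automatic bounds check reduces to a single comparison per access ... without a recorded bound, there is nothing for the check to compare against.

```c
#include <assert.h>
#include <stddef.h>

/* hypothetical descriptor: the storage area carries its own bound,
   the way a PLI-style environment would record it */
typedef struct {
    size_t len;    /* infrastructure-recorded element count */
    int   *elems;
} checked_array;

/* returns 1 and stores the element on success, 0 on a bounds violation;
   the check is only possible because len is recorded with the area */
int checked_get(const checked_array *a, size_t i, int *out)
{
    if (i >= a->len)
        return 0;
    *out = a->elems[i];
    return 1;
}
```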

misc ...
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Virtual Machine Hardware

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Virtual Machine Hardware
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 15 Mar 2005 13:31:03 -0700
"Eric P." writes:
It would probably make nested VM's easier if the MMU optionally allows software managed TLB. Assuming it already supports the normal instructions for invalidating all/one TLB entries, it also needs to:

- support supervisor read-only pages
- allow disable of automatic hardware table lookup or tree walks so it can optionally be software managed.
- have bit in interrupt vector entries that optionally either disables virtual translation or forces address space switch.
- Address Space IDs in the TLB would help make this efficient.

support MMU/TLB instructions that:
- query whether an entry is present in the TLB
- add an entry to the TLB
- lock a few entries down so the TLB miss routine cannot miss


so a VM supervisor next to the hardware has to emulate a TLB for the virtual machines that it manages; this could be done by having special unique address space (shadow) tables ... specific to the virtual address space it is managing. if the VM supervisor were running on software-loaded, inverted-table hardware, it would possibly have to assign a unique address space identifier and do the correct stuff with TLB loads and invalidates.

a VM supervisor only really needs to be aware of the next level virtual machine address spaces ... it shouldn't have to know that the supervisor running in the virtual machine ... for which it is managing virtual address spaces ... in turn is emulating virtual machines in those virtual address spaces.

so on 370, the low-end machines had TLBs that only handled a single virtual address space at a time ... switching the virtual address space pointer would always flush the TLB. the 370/168 was STO-associative (segment table origin, unique per virtual address space) with a 3-bit identifier. It had seven slots to remember STO addresses ... and each TLB entry was either invalid or associated with one of the seven (saved) STO addresses. If a new STO address was loaded that wasn't already in one of the slots, it would select one of the STO slots for replacement and invalidate all TLB entries with the corresponding STO-associative slot tag.

vm370 went thru somewhat similar evolution ... initially keeping only a single shadow table per virtual machine and each time the virtual machine changed its virtual page table pointer, it would invalidate all the page table entries in the shadow table. Later versions of vm370 would keep multiple shadow tables per virtual machine and effectively perform a management algorithm similar to that described for the 370/168 TLB.
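
a sketch of the slot-management algorithm described above (the names and the round-robin replacement choice are my own simplifications, not the 168's actual replacement selection):

```c
#include <assert.h>
#include <stdint.h>

#define STO_SLOTS   7
#define TLB_ENTRIES 32

/* 168-style STO-associative management: seven remembered segment-table
   origins; loading an eighth distinct STO evicts one slot and invalidates
   every TLB entry tagged with that slot */
typedef struct {
    uint32_t sto[STO_SLOTS];          /* remembered STO addresses, 0 = free */
    int      victim;                  /* round-robin replacement cursor */
    int      entry_slot[TLB_ENTRIES]; /* slot tag per TLB entry, -1 = invalid */
} sto_tlb;

/* load a (possibly new) STO; returns the slot it occupies */
int sto_load(sto_tlb *t, uint32_t sto)
{
    for (int s = 0; s < STO_SLOTS; s++)
        if (t->sto[s] == sto)
            return s;                 /* already resident: nothing flushed */
    int s = t->victim;                /* pick a slot for replacement */
    t->victim = (t->victim + 1) % STO_SLOTS;
    for (int e = 0; e < TLB_ENTRIES; e++)
        if (t->entry_slot[e] == s)
            t->entry_slot[e] = -1;    /* invalidate entries with this tag */
    t->sto[s] = sto;
    return s;
}
```

the later vm370 shadow-table management was effectively the same bookkeeping applied to whole shadow tables instead of TLB entries.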

At the same time, there was an evolution of virtual machine microcode hardware assists ... aware of running in virtual machine mode with shadow tables ... on a page table entry miss, the assist would simulate the function of the VM supervisor ... checking whether the corresponding address space table entry in the virtual machine was valid and then translating it using the address space table in the VM supervisor (for that virtual machine) to resolve the page number for filling in the shadow table entry.

there is a special case of virtual machine microcode hardware assist for pr/sm and lpars ... where the hypervisor microcode is providing a subset of the virtual machine function and the virtual machine storage is a dedicated section of contiguous memory. in this case, the TLB management hardware just has to be aware of which partition it is operating on and adjust all (supposedly) real page numbers by a fixed address offset (aka the starting address of that partition's fixed storage).
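
the fixed-offset adjustment is simple enough to sketch in a few lines (partition layout and names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* each partition owns a contiguous chunk of real storage, so a "real"
   address generated inside the partition is adjusted by a fixed offset
   and checked against the partition's extent */
typedef struct {
    uint64_t base;  /* host absolute address where the partition starts */
    uint64_t size;  /* size of the partition's dedicated storage */
} partition;

/* returns the host absolute address, or UINT64_MAX if out of bounds */
uint64_t lpar_translate(const partition *p, uint64_t preal)
{
    if (preal >= p->size)
        return UINT64_MAX;   /* addressing outside the partition's storage */
    return p->base + preal;
}
```

no shadow tables are needed for this case ... which is why the pr/sm-style assist is so much simpler than full virtual machine shadow-table emulation.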

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Metcalfe's Law Refuted

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Metcalfe's Law Refuted
Newsgroups: alt.folklore.computers
Date: Tue, 15 Mar 2005 14:38:25 -0700

http://slashdot.org/articles/05/03/15/1820230.shtml?tid=95

from above:
pdp0x14 writes "Cnet News reports on a powerful refutation of Metcalfe's Law (that the value of a network goes up with n^2 in the number of members). The academic paper is available at Southwest Missouri State University. Basically, the thesis is that not all the links in a network are equally valuable, so Metcalfe's argument that everyone can connect to everyone (n(n-1)/2 links, roughly n^2) is irrelevant. The authors propose nlog(n) instead, a much smaller increase"

snip ...

slight topic drift ...

in the early 80s, somebody from corporate hdqtrs came out and stated that the internal network couldn't exist
https://www.garlic.com/~lynn/subnetwork.html#internalnet

because such a large "peer-to-peer" network (the standard corporate orientation at the time was that "network" meant a large number of communication lines connecting terminals to a mainframe host ... so the "peer-to-peer" label was needed to indicate the difference) would involve a huge design and programming implementation effort requiring really significant dollar and people resources.

no such significant dollar and people resource items had ever shown up at the corporate level ... and all significant resources were accounted for ... it was therefore impossible for the internal network to exist.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 08:01:46 -0700
jmfbahciv writes:
What I would you to speculate about is: If PLI had been distributed as freely and widely as C would the oodles of code still be "buffer safe"? I've only met PLI once and that was in the form of the cards I punched for the guy who wrote the code 37 years ago so I don't remember much. PLI never got popular because it was a PITA to use. In my area one had to drive from Kalamazoo to Ann Arbor, submit the card deck, and then drive back with the results because a user was only allowed one run/visit.

the upfront learning curve tended to be higher and the full PLI (at least IBM) tended to be much larger. however there were various PLI subsets that were used at universities.

i would assert that a number of things were going on, at least by the early 80s.

ibm mainframe and software minimum configurations were fairly large (not easy to justify incremental installations at universities, at least by comparison to some alternatives)

ibm's education discount was significantly reduced (especially in comparison to what they had been in the 60s)

there weren't a lot of freely available, easily portable PLI compilers

the environment for the ibm pli compilers; operating systems, etc were proprietary and not portable

a number of new hardware vendors were starting to appear with relatively inexpensive computing offerings, smaller minis, workstations, etc. the past model of a vendor building both proprietary hardware and operating systems from scratch was difficult to apply (i.e. a full-blown proprietary operating system from scratch would be significantly more effort than the whole vendor hardware effort).

a demand was emerging for entry level, relatively portable, relatively non-proprietary operating system and programming environment at universities and the vendors of these new emerging class of hardware computing products.

i had put together a proposal in the 82/83 time-frame to address most of these issues ... but it became involved in large corporation politics and got way too much attention and one characterization would be that it reached (blackhole) critical mass and imploded. there was some study of various (portable?) languages that could be used for operating system implementation and their associated integrity characteristics. I did a couple demos of taking existing kernel components (implemented in assembler) and redesigning and recoding them from scratch in (enhanced) pascal.

a few postings on one such activity:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2004g.html#19 HERCULES
https://www.garlic.com/~lynn/2004p.html#3 History of C
https://www.garlic.com/~lynn/2005d.html#38 Thou shalt have no other gods before the ANSI C standard

this somewhat overlapped the fort knox/801 period where a variety of custom microprocessors (used in controllers, devices, low-end 370s, various system/xxs) would be replaced with 801 with common pl.8 programming language (supposedly pl.8 comes from it being 80 percent of pli).
https://www.garlic.com/~lynn/subtopic.html#801

the low-end 370s would have had an 801 microprocessor with the 370 microcode implemented (mostly) in pl.8 ... but suffering the traditional 10:1 instruction ratio (10 microprocessor instructions for every 370 instruction). small chipsets that implemented 370 directly in silicon (avoiding a lot of the 10:1 mip ratio degradation) were also starting to appear. I had proposed a uniform board design where (relatively) large collections of such computer boards (the primary limiting factor being cooling constraints) could be packaged in drawers and racks (mixing 370 boards, 801 boards, memory boards). this was envisioned to be a mixture of shared-memory (aka tightly-coupled) multiprocessing and loosely-coupled/cluster operation ... aka a real early precursor to GRID. the idea was to take the most cost-effective components and be able to replicate them in large numbers. minor ref with some drift:
https://www.garlic.com/~lynn/95.html#13

i envisioned making a transition from 370 assembler based infrastructures to pl.8/pascal implementations running natively (at significantly higher mip rate) directly on 801s. part of this could be directly implementing operating system feature/function from scratch ... and part an incremental transition involving translating native 370 assembler to higher level code and then compiling back down to native 801. for instance, i've posted before about a pli program that i had written in the early 70s that processed 370 assembler listings, built an abstract representation of the program, did detailed code flow and register use analysis ... and could spit out a higher level abstract representation of the program.

misc. past postings on assembler analysis:
https://www.garlic.com/~lynn/94.html#12 360 "OS" & "TSS" assemblers
https://www.garlic.com/~lynn/2000d.html#36 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2001j.html#6 Losing our culture.
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2002k.html#38 GOTOs cross-posting
https://www.garlic.com/~lynn/2003n.html#34 Macros and base register question
https://www.garlic.com/~lynn/2004d.html#21 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004i.html#12 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004k.html#36 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#35 Shipwrecks
https://www.garlic.com/~lynn/2005b.html#16 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#28 Relocating application architecture and compiler support

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 08:09:28 -0700
jmfbahciv writes:
But would the evolution of the functionality of PLI match C? Would the bounds checking have given the coders enough of a headache during their development cycles that they would have defaulted to an "easier" technique?

bounds checking in PLI could be turned on/off ... in fact i believe that the optimizing compiler and the check-out/debug compiler were totally different products in the early 70s.

the issue about automated bounds checking ... is that it is extremely difficult if the infrastructure has no information as to the bounds of specific storage areas. standard convention in C is for the programmer to manage bounds/lengths of allocated buffers ... with no infrastructure available length/bounds information.

PLI environments tend to support infrastructure length/bound information on allocated buffers, and the semantics of the standard library string operations (not just a special debugging ABC mode) make use of such length/bound information as part of their normal operation (aka the normal semantics of lots of library features include the standard use of the length/bounds metaphor ... and would avoid buffer overflow events because of the normally defined semantics).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 08:16:38 -0700
jmfbahciv writes:
What I would you to speculate about is: If PLI had been distributed as freely and widely as C would the oodles of code still be "buffer safe"? I've only met PLI once and that was in the form of the cards I punched for the guy who wrote the code 37 years ago so I don't remember much. PLI never got popular because it was a PITA to use. In my area one had to drive from Kalamazoo to Ann Arbor, submit the card deck, and then drive back with the results because a user was only allowed one run/visit.

as an aside, the boston programming center (the 2nd floor of 545 tech sq had the science center machine room, the 3rd floor had the boston programming center, the 4th floor had the science center, and the multics group was on the 5th floor) was shipping something called CPS (conversational programming system) on os/360 in the 60s and early 70s. It had support for conversational basic and PLI (2741 and 105x terminals as the standard operation, as opposed to card decks). There was also special microcode available for the 360/50 that significantly improved the performance of many CPS operations.

misc. past boston programming center and/or cps postings:
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#19 ITF on IBM 360
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003h.html#34 chad... the unknown story
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004e.html#37 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#47 PL/? History
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
https://www.garlic.com/~lynn/2004.html#32 BASIC Language History?
https://www.garlic.com/~lynn/2004m.html#54 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#28 Relocating application architecture and compiler support

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 16 Mar 2005 08:24:01 -0700
"Trevor L. Jackson, III" writes:
When was this? Were you visiting a place called North Campus?

my wife periodically complains about sometimes not carefully managing her class schedule and finding that she had two physically distant, back-to-back classes and not enuf time to make it from one to the other.

a couple years ago we were in a business meeting which eventually became a little more social discussion at a coffee break ... and discussing where and when people went to school. it eventually came out that my wife and another person had been in the UofM engineering graduate school at the same time ... and my wife commenting that she was the only female at the time. this other person said no you weren't ... and gave a name. my wife claimed to be her ... and this other person then made the mistake of carefully looking at her and stating that my wife had gotten a lot older.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
