List of Archived Posts

2006 Newsgroup Postings (04/19 - 04/30)

The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
It's official: "nuke" infected Windows PCs instead of fixing them
It's official: "nuke" infected Windows PCs instead of fixing them
The Pankian Metaphor
Binder REP Cards
Security
Security
Security
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
Binder REP Cards (Was: What's the linkage editor really wants?)
Binder REP Cards
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
The Pankian Metaphor
Security
confidence in CA
confidence in CA
confidence in CA
The Pankian Metaphor
Intel vPro Technology
Intel vPro Technology
The Pankian Metaphor
The Pankian Metaphor
64-bit architectures & 32-bit instructions
The Pankian Metaphor
Taxes
Taxes
The Pankian Metaphor
Mainframe vs. xSeries
Mainframe vs. xSeries
Mainframe vs. xSeries
Intel VPro
Mainframe vs. xSeries
The Pankian Metaphor
blast from the past, tcp/ip, project athena and kerberos
guess the date
The Chant of the Trolloc Hordes
Mainframe vs. xSeries
nntp and ssl
The Chant of the Trolloc Hordes
Need Help defining an AS400 with an IP address to the mainframe
Mainframe vs. xSeries
Mainframe vs. xSeries
History of first use of all-computerized typesetting?
The Pankian Metaphor
PDS Directory Question
Sarbanes-Oxley
The Pankian Metaphor

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Wed, 19 Apr 2006 19:43:21 -0600
Donald Tees writes:
Building a road happens once. Maintenance goes on forever.

a little more search engine use:

this reference
http://www.pavingexpert.com/blokroad.htm

has the following:
Before any pavement construction can be specified, it is necessary to quantify the value of 2 variables....

* The CBR (California Bearing Ratio) of the sub-grade

* The expected traffic volumes and types over the design life of the pavement (usually 20 years)


... snip ...

http://www.fhwa.dot.gov/hfl/about.cfm

has the following:
Highways built to last 25 years take such a pounding from the level and the weight of traffic that they rarely last that long. Almost a third of all bridges in the country are either structurally deficient or functionally obsolete. Yet, many highway engineers agree that 50-year pavements and 100-year bridges should be attainable using current technology.

... snip ...

http://www.eng-tips.com/viewthread.cfm?qid=113163&page=1

has the following:
QUOTE: IT IS RECOMMENDED THE NEW PAVEMENT FOR 30 TO 50 YEAR RANGE DESIGN LIFE TO ACCOMODATE THE 75 YEAR LIFE OF THE BRIDGE STRUCTURE. THE DESIGNER SHOULD CHOOSE THE DESIGN WHICH PROVIDES THE MAXIMUM DESIGN LIFE WITHIN THE BUDGET CONSTRAINTS.

... snip ...

also from above:
For pavement design, in the UK the design life could be as little as 10 years, with increasing requiements for stiffness to prolong life. As stiffness goes up, so should the 'potential' life of the pavmement.

... snip ...

for some drift, the World Bank has a reference here that summarizes some of the software tools they have for designing roads
http://www.worldbank.org/html/fpd/transport/roads/rd_tools/hdm3.htm

....


http://www.ctre.iastate.edu/educweb/ce453/labs/Lab%2003%20Design%20Traffic.doc

has the following
In the design of any street or highway the designer must select a design life for the facility. In the case of most highways the geometric design life is 30-60 years and the pavement design life is 20-30 years, taken from the date of construction. The length of the design life is affected by policies established before and during the project's life for the environment in which it is placed. Residential and commercial development, geography, expected traffic, and soil conditions may affect the performance of the pavement and the operation of the facility. The objective of this laboratory exercise is to estimate the traffic volumes and percentage of cars and trucks that will use the facility over the course of the next thirty years. This estimate will be used in the design of the pavement thickness, completion of the environmental impact statement, and will affect the horizontal and vertical alignments.
... snip ...

and of course from a previous post

misc road construction ref:
https://web.archive.org/web/19990508000322/http://www.dot.ca.gov/hq/oppd/hdm/chapters/t603.htm
603.1 Introduction

The primary goal of the design of the pavement structural section is to provide a structurally stable and durable pavement and base system which, with a minimum of maintenance, will carry the projected traffic loading for the designated design period. This topic discusses the factors to be considered and procedures to be followed in developing a projection of truck traffic for design of the "pavement structure" or the structural section for specific projects.

Pavement structural sections are designed to carry the projected truck traffic considering the expanded truck traffic volume, mix, and the axle loads converted to 80 kN equivalent single axle loads (ESAL's) expected to occur during the design period. The effects on pavement life of passenger cars, pickups, and two-axle trucks are considered to be negligible.

Traffic information that is required for structural section design includes axle loads, axle configurations, and number of applications. The results of the AASHO Road Test (performed in the early 1960's in Illinois) have shown that the damaging effect of the passage of an axle load can be represented by a number of 80 kN ESAL's. For example, one application of a 53 kN single axle load was found to cause damage equal to an application of approximately 0.23 of an 80 kN single axle load, and four applications of a 53 kN single axle were found to cause the same damage (or reduction in serviceability) as one application of an 80 kN single axle.


... snip ...

from the previous posting's extract on the "design of pavement structural section", the paragraph:
Pavement structural sections are designed to carry the projected truck traffic considering the expanded truck traffic volume, mix, and the axle loads converted to 80 kN equivalent single axle loads (ESAL's) expected to occur during the design period. The effects on pavement life of passenger cars, pickups, and two-axle trucks are considered to be negligible.

... snip ...

the last sentence in the referenced paragraph comments that
The effects on pavement life of passenger cars, pickups, and two-axle trucks are considered to be negligible.

... snip ...
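for a rough sense of why the passenger car axle-loads drop out of the calculation, the commonly cited rule-of-thumb is a "fourth power" approximation (note: the published AASHO/cal. dot equivalence tables are more involved than this ... the following is only an illustrative sketch in python, with the 80 kN reference taken from the above and everything else made up):

# rough sketch of the "fourth power" rule-of-thumb for equivalent single
# axle loads (ESALs); real design guides use more elaborate equations
ESAL_REF_KN = 80.0                 # reference single axle load (80 kN)

def esal_factor(axle_load_kn):
    # approximate damage of one axle pass, in 80 kN ESAL equivalents
    return (axle_load_kn / ESAL_REF_KN) ** 4

print(esal_factor(53))             # ~0.19, same ballpark as the quoted 0.23
print(esal_factor(7))              # passenger car axle ... ~0.00006, negligible

(the simple power law doesn't reproduce the quoted 0.23 exactly ... the published tables are derived from the AASHO road test serviceability equations, not a single exponent)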

previous refs:
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2002j.html#41 Transportation
https://www.garlic.com/~lynn/2002j.html#42 Transportation
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#50 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#58 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 07:35:39 -0600
jmfbahciv writes:
Michigananders pay their plate fees based on the type of vehicle. Doesn't this qualify as proportional payment of maintenance?

this reference, posted by sidd,
http://www.cait.rutgers.edu/finalreports/FHWA-NJ-2001-030.pdf

goes into quite a lot of analysis of various mechanisms looking at fully apportioned costs related to highway use. one comment is that several states have gone to a "3rd" kind of fee for heavy trucks, which is related to (loaded) weight and miles traveled (to more accurately apportion road use costs; fuel tax doesn't accurately apportion costs for heavy trucking, and even registration fees based on gross vehicle weight fail to adequately adjust for accurately apportioned road use costs).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 07:56:53 -0600
jmfbahciv writes:
Right. I want momentarily drift the thread. There is a word or phrase for what I'm trying to talk about (which your premise seems to ignore); I want to say intangible but I don't think it's a correct usage. I pay school taxes. I have no kids and no plans to have kids. Yet I do not mind throwing money into this pool (unless it's used to build schools that won't last two years but that's wise usage of funds). I also have a Honda Civic which needs a refill of fuel every three months. I also don't mind paying the gas tax IIF it goes towards maintenance of roads (Mass. doesn't but that's because they're crook^WDemocrats). I benefit indirectly from these institutions and understand that spreading the cost burden ensures that the institutions will continue. To delay my part of that payment until I purchase the production of those institutions is years too late. It will ensure that the institutions disappear.

so economic policies can take into account intangible societal benefits. these are almost always just expressed in qualitative terms. however, there is almost never any quantitative analysis that goes along with the decisions. for instance, a quantitative analysis might say that a subsidy of "X dollars" per pound-mile transported represents an intangible societal benefit of "Y". However, any subsidy in excess of "X dollars" may encourage bad societal behavior (say excessive use of subsidized long-haul transportation of goods that provide no societal benefit).

so i could claim that the lack of quantitative analysis as part of economic policies (in support of intangible societal benefits) is somewhat the basis for the comment in the comptroller general's talk about programs lacking instrumentation, metrics, and audits; aka attempting to validate that various program fundings which have been described as having qualitative social benefits ... actually have had any real, measurable benefits.

past posts mentioning the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor

... i.e.
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

without actual data about correctly apportioned costs by various (heavy trucking) activity ... it would be difficult to do a cost/benefit analysis of possible subsidies for long-haul heavy truck transportation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 09:34:10 -0600
Anne & Lynn Wheeler writes:
past posts mentioning the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor

... i.e.
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

without actual data about correctly apportioned costs by various (heavy trucking) activity ... it would be difficult to do a cost/benefit analysis of possible subsidies for long-haul heavy truck transportation.


ref:
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor

now *intangible social benefits* somewhat implies that it can't be measured. however, sometimes *intangible social benefits* is a codeword for "measurement and cost/benefit analysis isn't allowed" (lest it be discovered that somebody's pet program isn't providing any general benefits).

i caught bits & pieces of discussion last night about directed appropriations ... I think I heard that one transportation bill had something like 3000 amendments for directed appropriations and something like $26B was involved (I may have gotten it wrong ... the number of directed appropriation amendments may have been spread over a larger number of bills ... but mostly they supposedly had little or nothing to do with transportation).

part of comptroller general's comments may be that you might be able to at least determine whether there has been any change at all after some funding.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 10:27:04 -0600
ref:
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor

and the comptroller general's talk
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

there is this folklore that in the wake of Chuck Spinney's (one of Boyd's compatriots) congressional testimony in the early 80s (regarding Spinney's analysis of numerous Pentagon spending programs, drawn from purely non-classified sources) ... which got some pretty extensive press coverage ... the pentagon created a new document classification "No-Spin" (aka would be downright embarrassing in the hands of Chuck Spinney)

Boyd had this story that since they couldn't get Spinney for being required to tell the truth in testimony in front of Congress, the secdef blamed Boyd (for likely having masterminded the whole thing) and had orders cut for Boyd to be assigned to someplace in Alaska, along with a life-time ban on Boyd ever being allowed to enter the Pentagon building.

misc. past posts mentioning John Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. posts from around the web mentioning John Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

John died 9Mar97
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

Spinney gave something of a eulogy at the Naval Institute in July 1997 titled "Genghis John".
http://www.d-n-i.net/fcs/comments/c199.htm#Reference
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm#Reference

from above:
One hardly expects the Commandant of the Marine Corps to agree with a dovish former Rhodes Scholar, or an up-from-the-ranks, brass-bashing retired Army colonel, or a pig farmer from Iowa who wants to cut the defense budget. Yet, within days of each other in mid-March 1997, all four men wrote amazingly similar testimonials to the intellect and moral character of John Boyd, a retired Air Force colonel, who died of cancer on 9 March at the age of 70.

General Charles Krulak, our nation's top Marine, called Boyd an architect of victory in the Persian Gulf War.


... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 12:21:04 -0600
Donald Tees writes:
In Ontario, it is directly proportional to the "registered gross weight" which is the maximum legal load. In other words, you can *buy* load capability. Driving a vehicle over it's gross registered weight carries quite severe penalties, and the maximum you can buy is directly related to number of axles and type.

an issue in the whole series of articles is that heavy truck traffic (even within maximum legal load limits) still results in significant road damage. overloaded vehicles can result in significant additional infrastructure degradation ... and so you have all those weigh stations all over the country (presumably attempting to catch the relatively prevalent practice of overloading; aka if overloading wasn't prevalent ... then you probably wouldn't need/have all those weigh stations).

so the claim is that the infrastructure damage costs as a result of (legal) heavy truck traffic is significantly larger than what is being recovered both thru fuel taxes and/or registration fees based on GVW.

all the road design documents state that the damage is proportional to the number of heavy truck adjusted axle-loads (heavy truck traffic adjusted for axle load/weight characteristics).

the most recent reference implies that it is widely recognized throughout the highway industry that there need to be additional fees to correctly apportion true highway infrastructure damage costs related to heavy truck traffic, and that several states are already collecting such fees (i.e. the aggregate damage cost is proportional to some mile-axle-load measure ... the specific axle-load damage times the number of miles of road that has been travelled and therefore damaged).
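purely as a toy illustration of how such a weight-distance fee would apportion costs (all the rates below are made up; the only thing taken from the design references is the idea that damage scales with equivalent axle-loads times miles traveled):

def weight_distance_fee(esal_per_mile, miles_traveled, cost_per_esal_mile):
    # hypothetical fee: damage-equivalents applied per mile of travel,
    # times miles traveled, times an assumed recovery cost per ESAL-mile
    return esal_per_mile * miles_traveled * cost_per_esal_mile

# a loaded 5-axle truck at (say) 2.4 ESALs/mile vs. a passenger car at
# (say) 0.0003 ESALs/mile, both traveling 1000 miles, at an assumed
# $0.03 per ESAL-mile
print(weight_distance_fee(2.4, 1000, 0.03))      # ~$72
print(weight_distance_fee(0.0003, 1000, 0.03))   # ~$0.01

(flat fuel taxes or GVW registration fees obviously don't produce anything like that several-thousand-to-one spread)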

misc. past posts referring to heavy truck axle-loads
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#53 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#54 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor

reference to study of accurately accounting for highway infrastructure use costs and various fees and mitigation strategies:
https://www.garlic.com/~lynn/2006h.html#1 The Pankian Metaphor

part of the study discusses municipal buses (as examples of heavy trucks) that have routes through residential streets that otherwise prohibit commercial heavy truck traffic and were never built for the number of associated heavy truck axle-loads (that are the result of the municipal bus traffic).

one municipal bus item discussed was about specially reinforced pavement, at least at bus stops (where the damage can be especially extensive). a counter argument for specially reinforced pavement, just at designated bus stops, was that a major purpose of bus service was the freedom to dynamically adjust routes (which you get from having vehicles that can travel roads ... and which could be severely restricted with a limited number of pavement-reinforced bus stop areas).

another suggestion was to drastically restrict bus passenger loads on residential street routes (as partial road damage mitigation effort).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 08:58:13 -0600
Donald Tees writes:
I have no doubts as to the accuracy of any of the above. However, you might wish to find out what a bus actually weighs. It is certainly not a "heavy axle load". It is probably 1/4 of that of a typical stone/gravel carrier or or even a tandem hauling construction rubble.

I suspect the problem is that city streets are designed for very light loads, and so even a bus is close to the design criterea. I'll repeat again that a major reason for most of the above being true is that many of the roads are underdesigned.


my original comment/observation
https://www.garlic.com/~lynn/2006h.html#5 The Pankian Metaphor

included the observation: "and were never built for the number of associated heavy truck axle-loads (that are the result of the municipal bus traffic)"

the issue repeated in all of the road design references is designing to expected traffic volumes and available budget. the repeated point is that the consideration isn't the maximum load but the number of times an axle-load ... above some threshold ... will be applied. the cal. state highway design citation (included a dozen or so times) talks about equivalent ESAL axle-loads ... and mentions converting the number of lighter (fractional) axle-loads (above the threshold that results in deforming road construction material and therefore damage, wear&tear) into equivalent ESAL axle-load damage
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor

the referenced road design documents constantly refer to designing for expected traffic (i.e. accumulation of damage by repeated axle-loads) and budget. A previously posted DOT URL reference included a comment that current traffic activity was frequently never anticipated, and so the volume of heavy truck axle-loads is severely shortening the projected 25-year road lifetimes.
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor

the article on accumulation of bus traffic damage on residential streets wasn't about the residential streets having been underdesigned ... but that they had been designed for expected projected traffic and within available budget (a frequent phrase that seems to crop up in the road design references) ... aka it costs a lot more to build residential streets that have reasonable lifetimes when there is a large accumulation of heavy truck (equivalent) axle-loads.

one of the points raised was the trade-off between the high costs of building all residential streets for high accumulated heavy truck (equivalent) axle-loads (the result of bus traffic) ... even tho bus traffic is restricted to specific routes ... versus the still significant cost of only building specific residential streets to handle heavy truck (equivalent) axle-load traffic (from buses) for a specific route. Supposedly one of the advantages of buses and roads ... versus trolleys and tracks ... was that buses had significant freedom of changing routes compared to trolleys and tracks (which would be lost with only building specific streets to handle damage from bus traffic).

aka ... the roads weren't purposefully underdesigned ... they had been designed for a specific lifetime based on projected traffic (of heavy truck equivalent axle-loads, aka accumulation of damage from repeated axle-loads) and budget. During the expected road lifetime (that it had been designed for) ... things changed ... and therefore damage (from repeated heavy truck equivalent axle-loads) accumulated at a rate greater than originally projected.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 09:54:50 -0600
jmfbahciv writes:
Actually, I have no idea how a government does any of this kind of analysis in any field. There has to be kind of actuary tables (I think that's a correct phrase). _Boyd_ talks about this stuff and I didn't understand any of it.

there are all sorts of efforts attempting to get quantitative measures of things considered purely qualitative. there are annual reports about city rankings of "quality of life" ... attempting to apply quantitative measures to quality of life for cities all around the country ... and then create an ordered ranking.

sometimes just gathering the data and doing things like multiple regression analysis turns up stuff previously unanticipated. a past posting in this thread mentioning MRA
https://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor

we were once looking at medicare DRGs ... minor reference:
http://library.kumc.edu/omrs/diseases/dzcodes.html#Medicare%20DRG%20Handbook

and one random point that came up was that the avg. hospital stay for hip-replacements on the east coast was 14 days ... while the avg. hospital stay for the same procedure on the west coast was 7 days. ???

so some different drift ... somewhat discussion of computer related stuff moving from qualitative to quantitative.

cp67 scheduler had this simplified mechanism for promoting the intangible societal benefit of interactive response. any time the task had a terminal i/o operation, there was an assumption that it was related to interactive activity and the task's scheduling priority was adjusted. nominally tasks were given a cpu scheduling "time-slice" and a scheduling priority. the task retained the same scheduling priority until it had consumed the allocated cpu time-slice and then its scheduling priority would be recalculated. the mechanism basically was to approximate round-robin. however, if there was a terminal I/O ... the associated task was given a cpu scheduling "time-slice" that was around 1/10th the normal "time-slice" and an "interactive" scheduling priority (all "interactive" scheduling priorities were ahead of all non-interactive scheduling priorities). an "interactive" task supposedly would get to run very fast ... for a short period of time ... which might allow it to provide very fast interactive response.

since there were no actual resource controls ... people found that they could scam the system and get much better thruput if their application would frequently generate spurious terminal operations.

i replaced all that when i was an undergraduate in the late 60s ... with dynamic adaptive stuff that actually tracked an approximation of recent resource utilization (cpu and some other stuff). tasks would be given advisory scheduling deadlines. tasks were ordered for scheduling by a value that was periodically recalculated as the current time plus some delta ... giving a time in the future. The "plus some delta" was prorated on a bunch of factors ... including recent resource consumption rate. Overall, if you were consuming resources at lower than the targeted rate ... you speeded up; if you were consuming resources at faster than the targeted rate ... you slowed down.

The original stuff for intangible "interactive response" was not quantitative ... in that it was a purely yes/no operation. my dynamic adaptive scheduling was policy driven, based on target resource consumption rates (the default target resource consumption rate policy was nominally "fair share" ... but policies other than "fair share" could be set) ... quite quantitative. There was still a bias available to improve interactive response, but it couldn't be used to increase a task's aggregate resource consumption rate.
https://www.garlic.com/~lynn/subtopic.html#fairshare
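a very stripped-down sketch of the deadline calculation (not the actual cp67 code ... just the shape of the idea, in python with made-up units; "target rate" here would be whatever the fair share policy works out to):

import heapq, time

def scheduling_deadline(now, timeslice, recent_rate, target_rate):
    # advisory deadline = now plus some delta; the delta is prorated by
    # recent resource consumption vs. the target rate, so a task running
    # ahead of its target gets pushed further into the future (slowed
    # down) and one running behind gets pulled closer (sped up)
    return now + timeslice * (recent_rate / target_rate)

run_queue = []
now = time.time()
heapq.heappush(run_queue, (scheduling_deadline(now, 0.05, 1.5, 1.0), "cpu hog"))
heapq.heappush(run_queue, (scheduling_deadline(now, 0.05, 0.2, 1.0), "interactive"))

# the dispatcher always takes the task with the earliest deadline
deadline, task = heapq.heappop(run_queue)
print(task)                         # "interactive" comes out first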

the dynamic adaptive scheduling stuff was picked up and shipped in standard cp67. some amount of the implementation was dropped in the morph from cp67 to vm370 ... but a couple years afterwards, I was allowed to reintroduce it with the resource manager product ... the 30th anniversary of the product announcement is coming up on may 11th.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 10:08:49 -0600
jmfbahciv writes:
Are they ever used? I've never seen one open and I've always wondered why they were built.

I see the weigh stations out west being open and used regularly. The newer ones seem to have upgraded electronic gear that I believe speeds up the operation.

they put new ones in on both the northbound and southbound lanes of 101 between morgan hill and gilroy a couple years ago. I'm not sure exactly what the electronics are ... but it seems to have multiple parallel weigh station lanes ... with some sort of overhead communication/electronic gear above the lanes as the truck approaches the actual scale. Pure conjecture is that there is some sort of electronic interchange between the station and the truck ... w/o having physical interaction between the truck driver and the station operator. There appears to be some amount of video stuff on the highway in the area (possibly to catch drivers attempting to bypass the station when it is open).

quick search engine use turns up this web page claiming to have a list of all DOT weigh stations
http://www.dieselboss.com/restarea.asp

cal dot weigh stations
http://www.dot.ca.gov/hq/traffops/trucks/

lot more information about cal dot truck weigh stations
http://www.dot.ca.gov/hq/traffops/trucks/weight/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

It's official: "nuke" infected Windows PCs instead of fixing them

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: It's official: "nuke" infected Windows PCs instead of fixing them.
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 11:11:44 -0600
KR Williams writes:
It's been a couple of years since all my cow-orkers (within IBM, BTW) switched from 123 to Excel, so the answer is "a couple of years". I much preferred Excel and used it for my work long before that.

for a long time the one shipped with turbo pascal was extremely popular ... including a lot of tweaks and features done by internal workers (available internally from one of the IBMPC forums). i continued to use the one from turbo pascal well into the late 80s.

vmshare had been provided as service to share community by tymshare ... misc
https://www.garlic.com/~lynn/submain.html#timeshare

after PCs started to become popular, pcshare was added.

i managed to set up regular cloning of the vmshare forums and make them available internally on places like hone
https://www.garlic.com/~lynn/subtopic.html#hone

somewhat in the wake of the "tandem memos" online event (for which I got blamed), a more structured online operation was created ... somewhat akin to vmshare ... that was called IBMVM. this grew into IBMPC and a whole set of other interest areas. this was supported by something called toolsrun ... which was sort of a cross (combination) between usenet and listserv.

at one point corporate hdqtrs started deploying software that would account for the amount of internal network traffic (world-wide, thousand plus nodes growing to a couple thousand, several hundred thousand people). at one point somebody suggested that i had been in some way (directly or indirectly) responsible for 1/3rd of all bytes transferred on the (whole) internal network for a period of a month.

random past posts mentioning tandem memos:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

It's official: "nuke" infected Windows PCs instead of fixing them

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: It's official: "nuke" infected Windows PCs instead of fixing them.
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 13:22:36 -0600
Anne & Lynn Wheeler writes:
for a long time the one shipped with turbo pascal was extremely popular ... including a lot of tweaks and features done by internal workers (available internally from one of the IBMPC forums). i continued to use the one from turbo pascal well into the late 80s.

part of the attraction of the turbo pascal spreadsheet program was that you didn't have to fiddle with the (123) copy-protect floppy disk (finding it, replacing what was already in the floppy drive, refiling it afterwards, etc)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 16:22:49 -0600
ref:
https://www.garlic.com/~lynn/2006h.html#8 The Pankian Metaphor

and the reference mentioned in that posting
http://www.dot.ca.gov/hq/traffops/trucks/

the new technology i was noticing at the new weigh stations appears to be somewhat similar to the overhead ez-pass transponder sensors on toll roads, called "prepass":

PrePass Weigh Station Bypass
http://www.dot.ca.gov/hq/traffops/trucks/bypass/

from above:
PrePass is an automated, state-of-the-art system allowing heavy vehicles that are registered in the program to legally bypass open weigh stations.

Transponders: Carriers obtain special transponders used for communication between computers in the weigh stations and the vehicles.

Green Signal: If all requirements for weight, size, safety, etc. are met, the driver receives a green signal that allows the vehicle to bypass the weigh station.


... snip ...

i think the truck still exits main traffic, but if the prepass transponder agrees ... they take a lane that bypasses the actual scales and returns them to the main traffic flow

this shows a graphic that depicts how it works:
http://www.educause.edu/ir/library/pdf/EPO0801.pdf

the area of the new weigh stations that i've seen on 101 is quite a wide open expanse ... significantly larger than the old one-lane operations with small weigh station shack located adjacent to the scales.

the cal dot site also has

Data Weigh-in-Motion
http://www.dot.ca.gov/hq/traffops/trucks/datawim/

there is some possibility that data weigh-in-motion is also used in conjunction with PrePass Weigh Station Bypass (there is a reference in the above Weigh-in-Motion webpage to Bypass WIM).

This gives a WIM technical overview and the requirement for a smooth pavement surface approach leading up to a WIM installation
http://www.dot.ca.gov/hq/traffops/trucks/datawim/technical.htm

this is the base document URL from cal. dot talking about pavement design and ESAL (equivalent single axle load)
http://www.dot.ca.gov/hq/oppd/hdm/pdf/chp0600.pdf

also describing how to arrive at heavy truck equivalent single axle loads for calculating pavement design and pavement lifetime based on the amount of ESAL activity. previous postings include descriptions of equivalent single axle loads and pavement lifetime
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#6 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Binder REP Cards

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Binder REP Cards
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 21 Apr 2006 19:47:06 -0600
Peter Flass writes:
The original programmers may have been brain-dead, but it's always more charitable to assume they were working under constraints you didn't have, such as memory size, maximum program size, etc.

the folklore supposedly was that the programmer implementing the assembler symbol lookup was told that it had to be implemented in a total of 256 bytes (instructions plus data) ... as part of a design specification allowing the assembler to execute on the minimum (real) memory size machine. as a result, the lookup table was disk resident and had to be reread from disk for each statement processed.

later this was rewritten with the lookup table as resident data (part of the assembler) ... when they started getting feedback on the thruput vis-a-vis memory size tradeoffs.

original post ref:
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security
Newsgroups: alt.folklore.computers
Date: Sat, 22 Apr 2006 11:29:15 -0600
eugene@cse.ucsc.edu (Eugene Miya) writes:
I have given up on password security after reading Enigma accounts (makes me wonder about Sigaba [I know no compromised messages in WWII] history). I have far too many passwords for work, and with password expiration systems, some using forms of memory to prevent reuse, that even in the late 80s I started wondering about biometrics. The first I saw was a typing signature system (interesting, but trains the human as much as the machine). It was some time before I saw real finger/palm scanners (have yet to see a real retinal one). But even these systems as well as face recognition systems are compromised.

lots of past posts on shared-secret something you know authentication.
https://www.garlic.com/~lynn/subintegrity.html#secret

part of the issue in static data, shared-secret authentication paradigms ... is not only can static data be eavesdropped and reproduced in replay attacks ... but the same information is used for both origination and verification. as a result, you are required to have a unique shared-secret for every different security domain ... as a countermeasure to cross-domain compromises (aka your local garage isp and your place of employment or online banking). this has been further aggravated by the requirement for hard to guess (and impossible to remember) passwords that are changed on a frequent basis (potentially scores of different, impossible to remember passwords at any one moment)

in the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are

... the last two tend to be (relatively) static data that are vulnerable to eavesdropping/harvesting and replay attacks.
https://www.garlic.com/~lynn/subintegrity.html#harvest

unique physical tokens for something you have authentication can involve unique data for every operation (like a digital signature as a countermeasure to eavesdropping and replay attacks) and different data for origination and verification (like public/private keys as a countermeasure to cross-domain compromises).
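as a small sketch of the difference (python, using the "cryptography" package purely for illustration ... not any particular token implementation): the private key stays in the hypothetical token, only the public key is registered, and every operation signs fresh data ... so neither eavesdropping on a previous exchange nor copying the verifier's repository lets an attacker replay anything:

import os
from cryptography.hazmat.primitives.asymmetric import ed25519

token_private = ed25519.Ed25519PrivateKey.generate()   # never leaves the token
registered_public = token_private.public_key()         # all the verifier keeps

nonce = os.urandom(32)                   # fresh/unique data for every operation
signature = token_private.sign(nonce)    # origination uses the private key

registered_public.verify(signature, nonce)   # verification uses the public key;
                                             # raises InvalidSignature on failure
print("authenticated")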

the issue then is that something you have authentication may be vulnerable to lost/stolen tokens ... and multi-factor authentication, with "something you know" or "something you are", then is a countermeasure to lost/stolen tokens (and tokens are a countermeasure to the static data eavesdropping against something you know or something you are, and replay attacks).

a somewhat implicit assumption in multi-factor authentication is that the different methods are vulnerable to different threats. the assumption in multi-factor authentication (in something like pin-debit) can be subverted where both the something you have (magstripe) and the something you know (pin) are subject to the same, common skimming/harvesting vulnerability (and replay attack)
https://www.garlic.com/~lynn/subintegrity.html#harvest

the next scenario ... even with relatively high integrity multi-factor authentication ... is the compromise of the authentication environment (where a trojan/virus can reproduce static data authentication, and any physical token can be induced to perform multiple operations ... w/o the owner's knowledge). recent posting on this in a thread on multi-factor authentication vulnerabilities
https://www.garlic.com/~lynn/aadsm23.htm#2

the above mentions that something you know authentication can either involve a shared-secret (that is typically registered at some institutional, security domain repository) or a plain "secret". In the plain secret method, the "secret" is registered in a something you have token and required for correct token operation. Since the "secret" isn't registered at specific institutional, security domain repositories ... there is much less of a threat of cross-domain compromises (and therefore the same authentication mechanisms could be used in multiple different security domains).

start of the thread mentioning a number of different security related weaknesses
https://www.financialcryptography.com/mt/archives/000691.html

and man-in-the-middle attacks
https://www.garlic.com/~lynn/subintegrity.html#mitm

lots of past posts on exploits, threats, and vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security
Newsgroups: alt.folklore.computers
Date: Sat, 22 Apr 2006 12:59:46 -0600
the really old, ancient, "new" thing that has been bubbling off and on in the press for at least the past year (much more so recently) is virtualization as security ... stuff like
http://www.securityfocus.com/columnists/397 Virtualization for Security

turns out that cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

started on it slightly over 40 years ago. I didn't do any work on it until three people from science center brought a copy out to the univ. the last week of jan68.

basically virtualization helps with partitioning and isolating effects of things like viruses and trojans. It also can be considered encompassing things like countermeasure to compromises of authentication environment ... raised in these recent postings:
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/aadsm23.htm#2

not too long later, the science center was using it to offer a time-sharing service (as were a number of commercial time-sharing service bureaus)
https://www.garlic.com/~lynn/submain.html#timeshare

the science center had a combination of sensitive corporate activities as well as a mix of faculty and students from various educational institutions in the cambridge area (bu, mit, harvard, etc).

one of the really sensitive things was a lot of work on providing 370 virtual memory emulation (before 370 virtual memory had been announced and/or even hardware had been built). one of the others was corporate hdqtrs use of cms\apl for the most valuable and sensitive of corporate data. misc. posts that mention apl and/or hone (a major internal timesharing service built almost totally on cms\apl ... later moving to apl\cms and subsequent versions):
https://www.garlic.com/~lynn/subtopic.html#hone

cambridge had ported apl\360 to cms ... and added filesystem api semantics, as well as made available several mbyte workspaces as standard (compared to the typical 16kbyte workspaces available under apl\360). apl in the 60s and 70s was the spreadsheet "what-if" workhorse for corporate planners and business people. once cambridge had cms\apl up and running as part of the standard offering ... some of the business people from corporate hdqtrs shipped up a tape of the most sensitive corporate business data for loading into cms\apl workspaces (apl\360 didn't have any filesystem api semantics; any data loaded into the miniature 16kbyte workspaces had to be done manually at the keyboard).

in any case, there was significant issue with not allowing any security breaches and/or data breaches of the extraordinarily sensitive corporate information ... especially by any of the general users (like various students from the surrounding educational institutions).

there were other organizations (besides internal systems and external commercial timeshare services) using the system for security purposes ... like referenced here
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

misc. past posts mentioning the above reference:
https://www.garlic.com/~lynn/2005k.html#30 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2005k.html#35 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005p.html#0 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005s.html#23 winscape?
https://www.garlic.com/~lynn/2005s.html#44 winscape?
https://www.garlic.com/~lynn/2005u.html#36 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2005u.html#37 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2005u.html#51 Channel Distances
https://www.garlic.com/~lynn/2006.html#11 Some credible documented evidence that a MVS or later op sys has ever been hacked

even tho a lot of stuff I was doing as an undergraduate was being picked up in standard system distribution ... i didn't hear about the guys mentioned in the above reference until much later (although I could reflect that some of the things that I was being asked to consider, when I was an undergraduate, may have originated from some of those organizations).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Security

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security
Newsgroups: alt.folklore.computers
Date: Sat, 22 Apr 2006 13:47:06 -0600
ref:
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/aadsm23.htm#2
https://www.garlic.com/~lynn/2006h.html#14 Security

one of the ancillary issues in harvesting/skimming/eavesdropping
https://www.garlic.com/~lynn/subintegrity.html#harvest

of static data shared-secrets
https://www.garlic.com/~lynn/subintegrity.html#secret

or any kind of static data shared-secrets is the security breaches and data breaches by insiders. insiders have repeatedly been shown to be the major threat for id theft, id fraud, and account fraud; long before the internet and continuing right up thru the internet era to the present time.

one method to plug some of the security breaches and data breaches is by moving to multi-factor authentication (i.e. the static data authentication repositories are augmented) where at least one factor involves some sort of dynamic information (impersonation isn't possible by copying existing repository of authentication and transaction information).

this can help minimize the insider threat which has been responsible for the majority (possibly 75 percent or more)
https://www.garlic.com/~lynn/aadsm17.htm#38 Study: ID theft usually an inside job

of id theft, id fraud, and account fraud. my slightly related, old standby about security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

however, it can make the attackers move from focusing on the backend ... to attacking the origin of the transaction and authentication ... including the environment that any authentication takes place in.

one of the other countermeasures for attacks on the backend infrastructure (security breaches and data breaches) is encryption. however, encryption is not going to be very effective if the encrypted repositories are required (unencrypted) by a large number of different business processes and insiders (aka insiders have always represented the majority of the threat). this is somewhat my repeated comment that the planet could be buried under miles of cryptography and still not be able to effectively stem such exploits.

misc. random past posts mentioning even miles deep cryptography may not be able to halt the leakage of various kinds of information (and therefor you have to change the nature and use of the information, so that even if it leaks, it can't be used for fraudulent purposes):
https://www.garlic.com/~lynn/aadsm15.htm#21 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#27 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm19.htm#45 payment system fraud, etc
https://www.garlic.com/~lynn/2004b.html#25 Who is the most likely to use PK?
https://www.garlic.com/~lynn/2005u.html#3 PGP Lame question
https://www.garlic.com/~lynn/2005v.html#2 ABN Tape - Found
https://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense
https://www.garlic.com/~lynn/aadsm22.htm#2 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sat, 22 Apr 2006 17:41:04 -0600
Larry Elmore writes:
I remember that being a problem for some US military intelligence types at the end of WWII and the beginning of the occupation of Japan. A number of them had learned their Japanese from female (Japanese) teachers, and one could almost say there are male and female dialects of Japanese. It was very confusing sometimes, apparently, and Japanese military men weren't sure how to treat a big hairy barbarian to whom the Emperor had ordered a surrender, but who spoke to them in the manner of a subservient female making humble requests instead of giving orders.

i've been told that males using female vocabulary may also be referred to as bedroom talk ... and can be considered out of place in other contexts.

a co-worker on a business trip to japan once told the story of one of his first business meetings, where he told his audience that he wanted to practice the Japanese that he had learned from his roommate in college (back in the days when the yen was greater than 300/dollar). During the first break, somebody took him aside and attempted to tactfully explain the situation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 23 Apr 2006 09:06:03 -0600
Anne & Lynn Wheeler writes:
so economic policies can take into account intangible societal benefits. these are almost always just expressed in qualitative terms. however, there is almost never any quantitative analysis that goes along with the decisions. for instance, a quantitative analysis might say that a subsidy of "X dollars" per pound-mile transported represents an intangible societal benefit of "Y". However, any subsidy in excess of "X dollars" may encourage bad societal behavior (say excessive use of subsidized long-haul transportation of goods that provide no societal benefit).

recent news item:

NSF Begins a Push to Measure Societal Impacts of Research
http://www.sciencemag.org/cgi/content/full/312/5772/347b

... from above:
When politicians talk about getting a big bang for the buck out of public investments in research, they assume it's possible to measure the bang. Last year, U.S. presidential science adviser John Marburger disclosed a dirty little secret: We don't know nearly enough about the innovation process to measure the impact of past R&D investments, much less predict which areas of research will result in the largest payoff to society.

... snip ...

somewhat similar to comments in the comptroller general's talk about being able to audit/measure funding programs (big bang for the buck from any funding program?)
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

other posts
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 23 Apr 2006 17:24:39 -0600
Morten Reistad writes:
A spot market is the a market that requires no long term contracts, no planning, just business there and then.

7-11 is a spot market. Internet purchases seldom are.


there was an article about part of the reason that cal. got into trouble during the energy crunch .... the spot market can be much lower (vis-a-vis long term committed contracts) during periods of significant excesses. the article had some detail about cal. pucc passing some regulation about not allowing long term contracts ... just getting stuff on the spot market.

long term contracts tend to have more incentive for capital investment by producers. attempting to live off the excesses not needed by others ... can result in shortages when you have no long term supply commitment

the northwest has had a lot of hydroelectric power which gets dumped into the power-grid. power over and above long-term contracts shows up on the spot market (sort of like unsold seats on airplanes or unsold rooms at hotels ... sometimes you can find some really great discounts at the last minute).

then the northwest was having a drought ... and hydroelectric plants were dumping less into the power-grid. this appeared to make it easier for some unscrupulous dealers to manipulate the perceived scarcity/excess and the spot market; aka during periods of scarcity ... the spot market can be significantly higher than long term contracts.

this is a posting from a couple years ago, mentioning both preventive maintenance (on railroad tracks) as well as the article about california pucc having some regulation about getting power from the spot-market.
https://www.garlic.com/~lynn/2001e.html#75 Apology to Cloakware (open letter)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 23 Apr 2006 18:03:22 -0600
Anne & Lynn Wheeler writes:
this is a posting from a couple years ago, mentioning both preventive maintenance (on railroad tracks) as well as the article about california pucc having some regulation about getting power off the spot-market.
https://www.garlic.com/~lynn/2001e.html#75 Apology to Cloakware (open letter)


and for more drift ... the next post in the thread mentioned
https://www.garlic.com/~lynn/2001e.html#77 Apology to Cloakware (open letter)

has a reference to the thread between risk management and information security ... and a quote from a participant in a conference on the subject ... a long posting of this person's observations:
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

I've recently made references to a talk by the comptroller general:
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

One of the comments the comptroller general made during the talk was that there is a $160k/person (every man, woman, child, and baby) fed. program liability in the US for various obligations. The extract (in this earlier thread) explains how the bailout of the S&L industry is being carried off-books, since it represents a $100k/person liability. It wasn't clear in the comptroller general's speech whether his figure of $160k/person included the S&L $100k/person bailout obligation or was in addition to the S&L bailout obligation.

recent postings referencing the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Binder REP Cards (Was: What's the linkage editor really wants?)

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Binder REP Cards (Was: What's the linkage editor really wants?)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 23 Apr 2006 18:48:12 -0600
Chris Mason wrote:
Shmuel,

Check this out. It appears to be CMS HELP for the LOAD command. It includes a description of the REP card but also a VER card.

http://mitvma.mit.edu/cmshelp.cgi?CMS%20LOAD%20(ALL

I also found a page for the DOSLKED command which mentions only the usual card types including the REP card but no VER card.

http://vm.uconn.edu/cgi-bin/cmshelp?CMS%20DOSLked

Chris Mason


the original "loader" that the science center used for CMS ... Cambridge Monitor System ...
https://www.garlic.com/~lynn/subtopic.html#545tech

was from the BPS (basic programming system) loader.

In the morph from cp67 to vm370, the meaning of "CMS" was changed to Conversational Monitor System. There was quite a bit of code rewritten for the vm370 virtual machine kernel ... but significantly fewer changes were made to CMS. CMS did have a body of code that emulated os/360 functions ... which allowed running a number of assemblers and compilers from os/360. In the early MVS time-frame ... this os/360 emulation code totaled approx. 64kbytes and there were facetious references that the cms 64kbyte os/360 emulation was almost as good as the mvs 8mbyte os/360 emulation.

Later versions of cms expanded the cms os/360 emulation support ... providing significantly greater compatibility with the mvs environment.

the cms help page you reference is copyrighted 1990, 2003.

however, my cms (hardcopy) manuals from the early 70s (both the program logic manual and the user's guide) don't list VER.

I remember VER being part of (os/360) superzap for applying (PTF and other) patches to os/360 executables .... you would have a superzap control file, and DD statements would reference the input source. in some sense superzap served as something like a hex editor ... except in the edit syntax ... you would name a specific record and change a specific input string to a specific output string.

here is a help file on how to use superzap (and a description of superzap statements):
http://www.cs.niu.edu/csci/567/ho/ho4.shtml
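
for illustration only (this is not superzap control statement syntax, just the verify-then-replace idea expressed in a few lines of python ... the offsets and hex values are made up):

    # illustrative sketch of the VER/REP idea: verify the existing bytes before patching
    def apply_patch(image: bytearray, offset: int, ver: bytes, rep: bytes) -> None:
        # refuse to patch unless the bytes at 'offset' match the expected (VER) value
        if image[offset:offset + len(ver)] != ver:
            raise ValueError("VER mismatch ... wrong module/level, patch not applied")
        image[offset:offset + len(rep)] = rep

    # made-up example: expect x'47F0' at offset x'104', patch it to x'4700'
    module = bytearray(b"\x00" * 0x104 + b"\x47\xF0\xC0\x24")
    apply_patch(module, 0x104, bytes.fromhex("47F0"), bytes.fromhex("4700"))

the point of the VER step is the same as in the superzap description above ... the patch is only applied if the record really contains what you expect, otherwise nothing is changed.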

misc. past posts in this thread:
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#44 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#58 REP cards
https://www.garlic.com/~lynn/2006h.html#12 Binder REP Cards

Binder REP Cards

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Binder REP Cards
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 24 Apr 2006 07:44:48 -0600
"Charlie Gibbs" writes:
Dunno about IBM's assembler, but the Univac 9300 assembler definitely stored its symbol table in memory. I did some work on a minimal system (8K memory) and I had to do a lot of the dreaded

BC   8,*+32

or whatever, simply to avoid symbol table overflows because I couldn't afford another label.


this wasn't the symbols from the program ... this was the symbols for the instructions ... i.e. the symbolic for each instruction (aka BC, TRT, A, etc). the folklore was that the person implementing the statement decode was told that they had a maximum of 256 bytes for the implementation (the machine instructions implementing the decode ... as well as the actual data for doing instruction decode). as a result the table of instruction symbolics had to be read from disk.

ref:
https://www.garlic.com/~lynn/2006h.html#12 Binder REP Cards

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 08:27:31 -0600
jmfbahciv writes:
The -10 had all kinds of ranges that users, system managers and system gens could set. I think the way our implementations evolved anything that could be set had to have some way for all three to set, system manager, operator, user, and monitor. A usage variable could be set at sysgen, boot, system job login, user login, operator command during system operations, user command during login session.

From your description above, it doesn't seem like you gave operations a way to weight system service delivery against the typers and for the compute intensive job. There was a need for that kind of service, too.


it turns out that the bias for doing a terminal i/o was supposed to help trivial interactive response. the dynamic adaptive stuff was monitoring recent resource utilization ... as per
https://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor

so a terminal i/o kicked off a new scheduling advisory deadline for a very short amount of resource consumption. the deadline was calculated 1) proportional to the quanta of resource consumption for the period (the smaller quanta for doing terminal i/o resulted in a sooner deadline) as well as 2) prorated by recent resource consumption against some policy (the default policy being fairshare).

if the default resource policy was fairshare ... users consuming more than their fairshare had all their deadlines prorated further into the future (slowing them down). if users were really trivial interactive, then their recent resource consumption would be less than their fairshare; as a result the prorated calculations made their deadlines sooner ... speeding them up. a user was either ahead of their targeted resource consumption (and therefore got advisory scheduling deadline priorities that slowed them down) or behind their targeted resource consumption (somewhat implicit if they were actually trivially interactive ... and got advisory scheduling deadline priorities that sped them up).

for users that were way behind their targeted resource consumption, they would start to speed up their measured resource consumption (because of the advisory deadline priorities) ... as their measured resource consumption approached their targeted resource consumption, they would slow down until their measured resource consumption and their targeted resource consumption were in equilibrium.
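
a very rough sketch of the kind of deadline calculation being described (this is not the shipped code and the weights/numbers are made up ... it just shows the small-quanta-after-terminal-i/o plus fairshare proration idea):

    # illustrative sketch of a fairshare-prorated advisory scheduling deadline
    def advisory_deadline(now, quantum, recent_use, fair_share):
        # proration > 1.0: user is over the fairshare target, deadline pushed out (slowed down)
        # proration < 1.0: user is under the fairshare target, deadline pulled in (sped up)
        proration = recent_use / fair_share
        return now + quantum * proration

    # terminal i/o gets a small quantum; compute-bound execution gets a larger one
    print(advisory_deadline(1000.0, 5.0, recent_use=0.02, fair_share=0.10))   # 1001.0 trivial interactive
    print(advisory_deadline(1000.0, 50.0, recent_use=0.30, fair_share=0.10))  # 1150.0 over fairshare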

so i did a joke for the resource manager (the vm370 re-issue of the dynamic adaptive stuff i did as an undergraduate for cp67 ... the 30th anniversary of the resource manager product announcement is coming up on may 11th).

i had done all this elaborate dynamic adaptive stuff to measure what was going on and dynamically adapt everything. parameters were available for changing specified policies ... at system level and individual user level. however, all the performance tuning stuff had been subsumed by the elaborate dynamic adaptive capability.

furthermore, there had been extensive benchmarking, calibrating and validating the dynamic adaptive capability across a wide range of workloads, configurations, and policies. recent posts mentioning that benchmarking, calibrating and validating effort (one series of 2000 tests took 3 months elapsed time to complete)
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006e.html#25 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006f.html#22 A very basic question
https://www.garlic.com/~lynn/2006f.html#30 A very basic question
https://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor

so as to the joke? well, somebody from corporate hdqtrs observed that all the existing state-of-the-art resource managers had elaborate parameters that could be set by installations (primarily for system tuning) and the resource manager would require equivalent capability before it could be released as a product. it wasn't possible to get across to the person that the elaborate dynamic adaptive capability subsumed all such features.

so i added some such parameters, published the calculations ... and of course all source was available (the product was shipped in source maintenance form ... with the resource manager source changes applied to the base product). i even taught classes on how the calculations and parameters all worked.

in the early 90s, we were making a number of marketing trips to the far east for our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

i related an anecdote from one such trip in this recent post
https://www.garlic.com/~lynn/2006g.html#21 Taxes

on one of the trips to HK, we were doing a customer call on a major bank ... and going up the elevator in the bank building with the external skeleton (there were some references to it as the tinker toy building because of the external structure). from a younger person in the back of the elevator came a question ... are you the "wheeler" of the "wheeler scheduler"? we studied you at the university.

so nobody had figured out the joke. as i've periodically referred to in the past, most system programmers tend to deal with states ... things are either in one state or another ... or in the case of parameters, a specific value from a range.

the dynamic adaptive resource manager ... had much more of a dynamic feedback and feedforward nature ... much more of an operations research methodology than a kernel programmer state methodology. In OR methodology calculations you tend to have parameters with characteristics like degrees of freedom. Now for the dynamic adaptive resource manager ... the parameters provided for people to set (other than the policy selection parameters) all fed into the same dynamic adaptive calculations as the base dynamic adaptive stuff. The dynamic adaptive stuff would iterate its values in the calculations ... changing the dynamic adaptive parameters to adapt to workload, configuration, and how well things were going. The magnitude and range of the dynamic adaptive parameters recalculated at every interval had much larger degrees of freedom than the static set parameters (that corporate hdqtrs required to be added) for people to set.

So the dynamic adaptive resource calculations had great latitude in dynamically adjusting their parameters to constantly compensate for changes in configuration and workload ... as well as compensating for any statically set parameters that people might be fiddling with (such people sometimes referred to as performance tuning witch doctors).
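
a toy sketch of the effect being described (illustrative only, not the actual resource manager calculations): the operator-set knob is just one more input into the same calculation, and the dynamically recalculated terms cover a much wider range, so they dominate whatever static value somebody dials in:

    # illustrative: a static tuning knob folded into the dynamic adaptive calculation
    import random

    def effective_weight(static_knob):
        # static_knob is nominally 0.9..1.1; the dynamic terms are recomputed every
        # interval from measured load/contention over a much wider range
        dynamic_load_term = random.uniform(0.1, 10.0)
        dynamic_contention_term = random.uniform(0.1, 10.0)
        return static_knob * dynamic_load_term * dynamic_contention_term

    print(effective_weight(0.9))
    print(effective_weight(1.1))   # the 20 percent knob difference is lost in the dynamic range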

misc. past posts mentioning the elaborate joke in the resource manager:
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002i.html#53 wrt code first, document later
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 08:50:31 -0600
jmfbahciv writes:
The taxes paid will no longer be used to fund all other projects but will have to be spent on roads. That will be the first action. Wouldn't a lot of the roads simply be discontinued? There's a lot of concrete out there that was pork bellies rather than commercial. Think of all the ramps that won't have to be maintained if passenger cars no longer use the highways.

the comment is not to eliminate payments (in the form of fees and taxes) for roads ... it is just a statement about correctly apportioning the taxes and fees based on use. This is "use" as in actual wear and tear on the infrastructure, aka make the use fees ... not proportional to the number of times that you pass through someplace ... but proportional to the amount of wear and tear that you cause when you pass through someplace (flat rate charges tend to assume homogeneous and uniform wear & tear use).

as mentioned in past postings, somebody else even posted a reference to several states having gone to a 3rd fee. it was recognized that a straight fuel tax didn't accurately account for heavy trucking "use". As a result, some number of gov. bodies had added registration fees for heavy trucking that were proportional to the vehicle's gvw. However, this second kind of gvw fee was still static ... it didn't actually account for the different amounts of wear and tear that happen based on miles driven and load carried (aka miles-ESAL ... miles-equivalent-single-axle-loads). The 3rd fee attempts to accurately account for the actual wear and tear caused by specific vehicles based on something akin to miles-equivalent-single-axle-loads.
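
a rough sketch of the difference between a flat per-mile charge and a wear-based miles-ESAL charge ... the numbers are purely illustrative (not any state's actual fee schedule), using the common rule of thumb from the old road tests that pavement damage goes up roughly as the 4th power of axle load relative to an 18,000lb standard single axle:

    # illustrative: wear-based (miles x ESAL) road-use fee
    STANDARD_AXLE_LB = 18000.0

    def esal_per_axle(axle_load_lb):
        # rough 4th-power rule of thumb for relative pavement damage per axle pass
        return (axle_load_lb / STANDARD_AXLE_LB) ** 4

    def wear_fee(miles, axle_loads_lb, fee_per_esal_mile):
        esals = sum(esal_per_axle(load) for load in axle_loads_lb)
        return miles * esals * fee_per_esal_mile

    # passenger car: two ~2,000lb axles; loaded 5-axle truck: five ~16,000lb axles
    print(wear_fee(1000, [2000, 2000], 0.05))   # ~0.02 ... car wear is negligible
    print(wear_fee(1000, [16000] * 5, 0.05))    # ~156  ... truck wear dominates

with a flat per-mile (or per-gallon) charge both vehicles pay roughly the same per mile, even though the wear and tear they cause differs by several orders of magnitude.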

if you more accurately account for actual costs ... the economics of doing something might change. one change might be that the amount of long-haul trucking is reduced ... since if the actual costs of long-haul trucking were accurately accounted for ... it might increase the costs of some of the products that were transported. if there was some increase in the costs of products transported by long-haul trucking ... some people might buy less of it and buy more of something else.

a possible point is that any "efficiency" in a "market economy" is at least partially the result of having dynamic adaptive feedback operations based on actual costs (and prices accurately reflecting those costs). "managed economies" may enormously distort prices (with respect to actual costs) and therefore drastically distort the "market economy" ability to accurately, efficiently, and rapidly adapt to changing configurations and workloads.

some gov. bodies may try to achieve some trade-off between the degree of price distortion and the efficiency of the "market economy" ... possibly because a "market economy" may heavily over-optimize for short-term results at the expense of longer term optimization. however, one of the previously raised issues is that the people in gov. may have the least experience and skill to make such trade-off decisions.

past posts on the subject in this thread:
https://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#8 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 09:35:40 -0600
Brian Inglis writes:
This is how politicians think: everything should be paid for by those other people (we, the Taxpayer!)

there is some joke regarding the difficulty in figuring out how much tax you owe ... and it would be much simpler if you just sent in all your money ... and they sent back how much they think you should have. the joke isn't so much about who pays for what .... but whether an individual should even be allowed the privilege of deciding.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 12:20:14 -0600
Anne & Lynn Wheeler writes:
so as to the joke? well, somebody from corporate hdqtrs observed that all the existing state-of-the-art resource managers had elaborate parameters that could be set by installations (primarily for system tuning) and the resource manager would require equivalent capability before it could be released as a product. it wasn't possible to get across to the person that the elaborate dynamic adaptive capability subsumed all such features.

re:
https://www.garlic.com/~lynn/2006h.html#22 The Pankian Metaphor

sort of the evolution was that computer systems were built to just run. as things got more complex ... especially with various kinds of multiprogramming ... there was a realization that various kinds of optimization and performance tuning could improve throughput.

the problem was that around the time i was (re)releasing dynamic adaptive scheduling for the resource manager (11may76, not to mention the earlier incarnation done in the late 60s), there was no really good understanding of the science of performance tuning. some set of performance settings were specified ... and sometimes it improved things and sometimes it didn't. part of the issue was that performance settings could be workload and configuration specific ... with static settings and workload that possibly dynamically changed minute to minute ... there would be no ideal setting.

in any case, the prevalent state-of-the-art at the time of (re-)releasing the dynamic adaptive stuff (11may76) was to try to identify all thruput related decisions in the system and attach various kinds of control parameters to each of the decision points. you built an enormous specification of all possible control parameters and the types of decisions that they affected. customers were then encouraged to devote significant resources to studying thruput, (possibly randomly) changing tuning parameters, evaluating the results, and presenting detailed reports at user group meetings (like SHARE and GUIDE) about specific customer experiences (randomly) modifying the numerous tuning parameters (aka the rituals involved in propagating the performance tuning magic incantations thruout the performance tuning witch doctor society and from one generation of performance tuning witch doctors to the next).

part of the pressure from corporate hdqtrs was that the (other) mainstream operating system product had a significant subculture and folklore around performance tuning (requiring large amounts of resources devoted to performance tuning was felt to be representative of an advanced, state-of-the-art customer installation).

the concept that you could have a science of thruput and deploy a dynamic adaptive resource manager based on such principles was incomprehensible.

as a result i had to come up with the ruse of having "people set" tuning parameters and allowing the dynamic adaptive control mechanisms to "compete" with the people-specified static settings.

part of this is because the science center had spent a lot of effort on instrumenting systems and capturing the data for detailed study and analysis
https://www.garlic.com/~lynn/subtopic.html#545tech

and was well on its way to evolving things like capacity planning based on the work ...
https://www.garlic.com/~lynn/submain.html#bench

aka the stuff that the performance predictor application was able to do on hone
https://www.garlic.com/~lynn/subtopic.html#hone

being able to take input from sales and marketing people about a customer's configuration and workload and allow "what-if" questions to be asked about changes to workload and/or configuration.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 13:11:38 -0600
Anne & Lynn Wheeler writes:
misc. random past posts mentioning even miles deep cryptography may not be able to halt the leakage of various kinds of information (and therefor you have to change the nature and use of the information, so that even if it leaks, it can't be used for fraudulent purposes):
https://www.garlic.com/~lynn/aadsm15.htm#21 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#27 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm19.htm#45 payment system fraud, etc
https://www.garlic.com/~lynn/2004b.html#25 Who is the most likely to use PK?
https://www.garlic.com/~lynn/2005u.html#3 PGP Lame question
https://www.garlic.com/~lynn/2005v.html#2 ABN Tape - Found
https://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense
https://www.garlic.com/~lynn/aadsm22.htm#2 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you


ref:
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/2006h.html#14 Security
https://www.garlic.com/~lynn/2006h.html#15 Security

trivial case of skimming, harvesting, eavesdropping standard business process data for replay attacks ... being able to use the information for fraudulent transactions that get treated as valid. trivial recent example in the news:

Crook used dumped credit data
http://www.edmontonsun.com/News/Edmonton/2006/04/20/1541789-sun.html

in the mid-90s, the x9a10 financial standards working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. one of the issues addressed in the work on the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

was changing the paradigm so that any skimmed, harvested, and/or eavesdropped normal business process information couldn't be used for performing fraudulent transactions.
https://www.garlic.com/~lynn/subintegrity.html#harvest
https://www.garlic.com/~lynn/subintegrity.html#secret

this was particularly important when considering that all the long term statistics about such fraudulent behavior involved insiders the majority of the time (aka it didn't prevent the information from being skimmed, harvested, and/or eavesdropped, it eliminated the ability of crooks to use the information for fraudulent activity).

compromised (and/or counterfeit) authentication environments (in the guise of point-of-sale terminals and/or atm machines) have possibly been around in the physical world for decades. authentication information is skimmed/harvested and then used for replay attacks involving fraudulent transactions at other locations.

the current genre of phishing attacks as well as trojans and viruses on PCs ... just extend that same harvesting/skimming threat model to the internet. part of the objectives in the x9.59 financial standard was to eliminate harvesting/skimming (for at least some types of information) as a mechanism for (some of the more common types of) fraudulent transactions.
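
a minimal sketch of the sort of paradigm change being described ... this is not the x9.59 message format (the field layout, algorithm choice and sequence-number scheme below are all made up for illustration); it just shows a transaction authenticated against a public key on file at the consumer's financial institution, so that copies harvested from logs or eavesdropped in transit can't be replayed as new transactions:

    # illustrative only ... not x9.59; transaction authenticated with an on-file public key
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    consumer_key = ed25519.Ed25519PrivateKey.generate()
    on_file_public_key = consumer_key.public_key()   # registered at the consumer's financial institution

    def sign_txn(account, amount, merchant, seq):
        msg = f"{account}|{amount}|{merchant}|{seq}".encode()
        return msg, consumer_key.sign(msg)

    def bank_authorizes(msg, signature, seen_seqs):
        try:
            on_file_public_key.verify(signature, msg)
        except InvalidSignature:
            return False
        seq = int(msg.split(b"|")[-1])
        if seq in seen_seqs:          # a harvested copy being replayed
            return False
        seen_seqs.add(seq)
        return True

    seen = set()
    msg, sig = sign_txn("acct123", "49.95", "merchant987", seq=1)
    print(bank_authorizes(msg, sig, seen))   # True  ... first presentation
    print(bank_authorizes(msg, sig, seen))   # False ... harvested copy replayed

note that nothing in a harvested copy (account number, amount, signature) is of any use for originating a *different* transaction ... a crook can't produce a valid signature for a new message without the consumer's private key.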

the basic x9.59 standard didn't do anything to eliminate crooks compromising and/or counterfeiting authentication environments ... it just minimized the fraudulent return on investment.

there are still threat models where compromised and/or counterfeit authentication environments perform duplicated transactions unknown to the originating entity. there may not be information in the actual, valid transactions that can be skimmed and used for fraud. however, a compromised and/or counterfeit authentication environment may still be able to perform additional surreptitious fraudulent transactions in concert with valid transactions (unknown to the originator).

in the physical world, the crooks have tended to try to obfuscate the source of the compromised authentication environment (hoping that they can continue to use it as a source for creating fraudulent transactions). actually performing the fraudulent transactions at the point of compromise can result in it being quickly identified and removed. in the internet environment, individual introduction of a trojan to compromise an end-user PC authentication environment represents less of an investment and therefore less of a loss if it is identified and removed (transactions accounting for a few thousand per PC may be sufficient to justify the effort).

the EU FINREAD terminal/standard
https://www.garlic.com/~lynn/subintegrity.html#finread

was an attempt to remove the PC authentication environment from the control of any trojans or viruses that might exist on individual PCs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

confidence in CA

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: confidence in CA
Newsgroups: comp.security.misc
Date: Mon, 24 Apr 2006 13:53:12 -0600
Sebastian Gottschalk writes:
man Web of Trust man PGP

note that the original pk-init draft for kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

(used in the m'soft infrastructure as well as many other authentication operations) called for registering a public key in lieu of a password ... aka w/o digital certificates
https://www.garlic.com/~lynn/subpubkey.html#certless

then there was a strong lobby to add a certificate-based option to the pk-init specification. i've periodically gotten apologetic email from the person claiming primary responsibility for the certificate-based option being added to pk-init.

what they realized was that they now have a certification authority based infrastructure for registering entities ... which has primarily to do with who they are.

except for trivial, no-security operations ... they then continue to require the kerberos based registration infrastructure, which involves both information about who the entity is and what permissions need to be associated with the entity. the counter argument is that every entity in possession of any valid digital certificate should be allowed unrestricted access to every system in the world (regardless of who they are and/or what systems are involved). the trivial example is that everybody in the world has unlimited access to perform financial transactions against any and all accounts that may exist anywhere in the world.

in effect, they now tend to have duplicated registration business processes ... with the certification authority registration infrastructure tending to be a subset (and duplicate) of the kerberos permission oriented registration operation. as a result, the digital certificates issued by the certification authority based operation have tended to become redundant and superfluous.
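
a small sketch of the certificate-less approach (illustrative, not the actual pk-init protocol flow ... the function names and the ed25519 choice are just for this example): the public key is simply registered in the account record in lieu of a password, and authentication is a digital signature on a server challenge verified against that on-file key ... no certification authority anywhere in the loop:

    # illustrative: public key registered in lieu of a password (no digital certificate)
    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    accounts = {}   # registration infrastructure: userid -> on-file public key (plus permissions, etc.)

    def register(userid, public_key):
        accounts[userid] = public_key

    def authenticate(userid, challenge, signature):
        try:
            accounts[userid].verify(signature, challenge)
            return True
        except (KeyError, InvalidSignature):
            return False

    user_key = ed25519.Ed25519PrivateKey.generate()
    register("someuser", user_key.public_key())

    challenge = os.urandom(16)   # server-issued nonce
    print(authenticate("someuser", challenge, user_key.sign(challenge)))   # True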

there has been a lot written about various serious integrity issues related to SSL domain name digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

part of the proposals to improve the integrity of the SSL domain name certification authority operation ... is to have domain name owners register public keys (with the domain name infrastructure) when domain names are obtained. then when entities apply for SSL domain name digital certificates, the applications are required to be digitally signed. The certification authority can then do a real-time retrieval of the on-file public key from the domain name infrastructure to validate the digital signature on the SSL domain name digital certificate application (improving the integrity of the SSL domain name certification process).

the catch-22 for the SSL domain name certification authority industry is that if the certification authority industry can rely on real-time retrieval of onfile public keys (from the domain name infrastructure) as the root of their certification and trust ... then why wouldn't it be possible for everybody in the world to also start performing real-time retrievals of the onfile public keys (making any use of SSL domain name digital certificates redundant and superfluous).

one could even imagine a highly optimized SSL variation where the public key and crypto options are piggy-backed on the same domain name infrastructure response that provides the domain name to ip-address mapping (totally eliminating the majority of existing SSL setup protocol chatter).
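
a purely hypothetical sketch of that optimization (the record layout and resolver below are invented for illustration ... the existing domain name infrastructure doesn't hand back keys this way): if the domain name response carried the registered public key along with the ip-address, the client could go straight to an authenticated/encrypted session with no certificate exchange at all:

    # hypothetical sketch: domain name lookup returns (ip-address, on-file public key) in one response
    domain_name_infrastructure = {
        # populated when the domain name owner registered their public key at domain registration time
        "example.com": {"ip": "192.0.2.10", "public_key": "<owner's registered public key>"},
    }

    def resolve(domain):
        # one lookup: the ip-address plus the key used to authenticate/encrypt the connection
        rec = domain_name_infrastructure[domain]
        return rec["ip"], rec["public_key"]

    ip, key = resolve("example.com")
    # the client connects to 'ip' and immediately uses 'key' ... no SSL certificate,
    # no certificate validation, and much less setup protocol chatter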

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

confidence in CA

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: confidence in CA
Newsgroups: comp.security.misc
Date: Mon, 24 Apr 2006 14:32:12 -0600
Sebastian Gottschalk writes:
The more flexible variant of this is called OSCP.

there was a point in time when I was being told that appending digital certificates to financial transactions would bring financial transaction processing into the modern age. I pointed out that appending digital certificates to financial transactions represented more of a paradigm regression of 20-30 years (to pre-real-time authentication and authorization). Shortly after that, OCSP was born.

the issue regarding OCSP was that it preserved the stale, static (redundant and superfluous) digital certificate model ... by providing a possibly real-time response regarding whether the stale, static information in the digital certificate was still valid. It didn't do anything about providing real-time operational information about things that may involve non-stale and non-static information ... like a real-time response authorizing a transaction as to whether it was within the account limits.

credentials, certificates, licenses, diplomas, letters of introduction, letters of credit, etc. have served for centuries, providing stale, static information for relying parties that had no other method for obtaining information about the party they were dealing with.

digital certificates have been electronic analogs to the physical world counterparts for relying parties that lack any of their own information about the party they are dealing with AND lack any online mechanism for obtaining such information.

as online environments have become more ubiquitous and prevalent, digital certificates have somewhat moved into the no-value market segment (as the possible offline operations that would benefit from stale, static digital certificate information have disappeared, being replaced by relying parties being able to directly access real-time information about the entities they are dealing with). the no-value market segment is business operations where the relying parties can't justify the cost or expense of having access to real-time information.

the scenario for OCSP for financial transactions ... was that the relying party could do an OCSP query to see whether the digital certificate was still current.

the counter was the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

in both cases there was a digital signature attached to the transaction to be verified. in the x9.59 scenario, the relying party could forward the transaction and the attached digital signature to the customer's financial institution and get a real-time response either standing behind the payment or denying the payment (based not only on verifying the digital signature and whether the account still existed, but also the current credit limit and possibly recent transaction patterns that might represent fraud).

attaching a digital certificate was purely redundant and superfluous in any sort of real-time, online operation ... and provided absolutely no additional benefit.

my other observation (made at the same time as pointing out that attaching stale, static digital certificates not only didn't modernize the operation but set it back 20-30 years) was about the enormous payload bloat.

the typical payment transaction payload size is on the order of 60-80 bytes. the digital certificate oriented financial efforts going on in that time-frame were seeing payment transactions being increased by 4k-12k bytes for the digital certificate appending mechanisms (i.e. the payload was being increased by 100 times, two orders of magnitude, for stale, static information that was redundant and superfluous).

so another effort was started in parallel about the same time the ocsp stuff started ... which was to define "compressed" digital certificates. this effort hoped to get compressed digital certificates into the 300 byte range ... representing only a factor of five times payload bloat (for stale, static, redundant and superfluous information) rather than 100 times payload bloat.

one of their suggested mechanisms was to remove all non-unique information in the digital certificate ... leaving only the necessary information that was absolutely unique for a particular digital certificate. I pointed out that if the point of appending the digital certificate was to have it forwarded to the entity's financial institution ... for processing by the entity's financial institution ... then it was also possible to eliminate all information in the digital certificate that was already in the possession of the entity's financial institution. I then could trivially prove that the entity's financial institution would have a superset of all information in the digital certificate and compress the size of the digital certificate to zero bytes.

rather than eliminating the appending of stale, static, redundant and superfluous digital certificates to every financial transaction, in part to avoid a factor of 100 times payload bloat ... it would be possible to compress the appended stale, static, redundant and superfluous digital certificate to zero bytes. You would still have an appended digital certificate, but since it would only be zero bytes in size, any associated payload bloat would be significantly reduced.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

confidence in CA

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: confidence in CA
Newsgroups: comp.security.misc
Date: Mon, 24 Apr 2006 14:53:26 -0600
Sebastian Gottschalk writes:
The more flexible variant of this is called OSCP.

another suggested target market in that period for digital certificates was the driver's license analog .... i.e. driver's licenses would be issued with chips ... and the chip would have a digital certificate containing all the information normally associated with a driver's license.

ocsp was then suggested for that. a law enforcement officer would stop you and ask for a valid driver's license (using a chip reader to get a copy of the digital certificate) and then the officer could do an ocsp query to see whether it was still valid or not.

however, in that time frame, law enforcement was moving to real-time, online operations. rather than wanting to know whether the physical driver's license was still valid (i.e. the centuries-old paradigm involving credentials, certificates, licenses, diplomas, letters of credit/introduction, etc) ... the officer needed some database lookup value (account number analog) ... and the officer would do real-time access of all the "real" information.

the license was a physical object substitute for relying parties that lacked the ability to access the real information (including real-time stuff like outstanding warrants, tickets, etc). in the move to a real-time, online operation ... any stale, static distributed physical representation of that information was becoming less and less useful.

having realtime access to the real information eliminated any need for having a stale, static distributed representation of that information and/or any OCSP-style real-time operation providing simple yes/no regarding whether the stale, static distributed copies were still valid. you get rid of needing stale, static distributed copies (in the form of physical licenses or digital certificates with the same information) when you have direct, online, real-time access to the real information.

if you have direct, online, real-time access to the real information, negating the requirement for any stale, static distributed copies, then any requirement for an OCSP-style protocol is also negated (since you are dealing with the real information, and don't need to consider situations involving stale, static, redundant and superfluous copies).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 17:29:24 -0600
Brian Inglis writes:
Must admit, on VM (and most other OSes), reducing I/O hotspots and balancing load across controllers and drives gave much more measurable improvements than tweaking any dynamic parameters, assuming you'd genned in, enabled, or used appropriately all the features that could help performance (alternate paths, page prefetch, block paging, vm preference, sometimes affinity, etc.) Found VM response in a PR/SM LPAR sensitive to LPAR dispatch interval time slice: default (50ms?) was good for MVS, as it and TSO were slow pigs anyway; VM required much lower values (20ms?) to avoid seeming slow (to sprogs on local terminals).

part of the issue was that many of the participants involved in dropping much of the dynamic adaptive capability in the cp67 to vm370 morph ... went on to become part of the vmtools effort in pok.

there had been a strategic decision that vm370 would be stabilized and there would be no new releases. furthermore, nearly all of the group was required in pok in support of the mvs/xa development effort ... they were needed to build an internal-only XA-based virtual machine capability for the mvs/xa development organization. this activity to retarget the vm370 development group to a purely internal mission in support of mvs/xa development ... somewhat coincided with competitive corporate forces trying to continue vm370 and helping obtain a decision allowing me to (re-)release the dynamic adaptive stuff (as the resource manager).

the vmtool effort had a very similar static tuning paradigm to the initial vm370 implementation (in part because there was much less variety in workload and configuration ... for purely supporting mvs/xa development).

misc past references to the decision to retarget the vm370 development group to the vmtool mission purely in support of internal mvs/xa development
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2004g.html#38 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?

in part because of customer demand ... some amount of vm370 support and development effort continued ... even with much of the original vm370 development group having been retargeted to internal mvs/xa development support. in part because of the continued customer virtual machine demand, eventually there was a decision to repackage vmtool as a customer product, vm/xa ... initially for the 3081 running in 370-xa mode. this moved to the 3090 ... where pr/sm had been implemented. the pr/sm implementation was somewhat in response to the hypervisor that had been done by Amdahl. misc. past posts mentioning pr/sm and the Amdahl hypervisor:
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
https://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006e.html#15 About TLB in lower-level caches

as part of the continued development of the vm/xa offering as a customer product, three different competitive scheduling product proposals emerged. at one point i observed that all the resources spent on resolution and escalation meetings (regarding the three competitive scheduler implementation proposals) were significantly larger than needed to actually implement all three competitive solutions and perform extensive benchmark comparisons.

while all of that was going on ... i had observed that system configurations had changed from being significantly real storage and/or processor constrained to being significantly i/o constrained. i did quite a bit of work significantly enhancing the resource manager to improve its ability in i/o constrained environments ... but that never shipped in the product. at one point i characterized the transformation as relative system disk performance having declined by a factor of 10 times over a period of years (i.e. other resources increased by a factor of 50, but disks only improved by a factor of five ... or less). i've made past references to the anecdote about the disk division adversely reacting to my observation and assigning the division's performance and modeling group to refute the statement. they subsequently came back and reported that i had actually understated the change.
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002l.html#34 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004b.html#54 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004l.html#12 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#27 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005d.html#71 Metcalfe's Law Refuted
https://www.garlic.com/~lynn/2005g.html#14 DOS/360: Forty years
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005k.html#34 How much RAM is 64K (36-bit) words of Core Memory?
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#1 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Intel vPro Technology

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Intel vPro Technology
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 21:28:42 -0600
Anne & Lynn Wheeler writes:
the EU FINREAD terminal/standard
https://www.garlic.com/~lynn/subintegrity.html#finread

was an attempt to remove the PC authentication environment from the control of any trojans or viruses that might exist on individual PCs.


re:
https://www.garlic.com/~lynn/2006h.html#26 Security

Intel vPro Technology Security:
http://www.intel.com/vpro/security.htm

in some sense virtualization technology is being applied as a countermeasure to the extreme vulnerability of many PCs to viruses and trojans. In that sense it is also attempting to secure the PC as an authentication environment (among other things).

this also goes along with my recent account
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/2006h.html#14 Security

of lots of news stories over at least the past year about using the new (really old) thing, virtualization, as a security mechanism and countermeasure to various threats and vulnerabilities.

misc. stray comments about fraud, exploits, vulnerabilities and threats
https://www.garlic.com/~lynn/subintegrity.html#fraud

and numerous posts on the subject of assurance ... that somewhat started with my talk at an assurance panel in the trusted computing track at the 2001 spring Intel Developers Forum
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Intel vPro Technology

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intel vPro Technology
Newsgroups: alt.folklore.computers
Date: Mon, 24 Apr 2006 22:28:24 -0600
Anne & Lynn Wheeler writes:
this also goes along with my recent account
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/2006h.html#14 Security

of lots of news stories over at least the past year about using the new (really old) thing, virtualization, as a security mechanism and countermeasure to various threats and vulnerabilities.


ref:
https://www.garlic.com/~lynn/2006h.html#31 Security

and a comment dating to last summer:

Fear-commerce, something called Virtualisation, and Identity Doublethink.
http://www.financialcryptography.com/mt/archives/000513.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Tue, 25 Apr 2006 09:05:58 -0600
jmfbahciv writes:
In our state, they have no data, even the basic ABC. I don't think we've cornered the market of all abject stupidity.

so that goes back to the comment in the comptroller general's talk regarding projection, instrumentation, metrics, audit, validation, etc. for spending programs ... including accurately accounting for whether the money is actually being spent on what it is supposed to be spent on. it involves fundamental accountability.

it is somewhat like capacity planning ... w/o fundamental instrumentation and measurement regarding what currently goes on, it may be impossible to understand what might happen if anything changes (aka if you don't understand what is currently happening, it is probably not possible to understand what might happen if things change).

somewhat the discussion regarding improvements to road use metrics and accounting deals with making information even more accurate. just because there may be major flaws in other parts of the infrastructure (and possibly only in specific jurisdictions) doesn't mean that these specific issues shouldn't be addressed at all.

reference to the comptroller general's talk
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future

past posts referring to the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor

my own counter argument to the accountability paradigm is one i used at a financial sector conference in europe last fall. many of the European executives were complaining that sarbanes-oxley was causing them significantly increased costs (and pain).

my observation was that that particular accountability paradigm is basically looking for inconsistencies in a firm's records. however, given the prevalent use of IT technology for maintaining corporate records ... a reasonably intelligent fraudulent endeavor should be able to make sure that the corporate IT technology generates absolutely consistent corporate records. Increasing the amount of auditing isn't likely to be effective in such a situation.

It is somewhat like the assumptions related to the benefits of multi-factor authentication; the assumption is that much of the benefit comes because the different factors should have different threats and vulnerabilities. This assumption is negated if all the authentication factors being used have a common threat or vulnerability.
https://www.garlic.com/~lynn/subintegrity.html#3factor

In the case of audits and accountability, there is some assumption that records from different corporate sources may show inconsistencies when fraud is involved. That assumption is negated if corporate IT technology can be used to maintain and generate all corporate records and therefore guarantee consistency. Somewhat built into that audit and accountability paradigm is the assumption about records from different sources (and looking for inconsistencies).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Tue, 25 Apr 2006 09:51:19 -0600
Anne & Lynn Wheeler writes:
In the case of audits and accountability, there is some assumption that records from different corporate sources may show inconsistencies when fraud is involved. That assumption is negated if corporate IT technology can be used to maintain and generate all corporate records and therefore guarantee consistency. Somewhat built into that audit and accountability paradigm is the assumption about records from different sources (and looking for inconsistencies).

another one of my favorites is SSL
https://www.garlic.com/~lynn/subpubkey.html#sslcert

we were asked to consult with this small client/server startup that wanted to do payments on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

they had this technology they called ssl that they wanted to use in conjunction with the payments.

the original SSL threat scenario was that the host (domain name) you thought you were talking to might not actually be who you were talking to. as a result webservers got certificates declaring their domain name. you typed a domain name into your browser ... and the browser connected you to the server. the server then supplied its certificate ... the browser validated the certificate and then checked the domain name you typed in against the domain name in the certificate.

however, early on, most of the merchant webservers found that using SSL cut their capacity by 80-90 percent ... they could support five times as much activity if they didn't use SSL. so you saw the change-over to SSL just being used for checkout/payment. The domain name used by the browser for the shopping portion no longer has any SSL guarantees. eventually the person gets to checkout and clicks on the checkout/pay button. the checkout/pay button supplies a domain name that goes off to some payment webpage which does the SSL thing. The issue now is that it would take a really dumb crook to provide you with a domain name (on the checkout/pay button) that was different than the domain name in the SSL certificate they were supplying. There is an implicit assumption in the SSL infrastructure that the domain name for getting to the server comes from a different source than the server which is supplying the SSL certificate. If they are from the same source ... then all bets are off (you are just validating that the person is able to prove that they are who they claim to be ... as opposed to proving that they are who you think they are).
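
a tiny sketch of the check being described and how the checkout-button flow subverts it (illustrative pseudo-logic only, not any browser's actual implementation):

    # illustrative: the basic SSL name check ... and why it proves little when the
    # domain name itself came from the (possibly crooked) merchant page
    def ssl_name_check(domain_user_intended, domain_in_server_cert):
        return domain_user_intended == domain_in_server_cert

    # original assumption: the name comes from the user (typed/bookmarked) and the
    # certificate comes from the server ... two independent sources
    print(ssl_name_check("mybank.example", "mybank.example"))

    # checkout-button case: the "intended" name was supplied by the merchant page itself,
    # so a crook just makes the button URL and the certificate agree
    crook_button_domain = "payments-secure.example"
    crook_cert_domain = "payments-secure.example"
    print(ssl_name_check(crook_button_domain, crook_cert_domain))   # True, but only proves self-consistency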

misc. past discussions about how only using SSL for the checkout/pay phase subverts fundamental assumptions about the use of SSL:
https://www.garlic.com/~lynn/aepay10.htm#63 MaterCard test high-tech payments
https://www.garlic.com/~lynn/aadsm14.htm#5 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm19.htm#26 Trojan horse attack involving many major Israeli companies, executives
https://www.garlic.com/~lynn/aadsm20.htm#6 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#9 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#36 browser vendors and CAs agreeing on high-assurance certificates
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#40 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/2005g.html#44 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005l.html#19 Bank of America - On Line Banking *NOT* Secure?
https://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2006c.html#36 Secure web page?
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh

for a little more drift ... the catch-22 issue associated with ssl digital certificates
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#43 SSL/TLS passive sniffing
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
https://www.garlic.com/~lynn/2004g.html#6 Adding Certificates
https://www.garlic.com/~lynn/2004h.html#58 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005e.html#45 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#51 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#9 What is a Certificate?
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#29 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

64-bit architectures & 32-bit instructions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 64-bit architectures & 32-bit instructions
Newsgroups: comp.arch
Date: Tue, 25 Apr 2006 14:11:39 -0600
eugene@cse.ucsc.edu (Eugene Miya) writes:
Small FYI Lynn. Garlic came up in discussion down at a meeting (a dog and pony show for academics) visiting Almaden last week.

any particular context??

Our garlic web pages see a fairly large number of daily hits from various search engines and other web crawlers ... including some project or another that appears to be someplace inside ibm. we suspect that, because of the extremely high ratio of "hrefs=" (especially in the rfc index and the merged glossaries), it is being used as some sort of test case.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Tue, 25 Apr 2006 14:30:25 -0600
jmfbahciv writes:
Sure. I agree. But there has to be an additional "if". The funds collected have to be used for roads and not anything else.

I wonder if these guys are just responsible for running the toll booths ... or do they take over the whole road and become responsible for the whole road's operating expense ... which they then have to manage out of collected tolls.

On the move; The deal will create the world's largest toll-road operating company with a 6,740km network of highways in Europe and America
http://www.economist.com/agenda/displaystory.cfm?story_id=E1_GRQSGSR
...

If the latter ... do the states continue to keep their road-use fuel tax (aka any selling off of roads to private interests potentially represents both the capital from the sale as well as being able to reallocate various other collected road-use fees like fuel tax).

I thought I remembered something about the mass pike and its tolls. nominally tolls are put in place to pay off the original road construction bonds sold to raise money to build the road (whatever the federal gov. wasn't otherwise subsidizing). most places then discontinue the tolls once the bonds are paid off. What I remember was that they decided to keep on collecting the tolls on the mass pike even after the road construction bonds were paid off.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Taxes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Tue, 25 Apr 2006 14:53:24 -0600
Anne & Lynn Wheeler writes:
I got bits and pieces recently of somebody reporting economic analysis (on Bloomberg?) that included some numbers about current engineering/technical graduates. The numbers, as I recall, were that China was graduating something like 400,000, Russia something like 300,000, and the US something like 20,000.

ref:
https://www.garlic.com/~lynn/2006g.html#21 Taxes

I may have tuned in and out of pieces of the program ... it may have been India graduating 300,000, with Russia at some other number.

although ... not exactly inconsistent with the referenced numbers; a couple of references from today:

The Continuing American Decline in CS
http://developers.slashdot.org/developers/06/04/25/139203.shtml
A Red Flag In The Brain Game; America's dismal showing in a contest of college programmers highlights how China, India, and Eastern Europe are closing the tech talent gap
http://www.businessweek.com/magazine/content/06_18/b3982053.htm?campaign_id=bier_tca

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Taxes

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Taxes
Newsgroups: alt.folklore.computers
Date: Tue, 25 Apr 2006 15:19:31 -0600
Al Balmer writes:
Somewhere recently I read that these numbers are misleading because of the categories included and how they are counted. Unfortunately, I don't remember where, though Communications of the ACM comes to mind. If I can find the article, I'll post a reference.

re:
https://www.garlic.com/~lynn/2006g.html#21 Taxes
https://www.garlic.com/~lynn/2006h.html#37 Taxes

i've run across the annual graduate numbers on the nsf.gov web site in the past. i didn't find them with a quick check just now ... but

Science and Engineering Indicators 2006; America's Pressing Challenge - Building A Stronger Foundation
http://www.nsf.gov/statistics/seind06/

little more searching turns up this overview page:
http://www.nsf.gov/statistics/showpub.cfm?TopID=2&SubID=5

and this is the page I remember running across before
http://www.nsf.gov/statistics/infbrief/nsf06301/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Wed, 26 Apr 2006 08:11:54 -0600
jmfbahciv writes:
Eventually they stopped tolls at each end. I understand it did cut down on congestion on the Boston end. However, the last time I drove home to Mich. I could tell from the condition of the roads which sections were no longer covered by the toll people. I don't understand how that happens.

the first time i drove on the mass pike (long ago and far away) was on a x-country trip from the west coast to the east coast during the winter. the mass pike still had tolls ... but it appeared to be the road in the worst condition of the whole trip (including some snow-covered mountain roads out west).

misc. past posts mentioning mass pike
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#35 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#36 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#68 Killer Hard Drives - Shrapnel?
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2003j.html#11 Idiot drivers
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#36 The Pankian Metaphor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe vs. xSeries

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 26 Apr 2006 11:42:06 -0600
zansheva@yahoo.com wrote:
Ok I've been hearing about how great mainframes are. Better availability, management, security, etc. I'm a bit new to all this so I apologize for my ignorance. My question is, if mainframes are so fantastic why do companies buy Wintel servers such as IBM xSeries for their datacenters? Under what circumstances is it better to buy Wintel? Or why buy wintel at all?

it used to be that homework-sounding questions were clustered in the sept/oct timeframe, with the new crop of freshmen attempting to get others to do their homework.

for a whole slew of reasons ... the mainframe systems of the 60s tended to evolve a paradigm that provided a clearcut design & implementation separation of system command&control from system use. you saw this further evolving in the late 60s with much of the command&control infrastructure starting to be automated. part of this was a number of commercial interactive timesharing services that provided 7x24, continuous operation, with offshift operation being unattended in a lights-out kind of environment

https://www.garlic.com/~lynn/submain.html#timeshare

the implicit separation of system command&control (as well as automation of much of the command&control function) from system use has tended to permeate all aspects of the design and implementation over the past 40 years.

the evolution of the desktop systems partly included the enormous simplification that could be achieved if there was no differentiation of the system command&control from the system use (i.e. single user system where the same person that used the system was also responsible for the command&control of the system).

many of the current things commonly referred to as "servers" have tended to be platforms that evolved from the desktop paradigm, with no strong, clearcut differentiation between the command&control of the system (along with little or no automation of the command&control functions) and the use of the system. many such "servers" may have a patchwork facade applied on top of the underlying infrastructure to try and create the appearance that there is a fundamental separation of command&control from use (with some degree of automation). a trivial scenario is the frequent situation where a remote user may acquire system command&control capability (via a wide variety of different mechanisms)

the lack of clearcut and unambiguous separation of system command&control from system use ... permeating all aspects of design and implementation can lead to a large number of integrity and security problems.

for instance, would you prefer to have the financial infrastructure (that you regularly use) managed by 1) a dataprocessing operation that has strongly differentiated system command&control from system use, or 2) a system that has constant and frequent reports of vulnerabilities and exploits?

then there are all sorts of features/functions/capabilities that will tend to evolve over a period of forty years ... where the operational environment includes a basic premise of unattended operation as well as strong separation of system command&control from system use.

slight drift ... lots of posts on fraud, exploits, vulnerabilities, and threats
https://www.garlic.com/~lynn/subintegrity.html#fraud

misc. past posts raising the issue of answering questions that appear to be homework:
https://www.garlic.com/~lynn/2001.html#70 what is interrupt mask register?
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#25 Use of ICM
https://www.garlic.com/~lynn/2001k.html#75 Disappointed
https://www.garlic.com/~lynn/2001l.html#0 Disappointed
https://www.garlic.com/~lynn/2001m.html#0 7.2 Install "upgrade to ext3" LOSES DATA
https://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? (or: Help, my brain hurts)
https://www.garlic.com/~lynn/2002c.html#2 Need article on Cache schemes
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#40 e-commerce future
https://www.garlic.com/~lynn/2002g.html#83 Questions about computer security
https://www.garlic.com/~lynn/2002l.html#58 Spin Loop?
https://www.garlic.com/~lynn/2002l.html#59 Spin Loop?
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#35 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2003d.html#27 [urgent] which OSI layer is SSL located?
https://www.garlic.com/~lynn/2003j.html#34 Interrupt in an IBM mainframe
https://www.garlic.com/~lynn/2003m.html#41 Issues in Using Virtual Address for addressing the Cache
https://www.garlic.com/~lynn/2003m.html#46 OSI protocol header
https://www.garlic.com/~lynn/2003n.html#4 Dual Signature
https://www.garlic.com/~lynn/2004f.html#43 can a program be run withour main memory ?
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#47 very basic quextions: public key encryption
https://www.garlic.com/~lynn/2004k.html#34 August 23, 1957
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005m.html#50 Cluster computing drawbacks
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape

Mainframe vs. xSeries

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 26 Apr 2006 11:53:55 -0600
ref:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries

past posts raising the issue of the desktop paradigm not having a strong, clearcut requirement for separating system command&control from system use ... and how that distinction might permeate all aspects of system design and implementation
https://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2002.html#1 The demise of compaq
https://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2003h.html#56 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004.html#40 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004.html#41 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#10 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005m.html#51 Cluster computing drawbacks
https://www.garlic.com/~lynn/2005q.html#2 Article in Information week: Mainframe Programmers Wanted

Mainframe vs. xSeries

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 26 Apr 2006 12:31:43 -0600
Anne & Lynn Wheeler wrote:
the lack of clearcut and unambiguous separation of system command&control from system use ... permeating all aspects of design and implementation can lead to a large number of integrity and security problems.

ref:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries

if you don't have a clearly defined and unambiguous separation of system command&control from system use ... then it is much more difficult to create a comprehensive threat model along with the countermeasures for the threats ... the lack of which can contribute to a large number of integrity and security vulnerabilities.

if you are trying to have automated server operation, with minimized operational and support costs ... then having a clearly defined command&control operation immensely contributes to defining the features/functions that need to be optimized. if this has existed for 40 years ... then the command&control feature/function optimization will tend to have gone through an enormous amount of evolution and numerous generations, tending to result in more sophisticated and comprehensive solutions.

Intel VPro

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Intel VPro
Newsgroups: comp.arch
Date: Wed, 26 Apr 2006 12:46:49 -0600
David Brown writes:
Could it just be a matter of isolating risky software in their own virtual machines? If your windows server runs its web server, mail server, and database server in different VMs, then any attack that compromises one of the servers will have a harder job getting to any of the others.

Or perhaps you could set up a VM for a linux (or bsd) firewall to protect the windows VM instead of having to rely on a windows-based software firewall.


a recent comment about having a comprehensive system design based on clearcut and strong separation of system command&control from system use. many of the desktop systems have achieved a lot of simplification and "ease of use" by not requiring strong separation of system command&control from system use (for a single-user desktop system, the same person is responsible for both the command&control and the use of the system).

many of the systems that had implicit single-user desktop use have poor separation between command&control of the system and system use.
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries

a couple other recent postings mentioning vPro and security
https://www.garlic.com/~lynn/2006h.html#31 Intel vPro Technology
https://www.garlic.com/~lynn/2006h.html#32 Intel vPro Technology

virtual machine technology can attempt to retrofit stronger separation between system command&control and system use ... to an environment that doesn't natively have such strong separation.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe vs. xSeries

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 26 Apr 2006 13:10:57 -0600
Anne & Lynn Wheeler wrote:
the lack of clearcut and unambiguous separation of system command&control from system use ... permeating all aspects of design and implementation can lead to a large number of integrity and security problems.

oh ... and for slight drift ... a couple of posts related to vPro ... aka using the virtual machine paradigm as a methodology to retrofit stronger separation of system command&control from system use onto systems that didn't originally have such strong separation (as part of retrofitting stronger integrity and security to those systems).
https://www.garlic.com/~lynn/2006h.html#31 Intel vPro Technology
https://www.garlic.com/~lynn/2006h.html#32 Intel vPro Technology
https://www.garlic.com/~lynn/2006h.html#43 Intel vPro

there have been numerous news articles over at least the past year about being able to utilize virtualization technology to retrofit stronger integrity and security (some separation of system command&control from system use) to systems where it wasn't part of the fundamental infrastructure.

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 27 Apr 2006 07:06:23 -0600
jmfbahciv writes:
I seem to recall a flap waybackwhen about where the money was going instead of road maintenance. Query: how did the Mass Pike condition compare with the rest of our roads? Our standards may have been a lot lower which would make the Pike perfect compared to the rest of the unidirectional ruts. :-)

part of the issue was the (annual) frost heaves thru the berkshires ... it made it look like the road wasn't built to standards appropriate for its use and environment (i.e. county roads in western mountain areas appeared to have been built to better standards). several people who had lived all their lives in mass made statements about it enabling annual expensive/extensive road repair contracts. there were also facetious statements about the use of water-soluble asphalt.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past, tcp/ip, project athena and kerberos

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: blast from the past, tcp/ip, project athena and kerberos
Newsgroups: alt.folklore.computers
Date: Thu, 27 Apr 2006 09:35:32 -0600
Date: 22 March 1988, 17:05:12 EST
From: CAS
To: WHEELER

Lynn,
1. TCP/IP: There is "Van Jacobson's version" of TCP/IP now publicly available. Originating at Lawrence Berkeley Lab, it functionally includes "congestion control", to reduce re-transmission of redundant messages. I understand it is "public domain", requiring neither an AT&T nor Berkeley license. Source code can be obtained from an ftp disk at Berkeley; it is also available here at M.I.T. Please contact me (phone numbers above), or for more detailed info on implementation, Jeff Schiller (Network Mgr. at M.I.T.), tele- (617) 253-4101.

2. Would like to follow up on Austin visit of Athenians; propose Dan Geer, Mgr of System Development staff, and one techie. (plus me) Topics to include Service Management System, and Authentication (Kerberos). How about 1 and 1/2 days, Thur/Fri 3/31-4/1 ?


... snip ... top of post, old email index, HSDT email

the email was from CAS, the same person for whom the compare&swap instruction was named
https://www.garlic.com/~lynn/subtopic.html#smp

who I had worked with at the science center in the early 70s
https://www.garlic.com/~lynn/subtopic.html#545tech

Jerry Saltzer and Steve Dyer made the trip also.

misc. past posts mentioning kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

congestion control ... refers to slow-start ... misc. past posts mentioning slow-start
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003k.html#57 Window field in TCP header goes small
https://www.garlic.com/~lynn/2003l.html#42 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003p.html#13 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#28 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
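
as a purely illustrative aside, a minimal python sketch (assuming a fixed slow-start threshold and ignoring packet loss) of the slow-start behavior referred to above: the congestion window grows exponentially, roughly doubling every round trip, until it hits the threshold, after which growth drops to roughly one segment per round trip:

    def simulate_slow_start(ssthresh: int = 16, round_trips: int = 10) -> None:
        cwnd = 1.0                      # congestion window, in segments
        for rtt in range(round_trips):
            print(f"rtt {rtt}: cwnd = {cwnd:.1f} segments")
            if cwnd < ssthresh:
                cwnd *= 2               # slow-start: exponential growth
            else:
                cwnd += 1               # congestion avoidance: linear growth

    simulate_slow_start()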

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

guess the date

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: guess the date
Newsgroups: alt.folklore.computers
Date: Thu, 27 Apr 2006 09:56:42 -0600
the following are summaries of some industry articles, guess the date:

Tools for creating business programs faster are on the way.

BofA "can of worms" story about replacing aging business software.
- Dual runs on new/old systems
- Major customers pulled out of bank
- Statements fell behind 9 months
- Bank gave up
  . 2.5 Million lines of code
  . $20 million investment
  . $60 million to correct the difficulties
  . Vice presidents of technology and trust departments resigned
- Extreme example, but becoming commonplace

There's a mounting shortage of good software
- custom-made software for big corporations
- US companies are increasingly vulnerable to competition
  . Europe and Japan have head-start to automate software development
- Alarm at the Pentagon
  . Military cannot get enough, reliable software quickly
  . Mar 30, House Armed Forces Services Committee cut all procurement
    funding for "over the horizon" radar
  . No point building the hardware until the software is ready

Software powers:
- on-board computers in new car engines, copiers, micro-wave ovens
- stoplights that control city traffic
- stocks for Wall Street traders
- loans for bank officers
- routing for
  . telephone calls
  . truck fleets
  . factory production flow
- Hard to find anything that doesn't depend on software
  . Banks would close in 2 days
  . distribution companies in 4 days
  . factories in a week

Widening gap between hardware and software performance
- hardware doubles every 2-3 years
- customers expect to tackle tougher jobs
- tasks require more complex software
- no software counterpart to semiconductor technology

Programmers "grind out instructions" at 1-2 lines per hour
- Akin to building a 747 using stone knives
- Manpower shortage isn't helping
- 25% growth in the average length of programs
- 12% growth in overall demand for software
-  4% growth in number of programmers
- 3 year backlog for identified application programs
- 32,000 workdays to finish average business-software package
- Finished programs get tossed out
  . When they're ready, they're obsolete
- Projecting the trend means nearly everyone will become a programmer
- Something has to give

Means for dealing with the crunch are falling into place
CASE:  Computer Aided Software Engineering
- helps automate the job of writing programs
- speeds work; improves quality

Texas Instruments, one of 4 leading suppliers of CASE tools
- world-wide demand will hit $2 billion by xxxx

New "Object Oriented" programming languages
- Replacing Cobol, C, Pascal
 - By xxxx, expected to dominate many business areas
- Offer the best of two worlds:
   . ease of use
. unmatched versatility
- "The emerging thing"  -WH Gates, Microsoft

Experts believe US information managers lack the money and will to
implement new technology
- Top executives aren't involved; you can't touch or smell software

Europe and Japan are building momentum
- Britain and France pioneered the concept of CASE
- European Community has spent $690 million since 1983 for development
- Japan Sigma, 3 years old, $200 million effort
  . builds on previous government projects
  . most Japanese experts doubt Sigma will score a breakthru
  . may be irrelevant; race doesn't go to most original or creative
  . Real test is how soon technology is implemented
  . Fujitsu and Hitachi have more than 10,000 programmers each
    working on "white collar assembly lines"
  . Japanese willing to invest for the very long term
    (There's no quick fix to the software problem)

Automating software development means a drastic cultural upheaval
- requires overhaul of programming curriculum taught at universities
- there is little choice; businesses depend on software
  . Many programs today are "downright rickety"
  . 60-70% of budget for maintenance of programs
  . 80% by xxxx
  . You make an innocent change, and the whole thing collapses

- Why not scrap old programs?
  . Few know what an existing program does
  . New software is notoriously bug-ridden
  . The larger the program, the higher the risk that glitches
    will go undetected for weeks/months after installation

"Programmers regard themselves as artists.
 As such, they consider keeping accurate records of their handiwork
 on par with washing ash trays."

Pentagon, 1975 awarded contract to develop new programming language (ADA)
- CII-Honeywell Bull
- Object Oriented
- Required for all "Mission Critical" software
- Resistance (from contractors) ceased in 1987
  . More than 120 compilers
  . Libraries of routines
  . Success stories started rolling in
  . TRW, Harris, GTE, Boeing, Raytheon
- Raytheon
  . Nearly two thirds of new programs consist of reusable modules
  . 10% reduction in design costs
  . 50% reduction in code-generation and debugging
  . 60% reduction in maintenance
- Honeywell Bull
  . 33% reduced costs overall
  . 30% increase in Aerospace and Defense software

Object Oriented Programming Systems (OOPS):
- takes reusability one step further
- each module wraps instructions around the specific data that
  the software will manipulate
- the two elements constitute an "object"
- data and program commands are always handled together
- maintaining complex software is simpler
- Microsoft expects to offer products incorporating this approach soon
  "So simple to learn, they will foster a "huge market" among programmers"
    - Gates
- Technique is especially useful for managing massive amounts of data.
- Mainstay Software Corp, Denver Colorado
  . Object Oriented Data Base runs on a PC
  . Often out-shines mainframe based systems
  . a "breeze to tailor for individual needs"

Smalltalk:
- First object-oriented program
- Alan Kay, Xerox PARC
- Widely criticized because it ran slowly

Objective-C (Stepstone Corp) and C++ (AT&T)
- Software industry needs mostly "off-the-shelf components"
- a few custom-designed circuits give "personality" to a system
- software can then keep up with ever-changing business conditions

So far, big spenders have been companies where software is a profit
center (EDS - Electronic Data Systems)

Most of the 100 USA CASE ventures are tiny, aimed at narrow markets
European Commission hopes to hit a "grand slam"
- comprehensive "environment"
- automated, integrated tools
- all aspects of software development
- Sweeping standards already defined by 4-year Esprit project
  . Bull (France)
  . GEC and ICL (Britain)
  . Nixdorf and Siemens (West Germany)
  . Olivetti (Italy)
- Next phase will start shortly
  . "software factory" for all facets of programming

Software Engineering Institute, Carnegie-Mellon
- "We're not going to automate away the problem with technology"
  - Larry Druffel, director
- Need to automate the intellectual enterprise required to conceive
  the programs.
- SEI is funded by Pentagon $125 million Stars Project
  . Knowledge base of "hotshot programmers"
  . Develop smart CASE system to bypass programmers
  . Endusers would do their own software

Software Production Consortium, Reston Virginia
- SPC founded by 14 major aerospace and electronics companies
  . Boeing, Martin Marietta, McDonnell Douglas
  . each contributed $1 million per year
  . 155 researchers
- satisfied to wring maximum efficiency from reusable software
- reusable software to create quick, bare-bones prototypes
  . "Look and Feel" of the final program
  . Feedback from users of the software
    "Fit the software to the user"
  . 80% of system total life-cycle costs stem from changes made
    to initial software design
- TRW constructed a Rapid Prototyping Center
  . dummy workstations and consoles
  . Artificial Intelligence software
  . prototype closely simulates final product performance

Never displace the "software Picassos"
- always a need for totally novel solutions
- The greater need at the moment is for house painters

... snip ...

Answer:
88/05/11

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Chant of the Trolloc Hordes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chant of the Trolloc Hordes
Newsgroups: alt.folklore.computers
Date: Thu, 27 Apr 2006 12:55:44 -0600
"Charlie Gibbs" writes:
I wish I could find the satire someone posted in that vein. It went on like "Your disks will fill up! Your CPU will MELT! You'll have to apply one update after the next! And you'll PAY US FOR THE PRIVILEGE! Bwahahahaha! We have a hammer. Your problem is a nail."

long ago and far away ... we were called into a new effort that was going to process a large amount of data, with significant daily updates and regular monthly summaries. they had built their prototype on a relational platform, but hadn't done even trivial "feeds and speeds" scale-up calculations. a simple back-of-the-envelope calculation indicated that their regular production operation would take two weeks elapsed time for each daily run and two years elapsed time for each monthly run.
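
a minimal python sketch of that sort of "feeds and speeds" arithmetic; the record counts and per-record costs here are purely hypothetical placeholders, picked only so the results land in the same two-weeks/two-years ballpark described above, not numbers from the actual effort:

    def elapsed_days(records: int, seconds_per_record: float) -> float:
        """elapsed wall-clock days if the work runs serially."""
        return records * seconds_per_record / 86_400.0   # 86,400 seconds per day

    # hypothetical daily update batch
    print(f"daily run:   {elapsed_days(50_000_000, 0.025):.1f} days")    # ~14 days
    # hypothetical monthly summary over the accumulated history
    print(f"monthly run: {elapsed_days(2_500_000_000, 0.025):.0f} days") # ~720 days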

few old posts mentioning original relational/sql activity and some wars with the 60s databases
https://www.garlic.com/~lynn/2002l.html#71 Faster seeks (was Re: Do any architectures use instruction
https://www.garlic.com/~lynn/2003c.html#75 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003c.html#78 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003f.html#44 unix
https://www.garlic.com/~lynn/2004e.html#15 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004e.html#23 Relational Model and Search Engines?
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#24 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005q.html#23 Logon with Digital Siganture (PKI/OCES - or what else they're called)
https://www.garlic.com/~lynn/2005s.html#9 Flat Query
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe vs. xSeries

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 27 Apr 2006 16:03:05 -0600
"David Wade" writes:
Looking at the last question. Because the front loaded costs on a mainframe are horrendous. Last time my employer updated our mainframe the cost of upgrading one (minor) package accounted for 25% of the upgrade costs. Until IBM and the mainframe suppliers get a grasp on software pricing many small to medium shops will continue to ditch their mainframes, as we intend to do as soon as possible. In fact over the last 12 months the availability on the mainframe has been worse than any of our Wintel servers.

7-10 yrs ago I saw some cases where some gov. labs retired their mainframes when their mainframe support people retired. they had recs out for support people for over a year w/o any hits (i.e. couldn't staff). of course, that was also during the period leading up to the y2k remediation crunch.

supposedly the announcements today will moderate some of the entry and mid-range situations ... i've seen lots of complaints over the past several years that the various mainframe emulators running on wintel servers have been the only offering in that market niche (and the software pricing really bites them)

IBM expands mainframe into mid-market with breakthrough business class mainframe to target growth of SOA transactions and data;
Shanghai lab to develop mainframe software
http://www.enterprisenetworksandservers.com/newsflash/art.php?571
When an Inexpensive Mainframe Isn't an Oxymoron
http://www.serverwatch.com/news/article.php/3602091
IBM to offer mainframe for the midmarket
http://www.infoworld.com/article/06/04/27/77826_HNibmmainframe_1.html
IBM to offer mainframe for the midmarket
http://www.computerworld.com/hardwaretopics/hardware/mainframes/story/0,10801,110925,00.html
IBM Unveils 'Business Class' Mainframe
http://news.yahoo.com/s/ap/20060427/ap_on_hi_te/ibm_mainframe_lite;_ylt=A0SOwlt2LVFEqgAAhAQjtBAF;_ylu=X3oDMTA5aHJvMDdwBHNlYwN5bmNhdA--
IBM launches low-cost mainframe
http://www.techworld.com/opsys/news/index.cfm?newsID=5886&pagtype=all


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

nntp and ssl

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: nntp and ssl
Newsgroups: gnu.emacs.gnus
Date: Fri, 28 Apr 2006 11:54:35 -0600
muk@msu.edu (m. kolb) writes:
Unfortunately I'm not sure what is happening there. Can you just openssl s_client -connect newshost:563 from your machine and see what is happening?

about 20-30 percent of the time ... i see a freeze-up while reading the active file. it rarely freezes at any other point (reading) .... openssl 0.9.8a-5.2; no gnus v0.5

the first couple of times i had to kill emacs and restart. i eventually did a script that i run in another window that gets the process number of openssl and kills it. it turns out that it kills openssl and then, after a couple of seconds, it checks to see if it has to kill openssl a second time (about half the time, after it has killed openssl while reading the active file, gnus continues a little bit and then freezes a second time).

it is then good ... until the next get-new-news, at which point there is about a 1-in-3 chance it will repeat

it also freezes about 95 percent of the time when posting ... i run the kill script; gnus then completes, saying that the posting failed; however, the post was actually made (as is going to happen when this is posted)
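
for what it's worth, a minimal python sketch (not the actual script) of the sort of kill script described above: find the hung openssl process, kill it, wait a couple of seconds, and kill any second instance that hangs again:

    import os
    import signal
    import subprocess
    import time

    def kill_openssl() -> bool:
        """send SIGKILL to every process named openssl; return True if any were found."""
        try:
            pids = subprocess.check_output(["pgrep", "-x", "openssl"], text=True).split()
        except subprocess.CalledProcessError:
            return False        # pgrep exits non-zero when nothing matches
        for pid in pids:
            os.kill(int(pid), signal.SIGKILL)
        return True

    if kill_openssl():
        time.sleep(2)           # give gnus a moment to continue ... and possibly re-freeze
        kill_openssl()          # about half the time a second kill is needed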

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Chant of the Trolloc Hordes

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chant of the Trolloc Hordes
Newsgroups: alt.folklore.computers
Date: Fri, 28 Apr 2006 13:51:53 -0600
scott@slp53.sl.home (Scott Lurndal) writes:
A few megabyte database fits comfortably in memory. Sleepycat?

at least in the past five years or so ... there has been some looking at in-memory databases that log and periodically snapshot ... as opposed to disk databases that may be able to operate fully cached. some claims are that "in-memory" databases are 10 times (to 100 times) faster than a disk-oriented database that is fully cached (aka both cases have all data resident in memory).
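
purely as illustration, a minimal python sketch (hypothetical names throughout) of the "in-memory database that logs and periodically snapshots" idea: every update is appended to a log before being applied in memory, and a snapshot is written every N updates so recovery only has to replay the tail of the log; reads never touch disk at all:

    import json

    class InMemoryStore:
        def __init__(self, log_path="store.log", snap_path="store.snap", snap_every=1000):
            self.data = {}
            self.snap_path = snap_path
            self.snap_every = snap_every
            self.updates = 0
            self.log = open(log_path, "a")

        def put(self, key, value):
            # write-ahead: the update hits the log before memory
            self.log.write(json.dumps({"k": key, "v": value}) + "\n")
            self.log.flush()
            self.data[key] = value
            self.updates += 1
            if self.updates % self.snap_every == 0:
                self.snapshot()

        def get(self, key):
            return self.data.get(key)     # reads are pure in-memory references

        def snapshot(self):
            # recovery = load the last snapshot, then replay the log written since
            with open(self.snap_path, "w") as f:
                json.dump(self.data, f)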

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Need Help defining an AS400 with an IP address to the mainframe

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need Help defining an AS400 with an IP address to the mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 28 Apr 2006 14:48:58 -0600
patrick.okeefe@ibm-main.lst (Patrick O'Keefe) writes:
Actually, I think it never held up. As far as I know a node has always been hardware and a PU has always been a program (as described in a FAP and probably originally designed using FAPL). The PU never had to match the node type (although each non-APPN node had a PU based on the node's capabilities). For example, a PU_T1 never ran in a node T_1, as far as I know. (I think a PU_T1 was by definition the PU code supporting a device too dumb to have executable code.) It always ran on something else - a T_4 (or maybe T_5, but I never heard of that implementation).

APPN and SNA were totally different stuff. SNA has been a communication infrastructure ... that was driven by sscp/ncp (pu5/pu4, vtam/3705). this was somewhat the communication continuation of the FS objectives ... after FS had been killed
https://www.garlic.com/~lynn/submain.html#futuresys

specific reference regarding major FS objectives:
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

despite some of the comments in the above reference ... at the time, I drew some comparisons between the FS project and a cult film that had been playing non-stop down in central sq. ... at that time I was with the science center in tech sq., a few blocks from central sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

in the early SNA time-frame ... my wife and Bert Moldow produced an alternative architecture that actually represented networking (being forced to use the label peer-to-peer) ... referred to as AWP39. a few past posts mentioning AWP39:
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#23 Channel Distances

my wife went on to serve a stint in POK in charge of loosely-coupled architecture ... where she had numerous battles with the SNA organization. she was also responsible for Peer-Coupled Shared Data architecture ... which initially saw major uptake with ims hot-standby ... and later in parallel sysplex.
https://www.garlic.com/~lynn/submain.html#shareddata

at the time APPN was attempting to be announced, one of the primary persons behind APPN and I happened to report to the same executive. The SNA organization had non-concurred with the announcement of APPN, and the issue was being escalated. After six weeks or so, there was finally approval for the announcement of APPN (out of corporate) ... but only after the announcement letter was carefully rewritten to avoid implying any possible connection between APPN and SNA. The original APPN architecture was "AWP164".

also, almost every organization that ever built a box to the "official" (even detailed internal) SNA specifications found that it wouldn't actually work with NCP ... it first had to be tweaked in various ways to make it work.

supposedly the drive for FS ... as mentioned in the previous reference
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

had been the appearance of the clone controllers.

this is one of the things that I had been involved with as an undergraduate in the 60s. I had tried to make the 2702 telecommunication controller do something that it couldn't quite actually do. this somewhat prompted a project at the univ. to build its own telecommunication controller: reverse engineering the channel interface, building a channel interface card, and programming an Interdata/3 minicomputer to emulate 2702 functions. Somebody wrote an article blaming four of us for starting the clone controller business
https://www.garlic.com/~lynn/submain.html#360pcm

Mainframe vs. xSeries

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 29 Apr 2006 08:03:08 -0600
jmfbahciv writes:
This all sounds like we're back to reinventing timesharing and eschewing distributed processing (as was defined in the early 80s).

So this must be another cycle of the biz.

Has anybody noticed that old, really old, auto designs are back on the road? I swear I saw my folks' '34 Plymouth.


part of the issue in timesharing was that there was a much greater need to separate system command&control from system use. the desktop systems were able to greatly simplify because that separation wasn't really required.

however, when attachment of desktop systems to networks expanded from a purely local, non-hostile environment to a world-wide, and potentially extremely hostile, environment, some of the partitioning requirements re-emerged. to some degree the network attachment allowed remote attackers to reach into many of the desktop systems ... these had no provisions for separation/partitioning of system use and system command&control (they hadn't been designed and built from the ground up with extensive countermeasures to hostile attacks).

in the past, virtualization has been used for a variety of different purposes: 1) timesharing services, 2) simplification of testing operating systems, 3) using different operating systems and operating environments, as well as 4) providing partitioning and separation of system command&control from system use.

some of the current activity is layering some additional partitioning of system command&control (from system use) onto environments that weren't originally intended to operate in such hostile and adversarial conditions (it might be thought of as akin to adding bumpers to horseless carriages).

past postings on this topic:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#42 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#43 Intel VPro
https://www.garlic.com/~lynn/2006h.html#44 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#49 Mainframe vs. xSeries

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe vs. xSeries

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe vs. xSeries
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 29 Apr 2006 10:12:28 -0600
jmfbahciv writes:
But now I'm puzzled. You seem to consider Unix a non-main frame OS. If this is true, then the computing service delivery biz is in really deep doo-doo of the tar pit flavor.

a lot of unix (and tcp/ip networking) code was designed and written assuming a non-hostile and non-adversarial environment (implicit assumptions about not needing countermeasures for hostile and adversarial activity).

i had given a talk at ISI (a usc graduate student seminar, plus some of the ietf rfc editor staff) in the late 90s about the tcp/ip implementation not inherently taking various business-critical operational considerations into account.

original unix, being somewhat multi-user oriented, tended to have somewhat greater separation between system command&control and system use ... however, adaptation to desktop environments can create some ambiguities (ambivalence?) regarding simplifying the partitioning/separation needed.

misc. assurance postings
https://www.garlic.com/~lynn/subintegrity.html#assurance

recent postings on this topic:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#42 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#43 Intel VPro
https://www.garlic.com/~lynn/2006h.html#44 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#49 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#53 Mainframe vs. xSeries

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

History of first use of all-computerized typesetting?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of first use of all-computerized typesetting?
Newsgroups: alt.folklore.computers
Date: Sat, 29 Apr 2006 21:53:31 -0600
Brian Inglis writes:
Certainly S/370 Principles of Operations (architecture) manual was computer printed, possibly S/360 also, as Lynn Wheeler has mentioned IBM SCRIPT masters and conditionals for internal and external versions.

Even limiting the claim to typeset books, the claim seems to be bogus, may possibly be true for OCR source documents.


the cp67/cms manuals were done in script .... and then, i think, photo-offset for printing. the 1403 TN printer provided uppercase/lowercase ... and you could get some sort of sharper 1403 printing ribbon (film?) for final copies ... giving more distinct/sharper characters than you got with a standard fabric ribbon

the principles of operation was a much more widely distributed manual ... going to nearly all mainframe customers ... rather than just cp67/cms customers

the original cms script document processing command had been done early at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

which had runoff like "dot" commands (tracing common heritage back to ctss)

and then in 1969 at the science center, "G", "M", & "L" invented GML (i.e. the precursor to sgml, html, xml, etc) ... and gml processing support was added to the script command ... and you could even intermix "dot" commands and "gml" tags
https://www.garlic.com/~lynn/submain.html#sgml

i think part of the early move of the principles of operation to cms script was conditionals ... where the same source file could produce both the full internal architecture "red book" and its subset, the principles of operation document available to customers.

here is an online extract from melinda's history
https://www.leeandmelindavarian.com/Melinda/25paper.pdf

that covers ctss and ctss runoff command
http://listserv.uark.edu/scripts/wa.exe?A2=ind9803&L=vmesa-l&F=&S=&P=40304

a lot more on ibm 7094 and ctss
https://www.multicians.org/thvv/7094.html

ctss runoff command, jh saltzer, 8nov1964
http://mit.edu/Saltzer/www/publications/CC-244.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 30 Apr 2006 08:50:08 -0600
jmfbahciv writes:
Sure. But on the -10, a control file was just another user logged in. I had a feeling that IBM treated batch very differently and that a command in a control file was done by different code than the TTY device driver, if I may use that term loosely.

the typical interactive paradigm had assumptions about interacting with a human ... anomalies were either reflected back to the human or dumped into some system default.

the batch paradigm tended to assume that there was no interaction with a human. specifications tended to be more complex, in part because there was no assumption of a human being there to handle anomalies. there tended to be a whole lot more up-front specification ... allowing the batch system to determine whether all required resources could be committed before even starting the sequence of operations (a lot more operations required a significant portion, or all, of the available resources ... potentially all available tape drives and nearly all disk space for correct operation). during execution ... almost any kind of possible anomaly could be specified as being handled by the application programming ... and if no application-specific handler had been specified ... then it went with some system default. this separated the specification of application-required resources from the application's use of the actual resources.

batch processing application specification didn't translate well into interactive use ... since there was this assumption that the required resources had to be specified in some detail before specifying the execution of the application. for any sort of interactive use ... they attempted to create an intermediate layer that basically intercepted commands and then provided some set of default resource specifications before invoking the application (and possibly defining some number of hooks/exits for various kinds of anomalous processing which might be reflected back to the user).

there are sporadic threads in some of the mainframe discussions about being able to use REXX as a "job control language" (the infrastructure for specifying required resources) as opposed to invoking REXX as an application ... which, then, in turn processes a command file.

REXX was originally "REX" internally in the late 70s on vm/cms ... and sometime later made available as vm/cms product REXX ... and subsequently ported to a number of other platforms. some number of posts that make some reference to doing applications in REX
https://www.garlic.com/~lynn/submain.html#dumprx

rexx language association web page
http://www.rexxla.org/

some trivia drift ... two people had done a new and improved FAPL in the early 80s ... one of them was the person also responsible for REXX.

recent post that mentions FAPL
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe

somewhat random pages from search engine mentioning FAPL
http://www.hps.com/~tpg/resume/index.php?file=IBM

more detailed discussion of fapl
http://www.research.ibm.com/journal/rd/271/ibmrd2701K.pdf

Date: 05/06/85 16:52:04
From: wheeler

re: sna; interesting note in this month's sigops. Somebody at CMU implemented LU6.2 under UNIX4.2 ...

Rosenberg reported on an implementation of an SNA node consisting of LU 6.2 and PU T2.1 protocols. (These protocols cover approximately OSI layers 2 through 6.) The implementation was made on UNIX 4.2. About 85% of the code was generated automatically from the FAPL meta-description of SNA. The following problems were reported:

1. The protocol code is large, and thus cannot run in the kernel space. Consequently, communication between user program and the node (processor executing the SNA code) is more complex and slower than if the node were part of the kernel. In addition, error recovery proved tricky.

2. The SNA node must simulate the UNIX sockets, which are full duplex and place no restriction on the states of the two conversants. The SNA node uses a half-duplex, flip-flop protocol, where the states of the two conversants must remain synchronized. To match the two required an extension to SNA.

The implementation is now complete and is actually used to drive a 3820 printer, which is a SNA device


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PDS Directory Question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDS Directory Question
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 30 Apr 2006 09:15:15 -0600
Shmuel Metz, Seymour J. wrote:
That gets complicated. SSS, MSS and MPS were announced concurrently, but there were significant redesigns in the evolutions SSS->PCP, MSS->MFT->MFT II and MPS->VMS->MVT. I believe that OS/360 had stabilized at PCP, MFT and MVT by the time IEBCOPY came along; it definitely ran under PCP. The AOS (OS/VS) version had some significant enhancements, but you could zap it to run under OS/360.

Of course, before IEBCOPY you had IEHMOVE. Don't ask, don't tell.


I don't remember anything but PCP in os/360 release 6 that I was playing with. the MFT sysgen option was available by os/360 release 9.5 that I played with (but I don't remember which release it first appeared in). I believe the MVT sysgen option appeared in os/360 release 12. I had done an os/360 release 11 sysgen and then skipped 12 and 13 ... and then did an os/360 release 14 sysgen ... but chose to stick with MFT (waiting for more of the MVT bugs to shake out). I remember some customers running os/360 release 13 MVT ... but also have some vague recollection of the sysgen option first appearing in release 12. I finally did an MVT sysgen with os/360 release 15/16. os/360 release 15/16 was also the first release that allowed you to specify the cylinder for the vtoc (which I had been asking for since I started doing careful file & member disk layout for optimal arm seek in release 11).

a couple of minor references to a talk I gave at fall68 share in Atlantic City ... about both reworking stage-2 sysgen to carefully control disk layout (optimizing arm seek) and a bunch of pathlength work I had done on cp67 (at the university). stage-2 included a bunch of different steps that used both iehmove and iebcopy. carefully controlling file layout required some rework of the sequence of stage-2 steps. to carefully control member placement (in a pds) required reordering the move/copy member statements.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
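
purely as a toy illustration of the member-placement idea above (member names, sizes, and reference counts are all invented, and this is not the actual stage-2 rework), a small python sketch of why copying frequently referenced members first reduces reference-weighted arm travel from the directory:

    # toy model: members occupy consecutive tracks starting at the directory
    # (track 0), and each reference costs a seek proportional to the member's
    # distance from the directory.
    members = {            # invented (tracks occupied, references per day)
        "HOTMOD":  (2, 500),
        "SORTMOD": (5, 250),
        "LINKMOD": (3, 120),
        "COLDMOD": (10, 3),
    }

    def weighted_seek(order):
        # reference-weighted average distance from the directory (track 0)
        pos, total_dist, total_refs = 0, 0, 0
        for name in order:
            tracks, refs = members[name]
            total_dist += pos * refs      # seek to the start of this member
            total_refs += refs
            pos += tracks
        return total_dist / total_refs

    arbitrary = ["COLDMOD", "LINKMOD", "HOTMOD", "SORTMOD"]
    by_frequency = sorted(members, key=lambda m: members[m][1], reverse=True)

    print("arbitrary copy order :", round(weighted_seek(arbitrary), 2))
    print("frequency ordered    :", round(weighted_seek(by_frequency), 2))

with these invented numbers, the arbitrary order averages roughly eight times the arm travel of the frequency-ordered placement.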

Sarbanes-Oxley

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Sarbanes-Oxley
Newsgroups: bit.listserv.ibm-main
Date: Sun, 30 Apr 2006 09:23:11 -0600
Phil Payne wrote:
Sorry if you've already done all the work...

http://news.independent.co.uk/business/news/article360919.ece

"Tough, post-Enron company rules in the US may have to be relaxed to stem a flow of listings from New York to London, an American diplomat said yesterday.

Rushed through in 2002, following a wave of corporate scandals, the controversial Sarbanes-Oxley legislation lays down tough accounting rules designed to ensure that public companies make fuller disclosure of their financial position.

It prompted an angry reaction from this side of the Atlantic because European companies that are listed in the US are affected even if they comply with their own domestic requirements."


i was at a financial industry conference (including some of the european exchanges) in europe late last fall ... where a number of the (european) corporate executives spent a lot of time discussing the sarbanes-oxley issue. recent posts discussing the subject ... towards the end
https://www.garlic.com/~lynn/2006.html#12a sox, auditing, finding improprieties
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 30 Apr 2006 10:07:00 -0600
jmfbahciv writes:
Heh. That sounds like over-managing. I recall people spending a lot of time, money and arguing about being "fair". For all the energy, now that we're measuring in Joules, spent being fair, more work would have been done for all users in less time which is the most fair of all.

round-robin can be fair ... assuming a relatively homogeneous workload. it is when you get into all sorts of heterogeneous workloads that things get tricky. also, round-robin tends to have overhead expense associated with switching (improvements in fairness shouldn't be offset by energy lost in the switching).

the other issue is shortest-job-first. if you have a large queue ... and there are people associated with the items in the queue ... then shortest-job-first reduces the avg. waiting time and the avg. length of the queue. you can somewhat see this at grocery checkouts that have some sort of "fast" lane.

one trick is balancing shortest-job-first (to minimize avg. waiting time and queue length) against fairness. a frequent problem, if you don't have really good, explicit shortest-job-first rules ... is that users can scam the system by carefully packaging their work to take advantage of the shortest-job-first rules ... and get more than their fair share ... aka people trying to use the 9-item checkout lane when they have a full basket ... or jumping their position in a line.
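
a small sketch of the shortest-job-first claim (the job service times are invented): with all jobs queued at time zero, serving the shortest first minimizes the average waiting time compared with first-in-first-out order:

    jobs = [30, 2, 7, 1, 15, 4]   # invented service times, in arrival order

    def avg_wait(order):
        # average time each job spends waiting before its own service starts
        elapsed, waits = 0, []
        for service_time in order:
            waits.append(elapsed)
            elapsed += service_time
        return sum(waits) / len(waits)

    print("FIFO (arrival order):", avg_wait(jobs))
    print("shortest-job-first  :", avg_wait(sorted(jobs)))

with these invented numbers, FIFO averages about 32.7 time units of waiting versus 9.0 for shortest-job-first ... same total work, much shorter average wait and queue.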

a law enforcement officer claimed that at least 30 percent of the population will regularly attempt to circumvent/violate rules and laws for their own advantage (of course, the population sample he deals with may be biased).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
