From will.senn at gmail.com Sun Jun 2 11:59:42 2024 From: will.senn at gmail.com (Will Senn) Date: Sat, 1 Jun 2024 20:59:42 -0500 Subject: [TUHS] Old documentation - still the best Message-ID: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> A small reflection on the marvels of ancient writing... Today, I went to the local Unix user group to see what that was like. I was pleasantly surprised to find it quite rewarding. Learned some new stuff... and won the door prize, a copy of a book entitled "Introducing the UNIX System" by Henry McGilton and Rachel Morgan. I accepted the prize, but said I'd just read it and recycle it for some other deserving unix-phile. As it turns out, I'm not giving it back; I'll contribute another Unix book. I thought it was just some intro unix text and figured I might learn a thing or two and let someone else who needs it more have it after I read it, but it's a V7 book! I haven't seen many of those around, and so I started digging into it, and do I ever wish I'd had it when I was first trying to figure stuff out! Great book, never heard of it, or its authors, but hey, I've only read a few thousand tech books. What was really fun was where I went from there - the authors mentioned some bit about permuted indexes and the programmer's manual... So, I went and grabbed my copy off the shelf and lo and behold, my copy either doesn't have a permuted index or I'm not finding it. I was crushed. But, while I was digging around the manual, I came across Section 9 - Quick UNIX Reference! Are you kidding me?!! How many years has it taken me to gain what knowledge I have? And here, in 20 pages, is the most concise reference manual I've ever seen. Just the SH, TROFF and NROFF sections are worth the effort of digging up this 40-year-old text. Anyhow, following on the heels of a recent dive into v7 and Ritchie's setting up unix v7 documentation, I was yet again reminded of the golden age of well-written technical documents. 
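For anyone who hasn't run into one, the permuted (KWIC) index mentioned above can be roughly approximated with standard tools. This is only a sketch in the spirit of ptx(1), using a made-up sample line; the real ptx also folds case, skips noise words from a stop list, and aligns the keyword column more cleverly:

```shell
# Toy permuted (KWIC) index: emit one line per word of the input,
# rotated so each keyword starts in a fixed column.
printf '%s\n' 'interprocess communication in unix' |
awk '{
    for (i = 1; i <= NF; i++) {           # one output line per keyword
        before = ""; after = ""
        for (j = 1; j < i; j++)  before = before $j " "
        for (j = i; j <= NF; j++) after  = after  $j " "
        printf "%32s  %s\n", before, after
    }
}'
```

Fed the whole set of man-page NAME lines instead of one sample, and sorted on the keyword column, this produces something recognizably like the index in the back of the V7 manual.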
Oh and I guess my recent perusal of more modern "heavyweight" texts (heavy by weight, not content, and many hundreds of pages long) might have made me more appreciative of concision - I long for the days of 300-page and shorter technical books :). In case you think I overstate - I just got through a pair of Tcl/Tk books together clocking in at 1,565 pages. Thank you, Henry McGilton, Rachel Morgan, Dennis Ritchie, Steve Bourne, and other folks of the '70s and '80s, for keeping it concise. As a late-to-the-party unix enthusiast, I greatly value your work and am really thankful you didn't write like they do now... Later, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Sun Jun 2 12:31:41 2024 From: will.senn at gmail.com (Will Senn) Date: Sat, 1 Jun 2024 21:31:41 -0500 Subject: [TUHS] Proliferation of book print styles Message-ID: Today, as I was digging more into nroff/troff and such, and bemoaning the lack of brevity of modern texts, I got to thinking about the old days and what might have gone wrong with book production that got us where we are today. First, I wanna ask, tongue in cheek, sort of... As the inventors and early pioneers in the area of moving from typesetters to print on demand... do you feel a bit like the Manhattan project - did you maybe put too much power into the hands of folks who probably shouldn't have that power? But seriously, I know the period of time where we went from hot metal typesetting to the digital era was an eyeblink in history, but do y'all recall how it went down? Were you surprised when folks settled on word processors in favor of markup? Do you think we've progressed in the area of ease of creating documentation and printing it, making it viewable and accurate, since 1980? I didn't specifically mention unix, but unix history is forever bound to the evolution of documents and printing, so I figure it's fair game for TUHS and isn't yet COFF :). 
Later, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.martin.yardley at gmail.com Sun Jun 2 12:44:59 2024 From: peter.martin.yardley at gmail.com (Peter Yardley) Date: Sun, 2 Jun 2024 12:44:59 +1000 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> Hi, My early days were spent in the electronics industry. I can remember receiving 3 pallets of data books from National Semiconductor. This happened every year. The Internet and the availability of on line documentation put a stop to that. It was a revolution. > On 2 Jun 2024, at 12:31 PM, Will Senn wrote: > > Today, as I was digging more into nroff/troff and such, and bemoaning the lack of brevity of modern text. I got to thinking about the old days and what might have gone wrong with book production that got us where we are today. > > First, I wanna ask, tongue in cheek, sort of... As the inventors and early pioneers in the area of moving from typesetters to print on demand... do you feel a bit like the Manhattan project - did you maybe put too much power into the hands of folks who probably shouldn't have that power? > > But seriously, I know the period of time where we went from hot metal typesetting to the digital era was an eyeblink in history but do y'all recall how it went down? Were you surprised when folks settled on word processors in favor of markup? Do you think we've progressed in the area of ease of creating documentation and printing it making it viewable and accurate since 1980? > > I didn't specifically mention unix, but unix history is forever bound to the evolution of documents and printing, so I figure it's fair game for TUHS and isn't yet COFF :). 
> > Later, > > Will Peter Yardley peter.martin.yardley at gmail.com From lm at mcvoy.com Sun Jun 2 12:53:58 2024 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 1 Jun 2024 19:53:58 -0700 Subject: [TUHS] Old documentation - still the best In-Reply-To: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> Message-ID: <20240602025358.GC30164@mcvoy.com> Good writing is an art form. I used to be awful, then I met Udi Manber and did some work with him. When I told him I struggled to write a good paper (I was either a senior or a grad student, so not a lot of practice) he was flabbergasted and said "writing papers is easy". I said "do tell". Here is what he told me: A) You have to know what you are writing about, no amount of writing chops will cover up a lack of knowledge. B) You have to have a good outline. Organize what you want to tell people and get it in the right order and with the right level of detail. The outline is like the skeleton of a ship. Once you have that, you are just nailing on boards. Same thing for a paper. A good outline and good knowledge, now you are just typing and filling in the details. But to get back to your point, Will, great writing is all of that but just enough words, no more, no less. You need skill to do that but you also need to care about what you are writing, it's easy to write crap if you don't care. It's hard to write well, even with all skills, I used to call writing being mentally constipated, the good stuff didn't come out easily. The early Unix papers were very well written. The Bell Labs technical journal papers about Unix are fantastic in my opinion. On Sat, Jun 01, 2024 at 08:59:42PM -0500, Will Senn wrote: > A small reflection on the marvels of ancient writing... > > Today, I went to the local Unix user group to see what that was like. I was > pleasantly surprised to find it quite rewarding. Learned some new stuff... 
> and won the door prize, a copy of a book entitled "Introducing the UNIX > System" by Henry McGilton and Rachel Morgan. I accepted the prize, but said > I'd just read it and recycle it for some other deserving unix-phile. As it > turns out, I'm not giving it back, I'll contribute another Unix book. I > thought it was just some intro unix text and figured I might learn a thing > or two and let someone else who needs it more have it after I read it, but > it's a V7 book! I haven't seem many of those around and so, I started > digging into it and do I ever wish I'd had it when I was first trying to > figure stuff out! Great book, never heard of it, or its authors, but hey, > I've only read a few thousand tech books. > > What was really fun, was where I went from there - the authors mentioned > some bit about permuted indexes and the programmer's manual... So, I went > and grabbed my copy off the shelf and lo and behold, my copy either doesn't > have a permuted index or I'm not finding it, I was crushed. But, while I was > digging around the manual, I came across Section 9 - Quick UNIX Reference! > Are you kidding me?!! How many years has it taken me to gain what knowledge > I have? and here, in 20 pages is the most concise reference manual I've ever > seen. > > Just the SH, TROFF and NROFF sections are worth the effort of digging up > this 40 year old text. > > Anyhow, following on the heels of a recent dive into v7 and Ritchie's > setting up unix v7 documentation, I was yet again reminded of the golden age > of well written technical documents. Oh and I guess my recent perusal of > more modern "heavy weight" texts (heavy by weight, not content, and many > hundreds of pages long) might have made me more appreciative of concision - > I long for the days of 300 page and shorter technical books :). In case you > think I overstate - just got through a pair of TCL/TK books together > clocking in at 1565 pages. 
> > Thank you Henry McGilton, Rachel Morgan, and Dennis Ritchie and Steve Bourne > and other folks of the '70s and '80s for keeping it concise. As a late to > the party unix enthusiast, I greatly value your work and am really thankful > you didn't write like they do now... > > Later, > > Will > > > > > -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From tuhs at tuhs.org Sun Jun 2 13:02:46 2024 From: tuhs at tuhs.org (Grant Taylor via TUHS) Date: Sat, 1 Jun 2024 22:02:46 -0500 Subject: [TUHS] Old documentation - still the best In-Reply-To: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> Message-ID: On 6/1/24 20:59, Will Senn wrote: > ... "Introducing the UNIX System" by Henry McGilton and Rachel Morgan Would you please share the ISBN for the book? It looks like there may be two different covers and I'm curious which one you're referring to. -- Grant. . . . -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4033 bytes Desc: S/MIME Cryptographic Signature URL: From will.senn at gmail.com Sun Jun 2 13:12:47 2024 From: will.senn at gmail.com (Will Senn) Date: Sat, 1 Jun 2024 22:12:47 -0500 Subject: [TUHS] Old documentation - still the best In-Reply-To: References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> Message-ID: <5671fc3a-31b4-4087-b2ec-f209c1054332@gmail.com> On 6/1/24 10:02 PM, Grant Taylor via TUHS wrote: > On 6/1/24 20:59, Will Senn wrote: >> ... "Introducing the UNIX System" by Henry McGilton and Rachel Morgan > > Would you please share the ISBN for the book? > > It looks like there may be two different covers and I'm curious which > one you're referring to. 
> > 0-07-045001-3 1983 - McGraw Hill Black cover, gray and white text, orangish boxes: https://www.goodreads.com/book/show/2398130.Introducing_the_UNIX_System Mine doesn't have the "A BYTE BOOK" or "McGraw-Hill Software Series for Computer Professionals" headings printed on the top, but otherwise it's the same book -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.bowling at kev009.com Sun Jun 2 14:03:47 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Sat, 1 Jun 2024 21:03:47 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: On Sat, Jun 1, 2024 at 7:31 PM Will Senn wrote: > > Today, as I was digging more into nroff/troff and such, and bemoaning the lack of brevity of modern text. I got to thinking about the old days and what might have gone wrong with book production that got us where we are today. > > First, I wanna ask, tongue in cheek, sort of... As the inventors and early pioneers in the area of moving from typesetters to print on demand... do you feel a bit like the Manhattan project - did you maybe put too much power into the hands of folks who probably shouldn't have that power? > > But seriously, I know the period of time where we went from hot metal typesetting to the digital era was an eyeblink in history but do y'all recall how it went down? Were you surprised when folks settled on word processors in favor of markup? Do you think we've progressed in the area of ease of creating documentation and printing it making it viewable and accurate since 1980? > > I didn't specifically mention unix, but unix history is forever bound to the evolution of documents and printing, so I figure it's fair game for TUHS and isn't yet COFF :). > > Later, > > Will I think your other topic is closely related but I chose this one to reply to. I own something well north of 10,000 technical and engineering books so I will appoint myself as an amateur librarian. 
When I was younger, I had the false notion that anything new is good. This attitude permeates a lot of society, including professional libraries. They have a lot of collection management practices around deciding what and when to pitch something, and a big one is whether the work is still in print, while a more sophisticated collection will also take into account circulation numbers (how often it is checked out). A lot of that is undoubtedly the real costs surrounding storing and displaying something (an archived book has a marginal cost, a publicly accessible displayed book presumably has a higher associated cost) as well as the desire to remain current and provide value to the library's membership. From what I have seen, there isn't much notion of retaining or promoting a particular work unless it remains in print. As an example, K&R C is still in print and would be retained by most libraries. The whole thing becomes a bit of an ouroboros because that leads to more copies being printed, and it remaining in collections, and being read. Obviously, this is a case of a great piece of work benefiting from the whole ordeal. But for more niche topics, that kind of feedback loop doesn't happen. So the whole thing comes down like a house of cards... the publisher guesses how many books to print, a glut of them are produced, they enter circulation, and then it goes out of print in a few years. A few years later it is purged from the public libraries. As an end user, one benefit to this collapse is that used books are basically flooded into the market and you can get many books for a fraction of their retail price used... but it becomes difficult to know _what_ to get if you don't have an expert guide or somewhere to browse and select for yourself. So why does this all matter to your more meta question of why fewer great books? There is little to no money in it nowadays for authors. 
The above example of library circulation represented a large number of guaranteed sales to wealthy institutions (academic and government = wealth, don't let them pretend otherwise). Except now many libraries have downsized their physical collections to make room for multimedia or just lower density use of space. So there are fewer guaranteed sales. Another facet of the same coin: one reason printed books are great has to do with the team surrounding their production. If you look near the colophon, you will often find a textbook will have quite a few people involved in moving a manuscript to production. This obviously costs a lot of money. As things move more to ebook and print on demand, it's an obvious place to cut publishing expenses and throw all the work directly onto the author. That may result in cheaper books and maybe(?) more revenue for the author, but it won't have the same quality that a professional publishing team can bring to the table. As to my deliberate decision to accumulate the dead trees and ink, it's because although online docs are great, I find my best learning is offline, while I use the online docs more like mental jogs for a particular API or refamiliarizing myself with the problem domain. I have some grandiose ambitions that first involve a large scanning project, but that will have to await more self-funding. Regards, Kevin From tuhs at tuhs.org Sun Jun 2 14:34:51 2024 From: tuhs at tuhs.org (Grant Taylor via TUHS) Date: Sat, 1 Jun 2024 23:34:51 -0500 Subject: [TUHS] Old documentation - still the best In-Reply-To: <5671fc3a-31b4-4087-b2ec-f209c1054332@gmail.com> References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> <5671fc3a-31b4-4087-b2ec-f209c1054332@gmail.com> Message-ID: On 6/1/24 22:12, Will Senn wrote: > 0-07-045001-3 > 1983 - McGraw Hill > Black cover, gray and white text, orangegish boxes: > https://www.goodreads.com/book/show/2398130.Introducing_the_UNIX_System Thank you. 
> Mine doesn't have the "A BYTE BOOK" or "McGraw-Hill Software Series for > Computer Professionals headings" printed on the top, but otherwise it's > the same book :-) -- Grant. . . . -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4033 bytes Desc: S/MIME Cryptographic Signature URL: From tuhs at tuhs.org Sun Jun 2 17:28:06 2024 From: tuhs at tuhs.org (Scot Jenkins via TUHS) Date: Sun, 02 Jun 2024 03:28:06 -0400 Subject: [TUHS] Old documentation - still the best In-Reply-To: <5671fc3a-31b4-4087-b2ec-f209c1054332@gmail.com> References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> <5671fc3a-31b4-4087-b2ec-f209c1054332@gmail.com> Message-ID: <202406020728.4527S7YV006057@sdf.org> Will Senn wrote: > On 6/1/24 10:02 PM, Grant Taylor via TUHS wrote: > > On 6/1/24 20:59, Will Senn wrote: > >> ... "Introducing the UNIX System" by Henry McGilton and Rachel Morgan This is a great book, and much of it is still relevant today. The editor tutorials and the document formatting chapters are outstanding. > > Would you please share the ISBN for the book? > > > > It looks like there may be two different covers and I'm curious which > > one you're referring to. > > > > > > > 0-07-045001-3 > 1983 - McGraw Hill > Black cover, gray and white text, orangegish boxes: > https://www.goodreads.com/book/show/2398130.Introducing_the_UNIX_System > > Mine doesn't have the "A BYTE BOOK" or "McGraw-Hill Software Series for > Computer Professionals headings" printed on the top, but otherwise it's > the same book FWIW, my copy does have the "A BYTE BOOK" and the "McGraw-Hill Software Series for Computer Professionals" headings, and has the same ISBN and publish date (1983), 556 total pages including the index. 
The book doesn't appear to have any printing version on the copyright page, just this above the ISBN, which I have no idea what it means: 12 13 14 15 DODO 898765 scot From mrochkind at gmail.com Sun Jun 2 18:08:08 2024 From: mrochkind at gmail.com (Marc Rochkind) Date: Sun, 2 Jun 2024 11:08:08 +0300 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: True enough, Kevin, but with the decline of printed books and the increase in online docs, I rarely find what I'm looking for in a printed book and, when I think I have, the price is very high for what may turn out to be a bad guess. Browsing a bookstore for serious computer books is no longer possible, except maybe in very large cities. For example, for an upcoming project I need up-to-date and authoritative information on Kotlin and AWS S3 APIs. Living in the past, I find, is no help! Marc Rochkind (author of the first book on UNIX programming) On Sun, Jun 2, 2024, 7:12 AM Kevin Bowling wrote: > On Sat, Jun 1, 2024 at 7:31 PM Will Senn wrote: > > > > Today, as I was digging more into nroff/troff and such, and bemoaning > the lack of brevity of modern text. I got to thinking about the old days > and what might have gone wrong with book production that got us where we > are today. > > > > First, I wanna ask, tongue in cheek, sort of... As the inventors and > early pioneers in the area of moving from typesetters to print on demand... > do you feel a bit like the Manhattan project - did you maybe put too much > power into the hands of folks who probably shouldn't have that power? > > > > But seriously, I know the period of time where we went from hot metal > typesetting to the digital era was an eyeblink in history but do y'all > recall how it went down? Were you surprised when folks settled on word > processors in favor of markup? Do you think we've progressed in the area of > ease of creating documentation and printing it making it viewable and > accurate since 1980? 
> > > > I didn't specifically mention unix, but unix history is forever bound to > the evolution of documents and printing, so I figure it's fair game for > TUHS and isn't yet COFF :). > > > > Later, > > > > Will > > I think your other topic is closely related but I chose this one to reply > to. > > I own something well north of 10,000 technical and engineering books > so I will appoint myself as an amateur librarian. > > When I was younger, I had the false notion that anything new is good. > This attitude permates a lot of society. Including professional > libraries. They have a lot of collection management practices around > deciding what and when to pitch something and a big one is whether the > work is still in print, while a more sophisticated collection will > also take into account circulation numbers (how often it is checked > out). A lot of that is undoubtedly the real costs surrounding storing > and displaying something (an archived book has a marginal cost, a > publically accessible displayed book presumably has a higher > associated cost) as well as the desire to remain current and provide > value to the library's membership. > > From what I have seen, there isn't much notion of retaining or > promoting a particular work unless it remains in print. As an > example, K&R C is still in print and would be retained by most > libraries. The whole thing becomes a bit ouroboros because that leads > to more copies being printed, and it remaining in collections, and > being read. Obviously, this is a case of a great piece of work > benefiting from the whole ordeal. But for more niche topics, that > kind of feedback loop doesn't happen. So the whole thing comes down > in a house of cards... the publisher guesses how many books to print, > a glut of them are produced, they enter circulation, and then it goes > out of print in a few years. A few years later it is purged from the > public libraries. 
As an end user, one benefit to this collapse is > that used books are basically flooded into the market and you can get > many books for a fraction of their retail price used.. but it becomes > difficult to know _what_ to get if you don't have an expert guide or > somewhere to browse and select for yourself. > > So why does this all matter to your more meta question of why less > great books? There is less to no money in it nowadays for authors. > The above example of library circulation represented a large number of > guaranteed sales to wealthy institutions (academic and government = > wealth, don't let them pretend otherwise). Except now many libraries > have downsized their physical collections to make room for multimedia > or just lower density use of space. So there are less guaranteed > sales. > > Another facet of the same coin, one reason printed books are great has > to do with the team surrounding their production. If you look near > the colophon, you will often find a textbook will have quite a few > people involved in moving a manuscript to production. This obviously > costs a lot of money. As things move more to ebook and print on > demand, it's an obvious place to cut publishing expenses and throw all > the work directly onto the author. That may result in cheaper books > and maybe(?) more revenue for the author, but it won't have the same > quality that a professional publishing team can bring to the table. > > As to my deliberate decision to accumulate the dead trees and ink, > it's because although online docs are great I find my best learning is > offline while I use the online docs more like mental jogs for a > particular API or refamiliarizing myself with the problem domain. I > have some grandeur ambitions that first involve a large scanning > project but that will have to await more self funding. > > Regards, > Kevin > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From douglas.mcilroy at dartmouth.edu Sun Jun 2 22:39:44 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Sun, 2 Jun 2024 08:39:44 -0400 Subject: [TUHS] Proliferation of book print styles Message-ID: > Were you surprised when folks settled on word processors in favor of markup? I'm not sure what you're asking. "Word processor" was a term coming into prominence when Unix was in its infancy. Unix itself was sold to management partly on the promise of using it to make a word processor. All word processors used typewriters and were markup-based. Screens, which eventually enabled WYSIWYG, were not affordable for widespread use. Perhaps the question you meant to ask was whether we were surprised when WYSIWYG took over word-processing for the masses. No, we weren't, but we weren't attracted to it either, because it sacrificed markup's potential for expressing the logical structure of documents and thus fostering portability of text among distinct physical forms, e.g. man pages on terminals and in book form or technical papers as TMs and as journal articles. WYSIWYG was also unsuitable for typesetting math. (Microsoft Word clumsily diverts to a separate markup pane for math.) Moreover, WYSIWYG was out of sympathy with Unix philosophy, as it kept documents in a form difficult for other tools to process for unanticipated purposes. In this regard, I still regret that Luca Cardelli and Mark Manasse moved on from Bell Labs before they finished their dream of Blue, a WYSIWYG editor for markup documents. I don't know yet whether that blue-sky goal is achievable. (.docx may be seen as a ponderous latter-day attempt. Does anyone know whether it has fostered tool use?) Doug -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arnold at skeeve.com Sun Jun 2 22:45:45 2024 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 02 Jun 2024 06:45:45 -0600 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <202406021245.452Cjjmj319179@freefriends.org> Douglas McIlroy wrote: > In this regard, I still regret that Luca Cardelli and Mark > Manasse moved on from Bell Labs before they finished their dream of Blue, a > WYSIWYG editor for markup documents, I don't know yet whether that blue-sky > goal is achievable. lyx does this for LaTeX. It's been around for a long time. See lyx.org. Arnold From will.senn at gmail.com Sun Jun 2 22:55:45 2024 From: will.senn at gmail.com (Will Senn) Date: Sun, 2 Jun 2024 07:55:45 -0500 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <554ffdbc-dd5a-43d2-92aa-11d5d73ed715@gmail.com> On 6/2/24 7:39 AM, Douglas McIlroy wrote: > > Perhaps the question you meant to ask was whether we were surprised > when WYSIWYG took over word-processing for the masses. No, we weren't, > but we weren't attracted to it either, because it sacrificed markup's > potential for expressing the logical structure of documents and thus > fostering portability of text among distinct physical forms, e.g. man > pages on terminals and in book form or  technical papers as TMs and as > journal articles. WYSIWYG was also unsuitable for typesetting math. > (Microsoft Word clumsily diverts to a separate markup pane for math.) > Yup, that's what I was really meaning to ask and what I was hoping to hear about. > Moreover, WYSIWYG was out of sympathy with Unix philosophy, as it kept > documents in a form difficult for other tools to process for > unanticipated purposes, In this regard, I still regret that Luca > Cardelli and Mark Manasse moved on from Bell Labs before they finished > their dream of Blue, a WYSIWYG editor for markup documents, I don't > know yet whether that blue-sky goal is achievable. 
(.docx may be seen > as a ponderous latter-day attempt. Does anyone know whether it has > fostered tool use?) > Interesting, I was wishing for something along those lines after using TeX Studio for a while. A quick preview side by side is nice, but wouldn't it be great to be able to work on the preview side of the pane while the markup side changes (as minimally as possible), showing your changes as you make them, and to be able to switch back and forth? Personally, I prefer troff to TeX, but just the idea of markup and WYSIWYG is enticing. Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Sun Jun 2 23:13:43 2024 From: will.senn at gmail.com (Will Senn) Date: Sun, 2 Jun 2024 08:13:43 -0500 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <46e3144e-7cc8-47e7-a262-483bb21c7bcf@gmail.com> On 6/1/24 11:03 PM, Kevin Bowling wrote: > I think your other topic is closely related but I chose this one to reply to. > > I own something well north of 10,000 technical and engineering books > so I will appoint myself as an amateur librarian. > > When I was younger, I had the false notion that anything new is good. > This attitude permates a lot of society. Including professional > libraries. They have a lot of collection management practices around > deciding what and when to pitch something and a big one is whether the > work is still in print, while a more sophisticated collection will > also take into account circulation numbers (how often it is checked > out). A lot of that is undoubtedly the real costs surrounding storing > and displaying something (an archived book has a marginal cost, a > publically accessible displayed book presumably has a higher > associated cost) as well as the desire to remain current and provide > value to the library's membership. > > From what I have seen, there isn't much notion of retaining or > promoting a particular work unless it remains in print. 
As an > example, K&R C is still in print and would be retained by most > libraries. The whole thing becomes a bit ouroboros because that leads > to more copies being printed, and it remaining in collections, and > being read. Obviously, this is a case of a great piece of work > benefiting from the whole ordeal. But for more niche topics, that > kind of feedback loop doesn't happen. So the whole thing comes down > in a house of cards... the publisher guesses how many books to print, > a glut of them are produced, they enter circulation, and then it goes > out of print in a few years. A few years later it is purged from the > public libraries. As an end user, one benefit to this collapse is > that used books are basically flooded into the market and you can get > many books for a fraction of their retail price used.. but it becomes > difficult to know _what_ to get if you don't have an expert guide or > somewhere to browse and select for yourself. > > So why does this all matter to your more meta question of why less > great books? There is less to no money in it nowadays for authors. > The above example of library circulation represented a large number of > guaranteed sales to wealthy institutions (academic and government = > wealth, don't let them pretend otherwise). Except now many libraries > have downsized their physical collections to make room for multimedia > or just lower density use of space. So there are less guaranteed > sales. > > Another facet of the same coin, one reason printed books are great has > to do with the team surrounding their production. If you look near > the colophon, you will often find a textbook will have quite a few > people involved in moving a manuscript to production. This obviously > costs a lot of money. As things move more to ebook and print on > demand, it's an obvious place to cut publishing expenses and throw all > the work directly onto the author. That may result in cheaper books > and maybe(?) 
more revenue for the author, but it won't have the same > quality that a professional publishing team can bring to the table. > > As to my deliberate decision to accumulate the dead trees and ink, > it's because although online docs are great I find my best learning is > offline while I use the online docs more like mental jogs for a > particular API or refamiliarizing myself with the problem domain. I > have some grandiose ambitions that first involve a large scanning > project but that will have to await more self-funding. > > Regards, > Kevin Thanks. This is really clear and while I'd had similar thoughts, I hadn't thought through the entire supply chain like this. The publishing side is one thing, but the library's role is quite another. I gotta think some more about that - the Matthew Effect, acquisitions, and weeding... Seriously, I never thought about the library's outsized influence on supply. Duh! As for digital materials, I'm pretty sure no one on the list is unaccustomed to reading vast amounts of digital material so would qualify as experienced consumers at the least, producers most likely, and some even experts on the subject. I, for one, read many, many pdf (or convertible to pdf) works every week. Still, I vastly prefer print for serious reading or study. I have learned the value of marking up my text and I find myself writing voluminously alongside much of what I read. It seems like I have to work much harder, cognitively, to retain material that I view online and having my notes disconnected from the corresponding material is frustrating. Gotta print important stuff, no way around it for me. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From will.senn at gmail.com Sun Jun 2 23:50:02 2024 From: will.senn at gmail.com (Will Senn) Date: Sun, 2 Jun 2024 08:50:02 -0500 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <8d091003-2e75-45f1-a233-1a577265c3d7@gmail.com> Marc, it and its successors are great books for sure, thanks for writing them! I like having access to digital works, no complaints about access other than I wish I had access to everything ever written and some way to sort through it all quickly and easily. I'm more inclined to gripe about the quality of the work than its medium. Both the writing quality and the production quality. If the target is pdf, make it a good pdf that when printed is a space-considerate, easy-to-read, and efficient-to-process work, and when its target is screen, do the same. My only real gripe about the medium is the disconnect between quality writing and production, and the unavoidable but hidden nature of proportions that are inherent in the virtual medium. A crazy example... I recently got out my 8086 handbook because I was doing x64 assembly work and couldn't locate what I was looking for in the x64 equivalent 10 volume set online. A quick flip through the pages found what I needed and I was on my way. So, being a thoughtful person ;), I figured it was just a matter of having the book on hand, so I ordered one up... a week later, my x64 "manual" arrived, all 10 volumes in a box about 14 inches tall, and 8 1/2 by 11 and weighing, well, I only picked it up once, but it was friggin' heavy as in bend-the-knees heavy. Anyhow, I dutifully opened it up, pulled out the relevant "book", volume 3 part 3 or something and flipped and flipped and flipped some more and found the 8 pages discussing the same thing covered in a paragraph in the 8086 book. 
Now, I realize that parallel pipelines of AVR 512 SIMPLEX/42 has some impact on the REPNZ command in situations where the quarf rejects the quam, but really, pages for a paragraph, and not because it required pages; they could have single-spaced the document, proportioned the margins to a readable width, put the base cases in prominent positions and put the quarf and quam notes in separate appendices. They didn't - they just keep adding and adding and adding and the page count just keeps growing and growing. Why? Because they can and because folks are hungry for information. I appreciate that they put it out there, but is it ok for me to wish it were of higher quality and to note that the old stuff was better? BTW, I didn't read the 8086 manual back in the day when it was printed; I read it the day after I went looking at the x64 docs. Will On 6/2/24 3:08 AM, Marc Rochkind wrote: > True enough, Kevin, but with the decline of printed books and the > increase in online docs, I rarely find what I'm looking for in a > printed book and, when I think I have, the price is very high for what > may turn out to be a bad guess. Browsing a bookstore for serious > computer books is no longer possible, except maybe in very large cities. > > For example, for an upcoming project I need up-to-date and > authoritative information on Kotlin and AWS S3 APIs. > > Living in the past, I find, is no help! > > Marc Rochkind > (author of the first book on UNIX programming) > > On Sun, Jun 2, 2024, 7:12 AM Kevin Bowling > wrote: > > On Sat, Jun 1, 2024 at 7:31 PM Will Senn wrote: > > > > Today, as I was digging more into nroff/troff and such, and > bemoaning the lack of brevity of modern text. I got to thinking > about the old days and what might have gone wrong with book > production that got us where we are today. > > > > First, I wanna ask, tongue in cheek, sort of... As the inventors > and early pioneers in the area of moving from typesetters to print > on demand... 
do you feel a bit like the Manhattan project - did > you maybe put too much power into the hands of folks who probably > shouldn't have that power? > > > > But seriously, I know the period of time where we went from hot > metal typesetting to the digital era was an eyeblink in history > but do y'all recall how it went down? Were you surprised when > folks settled on word processors in favor of markup? Do you think > we've progressed in the area of ease of creating documentation and > printing it making it viewable and accurate since 1980? > > > > I didn't specifically mention unix, but unix history is forever > bound to the evolution of documents and printing, so I figure it's > fair game for TUHS and isn't yet COFF :). > > > > Later, > > > > Will > > I think your other topic is closely related but I chose this one > to reply to. > > I own something well north of 10,000 technical and engineering books > so I will appoint myself as an amateur librarian. > > When I was younger, I had the false notion that anything new is good. > This attitude permates a lot of society.  Including professional > libraries.  They have a lot of collection management practices around > deciding what and when to pitch something and a big one is whether the > work is still in print, while a more sophisticated collection will > also take into account circulation numbers (how often it is checked > out).  A lot of that is undoubtedly the real costs surrounding storing > and displaying something (an archived book has a marginal cost, a > publically accessible displayed book presumably has a higher > associated cost) as well as the desire to remain current and provide > value to the library's membership. > > From what I have seen, there isn't much notion of retaining or > promoting a particular work unless it remains in print.  As an > example, K&R C is still in print and would be retained by most > libraries.  
The whole thing becomes a bit ouroboros because that leads > to more copies being printed, and it remaining in collections, and > being read.  Obviously, this is a case of a great piece of work > benefiting from the whole ordeal.  But for more niche topics, that > kind of feedback loop doesn't happen.  So the whole thing comes down > in a house of cards... the publisher guesses how many books to print, > a glut of them are produced, they enter circulation, and then it goes > out of print in a few years.  A few years later it is purged from the > public libraries.  As an end user, one benefit to this collapse is > that used books are basically flooded into the market and you can get > many books for a fraction of their retail price used.. but it becomes > difficult to know _what_ to get if you don't have an expert guide or > somewhere to browse and select for yourself. > > So why does this all matter to your more meta question of why less > great books?  There is less to no money in it nowadays for authors. > The above example of library circulation represented a large number of > guaranteed sales to wealthy institutions (academic and government = > wealth, don't let them pretend otherwise).  Except now many libraries > have downsized their physical collections to make room for multimedia > or just lower density use of space.  So there are less guaranteed > sales. > > Another facet of the same coin, one reason printed books are great has > to do with the team surrounding their production.  If you look near > the colophon, you will often find a textbook will have quite a few > people involved in moving a manuscript to production.  This obviously > costs a lot of money.  As things move more to ebook and print on > demand, it's an obvious place to cut publishing expenses and throw all > the work directly onto the author.  That may result in cheaper books > and maybe(?) 
more revenue for the author, but it won't have the same > quality that a professional publishing team can bring to the table. > > As to my deliberate decision to accumulate the dead trees and ink, > it's because although online docs are great I find my best learning is > offline while I use the online docs more like mental jogs for a > particular API or refamiliarizing myself with the problem domain.  I > have some grandeur ambitions that first involve a large scanning > project but that will have to await more self funding. > > Regards, > Kevin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aek at bitsavers.org Mon Jun 3 00:31:20 2024 From: aek at bitsavers.org (Al Kossow) Date: Sun, 2 Jun 2024 07:31:20 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <78afa540-b09a-5f62-2764-c17aa9f4aec3@bitsavers.org> On 6/2/24 5:39 AM, Douglas McIlroy wrote: >> Were you surprised when folks settled on word processors in favor of markup? I was disappointed the world tolerates the fugly typography of web pages. Hundreds of years of readability knowledge thrown out the window. From stuff at riddermarkfarm.ca Mon Jun 3 00:48:28 2024 From: stuff at riddermarkfarm.ca (Stuff Received) Date: Sun, 2 Jun 2024 10:48:28 -0400 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: On 2024-06-02 08:39, Douglas McIlroy wrote (in part): > Perhaps the question you meant to ask was whether we were surprised when > WYSIWYG took over word-processing for the masses. No, we weren't, but we > weren't attracted to it either, because it sacrificed markup's potential > for expressing the logical structure of documents and thus fostering > portability of text among distinct physical forms, e.g. man pages on > terminals and in book form or  technical papers as TMs and as journal > articles. WYSIWYG was also unsuitable for typesetting math. 
(Microsoft > Word clumsily diverts to a separate markup pane for math.) I liken suffering through WYSIWYG for math to searching through drawers of movable type pieces for the desired piece. Some time ago, I read a nice article titled "What you see is all you get" but I cannot find the link (and Google fails me miserably). Found this, though: What has WYSIWYG done to us: https://web.archive.org/web/20050207015413/http://www.ideography.co.uk/library/seybold/WYSIWYG.html S. From e5655f30a07f at ewoof.net Mon Jun 3 01:21:43 2024 From: e5655f30a07f at ewoof.net (Michael Kjörling) Date: Sun, 2 Jun 2024 15:21:43 +0000 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <80313ef5-2617-4150-8869-7c09c35e20aa@home.arpa> On 2 Jun 2024 08:39 -0400, from douglas.mcilroy at dartmouth.edu (Douglas McIlroy): > In this regard, I still regret that Luca Cardelli and Mark > Manasse moved on from Bell Labs before they finished their dream of Blue, a > WYSIWYG editor for markup documents, I don't know yet whether that blue-sky > goal is achievable. (.docx may be seen as a ponderous latter-day attempt. > Does anyone know whether it has fostered tool use?) Does Markdown count? Especially when combined with LaTeX support for typesetting math, it's probably quite good enough for most people's needs outside of niche applications; and there are WYSIWYG editors (not just text editors with a preview, but actual WYSIWYG editors) which use Markdown as the storage format. Of course, what Markdown very specifically does _not_ even try to do is provide any strong presentation guarantees. In that sense, it's quite a lot like early HTML. (And that, naturally, results in people doing things like using different heading levels not to represent the document outline, but rather because the result renders as what they feel is an "appropriate" text size at that point in the document.) 
-- Michael Kjörling 🔗 https://michael.kjorling.se “Remember when, on the Internet, nobody cared that you were a dog?” From douglas.mcilroy at dartmouth.edu Mon Jun 3 01:52:52 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Sun, 2 Jun 2024 11:52:52 -0400 Subject: [TUHS] Old documentation - still the best Message-ID: I keep Lomuto and Lomuto, "A Unix Primer", Prentice-Hall (1983) on my shelf, not as a reference, but because I like to savor the presentation. The Lomutos manage to impart the Unix ethos while maintaining focus on the title in a friendly style that is nevertheless succinct and accurate. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph at inputplus.co.uk Mon Jun 3 03:44:31 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Sun, 02 Jun 2024 18:44:31 +0100 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <20240602174431.561FC1F9AA@orac.inputplus.co.uk> Hi S., > Some time ago, I read a nice article titled "What you see is all you > get" but I cannot find the link (and Google fails me miserably). Could it have been ‘Text processing vs word processors’ from Peter Schaffter, the author of the troff mom macros. It starts with ‘When you use a word processor, your screen persistently displays an updated image of the finished document. Word for word, line for line, What You See Is What You Get.’ — https://schaffter.ca/mom/mom-02.html His -mom, have to be careful here, covers a lot of ground. https://schaffter.ca/mom/mom-01a.html -- Cheers, Ralph. 
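Doug's remark upthread that WYSIWYG was unsuitable for typesetting math is easy to appreciate next to the markup it displaced. A tiny eqn fragment (the troff preprocessor; the surrounding -ms boilerplate is omitted here) reads almost like the equation spoken aloud:

```troff
.EQ
sum from i=1 to n ~ i ~=~ { n ( n + 1 ) } over 2
.EN
```

Run through the classic pipeline, something like `eqn paper.ms | troff -ms`, this sets the sum with proper limits and a built-up fraction; no drawers of type pieces and no separate math pane.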
From ake.nordin at netia.se Mon Jun 3 06:22:39 2024 From: ake.nordin at netia.se (Åke Nordin) Date: Sun, 2 Jun 2024 22:22:39 +0200 Subject: [TUHS] Proliferation of book print styles In-Reply-To: <80313ef5-2617-4150-8869-7c09c35e20aa@home.arpa> References: <80313ef5-2617-4150-8869-7c09c35e20aa@home.arpa> Message-ID: On 2024-06-02 17:21, Michael Kjörling wrote: > On 2 Jun 2024 08:39 -0400, from douglas.mcilroy at dartmouth.edu (Douglas McIlroy): >> In this regard, I still regret that Luca Cardelli and Mark >> Manasse moved on from Bell Labs before they finished their dream of Blue, a >> WYSIWYG editor for markup documents, I don't know yet whether that blue-sky >> goal is achievable. (.docx may be seen as a ponderous latter-day attempt. >> Does anyone know whether it has fostered tool use?) > Does Markdown count? > > Especially when combined with LaTeX support for typesetting math, it's > probably quite good enough for most peoples' needs outside of niche > applications; and there are WYSIWYG editors (not just text editors > with a preview, but actual WYSIWYG editors) which use Markdown as the > storage format. > > Of course, what Markdown very specifically does _not_ even try to do > is provide any strong presentation guarantees. I haven't really participated in any real publishing endeavors since the times of waxed sheets and scalpels, so I have precious little firsthand experience with e.g. markdown, but I've read quite the severe critique of its shortcomings. A prime example is https://undeadly.org/cgi?action=article&sid=20170304230520 by Ingo Schwarze, the main developer of mandoc together with Kristaps Dzonsons. This makes me believe that any WYSIWYG editor using markdown as its storage format really uses some quite strict subset of it, combined with its own incompatible extensions. MfG, -- Åke Nordin , resident Net/Lunix/telecom geek. 
Netia Data AB, Stockholm SWEDEN *46#7O466OI99# From kevin.bowling at kev009.com Mon Jun 3 07:21:32 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Sun, 2 Jun 2024 14:21:32 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: On Sun, Jun 2, 2024 at 1:08 AM Marc Rochkind wrote: > > True enough, Kevin, but with the decline of printed books and the increase in online docs, I rarely find what I'm looking for in a printed book and, when I think I have, the price is very high for what may turn out to be a bad guess. Browsing a bookstore for serious computer books is no longer possible, except maybe in very large cities. Agreed, bookstores are more or less dead. I used the Internet Archive a lot to inform my pre-purchasing decisions but the copyright enforcement has caught up there. > For example, for an upcoming project I need up-to-date and authoritative information on Kotlin and AWS S3 APIs. I believe there are decent Kotlin books out. There are some "fast" publishers like Manning, Apress, and Packt (maybe in rough order of quality..) that put out a lot of ephemeral literature but occasionally have some fairly good works. There aren't a lot of consistent bangers like Prentice-Hall PTR was putting out back in the day although I am generally impressed with some of the work Pearson is putting out. No Starch is also generally a winner, although a little less hard sciency and more pop. S3 is, as a user, so trivial I am not sure it warrants a book. In the past "cookbook" style books were common and maybe even useful. When I was getting started, I was thirsty for easy copy+paste solutions so that I didn't have to strain much thought to get results. I believe Large Language Models are good enough to subsume some of that now. On the other hand, a good book on building applications in a cloud-native way definitely will shave a year or two off the learning curve. What and why seem to be more enduring than how. 
> > Living in the past, I find, is no help! I don't think I live in the past; I am working on similar technologies to those you mention to earn a living in the present. One thing I failed to mention in my post, and I think related to all this, is the utility of Large Language Models. In your example above, the best current LLMs would be helpful for S3 and a little less so (but not useless) for Kotlin. However, LLMs still can't really help with the synthesis of good overall design and taste, while an enduring book will impart both on an intrepid reader and should outlive the details being discussed. No doubt, whatever you are doing now is informed by your past. One other anecdote: in my recent passion for learning digital logic design, I find even the most recent textbooks are well referenced to papers and books of the past, which is a bit of a contrast to programming literature. Most will go back to Boole's "Studies in Logic and Probability" as the basis. Lots of papers referenced from the 40s and books from the 70s and 80s still have authority if you are serious about the subject - Quine, McCluskey, R. K. Richards, etc. had a lot to say early on and it is very much still valid. > > Marc Rochkind > (author of the first book on UNIX programming) Yes, I recognized your name and have your books. > > On Sun, Jun 2, 2024, 7:12 AM Kevin Bowling wrote: >> >> On Sat, Jun 1, 2024 at 7:31 PM Will Senn wrote: >> > >> > Today, as I was digging more into nroff/troff and such, and bemoaning the lack of brevity of modern text. I got to thinking about the old days and what might have gone wrong with book production that got us where we are today. >> > >> > First, I wanna ask, tongue in cheek, sort of... As the inventors and early pioneers in the area of moving from typesetters to print on demand... do you feel a bit like the Manhattan project - did you maybe put too much power into the hands of folks who probably shouldn't have that power? 
>> > >> > But seriously, I know the period of time where we went from hot metal typesetting to the digital era was an eyeblink in history but do y'all recall how it went down? Were you surprised when folks settled on word processors in favor of markup? Do you think we've progressed in the area of ease of creating documentation and printing it making it viewable and accurate since 1980? >> > >> > I didn't specifically mention unix, but unix history is forever bound to the evolution of documents and printing, so I figure it's fair game for TUHS and isn't yet COFF :). >> > >> > Later, >> > >> > Will >> >> I think your other topic is closely related but I chose this one to reply to. >> >> I own something well north of 10,000 technical and engineering books >> so I will appoint myself as an amateur librarian. >> >> When I was younger, I had the false notion that anything new is good. >> This attitude permates a lot of society. Including professional >> libraries. They have a lot of collection management practices around >> deciding what and when to pitch something and a big one is whether the >> work is still in print, while a more sophisticated collection will >> also take into account circulation numbers (how often it is checked >> out). A lot of that is undoubtedly the real costs surrounding storing >> and displaying something (an archived book has a marginal cost, a >> publically accessible displayed book presumably has a higher >> associated cost) as well as the desire to remain current and provide >> value to the library's membership. >> >> From what I have seen, there isn't much notion of retaining or >> promoting a particular work unless it remains in print. As an >> example, K&R C is still in print and would be retained by most >> libraries. The whole thing becomes a bit ouroboros because that leads >> to more copies being printed, and it remaining in collections, and >> being read. 
Obviously, this is a case of a great piece of work >> benefiting from the whole ordeal. But for more niche topics, that >> kind of feedback loop doesn't happen. So the whole thing comes down >> in a house of cards... the publisher guesses how many books to print, >> a glut of them are produced, they enter circulation, and then it goes >> out of print in a few years. A few years later it is purged from the >> public libraries. As an end user, one benefit to this collapse is >> that used books are basically flooded into the market and you can get >> many books for a fraction of their retail price used.. but it becomes >> difficult to know _what_ to get if you don't have an expert guide or >> somewhere to browse and select for yourself. >> >> So why does this all matter to your more meta question of why less >> great books? There is less to no money in it nowadays for authors. >> The above example of library circulation represented a large number of >> guaranteed sales to wealthy institutions (academic and government = >> wealth, don't let them pretend otherwise). Except now many libraries >> have downsized their physical collections to make room for multimedia >> or just lower density use of space. So there are less guaranteed >> sales. >> >> Another facet of the same coin, one reason printed books are great has >> to do with the team surrounding their production. If you look near >> the colophon, you will often find a textbook will have quite a few >> people involved in moving a manuscript to production. This obviously >> costs a lot of money. As things move more to ebook and print on >> demand, it's an obvious place to cut publishing expenses and throw all >> the work directly onto the author. That may result in cheaper books >> and maybe(?) more revenue for the author, but it won't have the same >> quality that a professional publishing team can bring to the table. 
>> >> As to my deliberate decision to accumulate the dead trees and ink, >> it's because although online docs are great I find my best learning is >> offline while I use the online docs more like mental jogs for a >> particular API or refamiliarizing myself with the problem domain. I >> have some grandeur ambitions that first involve a large scanning >> project but that will have to await more self funding. >> >> Regards, >> Kevin From sjenkin at canb.auug.org.au Mon Jun 3 08:35:14 2024 From: sjenkin at canb.auug.org.au (sjenkin at canb.auug.org.au) Date: Mon, 3 Jun 2024 08:35:14 +1000 Subject: [TUHS] Old documentation - still the best In-Reply-To: References: Message-ID: <957B629A-6ABE-4647-86F9-946A54BDE795@canb.auug.org.au> For those playing along at home, Internet Archive have a scanned copy to borrow. The cover is unique & thoughtful. Can read the Table of Contents without ‘borrowing’. > On 3 Jun 2024, at 01:52, Douglas McIlroy wrote: > > I keep Lomuto and Lomuto, "A Unix Primer", Prentice-Hall (1983) on my shelf, not as a reference, but because I like to savor the presentation. The Lomutos manage to impart the Unix ethos while maintaining focus on the title in a friendly style that is nevertheless succinct and accurate. > > Doug -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph at inputplus.co.uk Mon Jun 3 19:53:20 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Mon, 03 Jun 2024 10:53:20 +0100 Subject: [TUHS] Proliferation of book print styles In-Reply-To: <78afa540-b09a-5f62-2764-c17aa9f4aec3@bitsavers.org> References: <78afa540-b09a-5f62-2764-c17aa9f4aec3@bitsavers.org> Message-ID: <20240603095320.6ED3B21E8C@orac.inputplus.co.uk> Hi Al, > I was disappointed the world tolerates the fugly typography of web > pages. Hundreds of years of readability knowledge thrown out the > window. PDFs of books have declined too, and with that the book held in the hand. 
It's as if no aesthetic judging of each page's appearance has occurred; whatever the program produces is correct. Probably because many books are about technologies with little lifespan; either the technology will wane or version 2.0 will need a new book. Books on topics with a longer life are dragged down. Full justification is still often used. No breaks around the start/stop parenthetical em dash causes the very long ‘word’ to start the next line; the line before becomes 40% space. Sentences which start ‘I’ end a line. Or page. Sans serif used so that ‘I’ is as thin as can be and the font, to my eyes, generally lacks flow. When there's the choice, I skim the PDF and if it's good, go with that. Otherwise, I plump for the worse-looking EPUB, HTML under the covers, because I can unpack it with bsdtar(1), tinker with the HTML and CSS to fix the worst of the appearance, and then return it to foo.epub for reading. -- Cheers, Ralph. From jcapp at anteil.com Tue Jun 4 02:47:59 2024 From: jcapp at anteil.com (Jim Capp) Date: Mon, 3 Jun 2024 12:47:59 -0400 (EDT) Subject: [TUHS] Old documentation - still the best In-Reply-To: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> Message-ID: <7801272.555.1717433279940.JavaMail.root@zimbraanteil> Does it happen to Nicole's, or anyone else's extension or just yours? From: "Will Senn" To: "TUHS" Sent: Saturday, June 1, 2024 9:59:42 PM Subject: [TUHS] Old documentation - still the best A small reflection on the marvels of ancient writing... Today, I went to the local Unix user group to see what that was like. I was pleasantly surprised to find it quite rewarding. Learned some new stuff... and won the door prize, a copy of a book entitled "Introducing the UNIX System" by Henry McGilton and Rachel Morgan. I accepted the prize, but said I'd just read it and recycle it for some other deserving unix-phile. As it turns out, I'm not giving it back, I'll contribute another Unix book. 
I thought it was just some intro unix text and figured I might learn a thing or two and let someone else who needs it more have it after I read it, but it's a V7 book! I haven't seem many of those around and so, I started digging into it and do I ever wish I'd had it when I was first trying to figure stuff out! Great book, never heard of it, or its authors, but hey, I've only read a few thousand tech books. What was really fun, was where I went from there - the authors mentioned some bit about permuted indexes and the programmer's manual... So, I went and grabbed my copy off the shelf and lo and behold, my copy either doesn't have a permuted index or I'm not finding it, I was crushed. But, while I was digging around the manual, I came across Section 9 - Quick UNIX Reference! Are you kidding me?!! How many years has it taken me to gain what knowledge I have? and here, in 20 pages is the most concise reference manual I've ever seen. Just the SH, TROFF and NROFF sections are worth the effort of digging up this 40 year old text. Anyhow, following on the heels of a recent dive into v7 and Ritchie's setting up unix v7 documentation, I was yet again reminded of the golden age of well written technical documents. Oh and I guess my recent perusal of more modern "heavy weight" texts (heavy by weight, not content, and many hundreds of pages long) might have made me more appreciative of concision - I long for the days of 300 page and shorter technical books :). In case you think I overstate - just got through a pair of TCL/TK books together clocking in at 1565 pages. Thank you Henry McGilton, Rachel Morgan, and Dennis Ritchie and Steve Bourne and other folks of the '70s and '80s for keeping it concise. As a late to the party unix enthusiast, I greatly value your work and am really thankful you didn't write like they do now... Later, Will -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jcapp at anteil.com Tue Jun 4 02:52:49 2024 From: jcapp at anteil.com (Jim Capp) Date: Mon, 3 Jun 2024 12:52:49 -0400 (EDT) Subject: [TUHS] Old documentation - still the best In-Reply-To: <7801272.555.1717433279940.JavaMail.root@zimbraanteil> Message-ID: <4305347.562.1717433569021.JavaMail.root@zimbraanteil> Sorry folks, please ignore that one! From: "Jim Capp" To: "Will Senn" Cc: "TUHS" Sent: Monday, June 3, 2024 12:47:59 PM Subject: Re: [TUHS] Old documentation - still the best Does it happen to Nicole's, or anyone else's extension or just yours? From: "Will Senn" To: "TUHS" Sent: Saturday, June 1, 2024 9:59:42 PM Subject: [TUHS] Old documentation - still the best A small reflection on the marvels of ancient writing... Today, I went to the local Unix user group to see what that was like. I was pleasantly surprised to find it quite rewarding. Learned some new stuff... and won the door prize, a copy of a book entitled "Introducing the UNIX System" by Henry McGilton and Rachel Morgan. I accepted the prize, but said I'd just read it and recycle it for some other deserving unix-phile. As it turns out, I'm not giving it back, I'll contribute another Unix book. I thought it was just some intro unix text and figured I might learn a thing or two and let someone else who needs it more have it after I read it, but it's a V7 book! I haven't seem many of those around and so, I started digging into it and do I ever wish I'd had it when I was first trying to figure stuff out! Great book, never heard of it, or its authors, but hey, I've only read a few thousand tech books. What was really fun, was where I went from there - the authors mentioned some bit about permuted indexes and the programmer's manual... So, I went and grabbed my copy off the shelf and lo and behold, my copy either doesn't have a permuted index or I'm not finding it, I was crushed. But, while I was digging around the manual, I came across Section 9 - Quick UNIX Reference! Are you kidding me?!! 
How many years has it taken me to gain what knowledge I have? and here, in 20 pages is the most concise reference manual I've ever seen. Just the SH, TROFF and NROFF sections are worth the effort of digging up this 40 year old text. Anyhow, following on the heels of a recent dive into v7 and Ritchie's setting up unix v7 documentation, I was yet again reminded of the golden age of well written technical documents. Oh and I guess my recent perusal of more modern "heavy weight" texts (heavy by weight, not content, and many hundreds of pages long) might have made me more appreciative of concision - I long for the days of 300 page and shorter technical books :). In case you think I overstate - just got through a pair of TCL/TK books together clocking in at 1565 pages. Thank you Henry McGilton, Rachel Morgan, and Dennis Ritchie and Steve Bourne and other folks of the '70s and '80s for keeping it concise. As a late to the party unix enthusiast, I greatly value your work and am really thankful you didn't write like they do now... Later, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From frew at ucsb.edu Tue Jun 4 07:42:37 2024 From: frew at ucsb.edu (James Frew) Date: Mon, 3 Jun 2024 14:42:37 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> References: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> Message-ID: <9854affc-bc27-42fb-a294-3b0e7ea4d28d@ucsb.edu> In 1988 I checked a Sun-3 workstation as baggage on a flight from LA to Beijing (long story...) The airline shrink-wrapped the whole shmodz onto a pallet for customs reasons, but I remember the second-heaviest (i.e. expensive) component, after the monitor, was the box of printed manuals... Online is wonderful. Cheers, /Frew On 2024-06-01 19:44, Peter Yardley wrote: > I can remember receiving 3 pallets of data books from National Semiconductor. This happened every year. 
The Internet and the availability of on line documentation put a stop to that. It was a revolution. From dave at horsfall.org Tue Jun 4 14:26:01 2024 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 4 Jun 2024 14:26:01 +1000 (EST) Subject: [TUHS] Proliferation of book print styles In-Reply-To: <20240603095320.6ED3B21E8C@orac.inputplus.co.uk> References: <78afa540-b09a-5f62-2764-c17aa9f4aec3@bitsavers.org> <20240603095320.6ED3B21E8C@orac.inputplus.co.uk> Message-ID: On Mon, 3 Jun 2024, Ralph Corderoy wrote: > Full justification is still often used. No breaks around the start/stop > parenthetical em dash causes the very long ‘word’ to start the next > line; the line before becomes 40% space. Sentences which start ‘I’ end > a line. Or page. Sans serif used so that ‘I’ is as thin as can be and > the font, to my eyes, generally lacks flow. What he said... And let's not even talk about hyphenating the- rapist. -- Dave From will.senn at gmail.com Tue Jun 4 14:31:58 2024 From: will.senn at gmail.com (Will Senn) Date: Mon, 3 Jun 2024 23:31:58 -0500 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD Message-ID: Today after trying to decipher the online help for vim and neovim, I decided I'd had enough and I opted for nvi - the bug for bug vi compatible that I've used for so long on FreeBSD. It handles cursor keys, these days (my biggest gripe back when, now I'm not so sure it's an improvement). Its in-app help pages are about 300 lines long, the docs are just four of the 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX Reference Manual, and VI-EX Reference Manual - all very well written and understandable. It does everything I really need it to do without the million and one extensions and "enhancements" the others offer. In doing the docs research, I found many, many references to a "/Vi Quick Reference card"/ in the various manpages and docs. 
I googled and googled some more and of course got thousands of hits (really many thousands), but I can't seem to find the actual card referenced. I'm pretty sure what I want to find is a scanned image or pdf of the card for 4.4bsd. Do y'all happen to know of where I might find the golden quick ref card for vi from back in the 4.4bsd days or did it even really exist? Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Tue Jun 4 14:46:01 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 04 Jun 2024 04:46:01 +0000 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: Message-ID: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> On Monday, June 3rd, 2024 at 9:31 PM, Will Senn wrote: > Today after trying to decipher the online help for vim and neovim, I decided I'd had enough and I opted for nvi - the bug for bug vi compatible that I've used for so long on FreeBSD. It handles cursor keys, these days (my biggest gripe back when, now I'm not so sure it's an improvement). It's in-app help pages are about 300 lines long, the docs are just four of the 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX Reference Manual, and VI-EX Reference Manual - all very well written and understandable. It does everything I really need it to do without the million and one extensions and "enhancements" the others offer. > > In doing the docs research, I found many, many references to a "Vi Quick Reference card" in the various manpages and docs. I googled and googled some more and of course got thousands of hits (really many thousands), but I can't seem to find the actual card referenced. I'm pretty sure what I want to find is a scanned image or pdf of the card for 4.4bsd. 
> > Do y'all happen to know of where I might find the golden quick ref card for vi from back in the 4.4bsd days or did it even really exist? > > Will Perhaps this?  https://imgur.com/a/unix-vi-quick-reference-Nw0sfTH Pardon the quality and host, not in a place to do a more thoughtful scan and archival right now. That was in a stack of documents I received some time ago, thrown in with stuff like V6 and KSOS manuals, some BSD docs, etc. so I presume it's also "official" fare. That and no commercial indicators (TMs, copyrights, etc.) Let me know if that link doesn't work and I'll try and find my scanner and do it properly (scanner is MIA apparently...) - Matt G. P.S. I also have the AT&T branded version of this from 1984, it's a small 22 page flipbook with the same cover motif as early SVR2 binders (so the grey with some "deathstar" lines not the red with black accent dots). Once I find my scanner I'll get that on the glass. From tuhs at tuhs.org Tue Jun 4 15:47:34 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 04 Jun 2024 05:47:34 +0000 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> References: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> Message-ID: On Monday, June 3rd, 2024 at 9:46 PM, segaloco via TUHS wrote: > On Monday, June 3rd, 2024 at 9:31 PM, Will Senn will.senn at gmail.com wrote: > > > Today after trying to decipher the online help for vim and neovim, I decided I'd had enough and I opted for nvi - the bug for bug vi compatible that I've used for so long on FreeBSD. It handles cursor keys, these days (my biggest gripe back when, now I'm not so sure it's an improvement). 
It's in-app help pages are about 300 lines long, the docs are just four of the 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX Reference Manual, and VI-EX Reference Manual - all very well written and understandable. It does everything I really need it to do without the million and one extensions and "enhancements" the others offer. > > > > In doing the docs research, I found many, many references to a "Vi Quick Reference card" in the various manpages and docs. I googled and googled some more and of course got thousands of hits (really many thousands), but I can't seem to find the actual card referenced. I'm pretty sure what I want to find is a scanned image or pdf of the card for 4.4bsd. > > > > Do y'all happen to know of where I might find the golden quick ref card for vi from back in the 4.4bsd days or did it even really exist? > > > > Will > > > Perhaps this? https://imgur.com/a/unix-vi-quick-reference-Nw0sfTH > > Pardon the quality and host, not in a place to do a more thoughtful scan and archival right now. That was in a stack of documents I received some time ago, thrown in with stuff like V6 and KSOS manuals, some BSD docs, etc. so I presume it's also "official" fare. That and no commercial indicators (TMs, copyrights, etc.) > > Let me know if that link doesn't work and I'll try and find my scanner and do it properly (scanner is MIA apparently...) > > - Matt G. > > P.S. I also have the AT&T branded version of this from 1984, it's a small 22 page flipbook with the same cover motif as early SVR2 binders (so the grey with some "deathstar" lines not the red with black accent dots). Once I find my scanner I'll get that on the glass. Looked a bit harder and found it, scanned that booklet: https://archive.org/details/unix-system-v-visual-editor-quick-reference-issue-2 The two appear different enough, although they may share a common ancestor. 
I hope one or the other fits what you're searching for, either specifically or at least generally as a concise vi(1) reference. I keep the AT&T booklet at my desk as a matter of fact, it's quite convenient. - Matt G. From dave at horsfall.org Tue Jun 4 15:49:29 2024 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 4 Jun 2024 15:49:29 +1000 (EST) Subject: [TUHS] Proliferation of book print styles In-Reply-To: <9854affc-bc27-42fb-a294-3b0e7ea4d28d@ucsb.edu> References: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> <9854affc-bc27-42fb-a294-3b0e7ea4d28d@ucsb.edu> Message-ID: On Mon, 3 Jun 2024, James Frew wrote: > In 1988 I checked a Sun-3 workstation as baggage on a flight from LA to > Beijing (long story...) The airline shrink-wrapped the whole shmodz onto > a pallet for customs reasons, but I remember the second-heaviest (i.e. > expensive) component, after the monitor, was the box of printed > manuals... When working for Lionel Singer's Sun Australia (a Sun reseller), we had an entire room devoted to SunOS manuals; I wonder what happened to them (the manuals, I mean)? -- Dave From fjarlq at gmail.com Tue Jun 4 22:28:02 2024 From: fjarlq at gmail.com (Matt Day) Date: Tue, 4 Jun 2024 06:28:02 -0600 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> References: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> Message-ID: Yep, that's it. 
The Vi Quick Reference Card dates back to the vi documentation in 2BSD: https://www.tuhs.org/cgi-bin/utree.pl?file=2BSD/doc/vi specifically the file vi.summary: https://www.tuhs.org/cgi-bin/utree.pl?file=2BSD/doc/vi/vi.summary Here's vi.summary in 4.4BSD: https://www.tuhs.org/cgi-bin/utree.pl?file=4.4BSD/usr/src/usr.bin/ex/USD.doc/vi/vi.summary A decent PDF render: https://www.mpaoli.net/~michael/unix/vi/summary.pdf On Mon, Jun 3, 2024 at 10:46 PM segaloco via TUHS wrote: > On Monday, June 3rd, 2024 at 9:31 PM, Will Senn > wrote: > > > Today after trying to decipher the online help for vim and neovim, I > decided I'd had enough and I opted for nvi - the bug for bug vi compatible > that I've used for so long on FreeBSD. It handles cursor keys, these days > (my biggest gripe back when, now I'm not so sure it's an improvement). It's > in-app help pages are about 300 lines long, the docs are just four of the > 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX > Reference Manual, and VI-EX Reference Manual - all very well written and > understandable. It does everything I really need it to do without the > million and one extensions and "enhancements" the others offer. > > > > In doing the docs research, I found many, many references to a "Vi Quick > Reference card" in the various manpages and docs. I googled and googled > some more and of course got thousands of hits (really many thousands), but > I can't seem to find the actual card referenced. I'm pretty sure what I > want to find is a scanned image or pdf of the card for 4.4bsd. > > > > Do y'all happen to know of where I might find the golden quick ref card > for vi from back in the 4.4bsd days or did it even really exist? > > > > Will > > Perhaps this? https://imgur.com/a/unix-vi-quick-reference-Nw0sfTH > > Pardon the quality and host, not in a place to do a more thoughtful scan > and archival right now. 
That was in a stack of documents I received some > time ago, thrown in with stuff like V6 and KSOS manuals, some BSD docs, > etc. so I presume it's also "official" fare. That and no commercial > indicators (TMs, copyrights, etc.) > > Let me know if that link doesn't work and I'll try and find my scanner and > do it properly (scanner is MIA apparently...) > > - Matt G. > > P.S. I also have the AT&T branded version of this from 1984, it's a small > 22 page flipbook with the same cover motif as early SVR2 binders (so the > grey with some "deathstar" lines not the red with black accent dots). Once > I find my scanner I'll get that on the glass. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcilroy at dartmouth.edu Tue Jun 4 23:01:38 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Tue, 4 Jun 2024 09:01:38 -0400 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> Message-ID: It's not a card, but it's brief: vi(1) in the v10 manual covers vi, ex, and edit in three pages. On Tue, Jun 4, 2024 at 1:47 AM segaloco via TUHS wrote: > On Monday, June 3rd, 2024 at 9:46 PM, segaloco via TUHS > wrote: > > > On Monday, June 3rd, 2024 at 9:31 PM, Will Senn will.senn at gmail.com > wrote: > > > > > Today after trying to decipher the online help for vim and neovim, I > decided I'd had enough and I opted for nvi - the bug for bug vi compatible > that I've used for so long on FreeBSD. It handles cursor keys, these days > (my biggest gripe back when, now I'm not so sure it's an improvement). It's > in-app help pages are about 300 lines long, the docs are just four of the > 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX > Reference Manual, and VI-EX Reference Manual - all very well written and > understandable. 
It does everything I really need it to do without the > million and one extensions and "enhancements" the others offer. > > > > > > In doing the docs research, I found many, many references to a "Vi > Quick Reference card" in the various manpages and docs. I googled and > googled some more and of course got thousands of hits (really many > thousands), but I can't seem to find the actual card referenced. I'm pretty > sure what I want to find is a scanned image or pdf of the card for 4.4bsd. > > > > > > Do y'all happen to know of where I might find the golden quick ref > card for vi from back in the 4.4bsd days or did it even really exist? > > > > > > Will > > > > > > Perhaps this? https://imgur.com/a/unix-vi-quick-reference-Nw0sfTH > > > > Pardon the quality and host, not in a place to do a more thoughtful scan > and archival right now. That was in a stack of documents I received some > time ago, thrown in with stuff like V6 and KSOS manuals, some BSD docs, > etc. so I presume it's also "official" fare. That and no commercial > indicators (TMs, copyrights, etc.) > > > > Let me know if that link doesn't work and I'll try and find my scanner > and do it properly (scanner is MIA apparently...) > > > > - Matt G. > > > > P.S. I also have the AT&T branded version of this from 1984, it's a > small 22 page flipbook with the same cover motif as early SVR2 binders (so > the grey with some "deathstar" lines not the red with black accent dots). > Once I find my scanner I'll get that on the glass. > > Looked a bit harder and found it, scanned that booklet: > > > https://archive.org/details/unix-system-v-visual-editor-quick-reference-issue-2 > > The two appear different enough, although they may share a common > ancestor. I hope one or the other fits what you're searching for, either > specifically or at least generally as a concise vi(1) reference. I keep > the AT&T booklet at my desk as a matter of fact, it's quite convenient. > > - Matt G. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.senn at gmail.com Tue Jun 4 23:06:54 2024 From: will.senn at gmail.com (Will Senn) Date: Tue, 4 Jun 2024 08:06:54 -0500 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> Message-ID: <344f26f0-b258-49a0-8b39-b82538f48d21@gmail.com> Thanks Matt & Matt :). This is what I was looking for and thanks for the background, too. Oh, and duh, it didn't occur to me to go looking for the source. Off to see about rendering my own from source! Will On 6/4/24 7:28 AM, Matt Day wrote: > Yep, that's it. > > The Vi Quick Reference Card dates back to the vi documentation in > 2BSD: https://www.tuhs.org/cgi-bin/utree.pl?file=2BSD/doc/vi > specifically the file vi.summary: > https://www.tuhs.org/cgi-bin/utree.pl?file=2BSD/doc/vi/vi.summary > > Here's vi.summary in 4.4BSD: > https://www.tuhs.org/cgi-bin/utree.pl?file=4.4BSD/usr/src/usr.bin/ex/USD.doc/vi/vi.summary > A decent PDF render: https://www.mpaoli.net/~michael/unix/vi/summary.pdf > > On Mon, Jun 3, 2024 at 10:46 PM segaloco via TUHS wrote: > > On Monday, June 3rd, 2024 at 9:31 PM, Will Senn > wrote: > > > Today after trying to decipher the online help for vim and > neovim, I decided I'd had enough and I opted for nvi - the bug for > bug vi compatible that I've used for so long on FreeBSD. It > handles cursor keys, these days (my biggest gripe back when, now > I'm not so sure it's an improvement). It's in-app help pages are > about 300 lines long, the docs are just four of the 4.4 docs: An > Introduction to Display Editing with VI, Edit: A tutorial, EX > Reference Manual, and VI-EX Reference Manual - all very well > written and understandable. It does everything I really need it to > do without the million and one extensions and "enhancements" the > others offer. 
> > > > In doing the docs research, I found many, many references to a > "Vi Quick Reference card" in the various manpages and docs. I > googled and googled some more and of course got thousands of hits > (really many thousands), but I can't seem to find the actual card > referenced. I'm pretty sure what I want to find is a scanned image > or pdf of the card for 4.4bsd. > > > > Do y'all happen to know of where I might find the golden quick > ref card for vi from back in the 4.4bsd days or did it even really > exist? > > > > Will > > Perhaps this? https://imgur.com/a/unix-vi-quick-reference-Nw0sfTH > > Pardon the quality and host, not in a place to do a more > thoughtful scan and archival right now.  That was in a stack of > documents I received some time ago, thrown in with stuff like V6 > and KSOS manuals, some BSD docs, etc. so I presume it's also > "official" fare.  That and no commercial indicators (TMs, > copyrights, etc.) > > Let me know if that link doesn't work and I'll try and find my > scanner and do it properly (scanner is MIA apparently...) > > - Matt G. > > P.S. I also have the AT&T branded version of this from 1984, it's > a small 22 page flipbook with the same cover motif as early SVR2 > binders (so the grey with some "deathstar" lines not the red with > black accent dots).  Once I find my scanner I'll get that on the > glass. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.donner at gmail.com Tue Jun 4 23:22:20 2024 From: marc.donner at gmail.com (Marc Donner) Date: Tue, 4 Jun 2024 09:22:20 -0400 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: The history of markup and WSYWYG (or, as a friend said, WYSIAYG - what you see is all you get) is fascinating. The early markup systems (runoff and its derivatives like troff, nroff, IBM's SCRIPT) focused on manipulation of representation. 
Normal, bold, italic, font size, justification and centering, and so on, were the vocabulary of the old systems. These systems, to me, were assembler language for contemporary phototypesetters. In the late 1970s and early 1980s we began to get systems that, as Douglas noted, could express the logical structure of documents. GML and SCRIBE were my first exposures to this way of thinking and they made life much much better for the writer. The standards work that created SGML went a bit overboard, to my taste. The only really serious adopters of SGML that I can think of were the US military, but there may have been others. Along the way were some fascinating attempts at clever hybrids. Mike Cowlishaw built a markup system for the Oxford University Press back in the early 1980s on secondment from IBM. It had a rather elegant ability to switch between markup mode and rendering mode so you could peek at how something would look. I know that it was used by OUP for the humongous task of converting the OED from its old paper-based production framework to the electronic system that they use today, though I have no idea what the current details are. The hybrid model is not dead, by the way. The wikimedia system adopts it ... you may edit either in markup mode or in WYSIWYG mode, though I find the WYSIWYG mode to be frustrating. Sadly, the markdown stuff used by wikimedia is pretty annoying to work with and the rendering is buggy and sometimes incomprehensible (to me, at least). Making a strong system that includes inline markup editing AND WYSIWYG editing with clean flipping between them would be fascinating. Sadly, the markup specifications are flimsy and the ease of creating crazy markup like

blah blah

in edit mode makes for some difficult exception handling problems. Marc ===== nygeek.net mindthegapdialogs.com/home On Sun, Jun 2, 2024 at 8:40 AM Douglas McIlroy < douglas.mcilroy at dartmouth.edu> wrote: > > Were you surprised when folks settled on word processors in favor of > markup? > > I'm not sure what you're asking. "Word processor" was a term coming into > prominence when Unix was in its infancy. Unix itself was sold to management > partly on the promise of using it to make a word processor. All word > processors used typewriters and were markup-based. Screens, which > eventually enabled WYSIWYG, were not affordable for widespread use. > > Perhaps the question you meant to ask was whether we were surprised when > WYSIWYG took over word-processing for the masses. No, we weren't, but we > weren't attracted to it either, because it sacrificed markup's potential > for expressing the logical structure of documents and thus fostering > portability of text among distinct physical forms, e.g. man pages on > terminals and in book form or technical papers as TMs and as journal > articles. WYSIWYG was also unsuitable for typesetting math. (Microsoft Word > clumsily diverts to a separate markup pane for math.) > > Moreover, WYSIWYG was out of sympathy with Unix philosophy, as it kept > documents in a form difficult for other tools to process for unanticipated > purposes, In this regard, I still regret that Luca Cardelli and Mark > Manasse moved on from Bell Labs before they finished their dream of Blue, a > WYSIWYG editor for markup documents, I don't know yet whether that blue-sky > goal is achievable. (.docx may be seen as a ponderous latter-day attempt. > Does anyone know whether it has fostered tool use?) > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... 
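[Doug's point above, that markup recording the logical structure of a document lets one source serve distinct physical forms (man pages on terminals and in book form), can be sketched in miniature. Every name below is invented for illustration; real systems such as troff macro packages, Scribe, or SGML are far richer:]

```python
# Toy illustration of logical markup: the source records what each
# piece of text IS (title, section, paragraph), not how it looks,
# so the same source can be rendered to different physical forms.
# All element names and renderers here are invented for the sketch.

doc = [
    ("title", "ECHO(1)"),
    ("section", "NAME"),
    ("para", "echo - write arguments to standard output"),
]

def render_text(doc):
    # Terminal-style rendering: sections upper-cased, paragraphs indented.
    out = []
    for kind, text in doc:
        if kind == "title":
            out.append(text)
        elif kind == "section":
            out.append(text.upper())
        else:
            out.append("    " + text)
    return "\n".join(out)

def render_html(doc):
    # The same logical source, rendered as HTML instead.
    tags = {"title": "h1", "section": "h2", "para": "p"}
    return "\n".join(f"<{tags[k]}>{t}</{tags[k]}>" for k, t in doc)
```

[A presentational (WYSIWYG-style) source would have baked in the indentation and capitalization, leaving nothing for a second renderer, or another tool, to work from.]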
URL: From ralph at inputplus.co.uk Tue Jun 4 23:56:46 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 04 Jun 2024 14:56:46 +0100 Subject: [TUHS] vi(1) in 10th Ed. (Was: Vi Quick Reference card for 4.4 BSD) In-Reply-To: References: <4YcO7QxSRsr-EFfdZcWDr8bsnSJkpl8bgWhtvn4PYDYO1UzTHdpwYOvQXue_O3X319Nt1AY9BAyvMLbM7v0E0HJEnnN_JIgrgmRu5pC1ygA=@protonmail.com> Message-ID: <20240604135646.A1BB821EDB@orac.inputplus.co.uk> Hi, Doug wrote: > It's not a card, but it's brief: vi(1) in the v10 manual covers vi, > ex, and edit in three pages. I went looking for it. The source is https://www.tuhs.org/cgi-bin/utree.pl?file=V10/man/man1/vi.1 The TUHS wiki, https://wiki.tuhs.org/doku.php?id=publications:manuals:research#tenth_edition links to a 10th Ed. PDF, but beware it isn't a scan of the manual. Instead, as the blurb on scrolling down says, the man pages were formatted with BSD's mandoc so not a lot of chance of the output matching the original. Page 389 of 992 is the start of vi(1). The .2C two-column output split by a tab character hasn't been honoured which is why it starts to look garbled by the second page. .PP .de fq \&\f5\\$1\fR───→\\$2 \\$3 \\$4 \\$5 \\$6 .. .de fz \&\f5\\$1 \fI\\$2\fR───→\\$3 \\$4 \\$5 \\$6 .. .ta \w'\f5:e + file'u File manipulation .2C .fq :w write back changes .fz :w file write \fIfile\fR .fz :w! file overwrite \fIfile\fR A scan of an authentic 10th Ed. manual would be handy. If it already exists, then the wiki would be better pointed at that. -- Cheers, Ralph. From lm at mcvoy.com Wed Jun 5 00:15:29 2024 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 4 Jun 2024 07:15:29 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <20240604141529.GI5878@mcvoy.com> I've been using this hybrid for decades, it re-renders every time you write out the file: #!/usr/bin/perl # Run the command into PS.$USER # go into a loop watching the file and rerun command whenever the file # has changed. 
use POSIX ":sys_wait_h"; $usage = "usage: $0 comand -args -args file [file ...]\n"; foreach $file (@ARGV) { next unless -f $file; push(@files, $file); } die $usage unless $#files > -1; $cmd = "@ARGV > PS.$ENV{USER}"; $gv = "gv --spartan --antialias --media=letter PS.$ENV{USER}"; system "$cmd"; $pid = fork; if ($pid == 0) { exec $gv; die $gv; } # Read all the files looking for .so's so we catch the implied list. # I dunno if groff catches nested .so's but we don't. foreach $file (@files) { $stat{$file} = (stat($file))[9]; open(F, $file); while () { next unless /^\.so\s+(.*)\s*$/; $stat{$1} = (stat($1))[9]; } close(F); } while (1) { select(undef, undef, undef, .2); $kid = waitpid($pid,&WNOHANG); exit 0 if (kill(0, $pid) != 1); $doit = 0; foreach $f (keys %stat) { if ($stat{$f} != (stat($f))[9]) { $stat{$f} = (stat($f))[9]; $doit = 1; } } if ($doit) { system $cmd; kill(1, $pid); } } From clemc at ccc.com Wed Jun 5 00:32:25 2024 From: clemc at ccc.com (Clem Cole) Date: Tue, 4 Jun 2024 10:32:25 -0400 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: Message-ID: On Tue, Jun 4, 2024 at 12:32 AM Will Senn wrote: > Do y'all happen to know of where I might find the golden quick ref card > for vi from back in the 4.4bsd days or did it even really exist? > Matt Day pointed you to the source, but in a small but slightly assuming addition. Your comment made me check my archives. Indeed, while the version on imgur.com is not golden, it is close. The copies I have are printed on "sunflower yellow" card stock. By the way, there was a firm called "Specialized Systems Consultants" of Seattle, Washington, that in the early 80s had a business printing and selling pocket reference cards and other SW and Services. They had a pretty good vi reference, which is ISBN 0-916151-19-0. It was printed on white card stock with black and blue letters for highlights and boxes around some of the text. 
Also, while looking for the vi cards, I turned up two wonderful artifacts that I'll try to get scanned and added to TUHS at some point. When you purchased V7 from AT&T, you got one copy of the printed docs and a small "purple/red" 9"x3.5" flip-binding reference card that Lorinda Cherry compiled. Also, when DEC released V7M-11, they printed a small flip-binding 8"x4" reference called the "programmers guide" [AA-X7978-1C]—which is similar but different. > ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From blake1024 at gmail.com Wed Jun 5 00:42:58 2024 From: blake1024 at gmail.com (Blake McBride) Date: Tue, 4 Jun 2024 09:42:58 -0500 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: Message-ID: How about this one? https://wiki.arahant.com/Wiki.jsp?page=Vi On Mon, Jun 3, 2024 at 11:32 PM Will Senn wrote: > Today after trying to decipher the online help for vim and neovim, I > decided I'd had enough and I opted for nvi - the bug for bug vi compatible > that I've used for so long on FreeBSD. It handles cursor keys, these days > (my biggest gripe back when, now I'm not so sure it's an improvement). It's > in-app help pages are about 300 lines long, the docs are just four of the > 4.4 docs: An Introduction to Display Editing with VI, Edit: A tutorial, EX > Reference Manual, and VI-EX Reference Manual - all very well written and > understandable. It does everything I really need it to do without the > million and one extensions and "enhancements" the others offer. > > In doing the docs research, I found many, many references to a "*Vi Quick > Reference card"* in the various manpages and docs. I googled and googled > some more and of course got thousands of hits (really many thousands), but > I can't seem to find the actual card referenced. I'm pretty sure what I > want to find is a scanned image or pdf of the card for 4.4bsd. 
> > Do y'all happen to know of where I might find the golden quick ref card > for vi from back in the 4.4bsd days or did it even really exist? > > Will > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph at inputplus.co.uk Wed Jun 5 00:48:36 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 04 Jun 2024 15:48:36 +0100 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <20240604144836.EEA0821C4E@orac.inputplus.co.uk> Hi Mark, > Mike Cowlishaw built a markup system for the Oxford University Press > back in the early 1980s on secondment from IBM.  It had a rather > elegant ability to switch between markup mode and rendering mode so > you could peek at how something would look. I think that's his LEXX editor which did live parsing and could be initialised with parsing tables. LEXX — A programmable structured editor DOI:10.1147/rd.311.0073 https://www.researchgate.net/publication/224103825_LEXX-A_programmable_structured_editor https://en.wikipedia.org/wiki/LEXX_(text_editor) > I know that it was used by OUP for the humongous task of converting > the OED from its old paper-based production framework to the > electronic system that they use today Collins, a rival in dictionaries, used troff for a long time to produce theirs. Don't know what they do now. The University of Nottingham chose device-independent troff for their examination papers over TeX because the PDP-11 was affordable compared to the VAX. The troff source licence cost £4,000 around ’82. https://www.researchgate.net/publication/28692919_In-house_Preparation_of_Examination_Papers_using_troff_tbl_and_eqn > Sadly, the markup specifications are flimsy For markdown, the CommonMark folk have been improving this for a while. ‘We propose a standard, unambiguous syntax specification for Markdown, along with a suite of comprehensive tests to validate Markdown implementations against this specification. 
We believe this is necessary, even essential, for the future of Markdown. ‘That’s what we call CommonMark.’ — https://commonmark.org > the ease of creating crazy markup like
blah blah
in > edit mode makes for some difficult exception handling problems. Just treat it as an error rather than attempt recovery? Although the rendered version could be flipped to, or viewed in parallel, it would be read only and only get so far; the bug would need fixing in the mark-up view. -- Cheers, Ralph. From imp at bsdimp.com Wed Jun 5 00:53:54 2024 From: imp at bsdimp.com (Warner Losh) Date: Tue, 4 Jun 2024 08:53:54 -0600 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: At the risk of venturing too far off into the weeds (though maybe it's too late for that) What do people think of the newer markup languages like Markdown or ASCII Doctor? They seem more approachable than SGML or docbook, and a bit easier to understand, though with less control, than troff, LaTeX or TeX. To me they seem to be clever in that they infer the type of thing from the extra context marking that you give it, and the marking is more intuitive than the old-school markups (though still with some twists and sharp edges). Warner On Tue, Jun 4, 2024 at 7:22 AM Marc Donner wrote: > The history of markup and WYSIWYG (or, as a friend said, WYSIAYG - what you > see is all you get) is fascinating. > > The early markup systems (runoff and its derivatives like troff, nroff, > IBM's SCRIPT) focused on manipulation of representation. Normal, bold, > italic, font size, justification and centering, and so on, were the > vocabulary of the old systems. These systems, to me, were assembler > language for contemporary phototypesetters. > > In the late 1970s and early 1980s we began to get systems that, as Douglas > noted, could express the logical structure of documents. GML and SCRIBE > were my first exposures to this way of thinking and they made life much > much better for the writer. > > The standards work that created SGML went a bit overboard, to my taste. 
> The only really serious adopters of SGML that I can think of were the US > military, but there may have been others. > > Along the way were some fascinating attempts at clever hybrids. Mike > Cowlishaw built a markup system for the Oxford University Press back in the > early 1980s on secondment from IBM. It had a rather elegant ability to > switch between markup mode and rendering mode so you could peek at how > something would look. I know that it was used by OUP for the humongous > task of converting the OED from its old paper-based production framework to > the electronic system that they use today, though I have no idea what the > current details are. > > The hybrid model is not dead, by the way. The wikimedia system adopts it > ... you may edit either in markup mode or in WYSIWYG mode, though I find the > WYSIWYG mode to be frustrating. Sadly, the markdown stuff used by wikimedia > is pretty annoying to work with and the rendering is buggy and sometimes > incomprehensible (to me, at least). > > Making a strong system that includes inline markup editing AND > WYSIWYG editing with clean flipping between them would be fascinating. > Sadly, the markup specifications are flimsy and the ease of creating crazy > markup like
blah blah
in edit mode makes for some difficult > exception handling problems. > > Marc > ===== > nygeek.net > mindthegapdialogs.com/home > > > On Sun, Jun 2, 2024 at 8:40 AM Douglas McIlroy < > douglas.mcilroy at dartmouth.edu> wrote: > >> > Were you surprised when folks settled on word processors in favor of >> markup? >> >> I'm not sure what you're asking. "Word processor" was a term coming into >> prominence when Unix was in its infancy. Unix itself was sold to management >> partly on the promise of using it to make a word processor. All word >> processors used typewriters and were markup-based. Screens, which >> eventually enabled WYSIWYG, were not affordable for widespread use. >> >> Perhaps the question you meant to ask was whether we were surprised when >> WYSIWYG took over word-processing for the masses. No, we weren't, but we >> weren't attracted to it either, because it sacrificed markup's potential >> for expressing the logical structure of documents and thus fostering >> portability of text among distinct physical forms, e.g. man pages on >> terminals and in book form or technical papers as TMs and as journal >> articles. WYSIWYG was also unsuitable for typesetting math. (Microsoft Word >> clumsily diverts to a separate markup pane for math.) >> >> Moreover, WYSIWYG was out of sympathy with Unix philosophy, as it kept >> documents in a form difficult for other tools to process for unanticipated >> purposes. In this regard, I still regret that Luca Cardelli and Mark >> Manasse moved on from Bell Labs before they finished their dream of Blue, a >> WYSIWYG editor for markup documents. I don't know yet whether that blue-sky >> goal is achievable. (.docx may be seen as a ponderous latter-day attempt. >> Does anyone know whether it has fostered tool use?) >> >> Doug >> > -------------- next part -------------- An HTML attachment was scrubbed... 
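[Archivist's aside: Doug's point that markup names the logical role of text, leaving the rendering to the formatter, can be seen in a toy fragment. The -ms requests below (.NH, .PP, .I) are real troff macros, but the passage itself is invented for illustration:]

```roff
.NH 1
Logical Structure
.PP
A tagged source names the
.I role
of each span of text; the formatter, not the author,
decides how a level-one heading or an italicized word
should look on a terminal, in a journal, or in a book.
```

[The rough Markdown equivalent, `# Logical Structure` and `*role*`, encodes the same roles with lighter syntax inferred from context, which is Warner's point about the newer markup languages, at the cost of the tighter specification that troff macros or SGML DTDs provide.]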
URL: From tuhs at tuhs.org Wed Jun 5 01:29:21 2024 From: tuhs at tuhs.org (Grant Taylor via TUHS) Date: Tue, 4 Jun 2024 10:29:21 -0500 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <61959090-8c09-7357-25b3-8efa47724947@tnetconsulting.net> On 6/4/24 9:53 AM, Warner Losh wrote: > What do people think of the newer markup languages like Markdown or > ASCII Doctor? They seem more approachable than SGML or docbook, and a > bit easier to understand, though with less control, than troff, LaTeX or > TeX. I find Markdown et al. leaving me wanting. I personally prefer basic HTML for structure and function. If I care enough I'll add some CSS on top for appearance candy. -- Grant. . . . unix || die From tuhs at tuhs.org Wed Jun 5 05:23:22 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Tue, 4 Jun 2024 15:23:22 -0400 Subject: [TUHS] Mike Karels has died Message-ID: <58f15238-c5f3-4e51-920b-c718ff616cff@case.edu> Sad, horrible news. https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From fjarlq at gmail.com Wed Jun 5 05:23:40 2024 From: fjarlq at gmail.com (Matt Day) Date: Tue, 4 Jun 2024 13:23:40 -0600 Subject: [TUHS] Vi Quick Reference card for 4.4 BSD In-Reply-To: References: Message-ID: My favorite vi reference for ages is Maarten Litmaath's, available here: https://www.ungerhu.com/jxh/vi.html Contributors to that include Rich Salz and Diomidis Spinellis. 
On Tue, Jun 4, 2024 at 8:33 AM Clem Cole wrote: > > > On Tue, Jun 4, 2024 at 12:32 AM Will Senn wrote: > >> Do y'all happen to know of where I might find the golden quick ref card >> for vi from back in the 4.4bsd days or did it even really exist? >> > Matt Day pointed you to the source, but in a small but slightly assuming > addition. Your comment made me check my archives. Indeed, while the version > on imgur.com is not golden, it is close. The copies I have are printed on "sunflower > yellow" card stock. > > By the way, there was a firm called "Specialized Systems Consultants" of > Seattle, Washington, that in the early 80s had a business printing and > selling pocket reference cards and other SW and Services. They had a pretty > good vi reference, which is ISBN 0-916151-19-0. It was printed on white > card stock with black and blue letters for highlights and boxes around some > of the text. > > Also, while looking for the vi cards, I turned up two wonderful artifacts > that I'll try to get scanned and added to TUHS at some point. When you > purchased V7 from AT&T, you got one copy of the printed docs and a small > "purple/red" 9"x3.5" flip-binding reference card that Lorinda Cherry > compiled. Also, when DEC released V7M-11, they printed a small flip-binding > 8"x4" reference called the "programmers guide" [AA-X7978-1C]—which is > similar but different. > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Wed Jun 5 05:25:06 2024 From: crossd at gmail.com (Dan Cross) Date: Tue, 4 Jun 2024 15:25:06 -0400 Subject: [TUHS] Fwd: [ih] Mike Karels has died In-Reply-To: References: Message-ID: FYI, this just got passed by Vint Cerf. Very sad news. ---------- Forwarded message --------- From: vinton cerf via Internet-history Date: Tue, Jun 4, 2024 at 3:18 PM Subject: [ih] Mike Karels has died To: internet-history Mike Karels died on Sunday. 
I don’t have any details other than: https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ https://www.gearty-delmore.com/obituaries/michael-mike-karels Mike was deeply involved in the Berkeley BSD releases as I recall, after he inherited the TCP/IP implementation for Unix from Bill Joy (am I remembering that correctly?). RIP v -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From imp at bsdimp.com Wed Jun 5 05:36:09 2024 From: imp at bsdimp.com (Warner Losh) Date: Tue, 4 Jun 2024 12:36:09 -0700 Subject: [TUHS] Fwd: [ih] Mike Karels has died In-Reply-To: References: Message-ID: He died of what appears to have been a heart attack while waiting for the train to go to the airport after BSDcan in Ottawa. I believe those details can be shared, but an official obituary will be forthcoming with more details. He was in good spirits for the conference, and I'm in shock. I am glad that I did get to chat with him about all things BSD during the closing social... But I'm also very sad. Warner On Tue, Jun 4, 2024 at 12:25 PM Dan Cross wrote: > FYI, this just got passed by Vint Cerf. Very sad news. > > ---------- Forwarded message --------- > From: vinton cerf via Internet-history > Date: Tue, Jun 4, 2024 at 3:18 PM > Subject: [ih] Mike Karels has died > To: internet-history > > > Mike Karels died on Sunday. I don’t have any details other than: > https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ > https://www.gearty-delmore.com/obituaries/michael-mike-karels > > Mike was deeply involved in the Berkeley BSD releases as I recall, after he > inherited the TCP/IP implementation for Unix from Bill Joy (am I > remembering that correctly?). > > RIP > v > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jefftwopointzero at gmail.com Wed Jun 5 06:20:02 2024 From: jefftwopointzero at gmail.com (Jeffrey Joshua Rollin) Date: Tue, 4 Jun 2024 21:20:02 +0100 Subject: [TUHS] [ih] Mike Karels has died In-Reply-To: References: Message-ID: <687B90EA-C9E8-4FB7-A52C-6F38A43063D8@gmail.com> Sad news indeed :-( > On 4 Jun 2024, at 20:25, Dan Cross wrote: > > FYI, this just got passed by Vint Cerf. Very sad news. > > ---------- Forwarded message --------- > From: vinton cerf via Internet-history > Date: Tue, Jun 4, 2024 at 3:18 PM > Subject: [ih] Mike Karels has died > To: internet-history > > > Mike Karels died on Sunday. I don’t have any details other than: > https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ > https://www.gearty-delmore.com/obituaries/michael-mike-karels > > Mike was deeply involved in the Berkeley BSD releases as I recall, after he > inherited the TCP/IP implementation for Unix from Bill Joy (am I > remembering that correctly?). > > RIP > v > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From akosela at andykosela.com Wed Jun 5 06:35:53 2024 From: akosela at andykosela.com (Andy Kosela) Date: Tue, 4 Jun 2024 22:35:53 +0200 Subject: [TUHS] Mike Karels has died In-Reply-To: <58f15238-c5f3-4e51-920b-c718ff616cff@case.edu> References: <58f15238-c5f3-4e51-920b-c718ff616cff@case.edu> Message-ID: On Tuesday, June 4, 2024, Chet Ramey via TUHS wrote: > Sad, horrible news. > > https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ > > He was the BSD giant. Rest in Peace. I remember one particular Computer Chronicles episode from 1989 which featured Mike. https://youtu.be/lkyyAKTvmx0?si=9fKp2pmF_e7HeVpk --Andy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From athornton at gmail.com Wed Jun 5 07:46:51 2024 From: athornton at gmail.com (Adam Thornton) Date: Tue, 4 Jun 2024 14:46:51 -0700 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: On Tue, Jun 4, 2024 at 6:22 AM Marc Donner wrote: > The standards work that created SGML went a bit overboard, to my taste. > The only really serious adopters of SGML that I can think of were the US > military, but there may have been others. > > Bookmaster (an IBM product, and I think what they used for their published docs in the 90s into the 2000s?) was SGML based, if I remember correctly. Writing in it was kind of lovely, and the traintrack diagrams for command syntax were exceptionally well-done. It made nice-looking docs (e.g. https://distribution.sinenomine.net/opensolaris/install2.pdf). Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Wed Jun 5 08:54:59 2024 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 5 Jun 2024 08:54:59 +1000 (EST) Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> <9854affc-bc27-42fb-a294-3b0e7ea4d28d@ucsb.edu> Message-ID: On Tue, 4 Jun 2024, Dave Horsfall wrote: > When working for Lionel Singer's Sun Australia (a Sun reseller), we had > an entire room devoted to SunOS manuals; I wonder what happened to them > (the manuals, I mean)? Sun *Computer* Australia, of course; sigh... -- Dave From flexibeast at gmail.com Wed Jun 5 10:13:37 2024 From: flexibeast at gmail.com (Alexis) Date: Wed, 05 Jun 2024 10:13:37 +1000 Subject: [TUHS] Proliferation of book print styles In-Reply-To: (Warner Losh's message of "Tue, 4 Jun 2024 08:53:54 -0600") References: Message-ID: <8734psfdqm.fsf@gmail.com> Warner Losh writes: > What do people think of the newer markup languages like Markdown > or ASCII > Doctor? 
They seem more approachable than SGML or docbook, and a > bit easier > to understand, though with less control, than troff, LaTeX or > TeX. Speaking as someone who had to fight Markdown several years ago, when trying to write a converter from Markdown, and who found that programming language library authors generally seemed to assume you'd only ever want to convert to HTML ("No, we won't expose the parse tree"), this old critique by Ingo Schwarze strongly resonates with me: https://undeadly.org/cgi?action=article&sid=20170304230520 Alexis. From tuhs at tuhs.org Wed Jun 5 20:17:19 2024 From: tuhs at tuhs.org (Andrew Lynch via TUHS) Date: Wed, 5 Jun 2024 10:17:19 +0000 (UTC) Subject: [TUHS] most direct Unix descendant References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> Message-ID: <1324869037.1755756.1717582639424@mail.yahoo.com> Hi Out of curiosity, what would be considered the most direct descendent of Unix available today?  Yes, there are many descendants, but they've all gone down their own evolutionary paths.   Is it FreeBSD or NetBSD?  Something else?  I don't think it would be Minix or Linux because I remember when they came along, and it was well after various Unix versions were around. Does such a thing even exist anymore?  I remember using AT&T Unix System V and various BSD variants back in college in the 1980's.  System V was the "new thing" back then but was eventually sold and seems to have faded.  Maybe it is only available commercially, but it does not seem as prominent as it once was. Any thoughts? Thanks, Andrew Lynch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andreww591 at gmail.com Wed Jun 5 20:51:32 2024 From: andreww591 at gmail.com (Andrew Warkentin) Date: Wed, 5 Jun 2024 04:51:32 -0600 Subject: [TUHS] most direct Unix descendant In-Reply-To: <1324869037.1755756.1717582639424@mail.yahoo.com> References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: On Wed, Jun 5, 2024 at 4:17 AM Andrew Lynch via TUHS wrote: > > Hi > > Out of curiosity, what would be considered the most direct descendent of Unix available today? Yes, there are many descendants, but they've all gone down their own evolutionary paths. > > Is it FreeBSD or NetBSD? Something else? I don't think it would be Minix or Linux because I remember when they came along, and it was well after various Unix versions were around. > > Does such a thing even exist anymore? I remember using AT&T Unix System V and various BSD variants back in college in the 1980's. System V was the "new thing" back then but was eventually sold and seems to have faded. Maybe it is only available commercially, but it does not seem as prominent as it once was. > > Any thoughts? > What exactly do you mean by "most direct descendant of Unix"? Are you specifically talking about Research Unix? Both USG (SysIII/SysV) and BSD are actually more like side branches from Research Unix, and neither is really a continuation of it. After V7, Research Unix continued until V10, but was barely distributed outside Bell Labs and had relatively little direct influence on anything else; these late Research Unix versions did incorporate significant amounts of code from the side branches that took over the mainstream (especially BSD, although there may have been a bit of USG code incorporated as well). 
I'd say the closest thing to "the most direct modern descendant of Research Unix" would be Plan 9, which continued the development of the networking and extensibility features of late Research Unix, but significantly broke compatibility with Unix (sometimes in ways that are IMO not really worth the incompatibility). From tuhs at tuhs.org Wed Jun 5 23:46:55 2024 From: tuhs at tuhs.org (Andrew Lynch via TUHS) Date: Wed, 5 Jun 2024 13:46:55 +0000 (UTC) Subject: [TUHS] most direct Unix descendant In-Reply-To: References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: <378394076.1809602.1717595215796@mail.yahoo.com> On Wednesday, June 5, 2024 at 06:51:54 AM EDT, Andrew Warkentin wrote: On Wed, Jun 5, 2024 at 4:17 AM Andrew Lynch via TUHS wrote: > > Hi > > Out of curiosity, what would be considered the most direct descendent of Unix available today?  Yes, there are many descendants, but they've all gone down their own evolutionary paths. > > Is it FreeBSD or NetBSD?  Something else?  I don't think it would be Minix or Linux because I remember when they came along, and it was well after various Unix versions were around. > > Does such a thing even exist anymore?  I remember using AT&T Unix System V and various BSD variants back in college in the 1980's.  System V was the "new thing" back then but was eventually sold and seems to have faded.  Maybe it is only available commercially, but it does not seem as prominent as it once was. > > Any thoughts? > What exactly do you mean by "most direct descendant of Unix"? Are you specifically talking about Research Unix? Both USG (SysIII/SysV) and BSD are actually more like side branches from Research Unix, and neither is really a continuation of it. 
After V7, Research Unix continued until V10, but was barely distributed outside Bell Labs and had relatively little direct influence on anything else; these late Research Unix versions did incorporate significant amounts of code from the side branches that took over the mainstream (especially BSD, although there may have been a bit of USG code incorporated as well). I'd say the closest thing to "the most direct modern descendant of Research Unix" would be Plan 9, which continued the development of the networking and extensibility features of late Research Unix, but significantly broke compatibility with Unix (sometimes in ways that are IMO not really worth the incompatibility). Hi That's interesting.  I've been pondering this question for a while and suspected the answer is either "it doesn't exist" or "depends on who you ask" but I hadn't considered Research Unix.   For a long time, I considered AT&T System V to be the primary Unix descendant but have changed my mind and am now not sure.  The question is simple, but the answer seems quite complicated. Thanks, Andrew Lynch -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Thu Jun 6 03:34:58 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 05 Jun 2024 17:34:58 +0000 Subject: [TUHS] most direct Unix descendant In-Reply-To: <1324869037.1755756.1717582639424@mail.yahoo.com> References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: On Wednesday, June 5th, 2024 at 3:17 AM, Andrew Lynch via TUHS wrote: > Hi > > Out of curiosity, what would be considered the most direct descendent of Unix available today? > > ... > > Thanks, Andrew Lynch I don't think this question has one correct answer, rather, it really depends on your definition of purity. My two cents: From a purely source code perspective, much of System V and its kin can be traced back to the Research implementations of various bits. 
Between V7 and V8, Research incorporated a fair deal of BSD, but at a time prior to the majority of BSD being reimplemented as unencumbered source code, so in many parts of the codebase, the actual source still very much was descended from V7, just with some BSD "accent" incorporated. To me one of the most notable userland divergences in the commercial stream is the init system, what with commercial UNIX aligning more with what is seen in CB (and allegedly USG Program Generic 3, but I have no direct proof, just speculation based on alleged manpages.) In any case, if you did a huge diff of the source code between say V7 and SVR4, you would likely find a fair deal of commonality, especially in userland. Taking an alternate viewpoint, BSD, while entirely rewritten, strove for functional compatibility with the bits that were being replaced, and in many ways BSD "behaved" more like Research, in reality and in "spirit". Again using the init system as an example, to this day the BSDs use an init system much closer to Research init than USGs run-level system. BSD also shows up in many more UNIX "places" than System V does. Indeed primarily System V distributions over time incorporate aspects of BSD due to their proliferation elsewhere in the UNIX world, much more than commercial backflow in the other direction. Given this, my humble opinion (which again this sort of thing I believe is largely a philosophical matter of opinion...) is that the BSD line captures the spirit of Research UNIX much more than System V does, while System V retains much more of the source code lineage of what most folks would consider a "pure" UNIX. Of course all of this too is predicated on treating V7 (really 32V...) as that central point of divergence. Good luck in your quest to find the answer to this question. I suspect it has no concrete answer and rather is one of those more philosophical quandaries that makes UNIX something worth pondering on this level. 
That all said, eventually I intend via my mandiff project to determine which of the three "last" historic UNIX manuals (SVR4, 4.4BSD, V10) has the highest parity with V7 literature, and similar work has been attempted via source code (D. Spinellis git repo[1]), so if that sort of quantitative analysis is more your cup of tea, then it may be possible to boil it down to ratios of "is and isn't V7" in codebases...but that sort of thing doesn't paint the full picture. That and the linked git repo doesn't incorporate System V for legal reasons...a bridge I haven't had to cross yet as I'm between V6 and V7 on my own analysis presently. One last disclaimer as I know this question can also stir up matters of pride, this is all opinion, and I think only can be opinion at this point, but my opinion is also only based on observations from afar. I wasn't a key player in this stuff, those folks' thoughts carry much more weight than mine do, but I also suspect, like good parents, folks with more heft to their involvement in things know the value in not playing favorites and letting their issue stand on their own. - Matt G. P.S. Can you tell this is one of my favorite questions to ponder :) [1] - https://github.com/dspinellis/unix-history-repo/branches/all?page=9 From will.senn at gmail.com Thu Jun 6 03:51:19 2024 From: will.senn at gmail.com (Will Senn) Date: Wed, 5 Jun 2024 12:51:19 -0500 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: On 6/5/24 12:34 PM, segaloco via TUHS wrote: > On Wednesday, June 5th, 2024 at 3:17 AM, Andrew Lynch via TUHS wrote: > >> Hi >> >> Out of curiosity, what would be considered the most direct descendent of Unix available today? >> >> ... >> >> Thanks, Andrew Lynch > snip > Given this, my humble opinion (which again this sort of thing I believe is largely a philosophical matter of opinion...) 
is that the BSD line captures the spirit of Research UNIX much more than System V does, while System V retains much more of the source code lineage of what most folks would consider a "pure" UNIX. Of course all of this too is predicated on treating V7 (really 32V...) as that central point of divergence. When I saw this thread appear, I was of two minds about it, but this lines up with where my thoughts were headed. I've done a lot of delving into the v6/v7 environments over the last 10 years or so and it feels much closer in kinship to BSD derivatives than to SysV... source code lineages aside. Also, I get more mileage out of my BSD books and docs than those treating SysV. I'd vote for *BSD as sticking closest to the unix way, if there is still such a thing... I say this as I just typed 'kldload linux64' into freebsd's terminal so I could run sublime alongside nvi... sometimes I wish I was a purist, but I'm way too fond of experimentation :). Will From rminnich at gmail.com Thu Jun 6 04:02:16 2024 From: rminnich at gmail.com (ron minnich) Date: Wed, 5 Jun 2024 11:02:16 -0700 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: You could argue that the most direct descendant is the one in which all resources are presented and accessed via open/read/write/close. If your kernel has separate system calls for reading directories, or setting up network connections, or debugging processes, then you may not be a direct descendant, at least philosophically (and, yes, I know about ptrace ...) But your kernel might be Plan 9, which at least to me, is the direct descendant. 
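[Archivist's aside: Ron's open/read/write/close criterion can be sketched in a few lines. This is only an illustration of the uniform descriptor interface, not anyone's kernel code; the temp file and /dev/null are arbitrary stand-ins:]

```python
import os
import tempfile

# One interface for ordinary files...
fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")      # write(2)
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)        # read(2)
os.close(fd)                  # close(2)
os.unlink(path)

# ...and the identical calls for a device node. On Plan 9 the same four
# calls also cover directories, processes, and network connections,
# which is the "direct descendant" test described above.
fd = os.open("/dev/null", os.O_WRONLY)
os.write(fd, data)
os.close(fd)
print(data)  # b'hello\n'
```

[Modern Linux and the BSDs, by contrast, refuse plain read(2) on a directory and add getdents(2)/ptrace(2)-style calls, which is exactly the divergence being pointed at.]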
:-) On Wed, Jun 5, 2024 at 10:51 AM Will Senn wrote: > On 6/5/24 12:34 PM, segaloco via TUHS wrote: > > On Wednesday, June 5th, 2024 at 3:17 AM, Andrew Lynch via TUHS < > tuhs at tuhs.org> wrote: > > > >> Hi > >> > >> Out of curiosity, what would be considered the most direct descendent > of Unix available today? > >> > >> ... > >> > >> Thanks, Andrew Lynch > > snip > > Given this, my humble opinion (which again this sort of thing I believe > is largely a philosophical matter of opinion...) is that the BSD line > captures the spirit of Research UNIX much more than System V does, while > System V retains much more of the source code lineage of what most folks > would consider a "pure" UNIX. Of course all of this too is predicated on > treating V7 (really 32V...) as that central point of divergence. > When I saw this thread appear, I was of two minds about it, but this > lines up with where my thoughts were headed. I've done a lot of delving > into the v6/v7 environments over the last 10 years or so and it feels > much closer in kinship to BSD derivatives than to SysV... source code > lineages aside. Also, I get more mileage out of my BSD books and docs > than those treating SysV. I'd vote for *BSD as sticking closest to the > unix way, if there is still such a thing... I say this as I just typed > 'kldload linux64' into freebsd's terminal so I could run sublime > alongside nvi... sometimes I wish I was a purist, but I'm way too fond > of experimentation :). > > Will > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jefftwopointzero at gmail.com Thu Jun 6 04:22:36 2024 From: jefftwopointzero at gmail.com (Jeffrey Joshua Rollin) Date: Wed, 5 Jun 2024 19:22:36 +0100 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: Following this line of thought, - and with the disclaimer that my own personal existence begins roughly where what has been called “The Last True UNIX” [Seventh Edition] ends, I’d say that, if ESR - who I know can be controversial - is correct, “BSD won in the marketplace, but System V won the standards wars” or words to that effect. With that in mind, and given that NetBSD was forked from 386BSD and in turn gave rise to the other BSDs around today, it would be my candidate for “most direct descendant available today,” particularly if we’re talking wide availability. (Whilst V1-6 and beyond were of course only available to users of business and academic mainframes and minicomputers, I’d argue that the other two contenders, Solaris and HP-UX, are sufficiently rare in comparison to the availability even of the open source BSD’s that the word “available” would be doing some rather heavy lifting if I were to include them.) The BSDs (except macOS and whatever SCO’s cash cow is called this evening) are also open source, of course, which is inline with the spirit of early Unix. I’ve not done an audit - and am not qualified to - but I suspect the main objection to this line of thinking is that despite the fact it still runs on VAX, it would not surprise me in the least to find that (excluding comments, perhaps), not a single line of code remains the same in NetBSD 10 (and indeed several versions prior) to the equivalent in V7 - and again, I’ve no idea how much of V1 remains in V7, nor (other than knowing it was written in assembly) how closely early PDP-11 versions resembled PDP-7 versions. 
By then, I suspect we really are getting into the Ship of Theseus problem - as the ancient Greeks would have been familiar with the issue, by the time every single plank of Theseus’ Ship has been replaced because the old ones have decayed, is it really the Ship of Theseus anymore? Plus of course, though it’s more a legal issue than a philosophical one, not only at least one version of Mach-based macOS, but also one distribution of Linux - which is known not to contain either Minix or UNIX code - have been certified as UNIX by The Open Group. My 2c Jeff Sent from my iPhone > On 5 Jun 2024, at 18:51, Will Senn wrote: > > On 6/5/24 12:34 PM, segaloco via TUHS wrote: >>> On Wednesday, June 5th, 2024 at 3:17 AM, Andrew Lynch via TUHS wrote: >>> >>> Hi >>> >>> Out of curiosity, what would be considered the most direct descendent of Unix available today? >>> >>> ... >>> >>> Thanks, Andrew Lynch >> snip >> Given this, my humble opinion (which again this sort of thing I believe is largely a philosophical matter of opinion...) is that the BSD line captures the spirit of Research UNIX much more than System V does, while System V retains much more of the source code lineage of what most folks would consider a "pure" UNIX. Of course all of this too is predicated on treating V7 (really 32V...) as that central point of divergence. > When I saw this thread appear, I was of two minds about it, but this lines up with where my thoughts were headed. I've done a lot of delving into the v6/v7 environments over the last 10 years or so and it feels much closer in kinship to BSD derivatives than to SysV... source code lineages aside. Also, I get more mileage out of my BSD books and docs than those treating SysV. I'd vote for *BSD as sticking closest to the unix way, if there is still such a thing... I say this as I just typed 'kldload linux64' into freebsd's terminal so I could run sublime alongside nvi... sometimes I wish I was a purist, but I'm way too fond of experimentation :). 
> > Will From imp at bsdimp.com Thu Jun 6 04:41:16 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 5 Jun 2024 11:41:16 -0700 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: On Wed, Jun 5, 2024, 11:23 AM Jeffrey Joshua Rollin < jefftwopointzero at gmail.com> wrote: > Following this line of thought, - and with the disclaimer that my own > personal existence begins roughly where what has been called “The Last True > UNIX” [Seventh Edition] ends, I’d say that, if ESR - who I know can be > controversial - is correct, “BSD won in the marketplace, but System V won > the standards wars” or words to that effect. > > With that in mind, and given that NetBSD was forked from 386BSD and in > turn gave rise to the other BSDs around today, That is not true. FreeBSD imported the 386BSD plus patchkit patches into its CVS tree. It did not import NetBSD's source, though NetBSD did import the same sources into their CVS repo days (or maybe weeks) earlier. Much of this early history, though, is not widely available as the early NetBSD and FreeBSD CVS repos are not available in their original form due to the AT&T lawsuit. And then the redo by these groups of the 4.4BSD import - the 4.4BSD-lite and lite 2 rebases of both projects - further muddies the waters, since both were then based on approximately the same pristine sources from CSRG, rendering the earlier messiness perhaps moot. Or perhaps not, but not a point that has universal agreement, even among those involved in doing the work. It also gets muddy because the original patchkit authors splintered to form both NetBSD and FreeBSD in a way that's most kindly described as messy, so much spin was broadcast to characterize who was first or best. The truth is that the split was messy and definitive statements around this are troublesome at best. Warner it would be my candidate for “most direct descendant available today,” > particularly if we’re talking wide availability. 
(Whilst V1-6 and beyond > were of course only available to users of business and academic mainframes > and minicomputers, I’d argue that the other two contenders, Solaris and > HP-UX, are sufficiently rare in comparison to the availability even of the > open source BSD’s that the word “available” would be doing some rather > heavy lifting if I were to include them.) The BSDs (except macOS and > whatever SCO’s cash cow is called this evening) are also open source, of > course, which is inline with the spirit of early Unix. > > I’ve not done an audit - and am not qualified to - but I suspect the main > objection to this line of thinking is that despite the fact it still runs > on VAX, it would not surprise me in the least to find that (excluding > comments, perhaps), not a single line of code remains the same in NetBSD 10 > (and indeed several versions prior) to the equivalent in V7 - and again, > I’ve no idea how much of V1 remains in V7, nor (other than knowing it was > written in assembly) how closely early PDP-11 versions resembled PDP-7 > versions. By then, I suspect we really are getting into the Ship of Theseus > problem - as the ancient Greeks would have been familiar with the issue, by > the time every single plank of Theseus’ Ship has been replaced because the > old ones have decayed, is it really the Ship of Theseus anymore? > > Plus of course, though it’s more a legal issue than a philosophical one, > not only at least one version of Mach-based macOS, but also one > distribution of Linux - which is known not to contain either Minix or UNIX > code - have been certified as UNIX by The Open Group. > > My 2c > > Jeff > > Sent from my iPhone > > > On 5 Jun 2024, at 18:51, Will Senn wrote: > > > > On 6/5/24 12:34 PM, segaloco via TUHS wrote: > >>> On Wednesday, June 5th, 2024 at 3:17 AM, Andrew Lynch via TUHS < > tuhs at tuhs.org> wrote: > >>> > >>> Hi > >>> > >>> Out of curiosity, what would be considered the most direct descendent > of Unix available today? 
> >>> > >>> ... > >>> > >>> Thanks, Andrew Lynch > >> snip > >> Given this, my humble opinion (which again this sort of thing I believe > is largely a philosophical matter of opinion...) is that the BSD line > captures the spirit of Research UNIX much more than System V does, while > System V retains much more of the source code lineage of what most folks > would consider a "pure" UNIX. Of course all of this too is predicated on > treating V7 (really 32V...) as that central point of divergence. > > When I saw this thread appear, I was of two minds about it, but this > lines up with where my thoughts were headed. I've done a lot of delving > into the v6/v7 environments over the last 10 years or so and it feels much > closer in kinship to BSD derivatives than to SysV... source code lineages > aside. Also, I get more mileage out of my BSD books and docs than those > treating SysV. I'd vote for *BSD as sticking closest to the unix way, if > there is still such a thing... I say this as I just typed 'kldload linux64' > into freebsd's terminal so I could run sublime alongside nvi... sometimes I > wish I was a purist, but I'm way too fond of experimentation :). > > > > Will > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jefftwopointzero at gmail.com Thu Jun 6 05:17:51 2024 From: jefftwopointzero at gmail.com (Jeffrey Joshua Rollin) Date: Wed, 5 Jun 2024 20:17:51 +0100 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... 
URL: From andreww591 at gmail.com Thu Jun 6 09:07:50 2024 From: andreww591 at gmail.com (Andrew Warkentin) Date: Wed, 5 Jun 2024 17:07:50 -0600 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: <1324869037.1755756.1717582639424.ref@mail.yahoo.com> <1324869037.1755756.1717582639424@mail.yahoo.com> Message-ID: On Wed, Jun 5, 2024 at 12:02 PM ron minnich wrote: > > You could argue that the most direct descendant is the one in which all resources are presented and accessed via open/read/write/close. > > If your kernel has separate system calls for reading directories, or setting up network connections, or debugging processes, then you may not be a direct descendant, at least philosophically (and, yes, I know about ptrace ...) > > But your kernel might be Plan 9, which at least to me, is the direct descendant. :-) > Even Plan 9's model is more like "all I/O is a file" and not "literally everything is a file", since regular process memory is still anonymous and fork()/rfork() are still system calls. I've never seen an OS that puts together the "all memory is a file" of Multics and the "all I/O is a file" of Plan 9. I think the one I'm working on is probably the first. Its public "system call" API (actually a jump table into a static shared library; the real microkernel system calls will be considered a private implementation detail) will just consist of read()/write()/seek()-like calls plus a few support functions to go with them; even things like open() and close() will be RPCs over a permanently-open channel file, and process/thread creation and memory allocation will be done through /proc (there will of course be a library interface over this that implements regular Unix APIs). 
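[A small aside on the "everything is a file" idea above - this is only an illustration of mine, not Andrew's proposed API, and it assumes a Linux-style /proc: existing systems already expose process *state* as files answered by ordinary read(2) calls, even though process *creation* still goes through dedicated system calls.]

```shell
# Illustration only (assumes a Linux-style /proc; not the API described above).
# Process state is fetched with the same read(2) used for any file --
# no dedicated "query process" system call is involved.
pid=$$                                # this shell's process ID
cat /proc/$pid/comm                   # process name, exposed as a one-line file
grep '^State:' /proc/$pid/status     # scheduling state, plain text via read()
```

The gap Andrew points at is exactly the part this cannot show: on Linux you can read a process through /proc, but you cannot create one by writing there.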
From tuhs at tuhs.org Thu Jun 6 09:21:35 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 05 Jun 2024 23:21:35 +0000 Subject: [TUHS] Wiki Page on UNIX Standards Message-ID: Good day everyone, I just wanted to share that I've put up a bit of info as well as some book covers concerning UNIX standards that were published from the '80s till now: https://wiki.tuhs.org/doku.php?id=publications:standards I did my best to put down a bit of information about the /usr/group, POSIX, SVID, and SUS/Open Group standards, although there's certainly more to each story than what I put down there. Still, hopefully it serves to lay out a bit of the history of the actual standards produced over time. I'm kicking myself because one of the things I could've produced a picture of but didn't save at the time is the cover of IEEE 1003.2; a copy of this popped up on eBay some time in the past year, and for reasons I can't recall I didn't order it, nor did I save the picture from the auction at the time. In any case, if anyone has any published standards that are not visually represented in this article, I'm happy to add any photos or scans you can provide to the page. Also pardon if the bit on spec 1170/SUS may be shorter than the others. Admittedly even having most of this on the desk in front of me right now, I'm fuzzy on the lines between POSIX, the Single UNIX Specification, the "Open Group Specification", spec 1170, etc., or if these are all names that ultimately just refer to different generations of the same thing. Part of getting this information put down is hoping someone will be along to correct inaccuracies :) Anywho, that's all for now. Feel free to suggest any corrections or additions! - Matt G. 
From tuhs at tuhs.org Thu Jun 6 09:47:33 2024 From: tuhs at tuhs.org (Alan Coopersmith via TUHS) Date: Wed, 5 Jun 2024 16:47:33 -0700 Subject: [TUHS] Mike Karels has died In-Reply-To: <58f15238-c5f3-4e51-920b-c718ff616cff@case.edu> References: <58f15238-c5f3-4e51-920b-c718ff616cff@case.edu> Message-ID: <323e5137-236c-4d88-9d2a-2ff2356effaa@oracle.com> On 6/4/24 12:23, Chet Ramey via TUHS wrote: > Sad, horrible news. > > https://www.facebook.com/groups/BSDCan/permalink/10159552565206372/ For those who prefer non-Facebook links: https://io.mwl.io/@mwl/112558631795149050 https://freebsdfoundation.org/mike_karels/ https://twitter.com/cperciva/status/1798436210261823543 -alan- From ralph at inputplus.co.uk Thu Jun 6 19:55:02 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Thu, 06 Jun 2024 10:55:02 +0100 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: <20240606095502.AD4EE210F4@orac.inputplus.co.uk> Hi, There's a chart of the connections between Unix versions at https://en.wikipedia.org/wiki/List_of_Unix_systems, though I dislike the lack of direction given there are some arcs with little incline. It says it's based on https://www.levenez.com/unix/ where Éric notes his chart is not limited to just source-code transfer. -- Cheers, Ralph. From steffen at sdaoden.eu Fri Jun 7 05:49:01 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Thu, 06 Jun 2024 21:49:01 +0200 Subject: [TUHS] most direct Unix descendant In-Reply-To: <20240606095502.AD4EE210F4@orac.inputplus.co.uk> References: <20240606095502.AD4EE210F4@orac.inputplus.co.uk> Message-ID: <20240606194901.F5bDRUkh@steffen%sdaoden.eu> Ralph Corderoy wrote in <20240606095502.AD4EE210F4 at orac.inputplus.co.uk>: |There's a chart of the connections between Unix versions at |https://en.wikipedia.org/wiki/List_of_Unix_systems, though I dislike the |lack of direction given there are some arcs with little incline. 
|It says it's based on https://www.levenez.com/unix/ where Éric notes his
|chart is not limited to just source-code transfer.

I also admire that FreeBSD and NetBSD keep on maintaining the bsd-family-tree (and in the original form, not that dots thing, or how it was called). So that starts with

First Edition (V1)
       |
Second Edition (V2)
       |
Third Edition (V3)
       |
Fourth Edition (V4)
       |
Fifth Edition (V5)
       |
Sixth Edition (V6) -----*
       \                |
        \               |
         \              |
Seventh Edition (V7)----|----------------------*
       \                |                      |
        \              1BSD                    |
        32V             |                      |
          \            2BSD---------------*    |
           \          /                   |    |
            \        /                    |    |
             \/                           |    |
            3BSD                          |    |
              |                           |    |
            4.0BSD                      2.79BSD
              |                           |
              |                           |
            4.1BSD --------------> 2.8BSD <-*
              |                       |
            4.1aBSD -----------\      |
              |                 \     |
            4.1bBSD              \    |
              |                   \   |
   *------ 4.1cBSD --------------> 2.9BSD
  /           |                       |
Eighth Edition |                2.9BSD-Seismo
  |           |                       |
  +----<--- 4.2BSD                2.9.1BSD
                                     ...

and says

Multics          1965
UNIX             Summer 1969
                 DEC PDP-7
First Edition    1971-11-03 [QCU]
                 DEC PDP-11/20, Assembler
Second Edition   1972-06-12 [QCU]
                 10 UNIX installations
Third Edition    1973-02-xx [QCU]
                 Pipes, 16 installations
Fourth Edition   1973-11-xx [QCU]
                 rewriting in C effected, above 30 installations
Fifth Edition    1974-06-xx [QCU]
                 above 50 installations
Sixth Edition    1975-05-xx [QCU]
                 port to DEC Vax
Seventh Edition  1979-01-xx [QCU]
                 1979-01-10 [TUHS]
                 first portable UNIX

.. with a nice Bibliography with falsely underscored headline plus URL:

https://cgit.freebsd.org/src/tree/share/misc/bsd-family-tree

It also covers the system most of you are using (later).

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

From tuhs at tuhs.org Fri Jun 7 15:00:09 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Fri, 07 Jun 2024 05:00:09 +0000 Subject: [TUHS] CSRC Involvement in Non-UNIX AT&T Software Projects Message-ID: I'm reading about the Automatic Intercept System as discussed in BSTJ Vol. 53 No. 1 this evening. 
It is a stored program control call handling system designed to respond to calls with potential forwarding or disconnection messages. Reading through the description of the operating system for AIS got me wondering: What with the growing experience in the CSRC regarding kernel technologies and systems programming, was there ever any crossover regarding UNIX folks applying their developments to other non-UNIX AT&T systems projects or vice versa, perhaps folks who worked primarily on switching and support software bringing things over to the UNIX development camp? In other words, was there personnel cross-pollination between Bell System UNIX programmers and the folks working on stuff like AIS, ESS switching software, etc.? Or were the aims and implementation of such projects so different that the resources were relatively siloed? I would imagine some of these projects were at least developed using UNIX given the popularity and demands of PWB. That's just my hunch though, some BSTJs also describe software development and maintenance taking place on S/360 and S/370 machines and various PDPs. Indeed the development process for AIS mentioned above, as of late 1971, involved assembly via S/360 software and then system maintenance and debugging via an attached PDP-9. - Matt G. From kevin.bowling at kev009.com Fri Jun 7 15:32:40 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Thu, 6 Jun 2024 22:32:40 -0700 Subject: [TUHS] CSRC Involvement in Non-UNIX AT&T Software Projects In-Reply-To: References: Message-ID: On Thu, Jun 6, 2024 at 10:00 PM segaloco via TUHS wrote: > > I'm reading about the Automatic Intercept System as discussed in BSTJ Vol. 53 No. 1 this evening. It is a stored program control call handling system designed to respond to calls with potential forwarding or disconnection messages. 
Reading through the description of the operating system for AIS got me wondering: > > What with the growing experience in the CSRC regarding kernel technologies and systems programming, was there ever any crossover regarding UNIX folks applying their developments to other non-UNIX AT&T systems projects or vice versa, perhaps folks who worked primarily on switching and support software bringing things over to the UNIX development camp? In other words, was there personnel cross-pollination between Bell System UNIX programmers and the folks working on stuff like AIS, ESS switching software, etc.? Or were the aims and implementation of such projects so different that the resources were relatively siloed? https://en.wikipedia.org/wiki/Alexander_G._Fraser bio is an example of what you seem to be after > I would imagine some of these projects were at least developed using UNIX given the popularity and demands of PWB. That's just my hunch though, some BSTJs also describe software development and maintenance taking place on S/360 and S/370 machines and various PDPs. Indeed the development process for AIS mentioned above, as of late 1971, involved assembly via S/360 software and then system maintenance and debugging via an attached PDP-9. > > - Matt G. From arnold at skeeve.com Fri Jun 7 17:32:24 2024 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 07 Jun 2024 01:32:24 -0600 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: Message-ID: <202406070732.4577WO39691963@freefriends.org> Warner Losh wrote: > At the risk of venturing too far off into the weeds (though maybe it's too > late for that) > > What do people think of the newer markup languages like Markdown or ASCII > Doctor? They seem more approachable than SGML or docbook, and a bit easier > to understand, though with less control, than troff, LaTeX or TeX. 
Having written books in troff, DocBook (SGML and XML), Texinfo and AsciiDoc, I can say that the latter two are much more pleasant than the former two. AsciiDoc is quite nice once you get used to it, but sometimes getting it to lay out things exactly the way you want can be difficult. Also, there aren't good free software toolchains for it to produce really nice output. The production process for the AsciiDoc book went AsciiDoc --> HTML --> Proprietary Formatter (Antenna House) --> PDF. I have not written much MarkDown, but I agree that it's too sparse for serious (book length) work. My two cents, Arnold From peter.martin.yardley at gmail.com Fri Jun 7 17:58:32 2024 From: peter.martin.yardley at gmail.com (Peter Yardley) Date: Fri, 7 Jun 2024 17:58:32 +1000 Subject: [TUHS] Proliferation of book print styles In-Reply-To: References: <4CED5BE6-75C1-4A4B-B730-AF2A79150426@gmail.com> <9854affc-bc27-42fb-a294-3b0e7ea4d28d@ucsb.edu> Message-ID: <03F9B732-9EB0-4A02-9D35-C1E57A16E6F4@gmail.com> I can remember using Interleaf and Mentor Graphics “Doc”. Semi-wysiwyg systems, both a pleasure to use once you got used to them. Interleaf was quite advanced and was used by a few publishing houses. Chapters were in separate files (helped at the time) brought together by an index file. Doc was used by Boeing and was designed to produce military grade SGML. It had multiple revision streams, potentially by different authors, which could be coloured to highlight changes. I wasn’t trying to do any mathematics tho. > On 5 Jun 2024, at 8:54 AM, Dave Horsfall wrote: > > On Tue, 4 Jun 2024, Dave Horsfall wrote: > >> When working for Lionel Singer's Sun Australia (a Sun reseller), we had >> an entire room devoted to SunOS manuals; I wonder what happened to them >> (the manuals, I mean)? > > Sun *Computer* Australia, of course; sigh... 
> > -- Dave Peter Yardley peter.martin.yardley at gmail.com From douglas.mcilroy at dartmouth.edu Fri Jun 7 21:44:02 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Fri, 7 Jun 2024 07:44:02 -0400 Subject: [TUHS] CSRC Involvement in Non-UNIX AT&T Software Projects Message-ID: > was there ever any crossover regarding UNIX folks applying their developments to other non-UNIX AT&T systems Besides Sandy Fraser's long-term effort to advance digital communication (as distinct from digital transmission), there was TPC; see TUHS https://www.tuhs.org/pipermail/tuhs/2020-April/020802.html and other mentions of TPC in the TUHS archives. Ken Thompson did considerable handholding for early adopters of Unix for applications within the Bell System, notably tracking automatic trouble reports from switching systems and managing the workflow of craftspeople in a wire center. Bob Morris's intimate participation in a submarine signal-processing project that Bell Labs contracted to produce for the US Navy set him on a career path that led to becoming chief scientist at NSA's National Computer Security Center. Gerard Holzmann collaborated to instill model-checking in switching and transmission projects. Andrew Hume spent much time with AT&T's call records. Lorinda Cherry single-handedly automated the analysis of call centers' notes on customer contacts. This enabled detection of significant human-engineering and public-relations problems. An important part of my role as a department head was to maintain contacts with development labs so that R and D were mutually aware of each other's problems and expertise. This encouraged consulting visits, internships, and occasionally extended collaboration or specific research projects as recounted above. Doug -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From will.senn at gmail.com Sat Jun 8 00:44:33 2024 From: will.senn at gmail.com (Will Senn) Date: Fri, 7 Jun 2024 09:44:42 -0500 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner Message-ID: Well, I really dove into using vi by way of nvi this past week and I have to say... the water's fine. It turns out that vi is a much simpler editor to understand and use (but no less powerful) than its great-grandchild, vim. To be fair, vim is an interesting editor and having used it off and on since the mid '90s, it's very familiar... but its powers? difficult to attain. vim stands as an excellent example of just how far you can take a product that works, keeping its core, but expanding it in all directions. No matter how much I tried to grasp its essence, it eluded me. The online help files seemed inscrutable to me, mixing environment settings, ex commands, ex mode commands, vi motions, insert mode commands in ways that seemed quite confusing to me. I know that I'm probably among a select few vim users that love it without really having a clue as to how it works. My best resource has been the web and search, but of late I've been wanting more. That's what drove me on this quest to really dig in to how things used to work and nvi is the best surrogate of the old ways that I could find (well, excluding heirloom vi, the traditional vi, which I've confirmed works pretty much the same way as nvi, with lisp support and without a few nice-to-haves nvi has). Anyway, here's something I worked out, after much travail - vi appears to be all about modes, movement, counts, operators, registers, and screens (which I found very weird at first, but simple in retrospect)... with these fundamentals down, it seems like you can do anything and I mean anything... and all of the other functions are just bolted on for the purpose of making their specific tasks work. Getting this out of the existing documentation was a mess. 
Thankfully the nvi docs (based on the 4.4 docs) are much slimmer and better organized. Even so, they make assumptions of the reader that I don't seem to fit. Take motions as prolly the most glaring example - all of the docs I've ever seen organize these by logical units of text (words, paras, etc); personally and apparently persistently, I think of motion as directed, so it took me a lot of experimentation, head scratching, and writing things out several times in several different ways to realize I could represent the idea on a single notecard as (some commands appear in multiple lines):

Leftward motions - [[, {, (, 0, ^|_, B, b, h|^H
Rightward Movement - l|SP, e, E, w, W, $, ), }, ]]
Upward motions - 1G, ^B, H, ^U, -, k | ^P
Downward motions - G, ^F, L, ^D, ^M | +, j | ^J | ^N
Absolute - | G
Relative - %, H, M, L
Marks - m, ', `, '', ``

Keeping in mind that movements left-to-right are - section, para, sentence, line, text, word and endword (big, and small), and letter. And up and down are - file, screen, in screen (HML), half-screen, chars-in-line, and line. For me, this inversion from units of motion to direction of motion put forty some-odd commands in much closer reach for me. Looking back at the vim documentation, I see how its sheer volume and the way it is organized got in the way of my seeing the forest. Thankfully, in nvi, there are two incredibly useful ex commands to help - exu[sage] and viu[sage]. I simply printed these out and worked with them making the experimental assumption that they would serve as a baseline that represented the full capabilities of vi... and sure enough, after working and working with them, I am pretty confident they are sufficient for any editing task. Wow, who knew? I loved vi, but now, I'm starting to really appreciate its simplicity?! I can't believe those words are coming out of my mouth. I never thought of it as simple... those movement commands were far too numerous as I understood them. Are there things I miss from vim? 
Sure, I miss command line completion in ex mode, I want my help text to appear in a window where I can search, I would like better window control. But, I think I'll stick with nvi a while until I really nail it down. Then all of the cool stuff that vim offers, or neovim, will seem like icing on the cake that is vi. Thanks to Ken Thompson for writing a work of art that serves as the true core of this editor,  to Bill Joy for his great work in extending it, again to Bill Joy for bringing vi to life, and to Mary Ann for the macros and making it accessible to the rest of us, and others who contributed. It's 2024 and I still can't find a better terminal editor than vi... as it existed in the late '80s or as it exists today as nvi/vim/neovim. Amazing piece of software. Off to figure out tags!! Arg, seems like it oughtta be really useful in my work with source code, why can't I figure it out?! Sheesh. Will From ralph at inputplus.co.uk Sat Jun 8 01:41:20 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Fri, 07 Jun 2024 16:41:20 +0100 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: <20240607154120.08031210F3@orac.inputplus.co.uk> Hi Will, > But, I think I'll stick with nvi a while until I really nail it down. Something I used to do was to look at each key on the keyboard and think what it would do, e.g. d, D, and ^D. Most do at least one thing. > Leftward motions - [[, {, (, 0, ^|_, B, b, h|^H > Rightward Movement - l|SP, e, E, w, W, $, ), }, ]] You're missing these handy six: f F t T ; , > Upward motions - 1G, ^B, H, ^U, -, k | ^P > Downward motions - G, ^F, L, ^D, ^M | +, j | ^J | ^N There's also keeping the cursor on the same line but moving the window over the text: z ^E ^Y > Off to figure out tags Understand the format of the tags file first; built by ctags(1). ^] on a word looks it up and goes there. Where you were is pushed on to the ‘tagstack’. 
When you wish to exit that rabbit hole, ^T pops the top of the stack and goes there, which returns you to where you pressed ^].

$ func='foo bar xyzzy'
$ printf "%s: $func"'\n' $func >src
$ cat src
foo: foo bar xyzzy
bar: foo bar xyzzy
xyzzy: foo bar xyzzy
$ grep -n '[^:]*' src | awk -F: '{print $2 "\tsrc\t" $1}' >tags
$ sed -n l tags
foo\tsrc\t1$
bar\tsrc\t2$
xyzzy\tsrc\t3$
$

vi src, move to a word, ^] and it will move you to the ‘definition’ line. Imagine each line is a function definition with calls to other functions. You're wandering down and up a ‘call tree’, following possible execution paths. -- Cheers, Ralph. From will.senn at gmail.com Sat Jun 8 02:20:35 2024 From: will.senn at gmail.com (Will Senn) Date: Fri, 7 Jun 2024 11:20:35 -0500 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: <20240607154120.08031210F3@orac.inputplus.co.uk> References: <20240607154120.08031210F3@orac.inputplus.co.uk> Message-ID: <05302a0e-58b6-450c-8554-4546a409d603@gmail.com> Thanks for the tips, Ralph. I definitely learned the 6, but put 'em on a card with searching. Here's my motion card, including the scrolling commands, mark movement and a couple odd balls, to see how my mind works :) : https://decuser.github.io/assets/img/vi/motions-notecard.jpg On another note, I was reminded in an offline discussion that QED was the predecessor to ed - my history of tech always seems to glitch at the genesis of Unix. Of course the Unix pioneers didn't say "Let there be Unix and so mote it be" even though it may seem so to some of us. A lot of intellectual blood, sweat, and tears went into what came before and Ken Thompson definitely stood on the shoulders of Lampson, Deutsch, Kleene and others to create his masterpiece. 
Ritchie's partial history of QED
https://www.bell-labs.com/usr/dmr/www/qed.html

Deutsch & Lampson's work
https://dl.acm.org/doi/pdf/10.1145/363848.363863

Thompson's innovation
https://dl.acm.org/doi/pdf/10.1145/363347.363387

Kleene's indirect contribution (Automata<->Regular Expression)
https://www.logicmatters.net/tyl/booknotes/kleene-metamath/

Later, Will On 6/7/24 10:41 AM, Ralph Corderoy wrote: > Hi Will, > >> But, I think I'll stick with nvi a while until I really nail it down. > Something I used to do was to look at each key on the keyboard and think > what it would do, e.g. d, D, and ^D. Most do at least one thing. > >> Leftward motions - [[, {, (, 0, ^|_, B, b, h|^H >> Rightward Movement - l|SP, e, E, w, W, $, ), }, ]] > You're missing these handy six: f F t T ; , > >> Upward motions - 1G, ^B, H, ^U, -, k | ^P >> Downward motions - G, ^F, L, ^D, ^M | +, j | ^J | ^N > There's also keeping the cursor on the same line but moving the window > over the text: z ^E ^Y > >> Off to figure out tags > Understand the format of the tags file first; built by ctags(1). > ^] on a word looks it up and goes there. > Where you were is pushed on to the ‘tagstack’. > When you wish to exit that rabbit hole, ^T pops the top of the stack and > goes there which returns you to where you pressed ^]. > > $ func='foo bar xyzzy' > $ printf "%s: $func"'\n' $func >src > $ cat src > foo: foo bar xyzzy > bar: foo bar xyzzy > xyzzy: foo bar xyzzy > $ grep -n '[^:]*' src | awk -F: '{print $2 "\tsrc\t" $1}' >tags > $ sed -n l tags > foo\tsrc\t1$ > bar\tsrc\t2$ > xyzzy\tsrc\t3$ > $ > > vi src, move to a word, ^] and it will move you to the ‘definition’ > line. Imagine each line is a function definition with calls to other > functions. You're wandering down and up a ‘call tree’, following > possible execution paths. 
> From akosela at andykosela.com Sat Jun 8 02:58:14 2024 From: akosela at andykosela.com (Andy Kosela) Date: Fri, 7 Jun 2024 18:58:14 +0200 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: <05302a0e-58b6-450c-8554-4546a409d603@gmail.com> References: <20240607154120.08031210F3@orac.inputplus.co.uk> <05302a0e-58b6-450c-8554-4546a409d603@gmail.com> Message-ID: On Friday, June 7, 2024, Will Senn wrote: > Thanks for the tips, Ralph. I definitely learned the 6, but put 'em on a > card with searching. Here's my motion card, including the scrolling > commands, mark movement and a couple odd balls, to see how my mind works :) > : > > https://decuser.github.io/assets/img/vi/motions-notecard.jpg > > The best thing about vi/nvi/vim is that you do not need to know all the arcane commands for it to be usable. You can get along quite happily using just a small subset of them. The beauty of vi lies in the interface minimalism and ubiquity. It is still my favorite editor after all these years, and I still use it on Linux/*BSD/MS-DOS/AmigaOS. --Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrochkind at gmail.com Sat Jun 8 03:29:29 2024 From: mrochkind at gmail.com (Marc Rochkind) Date: Fri, 7 Jun 2024 11:29:29 -0600 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: For me, my introduction to vi came with my introduction to working on a CRT (which is what I think we called screens) after years with paper terminals and ed. Wow, it was so great! But, vi has a fundamental flaw: It's modal. For example, typing an "a" might enter the character "a", or it might initiate append mode. Having worked with computer beginners for about 50 years, I can say with assurance that this is very difficult to learn. We encountered this as early as the mid-1970s when we were trying to get mainframe programmers to use the Programmers Workbench. 
Other editors didn't work that way. I think emacs was non-modal, although I never used it. (Could be wrong.) That said, there's no question but that vi could be learned and, once learned, that it was extremely productive. If a program operates consistently, it can always be learned no matter how poor the UI is. This is not to diminish Bill Joy's accomplishment, nor to deny that he was severely limited by the hardware available to him. (Richard Stallman had a much better terminal.) But, sadly, vi continued to be promoted long after the hardware improved. It's very hard to dislodge something so entrenched! Much of UNIX was that way: Criticisms of its UI (notably Don Norman in 1981 or thereabouts) were refuted by those who had already learned UNIX. I recall that it was Mike Lesk who responded directly to Don Norman. (My memory is really stretching here.) Easy-to-learn and easy-to-use are very different, as we now all know. Marc On Fri, Jun 7, 2024 at 8:44 AM Will Senn wrote: > Well, I really dove into using vi by way of nvi this past week and I > have to say... the water's fine. It turns out that vi is a much simpler > editor to understand and use (but no less powerful) than it's great > grandchild, vim. To be fair, vim is an interesting editor and having > used it off and on since the mid '90s, it's very familiar... but its > powers? difficult to attain. > > vim stands as an excellent example of just how far you can take a > product that works, keeping its core, but expanding it in all > directions. No matter how much I tried to grasp its essence, it alluded > me. The online help files seemed inscrutable to me, mixing environment > settings, ex commands, ex mode commands, vi motions, insert mode > commands in ways that seemed quite confusing to me. I know that I'm > probably among a select few vim users that love it without really having > a clue as to how it works. My best resource has been the web and search, > but of late I've been wanting more. 
That's what drove me on this quest > to really dig in to how things used to work and nvi is the best > surrogate of the old ways that I could find (well, excluding heirloom > vi, the traditional vi, which I've confirmed works pretty much the same > way as nvi, with lisp support and without a few nice-to-haves nvi has). > > Anyway, here's something I worked out, after much travail - vi appears > to be all about modes, movement, counts, operators, registers, and > screens (which I found very weird at first, but simple in retrospect)... > with these fundamentals down, it seems like you can do anything and I > mean anything... and all of the other functions are just bolted on for > the purpose of making their specific tasks work. > > Getting this out of the existing documentation was a mess. Thankfully > the nvi docs (based on the 4.4 docs) are much slimmer and better > organized. Even so, they make assumptions of the reader that I don't > seem to fit. Take motions as prolly the most glaring example - all of > the docs I've ever seen organize these by logical units of text (words, > paras, etc), personally and apparently persistently, I think of motion > as directed, so it took me a lot of experimentation, head scratching, > and writing things out several times in several different ways to > realize I could represent the idea on a single notecard as (some > commands appear in multiple lines): > > Leftward motions - [[, {, (, 0, ^|_, B, b, h|^H > Rightward Movement - l|SP, e, E, w, W, $, ), }, ]] > Upward motions - 1G, ^B, H, ^U, -, k | ^P > Downward motions - G, ^F, L, ^D, ^M | +, j | ^J | ^N > Absolute - | G > Relative - %, H, M, L > Marks - m, ', `, '', `` > > Keeping in mind that movements left-to-right are - section, para, > sentence, line, text, word and endword (big, and small), and letter. And > up and down are - file, screen, in screen (HML), half-screen, > chars-in-line, and line. 
For me, this inversion from units of motion to > direction of motion put forty some-odd commands in much closer reach for > me. Looking back at the vim documentation, I see how its sheer volume > and the way it is organized got in the way of my seeing the forest. > > Thankfully, in nvi, there are two incredibly useful ex commands to help > - exu[sage] and viu[sage]. I simply printed these out and worked with > them making the experimental assumption that they would serve as a > baseline that represented the full capabilities of vi... and sure > enough, after working and working with them, I am pretty confident they > are sufficient for any editing task. Wow, who knew? I loved vi, but > now, I'm starting to really appreciate its simplicity?! I can't believe > those words are coming out of my mouth. I never thought of it as > simple... those movement commands were far too numerous as I understood > them. > > Are there things I miss from vim? Sure, I miss command line completion > in ex mode, I want my help text to appear in a window where I can > search, I would like better window control. But, I think I'll stick with > nvi a while until I really nail it down. Then all of the cool stuff that > vim offers, or neovim, will seem like icing on the cake that is vi. > > Thanks to Ken Thompson for writing a work of art that serves as the true > core of this editor, to Bill Joy for his great work in extending it, > again to Bill Joy for bringing vi to life, and to Mary Ann for the > macros and making it accessible to the rest of us, and others who > contributed. It's 2024 and I still can't find a better terminal editor > than vi... as it existed in the late '80s or as it exists today as > nvi/vim/neovim. Amazing piece of software. > > Off to figure out tags!! Arg, seems like it oughtta be really useful in > my work with source code, why can't I figure it out?! Sheesh.
> > Will > -- *My new email address is mrochkind at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Sat Jun 8 08:12:30 2024 From: tuhs at tuhs.org (Scot Jenkins via TUHS) Date: Fri, 07 Jun 2024 18:12:30 -0400 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: <202406072212.457MCUTa002433@sdf.org> Will Senn wrote: > vim stands as an excellent example of just how far you can take a > product that works, keeping its core, but expanding it in all > directions. vim has a *lot* of knobs to twist, all of which must be in just the right position for it to be comfortably usable, in my opinion. I got annoyed with many of the default features, like the auto indenting and getting stuck in comment mode. Start a comment in vim and try to get out of that mode. I found I spent too much time trying to figure out how to turn off these things so I generally went back to straight vi as my daily editor. I use ed(1) a lot too for quick edits. vim is great for the syntax highlighting when coding or editing HTML though. It makes it easy to spot errors. > Off to figure out tags!! Arg, seems like it oughtta be really useful in > my work with source code, why can't I figure it out?! Sheesh. I think the best way to learn vi/vim features is from watching someone else use it. You pick up a lot of useful tricks. Mike Shah has many great videos; here are a couple vi/vim related ones. 1. Why I'm Still using Vim in 2024 - A Brief Introduction and Demo https://www.youtube.com/watch?v=e4E6nQpd7Xs This is a good quick intro to using vi/vim. 2. [Dlang Episode 31] D Language - ctags with dscanner for VIM (and ctags with phobos demonstration) https://www.youtube.com/watch?v=vMF7NxF_HFY While he uses the D programming language for this video, it is a great demo of how to use ctags.
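For anyone who wants the gist without the videos, the basic tags loop looks like this (a sketch only; flag spellings and key bindings vary between BSD ctags, Exuberant ctags, Universal ctags, and between nvi and vim):

```sh
ctags -R .        # recurse over the tree, writing a "tags" file
vi -t main        # start the editor positioned at the definition of main

# Inside the editor:
#   :ta name   jump to the definition of name
#   ^]         jump to the tag under the cursor
#   ^T         pop back to where you were (nvi/vim tag stack)
```

The tags file itself is plain text — one line per name, giving the file and a search pattern — so it composes with the rest of the toolbox.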
The principle is the same for other programming languages; ctags supports many, run "ctags --list-languages" to view the full list. scot From dave at horsfall.org Sat Jun 8 14:44:48 2024 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 8 Jun 2024 14:44:48 +1000 (EST) Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: On Fri, 7 Jun 2024, Marc Rochkind wrote: > Other editors didn't work that way. I think emacs was non-modal, > although I never used it. (Could be wrong.) As the saying goes, EMACS is for people who can't remember what mode they're in :-) -- Dave From imp at bsdimp.com Sat Jun 8 14:50:55 2024 From: imp at bsdimp.com (Warner Losh) Date: Fri, 7 Jun 2024 22:50:55 -0600 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: On Fri, Jun 7, 2024, 10:45 PM Dave Horsfall wrote: > On Fri, 7 Jun 2024, Marc Rochkind wrote: > > > Other editors didn't work that way. I think emacs was non-modal, > > although I never used it. (Could be wrong.) > > As the saying goes, EMACS is for people who can't remember what mode > they're in :-) > One less thing helps us focus on one more thing that actually matters :) Warner -- Dave > -------------- next part -------------- An HTML attachment was scrubbed... URL: From woods at robohack.ca Sat Jun 8 11:58:02 2024 From: woods at robohack.ca (Greg A. Woods) Date: Fri, 07 Jun 2024 18:58:02 -0700 Subject: [TUHS] Old documentation - still the best In-Reply-To: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> References: <5fe1dc07-7598-47c7-ac44-9e113d946cac@gmail.com> Message-ID: At Sat, 1 Jun 2024 20:59:42 -0500, Will Senn wrote: Subject: [TUHS] Old documentation - still the best > > Just the SH, TROFF and NROFF sections are > worth the effort of digging up this 40 > year old text.
You might be interested in this one by the same author too:

    Title: Typesetting Tables on the UNIX System
    Author: Henry McGilton; Mary McNabb (With)
    Publisher: Trilithon Press
    Date: 1990-04
    ISBN: 9780962628900 / 0962628905

https://archive.org/details/typesettingtable0000mcgi/mode/2up

I found it invaluable back when I was using troff frequently, and it too is, IMHO, very well written. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms
-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL:
From robpike at gmail.com Sat Jun 8 16:22:32 2024 From: robpike at gmail.com (Rob Pike) Date: Sat, 8 Jun 2024 16:22:32 +1000 Subject: [TUHS] diving into vi (nvi) - some observations from a slow learner In-Reply-To: References: Message-ID: https://9p.io/magic/man2html/1/vi is the manual for Plan 9's vi. -rob
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From ralph at inputplus.co.uk Sat Jun 8 20:19:12 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Sat, 08 Jun 2024 11:19:12 +0100 Subject: [TUHS] POSIX ed(1)'s exit status. (Was: diving into vi...) In-Reply-To: <202406072212.457MCUTa002433@sdf.org> References: <202406072212.457MCUTa002433@sdf.org> Message-ID: <20240608101912.3F8EF21FBE@orac.inputplus.co.uk> Hi, Will Senn wrote: > I use ed(1) a lot too for quick edits. Me too. I've heard others who have told crontab(1) or their mail program to use ed have been bitten by the exit status varying between 0 and 1. ed(1p) explains:

    EXIT STATUS
        The following exit values shall be returned:
         0  Successful completion without any file or command errors.
        >0  An error occurred.

This behaviour is surprising. Here's GNU ed:

    $ ed /tmp/foo
    /tmp/foo: No such file or directory
    a
    foo
    .
    wq
    4
    $ echo $?
    0
    $ ed /tmp/foo
    4
    /bar
    ?
    $a
    bar
    .
    wq
    8
    $ echo $?
    1
    $

I assume POSIX made it the default behaviour to be useful when ed isn't talking to mankind. Perhaps they think that's the default these days. GNU ed added -l:

    -l, --loose-exit-status
        exit with 0 status even if a command fails

From https://man.netbsd.org/ed.1, I don't think BSD ed has a similar option. Probably, because it doesn't need it as my quick skim of http://bxr.su/NetBSD/bin/ed/main.c#220 suggests it will exit(0) even if an earlier search found nothing. There is a list of BSD's differences to POSIX, e.g. ‘z’ for scrolling, amidst the source, http://bxr.su/NetBSD/bin/ed/POSIX, but it doesn't mention the exit status. -- Cheers, Ralph.
From beebe at math.utah.edu Sun Jun 9 09:43:25 2024 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Sat, 8 Jun 2024 17:43:25 -0600 Subject: [TUHS] [tuhs] Early statistical software in Unix Message-ID: The book ``S: An Interactive Environment for Data Analysis and Graphics'' (Wadsworth, 1984), by Richard A. Becker and John M. Chambers, and an earlier Bell Labs report in 1981, introduced the S statistical software system that later evolved into the commercial S-Plus system (now defunct, I think), and the vibrant and active R system (https://cran.r-project.org/) that we use at Utah in our statistics courses. Almost 21,000 open-source packages for R are available, and they appear to be the dominant form of statistical software package publication, based on extensive evidence in our bibliography archives that completely cover numerous journals in probability and statistics. I'm interested in looking into the early S source code, if possible, to see how some statistical software that I am freshly implementing for high-precision computation was handled at Bell Labs more than four decades ago. Does anyone on this list know whether the original S system was ever distributed in source code to commercial sites, and academic sites, that licensed Unix from Bell Labs in the 1980s?
Does that code still exist (and is openly accessible), or has it been lost? As with the B, C, D, and R programming languages, it is rather hard for Web search engines to find things that are known only by a single letter of the alphabet. ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From egbegb2 at gmail.com Sun Jun 9 18:00:53 2024 From: egbegb2 at gmail.com (Ed Bradford) Date: Sun, 9 Jun 2024 03:00:53 -0500 Subject: [TUHS] most direct Unix descendant In-Reply-To: <20240606194901.F5bDRUkh@steffen%sdaoden.eu> References: <20240606095502.AD4EE210F4@orac.inputplus.co.uk> <20240606194901.F5bDRUkh@steffen%sdaoden.eu> Message-ID: Excellent responses here. Brings back so many great memories. My 1 cent would be to ask the question: Which of today's Unix variants (Linux, BSD, AIX, Cygwin, ...) is closest to the philosophy of the Ken-Dennis-Doug versions of V6 Unix? All the variants I see today suffer from "complexification" - a John Mashey term. Documentation of commands today has grown 5 to 10 fold for each command in /usr/bin. V7 had less than 64 well documented system calls. Today's Linux, AIX, and others have how many? I don't know. The concept of producing a stream of text as the output of a program that does simple jobs well has been replaced by "power-shell" thinking of passing binary objects rather than text between programs - a decidedly non-portable idea. Passing "objects" requires attaching to a dynamically linked library (that will change or even disappear with the next release of the OS or the object library).
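A sketch of the text-stream style Ed is contrasting with object passing — the classic word-frequency pipeline. The wrapper name (wordfreq) is illustrative; the point is that every stage reads and writes plain text, so the stages share no library contract and could even run on different machines:

```shell
# Five most common words on stdin; each stage neither knows nor cares
# what produced its input.
wordfreq() {
    tr -cs '[:alpha:]' '\n' |      # one word per line
    tr '[:upper:]' '[:lower:]' |   # fold case
    sort | uniq -c |               # count duplicates
    sort -rn | head -5             # most frequent first
}
```

Running `echo 'to be or not to be' | wordfreq` puts "be" and "to" at the top with a count of 2 each.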
With Research Unix, I could pipe the output of a Unix program running on an Intel 486 to another program running on a Motorola 68000 or a Zilog Z80000 or an IBM AIX machine. IPhones, iPads, and my Android tablet don't have a usable text editor. All non-Unix text editors seem to struggle to offer a fixed width font. (Ever try to make columns line up on an iPhone or Android tablet?) Complexification rears its ugly head. I still use vi on both my Mac and PC (Cygwin). (I can't find a usable gvim for Mac and Macvim is weird but doesn't seem to know what a mouse is.) Unix brought automation to the forefront of possibilities. Using Unix, anyone could do it - even that kid in Jurassic Park. Today, everything is GUI and nothing can be automated easily or, most of the time, not at all. Unix is an ever shrinking oasis in a desert of non-automation and complexity. It is the loss of automation possibilities that frustrates me the most. (Don't mind me, I'm just outgassing for no good reason.) Ed On Thu, Jun 6, 2024 at 3:06 PM Steffen Nurpmeso wrote: > Ralph Corderoy wrote in > <20240606095502.AD4EE210F4 at orac.inputplus.co.uk>: > |There's a chart of the connections between Unix versions at > |https://en.wikipedia.org/wiki/List_of_Unix_systems, though I dislike the > |lack of direction given there are some arcs with little incline. > |It says it's based on https://www.levenez.com/unix/ where Éric notes his > |chart is not limited to just source-code transfer. > > I also admire that FreeBSD and NetBSD keep on maintaining the > bsd-family-tree (and in the original form, not that dots thing, or > how it was called). 
So that starts with
>
>   First Edition (V1)
>          |
>   Second Edition (V2)
>          |
>   Third Edition (V3)
>          |
>   Fourth Edition (V4)
>          |
>   Fifth Edition (V5)
>          |
>   Sixth Edition (V6) -----*
>           \               |
>            \              |
>             \             |
>   Seventh Edition (V7)----|----------------------*
>           \               |                      |
>            \             1BSD                    |
>            32V            |                      |
>              \           2BSD---------------*    |
>               \          /                  |    |
>                \        /                   |    |
>                 \/                          |    |
>                3BSD                         |    |
>                  |                          |    |
>                4.0BSD                    2.79BSD |
>                  |                          |    |
>                4.1BSD --------------> 2.8BSD <-*
>                  |                          |
>                4.1aBSD -----------\         |
>                  |                 \        |
>                4.1bBSD              \       |
>                  |                   \      |
>       *------ 4.1cBSD --------------> 2.9BSD
>      /           |                        |
>   Eighth Edition |                   2.9BSD-Seismo
>      |           |                        |
>      +----<--- 4.2BSD                  2.9.1BSD
> ...
>
> and says
>
>   Multics           1965
>   UNIX              Summer 1969
>                      DEC PDP-7
>   First Edition     1971-11-03 [QCU]
>                      DEC PDP-11/20, Assembler
>   Second Edition    1972-06-12 [QCU]
>                      10 UNIX installations
>   Third Edition     1973-02-xx [QCU]
>                      Pipes, 16 installations
>   Fourth Edition    1973-11-xx [QCU]
>                      rewriting in C effected,
>                      above 30 installations
>   Fifth Edition     1974-06-xx [QCU]
>                      above 50 installations
>   Sixth Edition     1975-05-xx [QCU]
>                      port to DEC Vax
>   Seventh Edition   1979-01-xx [QCU] 1979-01-10 [TUHS]
>                      first portable UNIX
> ..
>
> with a nice Bibliography with falsely underscored headline plus
>
> URL: https://cgit.freebsd.org/src/tree/share/misc/bsd-family-tree
>
> It also covers the system most of you are using (later).
>
> --steffen
> |
> |Der Kragenbaer,                The moon bear,
> |der holt sich munter           he cheerfully and one by one
> |einen nach dem anderen runter  wa.ks himself off
> |(By Robert Gernhardt)
>
-- Advice is judged by results, not by intentions. Cicero
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From douglas.mcilroy at dartmouth.edu Sun Jun 9 21:34:46 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Sun, 9 Jun 2024 07:34:46 -0400 Subject: [TUHS] most direct Unix descendant Message-ID: Eloquently put. Amen! doug > Unix brought automation to the forefront of possibilities. Using Unix, > anyone could do it - even that kid in Jurassic Park.
Today, everything > is GUI and nothing can be automated easily or, most of the time, > not at all. > Unix is an ever shrinking oasis in a desert of non-automation and complexity. > It is the loss of automation possibilities that frustrates me the most -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.phillip.garcia at gmail.com Sun Jun 9 21:59:00 2024 From: a.phillip.garcia at gmail.com (A. P. Garcia) Date: Sun, 9 Jun 2024 07:59:00 -0400 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: On Sun, Jun 9, 2024, 7:35 AM Douglas McIlroy wrote: > Eloquently put. Amen! > > doug > > > Unix brought automation to the forefront of possibilities. Using Unix, > > anyone could do it - even that kid in Jurassic Park. Today, everything > > is GUI and nothing can be automated easily or, most of the time, > > not at all. > > > Unix is an ever shrinking oasis in a desert of non-automation and > complexity. > > > It is the loss of automation possibilities that frustrates me the most > Do I have to be that guy? I hate windows. I love Unix. But the above isn't really true. MS has actually done a good job of catching up in that department. All major apps have Powershell libraries. I envy some features of Powershell, but I still won't use it unless I have to. One example is PowerCLI, which is very useful for vSphere automation. Easier to use than their other language APIs, in my opinion. I could go on with other examples (Active Directory, MSSQL, Exchange), but I think the point is made... > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph at inputplus.co.uk Sun Jun 9 22:31:55 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Sun, 09 Jun 2024 13:31:55 +0100 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: <20240609123155.7C72220152@orac.inputplus.co.uk> Hi A. P., > All major apps have Powershell libraries. 
I envy some features of > Powershell, but I still won't use it unless I have to. > > One example is PowerCLI, which is very useful for vSphere automation. > Easier to use than their other language APIs, in my opinion. The grandfather of your post addressed PowerShell earlier on.
> > > https://www.tuhs.org/mailman3/hyperkitty/list/tuhs at tuhs.org/message/QZVFRCYM2MEJ4VNZPORBUAKIS6WG6LIY/ > > The concept of producing a stream of text as the output of a program > > that does simple jobs well has been replaced by "power-shell" thinking > > of passing binary objects rather than text between programs > > - a decidedly non-portable idea. > > > > Passing "objects" requires attaching to a dynamically linked library > > (that will change or even disappear with the next release of the OS or > > the object library). With Research Unix, I could pipe the output of > > a Unix program running on an Intel 486 to another program running on > > a Motorola 68000 or a Zilog Z80000 or an IBM AIX machine. > > -- > Cheers, Ralph. > Thank you, I hadn't seen that. He's right, of course. It's kludgy, but you can always use text, or some structured form of it like json or xml, to communicate between different machines. Does Windows have something like netcat/socat? I honestly don't know. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From egbegb2 at gmail.com Mon Jun 10 15:13:08 2024 From: egbegb2 at gmail.com (Ed Bradford) Date: Mon, 10 Jun 2024 00:13:08 -0500 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: Hi A.P., I agree powershell supports automation. However, it is so complex and non-portable that even learning about it and how to use it takes considerably longer than learning about Unix automation. It puts a significant burden on developers to document their DLLs and make the interfaces permanent. Complexity produces more expensive and less reliable software. BTW, what about the non-major apps? From your view, they are simply excluded from automation. In PS, type "help cmdlet" to see the complexity of each and every command.
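For what it's worth, the kind of question Ed raises elsewhere in the thread — the last five files read in a directory tree — is a short pipeline on Unix, with caveats worth hedging: -printf and its %A@ directive (access time as epoch seconds) are GNU find extensions, and access times are only meaningful if the filesystem isn't mounted noatime. The helper name is made up for illustration:

```shell
# Five most recently accessed (read) files under a directory tree.
# Assumes GNU find.
last_read() {
    find "${1:-.}" -type f -printf '%A@ %p\n' |
    sort -rn |       # newest access time first
    head -5 |
    cut -d' ' -f2-   # drop the timestamp, keep the path
}
```

Usage: `last_read /some/tree` — one pipeline, no DLLs, no cmdlet documentation to wade through.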
PS does allow automation, but it is very expensive because most people will be daunted when trying to learn how to solve problems with it, people who know how to write stuff in PS are more expensive employees, and development time for asking a simple question like "Show me the last 5 files read in a directory tree" can require days or more of research and experimentation. A help page on almost any cmdlet produces full pages of options, many of which lead to further questions about usage. Yes, PS automates Windows -- but at what cost? Ed PS: I should write my book of "Why Windows is not my favorite operating system" (paraphrasing a famous BTL TM). PS2: Speaking of complexity and documentation, here is the start of the printout on an up-to-date MacOS of the command

    man ls | less

    *NAME
         ls – list directory contents

    SYNOPSIS
         ls [-@ABCFGHILOPRSTUWabcdefghiklmnopqrstuvwxy1%,] [--color=when]
            [-D format] [file ...]*

and the same question about "ls" on Windows 11 powershell: help ls | less # typed in PS [image: image.png] How did we let this happen? On Sun, Jun 9, 2024 at 6:59 AM A. P. Garcia wrote: > > > On Sun, Jun 9, 2024, 7:35 AM Douglas McIlroy < > douglas.mcilroy at dartmouth.edu> wrote: > >> Eloquently put. Amen! >> >> doug >> >> > Unix brought automation to the forefront of possibilities. Using Unix, >> > anyone could do it - even that kid in Jurassic Park. Today, everything >> > is GUI and nothing can be automated easily or, most of the time, >> > not at all. >> >> > Unix is an ever shrinking oasis in a desert of non-automation and >> complexity. >> >> > It is the loss of automation possibilities that frustrates me the most >> > > > Do I have to be that guy? I hate windows. I love Unix. But the above isn't > really true. MS has actually done a good job of catching up in that > department. All major apps have Powershell libraries. I envy some features > of Powershell, but I still won't use it unless I have to.
> > One example is PowerCLI, which is very useful for vSphere automation. > Easier to use than their other language APIs, in my opinion. I could go on > with other examples (Active Directory, MSSQL, Exchange), but I think the > point is made... > >> -- Advice is judged by results, not by intentions. Cicero -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 86182 bytes Desc: not available URL: From g.branden.robinson at gmail.com Mon Jun 10 15:25:50 2024 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Mon, 10 Jun 2024 00:25:50 -0500 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: <20240610052550.5waad35xzozzipwz@illithid> At 2024-06-10T00:13:08-0500, Ed Bradford wrote: > PS2: Speaking of complexity and documentation, here is the start > of the printout on an up-to-date MacOS of the command > > man ls | less [...] > How did we let this happen? When "everything is a file", a lot gets packed into one's file abstraction. Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dave at horsfall.org Mon Jun 10 18:39:40 2024 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 10 Jun 2024 18:39:40 +1000 (EST) Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: On Mon, 10 Jun 2024, Ed Bradford wrote: > [...] 
people who know how to write stuff in PS are more expensive > employees, and development time for asking a simple question like > >   "Show me the last 5 files read in a directory tree" Likely a one-liner in Unix :-) -- Dave From marc.donner at gmail.com Mon Jun 10 19:36:32 2024 From: marc.donner at gmail.com (Marc Donner) Date: Mon, 10 Jun 2024 05:36:32 -0400 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: The architectural alternative to powershell-style extension has been around in various guises for a while. In particular things like TCL and Lua are engineered to be add-on extension languages. Integrating them just involves adding a few callouts (dispatch a “program”, scan directories in a designated “path” for programs, render internal structures into text). This style of design has been around for a long time - all Unix shells, EMacs, many video games. It enables an elegant approach to performance management - build it first as a script and only reimplement it as a binary if needed. Doing this enables automation, but it does require the designers and product managers to want automation. Marc ===== nygeek.net mindthegapdialogs.com/home On Mon, Jun 10, 2024 at 4:39 AM Dave Horsfall wrote: > On Mon, 10 Jun 2024, Ed Bradford wrote: > > > [...] people who know how to write stuff in PS are more expensive > > employees, and development time for asking a simple question like > > > > "Show me the last 5 files read in a directory tree" > > Likely a one-liner in Unix :-) > > -- Dave -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From will.senn at gmail.com Mon Jun 10 22:33:27 2024 From: will.senn at gmail.com (Will Senn) Date: Mon, 10 Jun 2024 07:33:27 -0500 Subject: [TUHS] PID 0 and the scheduler Message-ID: <8d74aed5-957d-493b-9aab-0c647dd64018@gmail.com> All, There's an interesting dive into PID 0 linked to from osnews: https://blog.dave.tf/post/linux-pid0/ In the article, the author delves into the history of the scheduler a bit - going back to Unix v4 (his assembly language skills don't go to PDP variants). I like the article for two reasons - 1) it's clarity 2) it points out the self-reinforcing nature of our search ecosystem. I'm left with the question - how did scheduling work in v0-v4? and the observation that search really sucks these days. Later, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsc at swtch.com Tue Jun 11 03:02:56 2024 From: rsc at swtch.com (Russ Cox) Date: Mon, 10 Jun 2024 13:02:56 -0400 Subject: [TUHS] PID 0 and the scheduler In-Reply-To: <8d74aed5-957d-493b-9aab-0c647dd64018@gmail.com> References: <8d74aed5-957d-493b-9aab-0c647dd64018@gmail.com> Message-ID: On Mon, Jun 10, 2024 at 8:33 AM Will Senn wrote: > There's an interesting dive into PID 0 linked to from osnews: > https://blog.dave.tf/post/linux-pid0/ > In the article, the author delves into the history of the scheduler a bit > - going back to Unix v4 (his assembly language skills don't go to PDP > variants). > I like the article for two reasons - 1) it's clarity 2) it points out the > self-reinforcing nature of our search ecosystem. > I'm left with the question - how did scheduling work in v0-v4? and the > observation that search really sucks these days. > It's an interesting and well-written article, but I think it's not quite correct. It links to sched in the V4 code [1] but there's nothing there about pid 0. The right place to link would be the code in main that installs the "system process" into the process table [2]. 
So yes, in V4, the scheduler is a process in a meaningful sense, but I don't think pid 0 is a meaningful process identifier for it. Nothing actually *identifies* the scheduler by using the number 0. After a process has exited and its parent has called wait, its process table entry is set to p_pid = 0 [3]. Surely pid 0 does not also identify those processes at the same time that it identifies the system process. If there are many processes in the table with pid 0, it's difficult to see pid 0 as any kind of identifier at all! Instead it seems pretty clear that pid 0 represents the concept "no pid". This makes sense since the kernel memory started out zeroed, so using the zero pid for "nothing here" avoided separate reinitialization. The same is true for process status 0 meaning "unused". Similarly, inode 0 is "no inode" (useful to mark the end of a directory entry list), and disk block number 0 is "no block" (useful to mark an unallocated block in a file). (Go's emphasis on meaningful zero values is in the same spirit.) Reading the V1 sources seems to confirm this hypothesis: V1 does not have a process table for any kernel process, and yet it still uses pid 1 for the first process [4]. In V1 the user struct has a u.uno field holding the process number as an index into the process table. That field too is 1-indexed, because it is convenient for u.uno==0 to mean "no process". In particular, swap (analogous here to V4 swtch) understood that if called when u.uno==0 the process is exiting and need not be saved for reactivation [5]. The kernel goes out of its way to use u.uno==0 instead of u.uno==-1: all the code that indexes an array by u.uno has to subtract 1 (or 2 for words) from the address being indexed to account for the first entry being 1 and not 0. Presumably this is because of wanting to use zero value as "no uno". (And it's not any less efficient, since the -1 or -2 can be applied to the base address during linking.) 
The obvious question to ask then is not why pids start at 1 but why, in contrast to all these examples, uids start at 0. My guess is that there was simply no need for "no uid" and in contrast having the zero value mean "root" worked out nicely. Perhaps Ken will correct me if I'm reading this all wrong. As to the question of how scheduling worked in V1, the swap code is walking over runq looking for the highest priority runnable process [6]. Every process image except the one running was saved on disk, so the only decision was which one to read back in. This is in contrast to the V4 scheduler, which is juggling multiple in-memory process images at once and split out the decisions about what to run from the code that moved processes to and from the disk. Best, Russ [1] https://github.com/dspinellis/unix-history-repo/blob/Research-V4/sys/ken/slp.c#L89 [2] https://github.com/dspinellis/unix-history-repo/blob/Research-V4/sys/ken/main.c#L55 [3] https://github.com/dspinellis/unix-history-repo/blob/Research-V4/sys/ken/sys1.c#L247 [4] https://github.com/dspinellis/unix-history-repo/blob/Research-V1/u0.s#L200 [5] https://github.com/dspinellis/unix-history-repo/blob/Research-V1/u3.s#L40 [6] https://github.com/dspinellis/unix-history-repo/blob/Research-V1/u3.s#L9 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Tue Jun 11 05:40:53 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 10 Jun 2024 21:40:53 +0200 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: Message-ID: <20240610194053.JfHirqmk@steffen%sdaoden.eu> Marc Donner wrote in : |The architectural alternative to powershell-style extension has been around |in various guises for a while. In particular things like TCL and Lua are |engineered to be add-on extension languages. 
Integrating them just |involves adding a few callouts (dispatch a “program”, scan directories in a |designated “path” for programs, render internal structures into text). | |This style of design has been around for a long time - all Unix shells, |EMacs, many video games. | |It enables an elegant approach to performance management - build it first |as a script and only reimplement it as a binary if needed. | |Doing this enables automation, but it does require the designers and |product managers to want automation. Let me be the one who feeds the silent head shakers with the Rob Pike quote "[just] make it strings". Of course lua hooks are faster, and i am looking forward myself, but other than that textual input/output communication with a program is language-neutral and somehow human. So now the time has come to point to an influential -- for me -- manual from 2001 that goes into assembler programming for x86: https://docs.freebsd.org/en/books/developers-handbook/x86/ And there you read things like A.12. One-Pointed Mind As a student of Zen, I like the idea of a one-pointed mind: Do one thing at a time, and do it well. This, indeed, is very much how UNIX® works as well. While a typical Windows® application is attempting to do everything imaginable (and is, therefore, riddled with bugs), a typical UNIX® program does only one thing, and it does it well. The typical UNIX® user then essentially assembles his own applications by writing a shell script which combines the various existing programs by piping the output of one program to the input of another. When writing your own UNIX® software, it is generally a good idea to see what parts of the problem you need to solve can be handled by existing programs, and only write your own programs for that part of the problem that you do not have an existing solution for. And going over A.13.2. Excursion to Pinhole Photography we come to the A.13.3.1. 
Processing Program Input which was a stunning read for me (the 15+ years before i came via Commodore 64 and its Basic, over Windows 3.1 and Windows 95 and, alongside, DOS, later 4DOS (then perl etc.)), because when doing really, really important things like calculating the cubic capacity of one's penis in cubic millimeters (to end up with large numbers, say), i would never have thought by myself that the program could accept and parse running text! (There you see that something "big" can actually be pretty "small" indeed.) Personally, I like to keep it simple. Something either is a number, so I process it. Or it is not a number, so I discard it. I do not like the computer complaining about me typing in an extra character when it is obvious that it is an extra character. Duh. Plus, it allows me to break up the monotony of computing and type in a query instead of just a number: What is the best pinhole diameter for the focal length of 150? There is no reason for the computer to spit out a number of complaints: Syntax error: What Syntax error: is Syntax error: the Syntax error: best Et cetera, et cetera, et cetera. And this (assembler!) then goes to % pinhole Computer, What size pinhole do I need for the focal length of 150? 150 490 306 362 2930 12 Hmmm... How about 160? 160 506 316 362 3125 12 Let's make it 155, please. 155 498 311 362 3027 12 Ah, let's try 157... 157 501 313 362 3066 12 156? 156 500 312 362 3047 12 That's it! Perfect! Thank you very much! ^D which is not even handled by GNU getopt with its argument-resorting behaviour! But it is likely that you all do not need that any more anyway, since you likely just speak out (silently at "Hal" level) "Hey computer bla bla", and the AI does the rest itself. 
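A rough sketch of that lenient parsing, in a few lines of awk rather than the original assembler (hypothetical, of course -- the real pinhole program does its own scanning): keep whatever tokens look like numbers, silently drop the prose around them.

```shell
# Not the original assembler: keep the numeric tokens, discard the prose.
extract_numbers() {
    awk '{
        for (i = 1; i <= NF; i++) {
            t = $i
            gsub(/[^0-9.]/, "", t)            # strip punctuation like "150?"
            if (t ~ /^[0-9]+(\.[0-9]+)?$/)    # keep only well-formed numbers
                print t
        }
    }'
}

echo 'What is the best pinhole diameter for the focal length of 150?' |
    extract_numbers
# prints: 150
```

No "Syntax error: What" in sight; anything that is not a number is simply ignored.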
--steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From marc.donner at gmail.com Tue Jun 11 06:09:47 2024 From: marc.donner at gmail.com (Marc Donner) Date: Mon, 10 Jun 2024 16:09:47 -0400 Subject: [TUHS] most direct Unix descendant In-Reply-To: <20240610194053.JfHirqmk@steffen%sdaoden.eu> References: <20240610194053.JfHirqmk@steffen%sdaoden.eu> Message-ID: Totally correct - in the words of the immortal Beatles - "Strings is all you need." ===== nygeek.net mindthegapdialogs.com/home On Mon, Jun 10, 2024 at 3:40 PM Steffen Nurpmeso wrote: > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From steffen at sdaoden.eu Tue Jun 11 06:19:15 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 10 Jun 2024 22:19:15 +0200 Subject: [TUHS] most direct Unix descendant In-Reply-To: References: <20240610194053.JfHirqmk@steffen%sdaoden.eu> Message-ID: <20240610201915.vRvGE93Y@steffen%sdaoden.eu> Marc Donner wrote in : |Totally correct - in the words of the immortal Beatles - "Strings is all |you need." Sounds a bit like a somewhat pissed younger John Lennon? Iggy's "put some strings, you know" on some of those photograph seamy American carpets. (Of course -- i have no idea of what i am talking about.) 
--steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From frew at ucsb.edu Tue Jun 11 13:15:29 2024 From: frew at ucsb.edu (James Frew) Date: Mon, 10 Jun 2024 20:15:29 -0700 Subject: [TUHS] Likely a one-liner in Unix In-Reply-To: References: Message-ID: OK, I'll bite (NB: using GNU find): find "$directory_tree" -type f -printf "%A+ %p\n" | sort -r | cut -d' ' -f2 | head -5 Cheers, /Frew On 2024-06-10 01:39, Dave Horsfall wrote: > On Mon, 10 Jun 2024, Ed Bradford wrote: > >> [...] people who know how to write stuff in PS are more expensive >> employees, and development time for asking a simple question like >> >>   "Show me the last 5 files read in a directory tree" > Likely a one-liner in Unix :-) > > -- Dave From tuhs at tuhs.org Tue Jun 11 16:06:37 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 11 Jun 2024 06:06:37 +0000 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) Message-ID: Good evening, while I'm still waiting on the full uploads to progress (it's like there's a rule any >100MB upload to archive.org for me has to fail like 5 times before it'll finally go...) I decided to scrape out the UNIX RTR manual from a recent trove of 5ESS materials I received and tossed it up in a separate upload: https://archive.org/details/5ess-switch-unix-rtr-operating-system-reference-manual-issue-10 This time around I've got Issue 10 from December 2001. The last issue of this particular manual I found on another 5ESS disc is Issue 7 from 1998 which I shared previously (https://ia601200.us.archive.org/view_archive.php?archive=%2F12%2Fitems%2F5ess-switch-dk5e-cd-1999-05%2F5ESS-DK5E.zip&file=5EDOCS%2F93447.PDF) The manual is in "DynaText" format on the CD in question, unlike Issue 7 which was already a PDF on its respective CD. I used print-to-PDF to generate the above linked copy. 
Given that the CD itself is from 2007, this may point to UNIX RTR having no significant user-visible changes from 2001 to 2007 that would've necessitated manual revisions. In any case, I intend to upload bin/cue images of all 7 of the CDs I've received which span from 1999 to 2007, and mostly concern the 5ESS-2000 switch from the administrative and maintenance points of view. Once I get archive.org to choke these files down I also intend to go back around to the discs I've already archived and reupload them as proper bin/cue rips. I was in a hurry the last time around and simply zipped the contents from the discs, but aside from just being good archive practice, I think bin/cue is necessary for the other discs as they seem to have control information in the disc header that is required by the interactive documentation viewers therein. All that to say, the first pass will result in bin/cues which aren't easily readable through archive.org's interface, but I intend to also swing back around on these new discs and provide zips of the contents as well to ensure the archives are both correct (bin/cue) and easily navigable (zip). As always, if you have any such documentation or leads on where any may be awaiting archival, I'm happy to take on the work! - Matt G. From kevin.bowling at kev009.com Tue Jun 11 16:59:38 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Mon, 10 Jun 2024 23:59:38 -0700 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: On Mon, Jun 10, 2024 at 11:06 PM segaloco via TUHS wrote: > [...] I have some of these CDs already and can compare notes with you: DK5E-CD from 2004, OA&M from 2008. I think you can just copy the SGML files to a HDD once you have DynaText installed, so whatever is funky about the CDs is not terribly important for use aside from the fidelity of your archival. Some of my CDs also use something called Eloquent Presenter which seems like a HyperCard style program. 
All the docs that aren't SGML are PDF, including most of the schematics, plenty of which look like scans of originals. > [...] FWIW I have a fully working 5ESS that I turned off last week (actually a 7 R/E - 3B21D, CM3 (Global Message Server), 20k lines a mix of POTS, ISDN, PRI trunks) and it is coming home with me at the end of the month. Small matters of loading, unloading, AC PDU and getting a sizable DC power plant and unbounded wiring are in my future. Why? I dunno but full send. I need to have a think on how to be public with all this going forward, I do want to share the system and the goal is historical preservation and learning with interested parties. > - Matt G. From ralph at inputplus.co.uk Tue Jun 11 18:05:06 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 11 Jun 2024 09:05:06 +0100 Subject: [TUHS] Likely a one-liner in Unix In-Reply-To: References: Message-ID: <20240611080506.73D7B21309@orac.inputplus.co.uk> Hi James, > > >   "Show me the last 5 files read in a directory tree" Given sort(1) gained -u for efficiency, I've often wondered why, in those constrained times, it didn't have a ‘-m n’ to output only the n ‘minimums’, e.g. ‘sed ${n}q’. With ‘-m 5’, this would let sort track the current fifth entry and discard input which was bigger, so avoiding both storing many unwanted lines and finding the current line's location within them. > OK, I'll bite (NB: using GNU find): I think the POSIX way of getting the atime would be ‘LC_TIME=C ls -lu’ and then parsing the two possible date formats. So non-POSIX find is simpler. 
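For lack of such a ‘-m n’, the usual idiom today is to let a trailing sed (or head) cut the sorted stream, e.g.:

```shell
# Top five of a numerically sorted stream; sed quits after five lines.
# Note sort(1) still sorts (and stores) the whole input first -- a real
# '-m 5' could have discarded most lines as they arrived.
seq 100 | sort -rn | sed 5q
# prints 100 99 98 97 96, one per line
```

Which gets the right answer, but none of the memory saving a built-in limit would have given on those constrained machines.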
Also, GNU find shows me the sub-second part but ls doesn't. Neither does GNU ‘stat -c '%X %n'’. > find "$directory_tree" -type f -printf "%A+ %p\n" | sort -r | cut -d' ' -f2 | head -5 - I'd switch the atime format to seconds since epoch for easier formatting given it's discarded. - When atimes tie, sort's -r will give file Z before A so I'd add some -k's so A comes first. - I'd move the head to before the cut so cut processes fewer lines... - But on so few lines, I'd just use sed to do both in one. find "$@" -type f -printf '%A@ %p\n' | sort -k1,1nr -k2 | sed 's/^[^ ]* //; 5q' Remaining issues... If tied entries bridge the top-five border then this isn't shown. Is the real requirement to show files with the five most recent distinct atimes? awk '{t += !s[$0]; s[$0] = 1; print} t == 5 {exit}' Though this might give many lines. Instead, an ellipsis could show a tie bridged the cut-off. awk 't {if ($0 == l) print "..."; exit} NR == 5 {l = $0; t = 1} 1' Paths can contain linefeeds and some versions allow handling NULs to be tediously employed. find "$@" -type f -printf '%A@ %p\0' | sort -z -k1,1nr -k2 | sed -z 's/[^ ]* //; 5q' | tr \\0 \\n David Wheeler has a nice article he maintains on unusual characters in filenames: how to cope, and what other systems do, e.g. Plan 9. Fixing Unix/Linux/POSIX filenames: control characters (such as newline), leading dashes, and other problems David A. Wheeler, 2023-08-22 (originally 2009-03-24) https://dwheeler.com/essays/fixing-unix-linux-filenames.html As he writes, Linux already returns EINVAL for some paths on some filesystem types. A mount option which had a syscall return an error on meeting an insensible path would be useful. It avoids any attempt at escapement and its greater risk of implementation errors. I could always re-mount some old volume without the option to list the directory and fix up its entries. The second-best day to plant a tree is today. -- Cheers, Ralph. 
From g.branden.robinson at gmail.com Tue Jun 11 20:51:20 2024 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Tue, 11 Jun 2024 05:51:20 -0500 Subject: [TUHS] Draft: London and Reiser's UNIX/32V paper, reconstructed Message-ID: <20240611105120.k3jpky7fvmuc7wjy@illithid> Hi folks, Reiser and London's paper documenting their preparation of UNIX/32V, a port of Seventh Edition Unix to the VAX-11/780, is an important milestone in Unix development--as much I think for its frank critique of C as "portable assembly" as for the status of the system documented: the last common ancestor of the BSD and System V branches of development. Because the only version I've ever seen of this paper is a scan of, possibly, a photocopy several generations removed from the original, I thought I'd throw an OCR tool at it and see about reconstructing it, not just for posterity but to put the groff implementation of mm to the test. So even if someone has a beautiful scan of this document elsewhere, this exercise remains worthwhile to me for what it has shown me about Documenter's Workbench mm and groff's mostly compatible reimplementation thereof. Please find attached my first draft of the reconstruction as an mm document as well as a PDF rendered by bleeding edge groff. I did not attempt to fix _any_ typos, solecisms, or non-idiomatic *roff usage (like the employment of hyphens as arithmetic signs or operators) in this document. I may have introduced errors, however, due to human fallibility or incorrect inferences about what lay beneath scanning artifacts. Hence its draft status. I welcome review. Assuming this reconstruction survives peer scrutiny, I aim to put it up on GitHub as I did Kernighan & Cherry's "Typesetting Mathematics" paper.[1] For the casual reader, I extract my documentary annotations below. 
For groff list subscribers, I will add, because people are accustomed to me venturing radical suggestions for reforms of macro packages, I suggest that we can get rid of groff mm's "MOVE" and "PGFORM" extensions. They're buggy (as the man page has long conceded), and I don't think anyone ever mastered them, not even their author. I rewrote "0.MT", essential to rendering of this document, without requiring them at all. I _tried_ to use them, but "MOVE" in particular introduced baffling errors in vertical spacing. When I threw it aside to attack head-on the layout problems facing me, things got easier. Further, simple caching and restoration of `.i` and `.l` register values (when multiple changes were being made to them within a macro) obviated `PGFORM`. I'm not sure that it is tractable to idiot-proof manipulations of basic layout parameters like these, as these macros seem to have tried to do. If a document author wants to seize control of page layout from a full-service macro package and reach deep into the guts of the formatter, they should glove up and put things back where they found them. My opinion. .\" London & Reiser's UNIX/32V porting paper .\" .\" Reconstruction in groff mm (but DWB 3.3 mm compatible) .\" from scanned/OCRed text by G. Branden Robinson, June 2024 .\" .\" The original scan shows no evidence of superscript usage, except on .\" the cover sheet where "TM" superscripts "UNIX". .\" .\" Some differences may arise due to changes in the mm macro package .\" itself from its PWB incarnation (ca. 1978) and DWB 3.3 (July 1992). .\" Thanks to Dan Plassche for the history. .\" https://www.tuhs.org/pipermail/tuhs/2022-March/025545.html .\" .\" The groff reimplementation of mm was undertaken mostly from .\" 1991-1999 (by Juergen Haegg), based on the DWB documentation. It .\" added features but also parameterized many aspects of package .\" behavior, for example to facilitate easy localization. Later, .\" Werner Lemberg and G. 
Branden Robinson contributed enhancements, bug .\" fixes, and improvements to the groff_mm(7) man page. .\" .\" I anticipate adding further parameters to groff mm to better .\" emulate the old version of mm used by this paper. (For example, the .\" format of the caption applied to the reference page differs between .\" PWB mm and DWB 3.3.) Where this document exercises such extensions, .\" they should be prefixed with a `do` request so that AT&T troff will .\" ignore them. .\" Override: "By default, ... bold stand-alone headings are printed .\" in a size one point smaller than the body." .\" XXX: The cover "page" (more like a header block) is a mess when .\" typeset with groff mm, and outright horrific in nroff mode. GBR has .\" fixes for these pending for push to GNU Savannah's Git repository. .\" .\" XXX: Original scan capitalizes "Subject:"; DWB 3.3 renders it in .\" full lowercase. .\" .\" XXX: Original scan bears a "TM:" heading for the technical .\" memorandum number(s). DWB 3.3 lacks this. .\" .\" Memorandum captions may have changed from PWB to DWB 3.3 mm. groff .\" mm has changed in Git (June 2024) to use the captions documented in .\" the DWB 3.3 manual. We override the default for authenticity. .\" XXX: Original scan sets reference marks as a typewriter might, at .\" normal size on the baseline between square brackets. DWB 3.3 .\" converts them to superscripts but keeps the brackets(!). groff mm .\" should add a "Rfstyle" register to control this. .\" 0 = auto (nroff/troff); 1 = bracket; 2 = superscript; 3 = both. (?) \" straight quotes in original .ns \" XXX: Hack to keep `VL` from adding pre-list vertical space. \" recte: *(\-\-p+i) \" bad ellipsis spacing in original \" - missing; error in text or scanner fubar? \" recte: \-1 \" sic .\" Either `AL` worked differently in 1978 mm, or didn't exist, or .\" somebody wanted this list _just so_. .\"AL "" 5 .\" XXX: Scan has signatures set farther to the right, not centered as .\" DWB 3.3 mm sets them. 
groff mm follows DWB here. .\" .\" XXX: PWB and DWB 3.3 put the signature names in bold; groff mm sets .\" them at normal weight. Bug. .\" .\" XXX: Scan has a couple of vees between the signature line and the .\" flush left secretarial annotation. groff mm sets the annotation on .\" the same line as the last author but also puts its information in .\" the cover page header as DWB 3.3 does, described next. DWB 3.3: (1) .\" omits the secretarial annotation altogether, putting it up in the .\" cover page header under the authors' names; (2) does not use author .\" initials (in the cover header) for this memorandum type; (3) puts .\" the department number after "Org." on the line under the author .\" name; (4) puts the abbreviated AT&T site name below that. Should we .\" consider a `Sgstyle` register for groff mm? .\" .\" XXX: groff mm organizes the department and site name differently .\" from DWB 3.3 in the cover head, and I don't see any reason for it .\" to. Fix this. .\" XXX: Scan only breaks between notations; DWB 3.3 and groff put 1v .\" between them. Should we consider an `Nss` register for groff mm? .\" XXX: Scan has references caption set flush left, in mixed case and .\" bold (just like `HU`). DWB 3.3 and groff center it and set it in .\" full caps in italics (at normal weight). If there were a way to .\" dump the accumulated reference list independently of rendering the .\" caption, that would give the author much more flexibility. .\" .\" XXX: The numbered reference list does not look like one produced .\" with `RL` nor with `AL`. The numeric tag is left-aligned within the .\" paragraph indentation. groff mm aligns it to the right. .\" .\" DWB 3.3 and Heirloom mm don't seem to honor `.RP 2` as the DWB .\" manual documents. They start the table immediately after the .\" reference list and go haywire boxing the table. Bug. 
Regards, Branden [1] https://github.com/g-branden-robinson/retypesetting-mathematics -------------- next part -------------- A non-text attachment was scrubbed... Name: unix-32v-reconstructed.mm Type: text/troff Size: 54267 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unix-32v-reconstructed.pdf Type: application/pdf Size: 103558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ralph at inputplus.co.uk Wed Jun 12 00:05:38 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 11 Jun 2024 15:05:38 +0100 Subject: [TUHS] Draft: London and Reiser's UNIX/32V paper, reconstructed In-Reply-To: <20240611105120.k3jpky7fvmuc7wjy@illithid> References: <20240611105120.k3jpky7fvmuc7wjy@illithid> Message-ID: <20240611140538.6F6D8220BB@orac.inputplus.co.uk> G. Branden Robinson wrote: > For groff list subscribers, I will add, because people are accustomed > to me venturing radical suggestions for reforms of macro packages, > I suggest that we can get rid of groff mm's "MOVE" and "PGFORM" > extensions. They're buggy (as the man page has long conceded), and > I don't think anyone ever mastered them, not even their author. I have quite a lot of old troff -mm source containing lines like .PGFORM 21c-2i 29.7c-1.5i 1i 1 and they worked fine for me. Part of troff's attraction is it has reached an age where it doesn't have breaking changes. Perhaps they should be in a fork of groff. gbroff? Though I'd have thought an entirely new formatter would give much more freedom for experimentation given modern input and output formats and greater processing power. Meanwhile, Werner's earlier groff is still available and other troffs exist. -- Cheers, Ralph. From beebe at math.utah.edu Wed Jun 12 03:42:54 2024 From: beebe at math.utah.edu (Nelson H. F. 
Beebe) Date: Tue, 11 Jun 2024 11:42:54 -0600 Subject: [TUHS] [tuhs] Early statistical software in Unix Message-ID: Doug McIlroy kindly sent me contact information for John Chambers, co-author of the cited book about the S system. I have just heard back from John, who offered a link to his summary paper from the 2020 HOPL conference proceedings S, R, and data science https://doi.org/10.1145/3386334 and reported that S was licensed to AT&T Unix customers only in binary form, and that the original source code may no longer exist. That is a definitive answer, even if not the one that I was hoping to find. ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From steffen at sdaoden.eu Wed Jun 12 07:01:02 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Tue, 11 Jun 2024 23:01:02 +0200 Subject: [TUHS] Likely a one-liner in Unix In-Reply-To: <20240611080506.73D7B21309@orac.inputplus.co.uk> References: <20240611080506.73D7B21309@orac.inputplus.co.uk> Message-ID: <20240611210102.P8tiuiAL@steffen%sdaoden.eu> Ralph Corderoy wrote in <20240611080506.73D7B21309 at orac.inputplus.co.uk>: .. |>>>   "Show me the last 5 files read in a directory tree" ... |Neither does GNU ‘stat -c '%X %n'’. Unfortunately "stat" is not portable. ... |David Wheeler has a nice article he maintains on unusual characters in |filenames: how to cope, and what other systems do, e.g. Plan 9. | | Fixing Unix/Linux/POSIX filenames: control characters (such as | newline), leading dashes, and other problems | David A. 
Wheeler, 2023-08-22 (originally 2009-03-24) | https://dwheeler.com/essays/fixing-unix-linux-filenames.html | |As he writes, Linux already returns EINVAL for some paths on some |filesystem types. A mount option which had a syscall return an error on |meeting an insensible path would be useful. It avoids any attempt at |escapement and its greater risk of implementation errors. I could |always re-mount some old volume without the option to list the directory |and fix up its entries. The second-best day to plant a tree is today. dash is currently implementing $'' quotes (that will be part of the next POSIX i think). I want to mention again, eh, please let me just paste something of mine from ossec from may, as i really think in $'' could lie sanity also for such things: While here please let me back the not yet gracefully supported shell escape mechanism $''. The current approach seems to be to be as atomic as possible: # touch $(printf 'a\rb\tc\a') # ll -> -rw-r----- 1 steffen steffen 0 May 3 00:46 'c'$'\a' -rw-r----- 1 steffen steffen 0 May 3 00:46 'a'$'\r''b' (GNU coreutils). Isn't that just terrible? In (the development version of) my mailer tab-completion leads to #..mbox? /tmp/ $'a\rb' $'c\a' which i find at least a little bit better. (Do not even think about looking in its implementation though, look ICU or what.) And even though currently unsupported, it should be said that with "grapheme clusters" and in general things like ligatures and other such language-specific constructs which need to look at surroundings -- in general interfaces like towupper() etc are not useful in global context, entire sentences have to be looked at as a whole due to this! --, shell quotes should be extended to the largest possible range possible. Ie, all the iconv(3)s that are currently used because of a lack of other interfaces should be enabled to see the longest possible (sub)string, not the most atomar, as seen above. 
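To reproduce that terrible display for yourself, a small sketch (using printf to build the name, since $'' itself still needs bash, ksh, zsh, or a very recent dash):

```shell
# Create a file whose name contains CR, TAB and BEL in a scratch
# directory, then see how ls renders it.  With $'' support the same
# name could be written directly as: touch $'a\rb\tc\a'
dir=$(mktemp -d)
cd "$dir"
touch "$(printf 'a\rb\tc\a')"
ls -q        # POSIX -q: non-printable characters (and tabs) shown as '?'
cd / && rm -rf "$dir"
```

With GNU coreutils' default shell quoting you instead get the 'a'$'\r''b' style shown above; ls -q at least keeps it to one question mark per byte.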
Anyhow, with proper $'' quoting that also offers \$VAR/\${VAR} for example (which acts like "$VAR" in double quotes; and \c@==NUL, xx) there would be a way to a have a holistic quoting mechanism. Some more complaining of an idiot who does not understand why ISO C and more standardized holes in the \U and \u (Unicode code point, hexadecimal) ranges. --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From frew at ucsb.edu Wed Jun 12 10:26:22 2024 From: frew at ucsb.edu (James Frew) Date: Tue, 11 Jun 2024 17:26:22 -0700 Subject: [TUHS] [tuhs] Early statistical software in Unix In-Reply-To: References: Message-ID: <8b0285c2-ce2d-4f92-ac74-5dc62f16a6fa@ucsb.edu> Well, at least one copy of the source code escaped: I compiled and ran it on a Sun-3 server at UCSB ca. 1980. As I recall it was a formidable wad of Fortran. Alas, it's long gone: any backups extant would be on media likely too deteriorated to read, even if we had the hardware to read it. Cheers, /Frew On 2024-06-11 10:42, Nelson H. F. Beebe wrote: > Doug McIlroy kindly sent me contact information for John Chambers, > co-author of the cited book about the S system. I have just heard > back from John, who offered a link to his summary paper from the > 2020 HOPL conference proceedings > > S, R, and data science > https://doi.org/10.1145/3386334 > > and reported that S was licensed to AT&T Unix customers only in binary > form, and that the original source code may no longer exist. > > That is a definitive answer, even if not the one that I was hoping to > find. 
From jim at deitygraveyard.com Wed Jun 12 11:37:42 2024 From: jim at deitygraveyard.com (Jim Carpenter) Date: Tue, 11 Jun 2024 21:37:42 -0400 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: On Tue, Jun 11, 2024 at 2:06 AM segaloco via TUHS wrote: > In any case, I intend to upload bin/cue images of all 7 of the CDs I've received which span from 1999 to 2007, and mostly concern the 5ESS-2000 switch from the administrative and maintenance points of view. Once I get archive.org to choke these files down I also intend to go back around to the discs I've already archived and reupload them as proper bin/cue rips. I was in a hurry the last time around and simply zipped the contents from the discs, but aside from just being good archive practice, I think bin/cue is necessary for the other discs as they seem to have control information in the disc header that is required by the interactive documentation viewers therein. Bin/cue is a PITA. You've checked that a simple raw image isn't adequate? Perhaps the viewer was just checking the volume label? Jim From tuhs at tuhs.org Wed Jun 12 13:34:22 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 12 Jun 2024 03:34:22 +0000 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: On Tuesday, June 11th, 2024 at 6:37 PM, Jim Carpenter wrote: > On Tue, Jun 11, 2024 at 2:06 AM segaloco via TUHS tuhs at tuhs.org wrote: > > > In any case, I intend to upload bin/cue images of all 7 of the CDs I've received which span from 1999 to 2007, and mostly concern the 5ESS-2000 switch from the administrative and maintenance points of view. Once I get archive.org to choke these files down I also intend to go back around to the discs I've already archived and reupload them as proper bin/cue rips. 
I was in a hurry the last time around and simply zipped the contents from the discs, but aside from just being good archive practice, I think bin/cue is necessary for the other discs as they seem to have control information in the disc header that is required by the interactive documentation viewers therein. > > > Bin/cue is a PITA. You've checked that a simple raw image isn't > adequate? Perhaps the viewer was just checking the volume label? > > > Jim What would you suggest? My main point of reference is years and years of being in the console video game scene, bin/cue is the most accessible of the high fidelity formats I've seen for things, compared with say cdi and mdf/mds. Does a plain old iso suffice for all relevant data from the media? Frankly I've never done dumps on a UNIXy computer with an optical drive, only Windows boxen, so can't say I'm hip to the sort of disc image you get doing a dd from an optical /dev entry, maybe I just need to get a UNIX of some kind on my old beater game machine with an optical drive to do these dumps going forward. Either way, open to suggestions on what format is the ideal combination of capturing everything that matters from optical media while not using too onerous or closed up of an image format. This is not an area I'm "with the times on", I just went straight to what was customary for myself over a decade ago when I was last diligently interacting with optical media preservation. - Matt G. From andreww591 at gmail.com Wed Jun 12 15:43:13 2024 From: andreww591 at gmail.com (Andrew Warkentin) Date: Tue, 11 Jun 2024 23:43:13 -0600 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: On Tue, Jun 11, 2024 at 9:41 PM segaloco via TUHS wrote: > > What would you suggest? My main point of reference is years and years of being in the console video game scene, bin/cue is the most accessible of the high fidelity formats I've seen for things, compared with say cdi and mdf/mds. 
Does a plain old iso suffice for all relevant data from the media? Frankly I've never done dumps on a UNIXy computer with an optical drive, only Windows boxen, so can't say I'm hip to the sort of disc image you get doing a dd from an optical /dev entry, maybe I just need to get a UNIX of some kind on my old beater game machine with an optical drive to do these dumps going forward. > The vast majority of non-game software was distributed on discs that were formatted with a single data track and no special formatting. These can be safely imaged in flat (ISO) format. The main reason to use the lower-level formats is for discs with disc-based copy protection or multiple tracks (usually one data track and multiple audio tracks), both of which are very uncommon for non-game software. BeOS install CDs are the one exception I can think of; these have an ISO-format boot track followed by one or two BFS-format system tracks (separate system tracks are used for x86 and PPC), although even these aren't actually dependent on multiple tracks and can be run from a CD with just the system track if a boot floppy is used. Most dumping programs should be able to show you how the discs are formatted; if they only have a single track each, ISO format should be sufficient. From wobblygong at gmail.com Wed Jun 12 17:01:44 2024 From: wobblygong at gmail.com (Wesley Parish) Date: Wed, 12 Jun 2024 19:01:44 +1200 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: On 12/06/24 17:43, Andrew Warkentin wrote: > On Tue, Jun 11, 2024 at 9:41 PM segaloco via TUHS wrote: >> What would you suggest? My main point of reference is years and years of being in the console video game scene, bin/cue is the most accessible of the high fidelity formats I've seen for things, compared with say cdi and mdf/mds. Does a plain old iso suffice for all relevant data from the media? 
Frankly I've never done dumps on a UNIXy computer with an optical drive, only Windows boxen, so can't say I'm hip to the sort of disc image you get doing a dd from an optical /dev entry, maybe I just need to get a UNIX of some kind on my old beater game machine with an optical drive to do these dumps going forward. >> > The vast majority of non-game software was distributed on discs that > were formatted with a single data track and no special formatting. > These can be safely imaged in flat (ISO) format. The main reason to > use the lower-level formats is for discs with disc-based copy > protection or multiple tracks (usually one data track and multiple > audio tracks), both of which are very uncommon for non-game software. > BeOS install CDs are the one exception I can think of; these have an > ISO-format boot track followed by one or two BFS-format system tracks > (separate system tracks are used for x86 and PPC), although even these > aren't actually dependent on multiple tracks and can be run from a CD > with just the system track if a boot floppy is used. > > Most dumping programs should be able to show you how the discs are > formatted; if they only have a single track each, ISO format should be > sufficient. FWIW, I've successfully dd'ed cds and cd-roms into iso files and burnt copies. I've never made use of the bin/cue setup, and wouldn't know how to work it. Wesley Parish From ralph at inputplus.co.uk Wed Jun 12 18:12:04 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Wed, 12 Jun 2024 09:12:04 +0100 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: Message-ID: <20240612081204.670F21FBE4@orac.inputplus.co.uk> Hi Andrew, > Most dumping programs should be able to show you how the discs are > formatted; if they only have a single track each, ISO format should be > sufficient. Presumably, it's fairly simple to look at the text-file cue-sheet, if bin/cue had been used, to see an ISO would have been good enough? 
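It is that simple in the single-track case: a cue sheet is plain text, so the check is a one-liner. A sketch (the sheet below is a typical single-data-track example, not taken from any particular disc):

```shell
# Typical cue sheet for a single-data-track disc; a multi-track rip
# would carry additional TRACK lines (e.g. "TRACK 02 AUDIO").
cat > disc.cue <<'EOF'
FILE "disc.bin" BINARY
  TRACK 01 MODE1/2352
    INDEX 01 00:00:00
EOF

# A count of 1 means the companion .bin holds one data track and can
# be converted to a plain ISO, e.g.: bchunk disc.bin disc.cue disc
grep -c 'TRACK' disc.cue    # prints 1
```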
https://en.wikipedia.org/wiki/Cue_sheet_(computing)#Cue_sheet_syntax And then perhaps there's a command to extract the ISO from the bin/cue files. -- Cheers, Ralph. From tuhs at tuhs.org Wed Jun 12 18:41:45 2024 From: tuhs at tuhs.org (Arno Griffioen via TUHS) Date: Wed, 12 Jun 2024 10:41:45 +0200 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: <20240612081204.670F21FBE4@orac.inputplus.co.uk> References: <20240612081204.670F21FBE4@orac.inputplus.co.uk> Message-ID: On Wed, Jun 12, 2024 at 09:12:04AM +0100, Ralph Corderoy wrote: > And then perhaps there's a command to extract the ISO from the bin/cue > files. 'bchunk' is one that does exactly that. Also 'fuseiso' allows mounting a BIN/CUE file/image as a regular filesystem to read the files if you just want to use them as-is. Bye, Arno. From ralph at inputplus.co.uk Wed Jun 12 19:01:01 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Wed, 12 Jun 2024 10:01:01 +0100 Subject: [TUHS] [tuhs] Early statistical software in Unix In-Reply-To: References: Message-ID: <20240612090101.DB4811FBE4@orac.inputplus.co.uk> Hi Nelson, > [John] reported that S was licensed to AT&T Unix customers only in > binary form, and that the original source code may no longer exist. Given your point about ‘S’ being an awkward search term, does John Chambers recall a colloquial longer name, perhaps for when context was needed? Also, any fragments of filenames he can recall, whether source or binary distribution. Given Bell Labs' long history of inventions, it presumably had an archive of material and an archivist or librarian back in the '70s. Back then, contemporary data on disc and tape was impractical to archive — too much of it, too expensive to duplicate, and difficult to predict what would be worth keeping — but paper was their trade. An archival print of source to match a licensed release would have been possible. Perhaps even preferred by the lawyers. 
I'm surprised they weren't considering how to archive ‘today’. -- Cheers, Ralph. From tuhs at tuhs.org Thu Jun 13 02:30:48 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 12 Jun 2024 16:30:48 +0000 Subject: [TUHS] 5ESS UNIX RTR Reference Manual - Issue 10 (2001) In-Reply-To: References: <20240612081204.670F21FBE4@orac.inputplus.co.uk> Message-ID: On Wednesday, June 12th, 2024 at 1:41 AM, Arno Griffioen via TUHS wrote: > On Wed, Jun 12, 2024 at 09:12:04AM +0100, Ralph Corderoy wrote: > > > And then perhaps there's a command to extract the ISO from the bin/cue > > files. > > > 'bchunk' is one that does exactly that. > > Also 'fuseiso' allows mouting a BIN/CUE file/image as a regular filesystem to > read the files if you just want to use them as-is. > > Bye, Arno. bchunk is likewise my tool of choice for that sort of thing. I think what I'll go with is the bin/cue for completeness but also a zip of the contents composited together. Someone who specifically needs the disc image data can probably figure out bchunk and then an archive will be present in a form navigable through archive.org's interface with the composite pieces from each collection (i.e. a merge of the discs for a multi-disc set). That should hopefully satisfy various needs. - Matt G. From sauer at technologists.com Fri Jun 14 00:56:04 2024 From: sauer at technologists.com (Charles H Sauer (he/him)) Date: Thu, 13 Jun 2024 09:56:04 -0500 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= Message-ID: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> https://www.theregister.com/2024/06/13/version_256_systemd/ I don't see the boast at https://github.com/systemd/systemd/releases/tag/v256, but ... 
Charlie -- voice: +1.512.784.7526 e-mail: sauer at technologists.com fax: +1.512.346.5240 Web: https://technologists.com/sauer/ Facebook/Google/LinkedIn/Twitter: CharlesHSauer From crossd at gmail.com Fri Jun 14 01:33:30 2024 From: crossd at gmail.com (Dan Cross) Date: Thu, 13 Jun 2024 11:33:30 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On Thu, Jun 13, 2024 at 10:56 AM Charles H Sauer (he/him) wrote: > https://www.theregister.com/2024/06/13/version_256_systemd/ > > I don't see the boast at > https://github.com/systemd/systemd/releases/tag/v256, but ... That "boast" seems to come from a random mastodon post? - Dan C. From lm at mcvoy.com Fri Jun 14 01:35:29 2024 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 13 Jun 2024 08:35:29 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: <20240613153529.GO8271@mcvoy.com> "The new alternative does no such sleight of hand. Instead, it just gets the systemd daemon to run the command for you, using a special form of the existing systemd-run command." Sounds like a new path for exploits. We'll see if Mr Systemd has to eat some crow in the future. Said by a guy who _hates_ systemd. On Thu, Jun 13, 2024 at 09:56:04AM -0500, Charles H Sauer (he/him) wrote: > https://www.theregister.com/2024/06/13/version_256_systemd/ > > I don't see the boast at > https://github.com/systemd/systemd/releases/tag/v256, but ... 
> Charlie > -- > voice: +1.512.784.7526 e-mail: sauer at technologists.com > fax: +1.512.346.5240 Web: https://technologists.com/sauer/ > Facebook/Google/LinkedIn/Twitter: CharlesHSauer -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From clemc at ccc.com Fri Jun 14 01:39:12 2024 From: clemc at ccc.com (Clem Cole) Date: Thu, 13 Jun 2024 11:39:12 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: Thanks, Charlie. But I just threw up after I read it. Sadly, UNIX's "prime directive" was to "keep it simple." Or, as someone else describes it, create "small tools that did one job well." On the PDP-11, the lack of address space somewhat enforced this. With the 32-bit VAX, we see cat -v and the like. I think "frameworks" are just a modern term for IBM's "access methods" of the 1960s. John Lions observed that the entire documentation set for UNIX V6 could be kept in a 3-ring binder, and, as his book showed, given the size, anyone could understand all of the kernel and the core systems ideas. FWIW, Linux is not the first to fail. Years ago, I pointed out to Dennis that the System V Release 3 bootloader for the 3B was larger than the entire V6 kernel. I have not looked at the size of systemd, but do you want to bet that it fails the same test? But I digress. Someone (Henry Spencer, maybe) once said, "Good Taste is subjective. I have it, and you don't seem to." IMO systemd was >>not<< a net positive - it fails so many of these tests WRT good programming and good ideas. Sigh ... 
Clem ᐧ ᐧ On Thu, Jun 13, 2024 at 10:56 AM Charles H Sauer (he/him) < sauer at technologists.com> wrote: > https://www.theregister.com/2024/06/13/version_256_systemd/ > > I don't see the boast at > https://github.com/systemd/systemd/releases/tag/v256, but ... > > Charlie > -- > voice: +1.512.784.7526 e-mail: sauer at technologists.com > fax: +1.512.346.5240 Web: https://technologists.com/sauer/ > Facebook/Google/LinkedIn/Twitter > : > CharlesHSauer > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ads at salewski.email Fri Jun 14 01:41:48 2024 From: ads at salewski.email (Alan D. Salewski) Date: Thu, 13 Jun 2024 11:41:48 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240613153529.GO8271@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <20240613153529.GO8271@mcvoy.com> Message-ID: On Thu, Jun 13, 2024, at 11:35, Larry McVoy wrote: > "The new alternative does no such sleight of hand. Instead, it just gets the > systemd daemon to run the command for you, using a special form of the existing > systemd-run command." > > Sounds like a new path for exploits. We'll see if Mr Systemd has to eat > some crow in the future. Said by a guy who _hates_ systemd. Eating sudo, eating crow...I guess systemd is /still/ hungry: https://i.kym-cdn.com/photos/images/original/000/925/966/8d2.gif -Al From usotsuki at buric.co Fri Jun 14 01:55:40 2024 From: usotsuki at buric.co (Steve Nickolas) Date: Thu, 13 Jun 2024 11:55:40 -0400 (EDT) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240613153529.GO8271@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <20240613153529.GO8271@mcvoy.com> Message-ID: On Thu, 13 Jun 2024, Larry McVoy wrote: > "The new alternative does no such sleight of hand. 
Instead, it just gets the > systemd daemon to run the command for you, using a special form of the existing > systemd-run command." > > Sounds like a new path for exploits. We'll see if Mr Systemd has to eat > some crow in the future. Said by a guy who _hates_ systemd. systemd is the thing that should not be. If I had successfully gotten my project up (trying to get a standalone kernel/libc/clang build environment up and running - either Linux/musl or NetBSD) it would run a rewrite of the SysV init system (not "sysvinit", that's GPL). -uso. From tuhs at tuhs.org Fri Jun 14 02:47:02 2024 From: tuhs at tuhs.org (Arrigo Triulzi via TUHS) Date: Thu, 13 Jun 2024 18:47:02 +0200 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On 13 Jun 2024, at 17:39, Clem Cole wrote: > IMO systemd, was >>not<< a net positive - it falls so many of these tests WRT to good programming and good ideas. Binary logs, ’nuff said. Good sysadmins live & die by grep and being able to visually detect departures from the norm by just looking at the “shape” of logs scrolling down a screen (before), terminal window now. Yours disgusted since v1 of that abomination. Arrigo From tuhs at tuhs.org Fri Jun 14 04:39:13 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Thu, 13 Jun 2024 18:39:13 +0000 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On Thursday, June 13th, 2024 at 9:47 AM, Arrigo Triulzi via TUHS wrote: > On 13 Jun 2024, at 17:39, Clem Cole clemc at ccc.com wrote: > > > IMO systemd, was >>not<< a net positive - it falls so many of these tests WRT to good programming and good ideas. > > > Binary logs, ’nuff said. 
> > Good sysadmins live & die by grep and being able to visually detect departures from the norm by just looking at the “shape” of logs scrolling down a screen (before), terminal window now. > > Yours disgusted since v1 of that abomination. > > Arrigo Part of what irks me is the lack of choice. Just like many outlets will use GNU extensions to otherwise POSIX components, leaving the rest of the world out in the rain, several bits of the Linux ecosystem have backed systemd as the one true way and are hobbled if even usable at all with other init systems out there. User software shouldn't have any attachment to a particular init system, it isn't meant to provide "services" beyond run this script at this time based on the conditions of boot, manage terminal lines, and maybe offer some runlevels to compartmentalize operating environments. I've seen it said elsewhere that the amount of surface area being shoved into PID 1 can only lead to disaster. Are there any known attempts in the modern age to roll Linux with something resembling research/BSD init? That would be a nice counter to the proliferation of systemd. Even if it doesn't make a dent in the actual uptake, at least it'd feel cathartic to have an alternative in the opposite direction. - Matt G. From falcon at freecalypso.org Fri Jun 14 04:45:09 2024 From: falcon at freecalypso.org (Mychaela Falconia) Date: Thu, 13 Jun 2024 10:45:09 -0800 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: <20240613184520.3ED803740153@freecalypso.org> Matt G. wrote: > Are there any known attempts in the modern age to roll Linux with something > resembling research/BSD init? That would be a nice counter to the > proliferation of systemd. I use Slackware and will never give it up. It uses sysvinit, which isn't as good as research/BSD init, but a helluvalot better than systemd! 
There is also Devuan, a sans-systemd fork of Debian, for those who aren't hard-core enough to go full Slackware. M~ From crossd at gmail.com Fri Jun 14 04:54:51 2024 From: crossd at gmail.com (Dan Cross) Date: Thu, 13 Jun 2024 14:54:51 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On Thu, Jun 13, 2024 at 2:39 PM segaloco via TUHS wrote: > On Thursday, June 13th, 2024 at 9:47 AM, Arrigo Triulzi via TUHS wrote: > > On 13 Jun 2024, at 17:39, Clem Cole clemc at ccc.com wrote: > > > IMO systemd, was >>not<< a net positive - it falls so many of these tests WRT to good programming and good ideas. > > > > Binary logs, ’nuff said. > > > > Good sysadmins live & die by grep and being able to visually detect departures from the norm by just looking at the “shape” of logs scrolling down a screen (before), terminal window now. > > > > Yours disgusted since v1 of that abomination. > > Part of what irks me is the lack of choice. Just like many outlets will use GNU extensions to otherwise POSIX components, leaving the rest of the world out in the rain, several bits of the Linux ecosystem have backed systemd as the one true way and are hobbled if even usable at all with other init systems out there. User software shouldn't have any attachment to a particular init system, it isn't meant to provide "services" beyond run this script at this time based on the conditions of boot, manage terminal lines, and maybe offer some runlevels to compartmentalize operating environments. I've seen it said elsewhere that the amount of surface area being shoved into PID 1 can only lead to disaster. I agree about the lack of choice, but I think the reasoning here shows a bit of an impedance mismatch between what systemd is, and what people think that it should be. 
In particular, it left merely being an "init system" behind a long time ago, and is now the all-singing, all-dancing service and resource management platform for the system. That's not a terrible thing to have, if the goal of your system is to be able to, well, run services and manage resources. But is systemd, as an expression of that idea, a good thing? I don't really think so. My arguments here tend to be somewhat vague, but I do believe that there is valid criticism beyond just, "It's new! It's different! I hate it!!" Portability is a good argument. Where I think many of the arguments against systemd break down is by dismissing the real problems that it solves; off the top of my head, this may include automatically restarting dependent services when a daemon crashes and is restarted. But again, just because a tool solves a real problem doesn't mean that it's a good tool, or even a good tool for solving that problem. I suspect much of the rush to systemd is driven less by enthusiasm for how it does things, and more for it being the only thing out there that solves some problem that the distro maintainers consider important (ie, that they get asked about frequently). > Are there any known attempts in the modern age to roll Linux with something resembling research/BSD init? Alpine Linux may come closest? And of course the BSDs still exist. > That would be a nice counter to the proliferation of systemd. Even if it doesn't make a dent in the actual uptake, at least it'd feel cathartic to have an alternative in the opposite direction. There are still some Linux distributions that don't ship with systemd, but I think they're just delaying the inevitable. On a more meta point, there are big differences between production server systems, user-oriented systems, and research systems. Systemd feels very much like it comes from the first of those, to me; very mainframe-y. - Dan C. From woods at robohack.ca Thu Jun 13 05:29:28 2024 From: woods at robohack.ca (Greg A. 
Woods) Date: Wed, 12 Jun 2024 12:29:28 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: At Thu, 13 Jun 2024 14:54:51 -0400, Dan Cross wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > I agree about the lack of choice, Personally I don't see any lack of choice. There are more than three good BSD derived systems that together cover almost any imaginable set of requirements. The sheep will follow the herd though..... > this may include automatically restarting dependent services when a > daemon crashes and is restarted. If your daemons are crashing and in need of restarting so often that a tool is needed to restart them then you have a myriad of other far more pressing problems you should be dealing with first! > being the only thing out there that solves some problem that the > distro maintainers consider important (ie, that they get asked about > frequently). If it were so simple I would expect that claim to be more widely advertised, yet we fall back on "it restarts daemons that crash". Personally I think systemd is trying to solve the rather high demands and diverse requirements of mobile laptop systems and is trying to meet or match MS Windows in this regard. (personally I think macos has them both beat by a country mile!) It sure as heck isn't of any use in production server environments! If it were more about servers then it would look more like SMF, or maybe launchd, and its code wouldn't look like it was written by a grade school student. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From ads at salewski.email Fri Jun 14 05:37:57 2024 From: ads at salewski.email (Alan D. Salewski) Date: Thu, 13 Jun 2024 15:37:57 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> On Thu, Jun 13, 2024, at 14:39, segaloco via TUHS wrote: [...] > Are there any known attempts in the modern age to roll Linux with > something resembling research/BSD init? That would be a nice counter > to the proliferation of systemd. Even if it doesn't make a dent in the > actual uptake, at least it'd feel cathartic to have an alternative in > the opposite direction. > > - Matt G. I'm interested in hearing about other options in this space, too. The ones that I'm aware of include: 1. Slackware http://www.slackware.com/ 2. Debian, with sysvinit-core or some other init https://www.debian.org/doc/manuals/debian-faq/customizing.en.html#sysvinit https://wiki.debian.org/Init 3. Devuan (for a Debian derived system w/o systemd) https://www.devuan.org/ The most no-fuss, just-works-out-of-the-box-without-systemd approach would probably be to use Slackware. Debian can be easily customized to run without systemd, once you know the formulas[0]. I did not include Alpine Linux[1] in the above list because it includes lots of tools in a single executable (possibly "init").[2] It does not use systemd by default, though. I mention Devuan only because I'm aware of it -- I've never used it in anger. 
-Al [0] Even on a systemd-infected host, it isn't much more complicated than: * install the 'sysvinit-core' package (and friends) * pin the 'systemd-sysv' package to '-1' (never install) * reboot * purge most (or all) packages with 'systemd' in their name [1] https://www.alpinelinux.org/about/ [2] It's great in certain circumstances, though -- it's my go-to distro base for most of my small-footprint-scenarios work with Linux containers. From crossd at gmail.com Fri Jun 14 06:03:30 2024 From: crossd at gmail.com (Dan Cross) Date: Thu, 13 Jun 2024 16:03:30 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On Thu, Jun 13, 2024 at 3:18 PM Greg A. Woods wrote: > [snip] > > this may include automatically restarting dependent services when a > > daemon crashes and is restarted. > > If your daemon's are crashing and in need of restarting so often that a > tool is needed to restart them then you have a myriad of other far more > pressing problems you should be dealing with first! I may be in a bit of a grumpy mood, so forgive me if this is snarkier than I intend, but statements like this bother me. First, there are a number of reasons that programs crash in the real world, in production environments. Often, the people in charge of keeping them running are not the people who wrote the software; nevermind that sometimes the reason for a crash has nothing to do with the software itself; hardware soft-failures, for instance (that is, where a momentary hardware blip kills a process on some machine but isn't serious enough to drain the computer and reschedule the work elsewhere; particularly where the OS can partition off a bad component, such as a disk or a chunk of RAM or a CPU). When you actually run systems at scale, you engineer them under an expectation of failure and to be resilient. 
That means automatically restarting services when they crash, among many other things. Second, there are many reasons beyond just "lol it crashed" that you may want to restart dependent services; for example, perhaps you are upgrading a system and part of the upgrade process is restarting your dependents. Having a system that does things like that for you is useful. > > being the only thing out there that solves some problem that the > > distro maintainers consider important (ie, that they get asked about > > frequently). > > If it were so simple I would expect that claim to be more widely > advertised, yet we fall back on "it restarts daemons that crash". See above. > Personally I think systemd trying to solve the rather high demands and > diverse requirements of mobile laptop systems and is trying to meet or > match MS Windows in this regard. (personally I think macos has them > both beat by a country mile!) > > It sure as heck isn't of any use in production server environments! That's not an argument, it's an assertion, and one that isn't well supported. > If it were more about servers then it would look more like SMF, or maybe > launchd, and it's code wouldn't look like it was written by a grade > school student. Sorry, but this is exactly the sort of overly dismissive attitude that I was referring to earlier. You undermine your own argument by mentioning SMF (which can automatically restart services when they crash), for example. - Dan C. From clemc at ccc.com Fri Jun 14 06:05:28 2024 From: clemc at ccc.com (Clem Cole) Date: Thu, 13 Jun 2024 16:05:28 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> Message-ID: Except ... 
My primary Linux instances are on my growing family of Raspberry PIs or equiv (I have a couple of BananaPIs and Libre boards). It's based enough on Raspbian which, like Henry's line WRT 4.2, is just like Debian only different. Frankly, dealing with those issues when you leave the fold is a huge PITA. The problem for me, I really don't have a choice as I can not run a *BSD on them easily to do what I want to do - which is typically to control HW (like my PiDP's or some "homecontrol" stuff I have). As I have said, it's all about economics (well, and ego in this case). You have to make something better to make it valuable. Replacing how the system init worked always struck me as throwing out the baby with the bathwater. SysV init was not at all bad, and moving from Research/BSD init to it was not a huge lift. That said, I agree with Dan, adding a resource system is a good idea and probably was a "hole." Years ago, the CMU Mach team created their nanny system, but it ran in cooperation with init - it did not try to replace init (remember Mach was trying to be a superset of BSD -- they had learned the lessons with Accent of being completely different). When Apple picked up Mach, their engineers eventually combined them to create launchd - which is what I think opened up the world to "getting rid of init" and thus systemd being considered possible by some Linux folks. Of course, the Linux developers could not settle for using launchd (NIH) .... so, sadly, https://xkcd.com/927/ applies here - that's the ego part. -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.phillip.garcia at gmail.com Fri Jun 14 06:06:39 2024 From: a.phillip.garcia at gmail.com (A. P.
Garcia) Date: Thu, 13 Jun 2024 16:06:39 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> Message-ID: On Thu, Jun 13, 2024, 3:38 PM Alan D. Salewski wrote: > On Thu, Jun 13, 2024, at 14:39, segaloco via TUHS wrote: > [...] > > Are there any known attempts in the modern age to roll Linux with > > something resembling research/BSD init? > > I'm interested in hearing about other options in this space, > too. > https://nosysyemd.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Fri Jun 14 06:26:00 2024 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 14 Jun 2024 06:26:00 +1000 (EST) Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On Thu, 13 Jun 2024, Arrigo Triulzi via TUHS wrote: > Binary logs, ’nuff said. Ugh... > Good sysadmins live & die by grep and being able to visually detect > departures from the norm by just looking at the “shape” of logs > scrolling down a screen (before), terminal window now. Which is exactly what I do: one window with "tail -F /var/log/maillog" and another with "tail -F /var/log/httpd-access.log"; I've spotted lots of attacks that way (followed by a quick update to my firewall). 
-- Dave From jcapp at anteil.com Fri Jun 14 06:26:55 2024 From: jcapp at anteil.com (Jim Capp) Date: Thu, 13 Jun 2024 16:26:55 -0400 (EDT) Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: Message-ID: <1403506.1536.1718310415450.JavaMail.root@zimbraanteil> https://nosystemd.org/ From: "A. P. Garcia" To: "Alan D. Salewski" Cc: "TUHS (The Unix Heritage Society)" Sent: Thursday, June 13, 2024 4:06:39 PM Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' • The Register On Thu, Jun 13, 2024, 3:38 PM Alan D. Salewski wrote: On Thu, Jun 13, 2024, at 14:39, segaloco via TUHS wrote: [...] > Are there any known attempts in the modern age to roll Linux with > something resembling research/BSD init? I'm interested in hearing about other options in this space, too. https://nosysyemd.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Fri Jun 14 06:31:54 2024 From: tuhs at tuhs.org (Bakul Shah via TUHS) Date: Thu, 13 Jun 2024 13:31:54 -0700 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> Message-ID: <9B073245-B042-42B0-8C2D-1CE62B05322E@iitbombay.org> On Jun 13, 2024, at 1:05 PM, Clem Cole wrote: > > My primary Linux instances are on my growing family of Raspberry PIs or equiv (I have a couple of BananaPIs and Libre boards). It's based enough on Raspian which like Henry's line WRT 4.2, is just like Debian only different. Frankly, dealing with those issues when you leave the fold is a huge PITA. The problem for me, I really don't have a choice as I can not run a *BSD on them easily to do what I want to do - which is typically to control HW (like my PiDP's or a some "homecontrol" stuff I have). 
(at least) FreeBSD works on Pis (though support will usually lag or be non-existent for various "HATs"). And plan9 has worked for much longer, not to mention it's more flexible. Both have gpio drivers. Writing drivers for plan9 should be far easier. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Fri Jun 14 07:35:44 2024 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 13 Jun 2024 14:35:44 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> Message-ID: <20240613213544.GB28495@mcvoy.com> On Thu, Jun 13, 2024 at 04:06:39PM -0400, A. P. Garcia wrote: > On Thu, Jun 13, 2024, 3:38???PM Alan D. Salewski wrote: > > > On Thu, Jun 13, 2024, at 14:39, segaloco via TUHS wrote: > > [...] > > > Are there any known attempts in the modern age to roll Linux with > > > something resembling research/BSD init? > > > > > > I'm interested in hearing about other options in this space, > > too. > > > > https://nosysyemd.org Doesn't resolve for me? -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From flexibeast at gmail.com Fri Jun 14 10:27:29 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 14 Jun 2024 10:27:29 +1000 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> (Alan D. Salewski's message of "Thu, 13 Jun 2024 15:37:57 -0400") References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> Message-ID: <87cyokpdvy.fsf@gmail.com> "Alan D. Salewski" writes: > I'm interested in hearing about other options in this space, > too. i'm currently running Gentoo+OpenRC as my daily driver, with OpenRC an 'official' Gentoo option. 
https://www.gentoo.org/ Previously i was running Void+s6/66, after having been running Void+runit, with runit being Void's default system (at least at the time). https://voidlinux.org/ Artix is an Arch-based non-systemd distro, with support for OpenRC, runit, s6 and dinit. https://artixlinux.org/ Obarun is an Arch-based distro using 66, which is roughly a 'wrapper' for s6, providing declarative syntax for service definition. https://wiki.obarun.org/ Not a distro, but the s6-overlay project allows using s6 as PID 1 in Docker containers: https://github.com/just-containers/s6-overlay The developer of nosh has a page outlining the known problems with Sys V rc: https://jdebp.uk/FGA/system-5-rc-problems.html The developer of dinit has written a nice comparison of various non-systemd systems providing init / service supervision / service management: https://github.com/davmac314/dinit/blob/master/doc/COMPARISON The developer of s6 has pages: * explaining his perspective on various non-systemd systems: https://skarnet.org/software/s6/why.html * providing a general overview of s6 itself: https://skarnet.org/software/s6/overview.html * discussing s6's approach to 'socket activation', which uses file descriptors: https://skarnet.org/software/s6/socket-activation.html (s6 is the system i'm most familiar with in this space, not least because i'm the porter and maintainer of mdoc(7) versions of the documentation for various parts of the s6/skaware ecosystem.) Alexis. From lm at mcvoy.com Fri Jun 14 10:59:02 2024 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 13 Jun 2024 17:59:02 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ???
The Register In-Reply-To: <87cyokpdvy.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> Message-ID: <20240614005902.GD28495@mcvoy.com> This is all well and good but what I, and I suspect other boomers like me, are looking for, is something like Ubuntu without systemd. I'm a xubuntu guy (Ubuntu with a lighter weight desktop), but whatever. Ubuntu is fine, everything works there. So is there an "Everything just works" distro without systemd? A guy can hope but I suspect not. I'm not trying to be a pain in the ass but I'm 62, I prefer to spend my effort on fishing on the ocean, I'm not some young guy that wants to put in a ton of hours on my Linux install, I like Linux because it is Unix and it is trivial to install. Windows? Hours and hours of finding drivers after you find some USB network connector that Windows knows? No thanks. *BSD - have you installed one of those? It's a trip back to the 1980s, those installers are fine for BSD developers but just suck compared to Linux. Mainstream Linux just works. On Fri, Jun 14, 2024 at 10:27:29AM +1000, Alexis wrote: > "Alan D. Salewski" writes: > > >I'm interested in hearing about other options in this space, > >too. > > i'm currently running Gentoo+OpenRC as my daily driver, with OpenRC an > 'official' Gentoo option. > > https://www.gentoo.org/ > > Previously i was running Void+s6/66, after having been running Void+runit, > with runit being Void's default system (at least at the time). > > https://voidlinux.org/ > > Artix is an Arch-based non-systemd distro, with support for OpenRC, runit, > s6 and dinit. > > https://artixlinux.org/ > > Obarun is an Arch-based distro using 66, which is roughly a 'wrapper' for > s6, providing declarative syntax for service definition. 
> > https://wiki.obarun.org/ > > Not a distro, but the s6-overlay project allows using s6 as PID 1 in Docker > containers: > > https://github.com/just-containers/s6-overlay > > The developer of nosh has a page outlining the know problems with Sys V rc: > > https://jdebp.uk/FGA/system-5-rc-problems.html > > The developer of dinit has written a nice comparison of various non-systemd > systems providing init / service supervision / service management: > > https://github.com/davmac314/dinit/blob/master/doc/COMPARISON > > The developer of s6 has pages: > > * explaining his perspective on various non-systemd systems: > > https://skarnet.org/software/s6/why.html > > * providing a general overview of s6 itself: > > https://skarnet.org/software/s6/overview.html > > * discussing s6's approach to 'socket activation', which uses file > descriptors: > > https://skarnet.org/software/s6/socket-activation.html > > (s6 is the system i'm most familiar with in this space, not least because > i'm the porter and maintainer of mdoc(7) versions of the documentation for > various parts of the s6/skaware ecosystem.) > > > Alexis. -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From luther.johnson at makerlisp.com Fri Jun 14 11:11:39 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Thu, 13 Jun 2024 18:11:39 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: I believe there is a Debian package you can install after installing Debian that reverts to sysvinit, removes systemd. There is also a configuration that leaves systemd in place but lets you use the sysvinit scripts and forget (well for most people, most uses) that systemd is there. 
I have the latter style of installation on my server, but I was thinking of going the full no-systemd route sometime. On 06/13/2024 05:59 PM, Larry McVoy wrote: > This is all well and good but what I, and I suspect other boomers like me, > are looking for, is something like Ubuntu without systemd. I'm a xubuntu > guy (Ubuntu with a lighter weight desktop), but whatever. Ubuntu is fine, > everything works there. > > So is there an "Everything just works" distro without systemd? A guy can > hope but I suspect not. > > I'm not trying to be a pain in the ass but I'm 62, I prefer to spend my > effort on fishing on the ocean, I'm not some young guy that wants to > put in a ton of hours on my Linux install, I like Linux because it is > Unix and it is trivial to install. Windows? Hours and hours of finding > drivers after you find some USB network connector that Windows knows? > No thanks. *BSD - have you installed one of those? It's a trip back > to the 1980s, those installers are fine for BSD developers but just suck > compared to Linux. Mainstream Linux just works. > > On Fri, Jun 14, 2024 at 10:27:29AM +1000, Alexis wrote: >> "Alan D. Salewski" writes: >> >>> I'm interested in hearing about other options in this space, >>> too. >> i'm currently running Gentoo+OpenRC as my daily driver, with OpenRC an >> 'official' Gentoo option. >> >> https://www.gentoo.org/ >> >> Previously i was running Void+s6/66, after having been running Void+runit, >> with runit being Void's default system (at least at the time). >> >> https://voidlinux.org/ >> >> Artix is an Arch-based non-systemd distro, with support for OpenRC, runit, >> s6 and dinit. >> >> https://artixlinux.org/ >> >> Obarun is an Arch-based distro using 66, which is roughly a 'wrapper' for >> s6, providing declarative syntax for service definition. 
>> >> https://wiki.obarun.org/ >> >> Not a distro, but the s6-overlay project allows using s6 as PID 1 in Docker >> containers: >> >> https://github.com/just-containers/s6-overlay >> >> The developer of nosh has a page outlining the know problems with Sys V rc: >> >> https://jdebp.uk/FGA/system-5-rc-problems.html >> >> The developer of dinit has written a nice comparison of various non-systemd >> systems providing init / service supervision / service management: >> >> https://github.com/davmac314/dinit/blob/master/doc/COMPARISON >> >> The developer of s6 has pages: >> >> * explaining his perspective on various non-systemd systems: >> >> https://skarnet.org/software/s6/why.html >> >> * providing a general overview of s6 itself: >> >> https://skarnet.org/software/s6/overview.html >> >> * discussing s6's approach to 'socket activation', which uses file >> descriptors: >> >> https://skarnet.org/software/s6/socket-activation.html >> >> (s6 is the system i'm most familiar with in this space, not least because >> i'm the porter and maintainer of mdoc(7) versions of the documentation for >> various parts of the s6/skaware ecosystem.) >> >> >> Alexis. From flexibeast at gmail.com Fri Jun 14 11:42:32 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 14 Jun 2024 11:42:32 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> (Larry McVoy's message of "Thu, 13 Jun 2024 17:59:02 -0700") References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: <878qz8paev.fsf@gmail.com> Larry McVoy writes: > This is all well and good but what I, and I suspect other > boomers like me, > are looking for, is something like Ubuntu without systemd. I'm > a xubuntu > guy (Ubuntu with a lighter weight desktop), but whatever. > Ubuntu is fine, > everything works there. 
> > So is there an "Everything just works" distro without systemd? > A guy can > hope but I suspect not. Mm, well, i guess that depends on what one's "everything" is. i used Ubuntu years ago - having moved from Mandriva - and was pleased by how everything "just worked". But over time i started experiencing various issues where things _didn't_ just work (i can't remember what now; i think printing might have been one thing), which became increasingly frustrating. So i moved to Debian, and had a much more "just works" experience. But then Debian moved to systemd, and i started getting frustrated again in various ways, and so i moved to Void. Void's a binary distro, and i don't recall having any more issues with it than i ended up having with Ubuntu. And for experienced *n*x users, the installation process is trivial (even if the installer is text-based, rather than involving snazzy graphics). > I'm not trying to be a pain in the ass but I'm 62, I prefer to > spend my > effort on fishing on the ocean, I'm not some young guy that > wants to > put in a ton of hours on my Linux install Fwiw, i'm a 50-year-old woman. :-) My first distro was RedHat 5.2, around the end of '97. To me, this is a "bubbles in wallpaper" thing. i've spent the time setting up Gentoo because i'm now at the point where i'm clear on what i do and don't need/want (in general), and i'm trying to minimise the extent to which i'm beholden to having to deal with breaking changes to subsystems / libraries / software that i don't need/want, or with breakages i don't know how to immediately fix or workaround. Because i have _many_ other life commitments myself, and i've never distro-hopped just for the fun of it; i've always been driven to do so, for various reasons. My distro is merely a means to an end, not the end in itself. 
(i've taken on s6 documentation stuff because although there's no shortage of people wanting alternatives to systemd, there are far fewer people volunteering to do even small amounts of the work necessary for that.) Alexis. From rminnich at gmail.com Fri Jun 14 14:22:03 2024 From: rminnich at gmail.com (ron minnich) Date: Thu, 13 Jun 2024 21:22:03 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <878qz8paev.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> <878qz8paev.fsf@gmail.com> Message-ID: well, it depends on what you want, but tinycorelinux has worked well for me, and it fits in about 24M On Thu, Jun 13, 2024 at 6:42 PM Alexis wrote: > Larry McVoy writes: > > > This is all well and good but what I, and I suspect other > > boomers like me, > > are looking for, is something like Ubuntu without systemd. I'm > > a xubuntu > > guy (Ubuntu with a lighter weight desktop), but whatever. > > Ubuntu is fine, > > everything works there. > > > > So is there an "Everything just works" distro without systemd? > > A guy can > > hope but I suspect not. > > Mm, well, i guess that depends on what one's "everything" is. i > used Ubuntu years ago - having moved from Mandriva - and was > pleased by how everything "just worked". But over time i started > experiencing various issues where things _didn't_ just work (i > can't remember what now; i think printing might have been one > thing), which became increasingly frustrating. So i moved to > Debian, and had a much more "just works" experience. But then > Debian moved to systemd, and i started getting frustrated again in > various ways, and so i moved to Void. > > Void's a binary distro, and i don't recall having any more issues > with it than i ended up having with Ubuntu. 
And for experienced > *n*x users, the installation process is trivial (even if the > installer is text-based, rather than involving snazzy graphics). > > > I'm not trying to be a pain in the ass but I'm 62, I prefer to > > spend my > > effort on fishing on the ocean, I'm not some young guy that > > wants to > > put in a ton of hours on my Linux install > > Fwiw, i'm a 50-year-old woman. :-) My first distro was RedHat 5.2, > around the end of '97. > > To me, this is a "bubbles in wallpaper" thing. i've spent the time > setting up Gentoo because i'm now at the point where i'm clear on > what i do and don't need/want (in general), and i'm trying to > minimise the extent to which i'm beholden to having to deal with > breaking changes to subsystems / libraries / software that i don't > need/want, or with breakages i don't know how to immediately fix > or workaround. Because i have _many_ other life commitments > myself, and i've never distro-hopped just for the fun of it; i've > always been driven to do so, for various reasons. My distro is > merely a means to an end, not the end in itself. > > (i've taken on s6 documentation stuff because although there's no > shortage of people wanting alternatives to systemd, there are far > fewer people volunteering to do even small amounts of the work > necessary for that.) > > > Alexis. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ama at ugr.es Fri Jun 14 16:54:13 2024 From: ama at ugr.es (Angel M Alganza) Date: Fri, 14 Jun 2024 08:54:13 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <878qz8paev.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> <878qz8paev.fsf@gmail.com> Message-ID: On 2024-06-14 03:42, Alexis wrote: > Mm, well, i guess that depends on what one's "everything" is. 
i used > Ubuntu years ago - having moved from Mandriva - and was pleased by how > everything "just worked". But over time i started experiencing various > issues where things _didn't_ just work (i can't remember what now; i > think printing might have been one thing), which became increasingly > frustrating. So i moved to Debian, and had a much more "just works" > experience. But then Debian moved to systemd, and i started getting > frustrated again in various ways, and so i moved to Void. I used Debian for 27 years, until ascii (the last release without the nasty systemd), which I upgraded to Devuan ascii. The upgrade process was flawless with everything working without problems for me (I don't use disgusting DEs like Gnome or KDE, of course). In several systems, I've kept upgrading release after release of Devuan, changing every part of the hardware in the way, and it keeps working great. In the BSDs, I much prefer the more classic text installers, where I'm in control, but NomadBSD and GhostBSD have graphical installers which seem to be much simpler and easier to use than the Ubuntu one. Cheers, Ángel From dave at horsfall.org Fri Jun 14 17:04:29 2024 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 14 Jun 2024 17:04:29 +1000 (EST) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: On Thu, 13 Jun 2024, Larry McVoy wrote: > This is all well and good but what I, and I suspect other boomers like > me, are looking for, is something like Ubuntu without systemd. I'm a > xubuntu guy (Ubuntu with a lighter weight desktop), but whatever. > Ubuntu is fine, everything works there. 
I'm looking for something like Debian for its Amateur radio ("ham") stuff for a laptop, but without the "systemd" monstrosity; I've seen a reference to "Devuan" (?) which might suit me. > So is there an "Everything just works" distro without systemd? A guy > can hope but I suspect not. Hopefully... > I'm not trying to be a pain in the ass but I'm 62, I prefer to spend my > effort on fishing on the ocean, I'm not some young guy that wants to put > in a ton of hours on my Linux install, I like Linux because it is Unix > and it is trivial to install. Windows? Hours and hours of finding > drivers after you find some USB network connector that Windows knows? No > thanks. *BSD - have you installed one of those? It's a trip back to > the 1980s, those installers are fine for BSD developers but just suck > compared to Linux. Mainstream Linux just works. Only 62? I turn 72 in a few months :-) And I *don't* like Linux precisely because it is *not* Unix (too many irritating differences), but I need something for the aforesaid lapdog (with some proprietary hardware, but 3rd-party drivers exist). As for *BSD, yes, many times; I started with SunOS, used OpenBSD/NetBSD/FreeBSD (the latter is my current server), and I really don't see any problem. Heck, even my Mac runs FreeBSD (albeit on steroids)... I have to say though that the SunOS graphical installer was beautiful :-) -- Dave From arnold at skeeve.com Fri Jun 14 17:33:06 2024 From: arnold at skeeve.com (arnold at skeeve.com) Date: Fri, 14 Jun 2024 01:33:06 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: <202406140733.45E7X6vn408140@freefriends.org> I'm with Larry on this one. 
I'm 64, I been running Ubuntu Mate for ~8 years or so, and even though it's systemd, and yeah, systemd IS an abomination, I don't care enough to go looking for something else. I still have $DAYJOB, kids living at home, Free Software to maintain, books to write, etc., etc., etc. https://wiki.ubuntu.com/SystemdForUpstartUsers#Permanent_switch_back_to_upstart looks like it might do the trick, but it also looks like it's a little old, and I don't want to brick my production systems. I may try to spin up a VM and see if the instructions work before doing it for real. Arnold Larry McVoy wrote: > This is all well and good but what I, and I suspect other boomers like me, > are looking for, is something like Ubuntu without systemd. I'm a xubuntu > guy (Ubuntu with a lighter weight desktop), but whatever. Ubuntu is fine, > everything works there. > > So is there an "Everything just works" distro without systemd? A guy can > hope but I suspect not. > > I'm not trying to be a pain in the ass but I'm 62, I prefer to spend my > effort on fishing on the ocean, I'm not some young guy that wants to > put in a ton of hours on my Linux install, I like Linux because it is > Unix and it is trivial to install. Windows? Hours and hours of finding > drivers after you find some USB network connector that Windows knows? > No thanks. *BSD - have you installed one of those? It's a trip back > to the 1980s, those installers are fine for BSD developers but just suck > compared to Linux. Mainstream Linux just works. From akosela at andykosela.com Fri Jun 14 17:34:41 2024 From: akosela at andykosela.com (Andy Kosela) Date: Fri, 14 Jun 2024 09:34:41 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? 
The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: On Friday, June 14, 2024, Larry McVoy wrote: > This is all well and good but what I, and I suspect other boomers like me, > are looking for, is something like Ubuntu without systemd. I'm a xubuntu > guy (Ubuntu with a lighter weight desktop), but whatever. Ubuntu is fine, > everything works there. > > So is there an "Everything just works" distro without systemd? A guy can > hope but I suspect not. > > I'm not trying to be a pain in the ass but I'm 62, I prefer to spend my > effort on fishing on the ocean, I'm not some young guy that wants to > put in a ton of hours on my Linux install, I like Linux because it is > Unix and it is trivial to install. Windows? Hours and hours of finding > drivers after you find some USB network connector that Windows knows? > No thanks. *BSD - have you installed one of those? It's a trip back > to the 1980s, those installers are fine for BSD developers but just suck > compared to Linux. Mainstream Linux just works. > Larry, in that case I think you will be best with sticking to Xubuntu or Debian. These distros just work. And although I am also from anti-systemd camp after years of using systemd in production environments -- pretty much it has been a standard since rhel 7 -- I conclude that systemd is not the end of the world like some purported back in the days. Mostly it just works too. Switching to some esoteric distro maintained by a couple of university students that will most likely disappear in three years is probably not the best option. Debian is stable and been there from the beginning. All the packages are also there just one simple command away. These days I am also not that interested in kernel development anymore or debugging low level stuff. 
It was fun when Linux or FreeBSD kernels were much smaller and less complex. I am mostly into retro computing and retro games these days; still playing and enjoying old classics like "The Secret of Monkey Island" from LucasArts on period-correct hardware. For daily tasks I am using a MacBook which also just works and has a terminal, so I can run my stuff from there. I can control my k8s clusters from the terminal. I am still mostly CLI oriented. Command line will always remain the most elegant and beautiful interface to speak with machines. --Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at horsfall.org Fri Jun 14 17:44:22 2024 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 14 Jun 2024 17:44:22 +1000 (EST) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: On Fri, 14 Jun 2024, Andy Kosela wrote: > For daily tasks I am using Macbook which also just works and have a > terminal, so I can run my stuff from there. I can control my k8s > clusters from the terminal. I am still mostly CLI oriented. Command line > will always remain the most elegant and beautiful interface to speak > with machines. What he said... -- Dave From ralph at inputplus.co.uk Fri Jun 14 18:59:39 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Fri, 14 Jun 2024 09:59:39 +0100 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240613184520.3ED803740153@freecalypso.org> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <20240613184520.3ED803740153@freecalypso.org> Message-ID: <20240614085939.0D1F421A59@orac.inputplus.co.uk> Hi, The Arch Linux wiki is often useful for non-Arch systems.
It has details of alternative init systems to systemd, including s6 mentioned elsewhere in the thread: https://wiki.archlinux.org/title/Init -- Cheers, Ralph. From katolaz at freaknet.org Fri Jun 14 21:31:57 2024 From: katolaz at freaknet.org (Vincenzo Nicosia) Date: Fri, 14 Jun 2024 11:31:57 +0000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' ??? The Register In-Reply-To: <20240614005902.GD28495@mcvoy.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <4f7a96cc-2d96-4547-952c-b414a773b62a@app.fastmail.com> <87cyokpdvy.fsf@gmail.com> <20240614005902.GD28495@mcvoy.com> Message-ID: On Thu, Jun 13, 2024 at 05:59:02PM -0700, Larry McVoy wrote: > This is all well and good but what I, and I suspect other boomers like me, > are looking for, is something like Ubuntu without systemd. I'm a xubuntu > guy (Ubuntu with a lighter weight desktop), but whatever. Ubuntu is fine, > everything works there. > > So is there an "Everything just works" distro without systemd? A guy can > hope but I suspect not. TL;DR: Devuan (https://devuan.org) works more or less fine as a daily driver, both as a desktop and on servers, and it gives a choice of sysvinit, runit, openrc, and lately also s6 I believe (but I haven't tried it). If you need something that "kinda works" without systemd, well, Devuan is still "kinda usable", and possibly one of the best options around. Personal rant follows. You are not expected to read this ;P Linux is probably broken beyond repair now, and I am saying that with a heavy heart, having used exclusively Linux and all the other *BSDs in the last 26 years, and having advocated its adoption strongly, in different environments. In many ways, Linux is not unix, not any more, to any sensible measure, and since a good while ago. These days Linux can only provide a somehow-lightly-unixy-flavoured system that "kinda works", provided that you do things as the distro decided, and do not look under the hood, ever.
I personally believe Linux was at its peak around 10-12 years ago, even in terms of how well everything worked in it and how easy it still was to do things your own way, if you wanted to do so. It was still simple enough, yet it provided a full-featured computing experience, from desktops to high-end servers. Nowadays if you decide to use Linux you must accept that far too many things "do happen" to your computer, and neither you nor anybody else knows why they do, or why they shouldn't, or how to alter their behaviour or avoid them altogether. There is so much complexity everywhere that there is almost no space left for KISS, anywhere. Linux has eaten itself alive plus a whole bunch of additional bloat, many times, recursively. I have already moved all the servers away from Linux in the last 6-7 years, and I am currently in the last phase of moving my desktops away from it as well. It's a sad farewell, but a necessary one. You can't be totally fed up and keep carrying on for long, can you? :) My2Cents Enzo -- From e5655f30a07f at ewoof.net Fri Jun 14 21:32:27 2024 From: e5655f30a07f at ewoof.net (Michael Kjörling) Date: Fri, 14 Jun 2024 11:32:27 +0000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' • The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> On 14 Jun 2024 06:26 +1000, from dave at horsfall.org (Dave Horsfall): >> Good sysadmins live & die by grep and being able to visually detect >> departures from the norm by just looking at the “shape” of logs >> scrolling down a screen (before), terminal window now. > > Which is exactly what I do: one window with "tail -F /var/log/maillog" and > another with "tail -F /var/log/httpd-access.log"; I've spotted lots of > attacks that way (followed by a quick update to my firewall).
journalctl -f -u 'postfix*' or journalctl -f -u 'exim*' or journalctl -f -u 'smtpd' or whatever else might map to the SMTP server software you're running. (Sure, it gets slightly more complicated if you don't know what SMTP server software is in use, but in that case I think a case can be made for asking why you even care about its logs?) To filter, one can either add -g 'pcre-expression', or pipe the output through grep -P for the same effect. Or you can use something like --output=json (or -o json) and pipe the output of that through, say, jq. And I'm pretty sure most web servers still log to text files in typical configurations, so that plain "tail -F" should still work there. Is systemd perfect? Of course not. I have my gripes with it myself, but not enough to actively fight it. And nobody is _forcing_ anyone to use systemd; plenty of examples have already been posted in this thread, from specially-made Linux distribution derivatives to ones that have opted to not include systemd to *BSDs to links to instructions for how to get rid of systemd from more mainstream Linux distributions that have opted to use it as a default. Also, the subjectular headline from The Register seems like something someone has dreamed up; I certainly don't see anything like that in the actual release announcement at . Also using multiple different search engines to try to find it only brought up the _The Register_ article and a handful of places regurgitating that quote as a real representation of a statement from the systemd maintainers. I don't see anything resembling it anywhere on either systemd.io or github.com. Until I see someone posting a link to something like that quote posted by a systemd maintainer in representation of _any_ systemd release, let alone v256, I'm going to treat that one as hearsay at best, and actively malicious at worst.
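To make the above concrete, here is a sketch of that pipeline (the 'postfix*' unit glob and the canned log lines are illustrative assumptions; substitute whatever unit your MTA actually runs under):

```shell
# Sketch of the journalctl equivalents of "tail -F /var/log/maillog".
# On a live box you would run one of:
#
#   journalctl -f -u 'postfix*'               # follow, like tail -F
#   journalctl -f -u 'postfix*' -g 'refused'  # built-in PCRE filter
#   journalctl -u 'postfix*' -o json          # JSON lines, e.g. for jq
#
# The post-filtering stage is the same either way; simulated here with
# canned log lines so the pipeline runs self-contained:
printf '%s\n' \
  'postfix/smtpd[123]: connect from unknown[203.0.113.7]' \
  'postfix/qmgr[456]: 0123ABC: removed' \
  'postfix/smtpd[123]: lost connection after AUTH from unknown[203.0.113.7]' |
  grep 'unknown\['
```

On a real system the printf stage is simply journalctl's own output; everything after the pipe is unchanged.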
As much as I can appreciate the architectural simplicity of early UNIX, how about not ignoring the fact that today's systems are quite a bit more complex both at the hardware and the software level than they were in the late 1960s, and that to some extent, this complexity _itself_ (unfortunately) drives complexity in other areas. Also that much of that simplicity was also out of necessity. There's a reason why most software these days isn't written directly in assembler or even C. None of which negates the accomplishments of either UNIX or C. -- Michael Kjörling 🔗 https://michael.kjorling.se “Remember when, on the Internet, nobody cared that you were a dog?” From a.phillip.garcia at gmail.com Fri Jun 14 22:21:17 2024 From: a.phillip.garcia at gmail.com (A. P. Garcia) Date: Fri, 14 Jun 2024 08:21:17 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' • The Register In-Reply-To: <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> Message-ID: On Fri, Jun 14, 2024, 7:42 AM Michael Kjörling wrote: Also, the subjectular headline from The Register seems like something > someone has dreamed up; I certainly don't see anything like that in > the actual release announcement at > . Also using > multiple different search engines to try to find it only brought up > the _The Register_ article and a handful of places regurgitating that > quote as a real representation of a statement from the systemd > maintainers. I don't see anything resembling it anywhere on either > systemd.io or github.com. Until I see someone posting a link to > something like that quote posted by a systemd maintainer in > representation of _any_ systemd release, let alone v256, I'm going to > treat that one as hearsay at best, and actively malicious at worst.
> https://fosstodon.org/@bluca/112600235561688561 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Sat Jun 15 00:17:38 2024 From: tuhs at tuhs.org (Grant Taylor via TUHS) Date: Fri, 14 Jun 2024 09:17:38 -0500 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: On 6/13/24 15:03, Dan Cross wrote: > I may be in a bit of a grumpy mood, so forgive me if this is snarkier > than I intend, but statements like this bother me. ;-) > Second, there are many reasons beyond just "lol it crashed" that > you may want to restart dependent services; for example, perhaps you > are upgrading a system and part of the upgrade process is restarting > your dependents. Having a system that does things like that for you > is useful. It's my understanding that systemd as a service lifecycle manager is starting to take on some aspects of what cluster service managers used to do. E.g. - Are all the other dependencies this service needs up and running -> is it okay to start this service on this system? - Is the service running and responding like it should be? -> periodically check to make sure the system is returning expected results; is DNS answering queries / can I send a test email - Stop the service when it's operating outside acceptable parameters (read: failing). - Notify other related services that this service has been stopped. I'm anti-systemd *cough*Master Control Program*cough* and its associated suite of utilities for many reasons. But I've come to accept that systemd is not just an init system. Its role as a service life cycle manager is a superset of what an init system does. It's a relatively new world (at least comparatively). I also have serious doubts about systemd's stability as a service life cycle manager.
I've seen too many times when systems get into a wedged state that requires a power fail / reset (or sys request if enabled) to recover. I've seen too many times when a systemd-based system won't shut down because of some circular configuration it's gotten itself into. And that's without even the complication of NFS servers being unreachable after taking down the network. -- Grant. . . . -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4033 bytes Desc: S/MIME Cryptographic Signature URL: From woods at robohack.ca Fri Jun 14 03:07:36 2024 From: woods at robohack.ca (Greg A. Woods) Date: Thu, 13 Jun 2024 10:07:36 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: At Thu, 13 Jun 2024 16:03:30 -0400, Dan Cross wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > On Thu, Jun 13, 2024 at 3:18 PM Greg A. Woods wrote: > > [snip] > > If it were more about servers then it would look more like SMF, or maybe > > launchd, and it's code wouldn't look like it was written by a grade > > school student. > > Sorry, but this is exactly the sort of overly dismissive attitude that > I was referring to earlier. You undermine your own argument by > mentioning SMF (which can automatically restart services when the > crash), for example. No, that's exactly my point! SMF isn't anywhere near the nearly-bootable monster systemd seems to be, and it isn't filled with grade-school-level code, and it _is_ written much more in keeping with the Unix tool philosophy. It manages services, it does it in a well defined and predictable way, and it doesn't try to take over the universe. Launchd isn't quite so clean and Unix-like, but it's still a well designed more-or-less single-purpose tool! (Albeit one with a rather wide set of specifications.)
Some of Apple's other "frameworks" tend to look a bit more like some of systemd, but that's a different part of the problem. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From flexibeast at gmail.com Sun Jun 16 15:48:15 2024 From: flexibeast at gmail.com (Alexis) Date: Sun, 16 Jun 2024 15:48:15 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Grant Taylor via TUHS's message of "Fri, 14 Jun 2024 09:17:38 -0500") References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> Message-ID: <87msnl4ew0.fsf@gmail.com> Grant Taylor via TUHS writes: > I'm anti-systemd *cough*Master Control Program*cough* and it's > associated suite of utilities for many reasons. But I've come > to > accept that systemd is not just an init system. It's role of a > service life cycle manager is a superset of what an init system > does. > It's a relatively new world (at least comparatively). Indeed: it doesn't just do init, but also _service supervision_ (making sure that a service that _should_ be up, _is_ up) and _service management_ (enabling, disabling, starting, stopping, dependencies, etc.). Hence why phrases like "the init wars" are such a misnomer. As described in the potted history outlined in the "known problems with System 5 rc" article i linked to upthread, Sys V rc's issues with service supervision and service management have been known for decades: > In 1999, Luke Mewburn worked on replacing the /etc/rc system in > NetBSD. netbsd.tech.userlevel mailing list discussions from the > time show several criticisms of the System 5 rc and System 5 > init systems, and encouragement not to repeat their mistakes in > the BSD world.
The resultant rc.d system was roughly > contemporary with Daniel Robbins producing OpenRC, another > System 5 rc replacement that replaced the (Bourne/Bourne Again) > shell with a different script interpreter, nowadays named > /sbin/openrc, that provided a whole lot of standard service > management functionality as pre-supplied functions. The NetBSD > rc.d system likewise reduced rc.d scripts to a few variable > assignments and function calls (in about two thirds of cases). The initial release of OpenRC - still Gentoo's 'native' system for service management - was in April 2007; the initial release of systemd was in March 2010. But although both OpenRC and systemd address various pain points of Sys V rc on Linux, systemd has _also_ had the backing of an 800-pound gorilla in the Linux world - Red Hat - which has _implicitly_ forced its adoption over alternatives by distros that don't have the same level of resources behind them. Here's an excerpt from something i wrote on the Gentoo forum back in April: > There's been so much anger and vitriol expressed about > systemd. Has that significantly slowed the systemd juggernaut? > Not really. Not least because, as in the case of D-Bus, and as > in the case of Wayland, it addresses very real issues for > significant numbers of people. > > For example: unlike on, say, OpenBSD, which has developed a > pretty clean shell-script-based service management system, with > a 'standard library' in the form of rc.subr(8), the situation on > Linux was a mess. Many of the (usually volunteers) who maintain > packages for Linux don't want to have to learn the complexities > of shell scripting and the subtle issues that can arise, or > develop and maintain workarounds for race conditions, and so > on. systemd comes along and says: "Hey, with systemd, you'll be > able to write service definitions declaratively; you won't need > to wrangle shell scripts." 
That's a pretty attractive > proposition to a number of package maintainers, and in the > absence of systemd alternatives explicitly providing such an > interface - not just saying "oh that could be done on our > alternative" - those maintainers are going to be inclined > towards systemd, regardless of what design and implementation > issues are involved in systemd's approach. > > So in wanting to try to ensure that myself and others have > choices and alternatives available, i feel that ranting against > the incoming tide, like a tech King Cnut, is typically far less > effective than actually putting in the work to develop and > support those choices and alternatives. Alexis. From wobblygong at gmail.com Sun Jun 16 16:43:40 2024 From: wobblygong at gmail.com (Wesley Parish) Date: Sun, 16 Jun 2024 18:43:40 +1200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87msnl4ew0.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> Message-ID: On 16/06/24 17:48, Alexis wrote: > Grant Taylor via TUHS writes: > >> I'm anti-systemd *cough*Master Control Program*cough* and it's >> associated suite of utilities for many reasons.  But I've come to >> accept that systemd is not just an init system.  It's role of a >> service life cycle manager is a superset of what an init system does. >> It's a relatively new world (at least comparatively). > > Indeed: it doesn't just do init, but also _service supervision_ > (making sure that a service that _should_ be up, _is_ up) and _service > management_ (enabling, disabling, starting, stopping, dependencies, > etc.). Hence why phrases like "the init wars" are such a misnomer. 
> > As described in the potted history outlined in the "known problems > with System 5 rc" article i linked to upthread, Sys V rc's issues with > service supervision and service management have been known for decades: > >> In 1999, Luke Mewburn worked on replacing the /etc/rc system in >> NetBSD. netbsd.tech.userlevel mailing list discussions from the time >> show several criticisms of the System 5 rc and System 5 init systems, >> and encouragement not to repeat their mistakes in the BSD world. The >> resultant rc.d system was roughly contemporary with Daniel Robbins >> producing OpenRC, another System 5 rc replacement that replaced the >> (Bourne/Bourne Again) shell with a different script interpreter, >> nowadays named /sbin/openrc, that provided a whole lot of standard >> service management functionality as pre-supplied functions. The >> NetBSD rc.d system likewise reduced rc.d scripts to a few variable >> assignments and function calls (in about two thirds of cases). > > The initial release of OpenRC - still Gentoo's 'native' system for > service management - was in April 2007; the initial release of systemd > was in March 2010. But although both OpenRC and systemd address > various pain points of Sys V rc on Linux, systemd has _also_ had the > backing of an 800-pound gorilla in the Linux world - Red Hat - which > has _implicitly_ forced its adoption over alternatives by distros that > don't have the same level of resources behind them. > > Here's an excerpt from something i wrote on the Gentoo forum back in > April: > >> There's been so much anger and vitriol expressed about systemd. Has >> that significantly slowed the systemd juggernaut? Not really. Not >> least because, as in the case of D-Bus, and as in the case of >> Wayland, it addresses very real issues for significant numbers of >> people. 
>> >> For example: unlike on, say, OpenBSD, which has developed a pretty >> clean shell-script-based service management system, with a 'standard >> library' in the form of rc.subr(8), the situation on Linux was a >> mess. Many of the (usually volunteers) who maintain packages for >> Linux don't want to have to learn the complexities of shell scripting >> and the subtle issues that can arise, or develop and maintain >> workarounds for race conditions, and so on. systemd comes along and >> says: "Hey, with systemd, you'll be able to write service definitions >> declaratively; you won't need to wrangle shell scripts." That's a >> pretty attractive proposition to a number of package maintainers, and >> in the absence of systemd alternatives explicitly providing such an >> interface - not just saying "oh that could be done on our >> alternative" - those maintainers are going to be inclined towards >> systemd, regardless of what design and implementation issues are >> involved in systemd's approach. >> >> So in wanting to try to ensure that myself and others have choices >> and alternatives available, i feel that ranting against the incoming >> tide, like a tech King Cnut, is typically far less effective than >> actually putting in the work to develop and support those choices and >> alternatives. > > > Alexis. Might also be worth pointing out that Red Hat's an IBM *nix daemon, and IBM's mainframe business is built in no small part on service managers in the OS management layer. I expect their "Phone Home" ability was part-and-parcel of the IBM mainframe service contracts. If systemd phones home without an explicit (ie, sign-on-the-dotted-line type) contract between me and Red Hat, I'll raise a stink about - but so far it hasn't. (I'm running Fedora.) Wesley Parish From woods at robohack.ca Sat Jun 15 18:48:39 2024 From: woods at robohack.ca (Greg A. 
Woods) Date: Sat, 15 Jun 2024 01:48:39 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87msnl4ew0.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> Message-ID: At Sun, 16 Jun 2024 15:48:15 +1000, Alexis wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > Here's an excerpt from something i wrote > on the Gentoo forum back in April: > > > [[...]] the situation on > > Linux was a mess. Many of the (usually > > volunteers) who maintain packages for > > Linux don't want to have to learn the > > complexities of shell scripting and the > > subtle issues that can arise That pretty much says it all about the state of the GNU/linux world right there. In the "Unix world" everyone learns shell scripting, some better than others of course, and some hate it at the same time too, but I would say from my experience it's a given. You either learn shell scripting or you are "just a user" (even if you also write application code). -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From clemc at ccc.com Mon Jun 17 05:44:16 2024 From: clemc at ccc.com (Clem Cole) Date: Sun, 16 Jun 2024 15:44:16 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> Message-ID: On Sun, Jun 16, 2024 at 2:50 PM Greg A. Woods wrote: > In the "Unix world" everyone learns shell scripting, some better than > others of course, and some hate it at the same time too, but I would say > from my experience it's a given. 
You either learn shell scripting or > you are "just a user" (even if you also write application code). > Side story - I think you can tell a lot about a person by what is on their bookshelf at work and what books they have read. A few years ago, I discovered this same flaw in using UNIX (Linux) well with some of the new hires (from really good schools, too), and it was worse because they often had never seen the true Bourne shell (nor knew much/anything about Algol, much less A68). Many thought "bash" was the UNIX shell because they never knew better (chuckle). I realized it was a huge hole in their education, so I got my admin to order each of them copies of K&R2 and UPE for their desks. I said I expected them to do the exercises in them as part of their "training." I could usually tell a lot about each person by the questions they asked during that period. Many often griped about having to learn to use ed and nroff. I think those that were already EMACS folks thought I was a little bonkers but my comment was that you'll understand the other tools better/be a lot more effective with the shell in particular. Many had seen LaTeX, so the >>idea<< of a document compiler was not always completely foreign. But they crawled through each book. But it was interesting when it was done. To a person, they all said they were much better with the UNIX tool kit after UPE, and because they actually read K&R2, they often learned a few things about C they never realized. Once they "graduated" I also gave them a copy of APUE, and if they were doing networking stuff, UNP too. Most would start doing the APUE and UNP problems as well, as I would get some of them coming to my office with questions, but I never said they had to do them. Clem -------------- next part -------------- An HTML attachment was scrubbed...
URL: From davida at pobox.com Mon Jun 17 07:56:20 2024 From: davida at pobox.com (David Arnold) Date: Mon, 17 Jun 2024 07:56:20 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> > On 15 Jun 2024, at 00:18, Grant Taylor via TUHS wrote: > > It's my understanding that systemd as a service lifecycle manager is starting to take on some aspects of what cluster service managers used to do. I think it goes beyond this, and that systemd is just a convenient focus point for folks to push back against a wider set of changes. My usual example here is PolKit and polkitd. In this latest systemd release, for example, it seems the new systemd-run0 command (replacing sudo or su) starts a privileged process by checking permissions with polkitd over DBus, and then uses systemd to actually fork and set up the “child”. This is a fairly distinctive departure from how Unix works: over the last decade, Linux has increasingly moved away from being Unix, and I think this is why people find systemd so confronting. And there’s more to come, e.g. varlink. I’m sure systemd, polkitd and their ilk address real needs. But the solution isn’t (in my view) Unix-like, and those for whom Linux is a convenient Unix are disappointed (to put it mildly). The world is no longer a PDP-11 or a Vax or a SPARCstation. USB in particular introduced a lot more dynamism to the Unix device model, and started us down this path of devfs, DBus, systemd, etc. Users reasonably want things to work, and Red Hat wants to satisfy those users, and they’ve chosen this way to do it. Unfortunately, there’s been no real competition: this goes well beyond system startup, and well beyond service management, and citing s6 or launchd or whatever misses the war by focusing on the battle.
d From luther.johnson at makerlisp.com Mon Jun 17 09:34:34 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Sun, 16 Jun 2024 16:34:34 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> Message-ID: <802b871c-2c5e-514c-f8d5-f3eef71d76d4@makerlisp.com> I think there's a parallel from the Unix/Linux systems that we think of as more Unix-like, to the cars and airplanes and other machines of that and earlier eras. It used to be that part of the design of a system, alongside its operation, was the idea of normal, regular maintenance. The system could be pretty simple, but there was some maintenance and wearable parts replacement required. It was expected that there was an administrator or mechanic checking in once in a while to keep things tuned and in "good repair". This worked well, as long as people accepted this responsibility as part of the deal. Now it seems like people want everything done for them automatically, and not to have to know anything about the systems they are using. They want the systems to be smarter so they don't have to know as much. It's sort of like when the private airplane industry tried to engineer any skill required on the part of the pilot, out of the airplane. The results were not good. Planes became more complex, with more points of failure, and pilots did not know how to react to unexpected situations. I see this happening with our computer systems, and the people using them now, too. Of course there's a reasonable middle ground, but I think we've gone a little too far making things "easy", and in fact it's not easier at all, we're just fiddling in a different way, often through random trial and error, it all seems horribly indirect, opaque, and irrational, to support some programmer's idea somewhere, of some perfect abstraction. For example: CMake vs. 
just learning how to write makefiles properly. You fiddle with CMake and you never really know why it does what it does, especially from one version to the next, "but you don't have to write makefiles". On 06/16/2024 02:56 PM, David Arnold wrote: >> On 15 Jun 2024, at 00:18, Grant Taylor via TUHS wrote: >> >> It's my understanding that systemd as a service lifecycle manager is starting to take on some aspects of what cluster service managers used to do. > I think it goes beyond this, and that systemd is just a convenient focus point for folks to push back against a wider set of changes. > > My usual example here is PolKit and polkitd. In this latest systemd release, for example, it seems the new systemd-run0 command (replacing sudo or su), starts a privileged process by checking permissions with polkitd over DBus, and then uses systemd to actually fork and setup the “child”. > > This is a fairly distinctive departure from how Unix works: over the last decade, Linux has increasingly moved away from being Unix, and I think this is why people find systemd so confronting. And there’s more to come, eg. varlink. > > I’m sure systemd, polkitd and their ilk address real needs. But the solution isn’t (in my view) Unix-like, and those for whom Linux is a convenient Unix are disappointed (to put it mildly). > > The world is no longer a PDP-11 or a Vax or a SPARCstation. USB in particular introduced a lot more dynamism to the Unix device model, and started us down this path of devfs, DBus, systemd, etc. Users reasonably want things to work, and Red Hat wants to satisfy those users, and they’ve chosen this way to do it. Unfortunately, there’s been no real competition: this goes well beyond system startup, and well beyond service management, and citing s6 or launchd or whatever misses the war by focusing on the battle. 
> > > > d > From lm at mcvoy.com Mon Jun 17 09:46:54 2024 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 16 Jun 2024 16:46:54 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <802b871c-2c5e-514c-f8d5-f3eef71d76d4@makerlisp.com> References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> <802b871c-2c5e-514c-f8d5-f3eef71d76d4@makerlisp.com> Message-ID: <20240616234654.GB12821@mcvoy.com> On Sun, Jun 16, 2024 at 04:34:34PM -0700, Luther Johnson wrote: > I think there's a parallel from the Unix/Linux systems that we think of > as more Unix-like, to the cars and airplanes and other machines of that > and earlier eras. It used to be that part of the design of a system, > alongside its operation, was the idea of normal, regular maintenance. > The system could be pretty simple, but there was some maintenance and > wearable parts replacement required. It was expected that there was an > administrator or mechanic checking in once in a while to keep things > tuned and in "good repair". This worked well, as long as people accepted > this responsibility as part of the deal. > > Now it seems like people want everything done for them automatically, > and not to have to know anything about the systems they are using. They > want the systems to be smarter so they don't have to know as much. It's > sort of like when the private airplane industry tried to engineer any > skill required on the part of the pilot, out of the airplane. The > results were not good. Planes became more complex, with more points of > failure, and pilots did not know how to react to unexpected situations. > I see this happening with our computer systems, and the people using > them now, too. 
Of course there's a reasonable middle ground, but I think > we've gone a little too far making things "easy", and in fact it's not > easier at all, we're just fiddling in a different way, often through > random trial and error, it all seems horribly indirect, opaque, and > irrational, to support some programmer's idea somewhere, of some perfect > abstraction. > > For example: CMake vs. just learning how to write makefiles properly. > You fiddle with CMake and you never really know why it does what it > does, especially from one version to the next, "but you don't have to > write makefiles". I could not agree more with this post, all of it, but especially the CMake stuff. Writing Makefiles isn't that hard; if you are a programmer and can't do that, how good a programmer are you? And is it really easier to learn shiny-new-make-replacement-du-jour every year? From peter.martin.yardley at gmail.com Mon Jun 17 10:10:54 2024 From: peter.martin.yardley at gmail.com (Peter Yardley) Date: Mon, 17 Jun 2024 10:10:54 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> Message-ID: <90972698-038C-494E-8826-34F2B6E57B71@gmail.com> Algol has been a dead language for many years, for good reasons too. > On 17 Jun 2024, at 5:44 AM, Clem Cole wrote: > > > > On Sun, Jun 16, 2024 at 2:50 PM Greg A. Woods wrote: > In the "Unix world" everyone learns shell scripting, some better than > others of course, and some hate it at the same time too, but I would say > from my experience it's a given. You either learn shell scripting or > you are "just a user" (even if you also write application code).
> > A few years ago, I discovered this same flaw in using UNIX (Linux) well with some of the new hires (from really good schools, too), and it was worse because they often had never seen the true Bourne shell (nor knew much/anything about Algol, much less A68). Many thought "bash" was the UNIX shell because they never knew better (chuckle). I realized it was a huge hole in their education, so I got my admin to order each copies of K&R2 and UPE for their desks. I said I expected them to do the exercises in them as part of their "training." I could usually tell a lot about each person by the questions they asked during that period. Many often griped about having to learn to use ed and nroff. I think those that were already EMACS folks thought I was a little bonkers but my comment was that you'll understand the other tools better/be a lot more effective with the shell in particular. Many had seen Latex, so the >>idea<< of a document compiler was not always completely foreign. But they crawled through each book. > > But it was interesting when it was done. To a person, they all said they were much better with the UNIX tool kit after UPE, and because they actually read K&R2, they often learned a few things about C they never realized. Once they "graduated" also I gave them a copy of APUE, if they were doing networking stuff, UNP too. Most would start doing the APUE and UNP problems also as I would get some of them coming to my office with questions, but I never said they had to do them. 
> > Clem > Peter Yardley peter.martin.yardley at gmail.com From clemc at ccc.com Mon Jun 17 10:29:11 2024 From: clemc at ccc.com (Clem Cole) Date: Sun, 16 Jun 2024 20:29:11 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <90972698-038C-494E-8826-34F2B6E57B71@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> <90972698-038C-494E-8826-34F2B6E57B71@gmail.com> Message-ID: Except A68 is the core of Bourne Shell. Truth is, original Algol’s DNA - much less A68 - lived on in most programming languages we use today. Not knowing anything about either puts you at a huge disadvantage. Linux at its core is the Unix ideas (core IP), not the source code. Trying to rid it of so-called “bad ideas” shows how little respect those taking such actions have. In other words, their taste is rather disappointing if not outright poor. Frankly my major complaint with much of the modern world is that when we ignore the past, we are cursed for forgetting its lessons. As has been said many times before, we are better served when we stand on the shoulders of great people rather than stepping on their toes. This discussion wrt systemd is a perfect example. Sent from a handheld expect more typos than usual On Sun, Jun 16, 2024 at 8:11 PM Peter Yardley < peter.martin.yardley at gmail.com> wrote: > Algol has been a dead language for many years, for good reasons too. > > > On 17 Jun 2024, at 5:44 AM, Clem Cole wrote: > > > > > > > > On Sun, Jun 16, 2024 at 2:50 PM Greg A. Woods wrote: > > In the "Unix world" everyone learns shell scripting, some better than > > others of course, and some hate it at the same time too, but I would say > > from my experience it's a given. You either learn shell scripting or > > you are "just a user" (even if you also write application code).
> > Side story - I think you can tell a lot about a person by what is on > their bookshelf at work and what books they have read. > > > > A few years ago, I discovered this same flaw in using UNIX (Linux) well > with some of the new hires (from really good schools, too), and it was > worse because they often had never seen the true Bourne shell (nor knew > much/anything about Algol, much less A68). Many thought "bash" was the > UNIX shell because they never knew better (chuckle). I realized it was a > huge hole in their education, so I got my admin to order each copies of > K&R2 and UPE for their desks. I said I expected them to do the exercises > in them as part of their "training." I could usually tell a lot about each > person by the questions they asked during that period. Many often griped > about having to learn to use ed and nroff. I think those that were already > EMACS folks thought I was a little bonkers but my comment was that you'll > understand the other tools better/be a lot more effective with the shell in > particular. Many had seen Latex, so the >>idea<< of a document compiler was > not always completely foreign. But they crawled through each book. > > > > But it was interesting when it was done. To a person, they all said they > were much better with the UNIX tool kit after UPE, and because they > actually read K&R2, they often learned a few things about C they never > realized. Once they "graduated" also I gave them a copy of APUE, if they > were doing networking stuff, UNP too. Most would start doing the APUE and > UNP problems also as I would get some of them coming to my office with > questions, but I never said they had to do them. > > > > Clem > > ᐧ > > Peter Yardley > peter.martin.yardley at gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Mon Jun 17 10:48:16 2024 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 16 Jun 2024 20:48:16 -0400 (EDT) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register Message-ID: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> > From: Clem Cole > Frankly my major complaint with much of the modern world is that when we > ignore the past "There are two kinds of fools. One says, 'This is old, therefore it is good'; the other says, 'This is new, therefore it is better.'" -- Dean Inge, quoted by John Brunner in "The Shockwave Rider". Noel From ake.nordin at netia.se Mon Jun 17 10:54:05 2024 From: ake.nordin at netia.se (=?UTF-8?Q?=C3=85ke_Nordin?=) Date: Mon, 17 Jun 2024 02:54:05 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> Message-ID: <022282cf-1930-4344-bba8-2b0f3202a6d7@netia.se> On 2024-06-16 23:56, David Arnold wrote: >> On 15 Jun 2024, at 00:18, Grant Taylor via TUHS wrote: >> >> It's my understanding that systemd as a service lifecycle manager is starting to take on some aspects of what cluster service managers used to do. > I think it goes beyond this, and that systemd is just a convenient focus point for folks to push back against a wider set of changes. As an example of where I believe evolution is headed, I'd like to talk about the Elephant in the Room. Android. It has a Linux, and thus Unix, heritage. The parts of it that still depend on libc enjoy the quality of OpenBSD code, so it is blessed by some unixy simplicity. Yet regular users are so far removed from anything unix-like that it might as well be Multivac or the Mima. That it still has a file manager of sorts that knows the typical locations of downloads or photos is one of the last concessions to us "I know it's a computer, let me use it as one" types.
By default, its apps are sandboxed and isolated in their own hives with their code (main(), library dependencies, media resources) and data presumably sealed off from the rest of the file system. Every code component is of course duplicated in every app. Each new version of Android seems to remove yet another aspect of its Unix roots. It didn't start there, though. Once upon a time, chroot() was a popular way to reduce attack surface area in Linux as well as elsewhere. You had to carefully populate it with just the dependencies that were needed. Containers followed, automating dependency provisioning. Android and its app ecosystem is just a logical continuation of that evolution. Ubuntu has promoted "snaps," a kind of containerized application that pretty much walks and quacks just like an Android app. Maybe it's just me being stupid trying to make things work with e.g. a snap-based version of synergy for keyboard and mouse sharing, but to me it seems that they typically don't see much of your file system, to say nothing of any comprehensive view of your /dev. Quite a few distros seem to be headed that way. I'm probably both deluded as well as occluded in my reasoning, but I strongly suspect that the last generation of actively interested computer users where a majority understood processor memory models, I/O and interrupts is now largely promoted out of harm's way. "Add another layer of abstractions so we don't need to care about such bullshit" seems to be the new call to arms. That dbus, systemd and Wayland aren't worse than they are is frankly an amazing success given the circumstances they were born under. -- Åke Nordin , resident Net/Lunix/telecom geek. Netia Data AB, Stockholm SWEDEN *46#7O466OI99#
Woods's message of "Sat, 15 Jun 2024 01:48:39 -0700") References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> Message-ID: <87iky84c23.fsf@gmail.com> "Greg A. Woods" writes: > At Sun, 16 Jun 2024 15:48:15 +1000, Alexis > > wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > philosophy' The Register >> >> Here's an excerpt from something i wrote >> on the Gentoo forum back in April: >> >> > [[...]] the situation on >> > Linux was a mess. Many of the (usually >> > volunteers) who maintain packages for >> > Linux don't want to have to learn the >> > complexities of shell scripting and the >> > subtle issues that can arise > > That pretty much says it all about the state of the GNU/linux > world > right there. > > In the "Unix world" everyone learns shell scripting, some better > than > others of course, and some hate it at the same time too, but I > would > say > from my experience it's a given. You either learn shell > scripting or > you are "just a user" (even if you also write application code). i feel this comment is unfair. The specific thing i wrote was: > the _complexities_ of shell scripting and the _subtle issues_ > that can arise [emphasis added] The issue isn't about learning shell scripting _per se_. It's about the extent to which _volunteers_ have to go beyond the _basics_ of shell scripting to learn about the _complexities_ and _subtle issues_ involved in using it to provide _robust_ service management. Including learning, for example, that certain functionality one takes for granted in a given shell isn't actually POSIX, and can't be assumed to be present in the shell one is working with (not to mention that POSIX-compatibility might need to be actively enabled, as in the case of e.g. ksh, via POSIXLY_CORRECT). 
Here's a FreeBSD thread from 2014, about how service(8) wasn't providing the same environment to scripts that boot did: https://lists.freebsd.org/pipermail/svn-src-head/2014-July/060519.html i'm a BSD user as well as a Linux user - i've been maintaining OpenBSD servers for several years. i certainly have my own criticisms of Linux versus OpenBSD - for example, i'm, mm, 'not a fan' of the somewhat cavalier attitude towards documentation that can often be found in the Linux world, so i'm grateful for people like Alejandro Colomar and his extensive work on the Linux man-pages project. But i feel the above thread suggests that either the FreeBSD devs are clueless about shell scripting or - as i feel is actually the case - that service management via shell scripting isn't as straightforward as one might assume. Alexis. From clemc at ccc.com Mon Jun 17 11:02:38 2024 From: clemc at ccc.com (Clem Cole) Date: Sun, 16 Jun 2024 21:02:38 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> Message-ID: Exactly Sent from a handheld expect more typos than usual On Sun, Jun 16, 2024 at 8:48 PM Noel Chiappa wrote: > > From: Clem Cole > > > Frankly my major complaint with much of the modern world is that > when we > > ignore the past > > "There are two kinds of fools. One says, 'This is old, therefore it is > good'; > the other says, 'This is new, therefore it is better.'" -- Dean Inge, > quoted > by John Brunner in "The Shockwave Rider". > > Noel > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lm at mcvoy.com Mon Jun 17 11:05:32 2024 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 16 Jun 2024 18:05:32 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> Message-ID: <20240617010532.GC12821@mcvoy.com> On Sun, Jun 16, 2024 at 08:48:16PM -0400, Noel Chiappa wrote: > > From: Clem Cole > > > Frankly my major complaint with much of the modern world is that when we > > ignore the past > > "There are two kinds of fools. One says, 'This is old, therefore it is good'; > the other says, 'This is new, therefore it is better.'" -- Dean Inge, quoted > by John Brunner in "The Shockwave Rider". "Want to be a hero in the computer science world? Find a good paper from 5 or 10 years ago and rewrite it. Everyone will think you're a genius" -- Ron Minnich I paraphrased, Ron, feel free to correct me. -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From imp at bsdimp.com Mon Jun 17 11:21:02 2024 From: imp at bsdimp.com (Warner Losh) Date: Sun, 16 Jun 2024 19:21:02 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87iky84c23.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> Message-ID: On Sun, Jun 16, 2024, 7:01 PM Alexis wrote: > "Greg A. Woods" writes: > > > At Sun, 16 Jun 2024 15:48:15 +1000, Alexis > > > > wrote: > > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > > philosophy' The Register > >> > >> Here's an excerpt from something i wrote > >> on the Gentoo forum back in April: > >> > >> > [[...]] the situation on > >> > Linux was a mess. 
Many of the (usually > >> > volunteers) who maintain packages for > >> > Linux don't want to have to learn the > >> > complexities of shell scripting and the > >> > subtle issues that can arise > > > > That pretty much says it all about the state of the GNU/linux > > world > > right there. > > > > In the "Unix world" everyone learns shell scripting, some better > > than > > others of course, and some hate it at the same time too, but I > > would > > say > > from my experience it's a given. You either learn shell > > scripting or > > you are "just a user" (even if you also write application code). > > i feel this comment is unfair. > > The specific thing i wrote was: > > > the _complexities_ of shell scripting and the _subtle issues_ > > that can arise > > [emphasis added] > > The issue isn't about learning shell scripting _per se_. It's > about the extent to which _volunteers_ have to go beyond the > _basics_ of shell scripting to learn about the _complexities_ and > _subtle issues_ involved in using it to provide _robust_ service > management. Including learning, for example, that certain > functionality one takes for granted in a given shell isn't > actually POSIX, and can't be assumed to be present in the shell > one is working with (not to mention that POSIX-compatibility might > need to be actively enabled, as in the case of e.g. ksh, via > POSIXLY_CORRECT). > > Here's a FreeBSD thread from 2014, about how service(8) wasn't > providing the same environment to scripts that boot did: > > https://lists.freebsd.org/pipermail/svn-src-head/2014-July/060519.html > > i'm a BSD user as well as a Linux user - i've been maintaining > OpenBSD servers for several years. 
i certainly have my own > criticisms of Linux versus OpenBSD - for example, i'm, mm, 'not a > fan' of the somewhat cavalier attitude towards documentation that > can often be found in the Linux world, so i'm grateful for people > like Alejandro Colomar and his extensive work on the Linux > man-pages project. But i feel the above thread suggests that > either the FreeBSD devs are clueless about shell scripting or - as > i feel is actually the case - that service management via shell > scripting isn't as straightforward as one might assume. > "Exactly the same" turned out to be harder than one would naively assume. That thread, and others like it elsewhere, hammered out what the differences were and how to fix them. The thread you found is the internal one used to point out issues from changes made... and was 10 years ago... FreeBSD developers, as a whole, are quite adept at shell scripting and the myriad of subtle issues around it. Every time I've committed something that missed one or more of the relevant ones, I've got email.... sometimes, though, it takes a village to know all the subtlety in play. It's been 25 or more years since it all could be done by one brilliant person... Warner Alexis. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Mon Jun 17 11:25:31 2024 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 16 Jun 2024 18:25:31 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87iky84c23.fsf@gmail.com> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> Message-ID: <20240617012531.GE12821@mcvoy.com> On Mon, Jun 17, 2024 at 11:01:40AM +1000, Alexis wrote: > "Greg A.
Woods" writes: > > >At Sun, 16 Jun 2024 15:48:15 +1000, Alexis > >wrote: > >Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > >philosophy' The Register > >> > >>Here's an excerpt from something i wrote > >>on the Gentoo forum back in April: > >> > >>> [[...]] the situation on > >>> Linux was a mess. Many of the (usually > >>> volunteers) who maintain packages for > >>> Linux don't want to have to learn the > >>> complexities of shell scripting and the > >>> subtle issues that can arise > > > >That pretty much says it all about the state of the GNU/linux world > >right there. > > > >In the "Unix world" everyone learns shell scripting, some better than > >others of course, and some hate it at the same time too, but I would > >say > >from my experience it's a given. You either learn shell scripting or > >you are "just a user" (even if you also write application code). > > i feel this comment is unfair. > > The specific thing i wrote was: > > >the _complexities_ of shell scripting and the _subtle issues_ that can > >arise > > [emphasis added] > > The issue isn't about learning shell scripting _per se_. It's about the > extent to which _volunteers_ have to go beyond the _basics_ of shell > scripting to learn about the _complexities_ and _subtle issues_ involved in > using it to provide _robust_ service management. Including learning, for > example, that certain functionality one takes for granted in a given shell > isn't actually POSIX, and can't be assumed to be present in the shell one is > working with (not to mention that POSIX-compatibility might need to be > actively enabled, as in the case of e.g. ksh, via POSIXLY_CORRECT). This is sort of off topic but maybe relevant. When I was running my company, my engineers joked that if it were invented after 1980 I wouldn't let them use it. Which wasn't true, we used mmap(). But the underlying sentiment sort of was true. 
Even though they were all used to bash, I tried very hard to not use bash specific stuff. And it paid off: in our heyday, we supported SCO, AIX, HPUX, SunOS, Solaris, Tru64, Linux on every architecture from tin to IBM mainframes, Windows, Macos on PPC and x86, etc. And probably a bunch of other platforms I've forgotten. *Every* time they used some bash-ism, it bit us in the ass. I kept telling them "our build environment is not our deployment environment". We had a bunch of /bin/sh stuff that we shipped so we had to go for the common denominator. I did relax things to allow GNU Make; there were some features that they really wanted, and that is build environment, so, shrug. From imp at bsdimp.com Mon Jun 17 11:32:53 2024 From: imp at bsdimp.com (Warner Losh) Date: Sun, 16 Jun 2024 19:32:53 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617012531.GE12821@mcvoy.com> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> Message-ID: On Sun, Jun 16, 2024, 7:25 PM Larry McVoy wrote: > On Mon, Jun 17, 2024 at 11:01:40AM +1000, Alexis wrote: > > "Greg A. Woods" writes: > > > > >At Sun, 16 Jun 2024 15:48:15 +1000, Alexis > > >wrote: > > >Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > > >philosophy' The Register > > >> > > >>Here's an excerpt from something i wrote > > >>on the Gentoo forum back in April: > > >> > > >>> [[...]] the situation on > > >>> Linux was a mess. Many of the (usually > > >>> volunteers) who maintain packages for > > >>> Linux don't want to have to learn the > > >>> complexities of shell scripting and the > > >>> subtle issues that can arise > > > > > >That pretty much says it all about the state of the GNU/linux world > > >right there.
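[To make the bash-ism point above concrete, here is a small sketch; the fragment is invented for illustration, not taken from any of the scripts discussed. `[[ ... ]]` pattern matching is a bash/ksh extension that fails under a plain Bourne-compatible /bin/sh, while `case` has done glob matching since the original Bourne shell.]

```shell
# Bashism -- fails under a plain Bourne/POSIX /bin/sh (e.g. dash):
#   if [[ $name == foo* ]]; then echo match; fi
# Portable equivalent: case, present since the V7 Bourne shell.
name=foobar
case "$name" in
    foo*) match=yes ;;
    *)    match=no  ;;
esac
echo "$match"    # prints "yes"
```

[The portable form runs identically under bash, dash, ksh, and the old vendor /bin/sh implementations Larry lists, which is the whole point.]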
> > > > > >In the "Unix world" everyone learns shell scripting, some better than > > >others of course, and some hate it at the same time too, but I would > > >say > > >from my experience it's a given. You either learn shell scripting or > > >you are "just a user" (even if you also write application code). > > > > i feel this comment is unfair. > > > > The specific thing i wrote was: > > > > >the _complexities_ of shell scripting and the _subtle issues_ that can > > >arise > > > > [emphasis added] > > > > The issue isn't about learning shell scripting _per se_. It's about the > > extent to which _volunteers_ have to go beyond the _basics_ of shell > > scripting to learn about the _complexities_ and _subtle issues_ involved > in > > using it to provide _robust_ service management. Including learning, for > > example, that certain functionality one takes for granted in a given > shell > > isn't actually POSIX, and can't be assumed to be present in the shell > one is > > working with (not to mention that POSIX-compatibility might need to be > > actively enabled, as in the case of e.g. ksh, via POSIXLY_CORRECT). > > This is sort of off topic but maybe relevant. > > When I was running my company, my engineers joked that if it were invented > after 1980 I wouldn't let them use it. Which wasn't true, we used mmap(). > > But the underlying sentiment sort of was true. Even though they were > all used to bash, I tried very hard to not use bash specific stuff. > And it paid off, in our hey day, we supported SCO, AIX, HPUX, SunOS, > Solaris, Tru64, Linux on every architecture from tin to IBM mainframes, > Windows, Macos on PPC and x86, etc. And probably a bunch of other > platforms I've forgotten. > > *Every* time they used some bash-ism, it bit us in the ass. I kept > telling them "our build environment is not our deployment environment". > We had a bunch of /bin/sh stuff that we shipped so we had to go for > the common denominator. 
> The fallout of the Unix Wars was that this denominator was kept too low for too long. Warner I did relax things to allow GNU Make, there were some features that they > really wanted and that is build environment, so, shrug. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Mon Jun 17 13:56:15 2024 From: rminnich at gmail.com (ron minnich) Date: Sun, 16 Jun 2024 20:56:15 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617010532.GC12821@mcvoy.com> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> Message-ID: yeah, I was on a rant that day, it actually made it into the plan 9 fortunes 'You want to make your way in the CS field? Simple. Calculate rough time of amnesia (hell, 10 years is plenty, probably 10 months is plenty), go to the dusty archives, dig out something fun, and go for it. It's worked for many people, and it can work for you. - Ron Minnich ' I'm sorry to say, it still seems to work that way. I just saw a talk from red hat about putting linux in flash to use as a boot loader, for example, and the authors somehow ignored the last 25 years and acted like they invented it. Amazing. Turns out 10 minutes is enough. On Sun, Jun 16, 2024 at 6:05 PM Larry McVoy wrote: > On Sun, Jun 16, 2024 at 08:48:16PM -0400, Noel Chiappa wrote: > > > From: Clem Cole > > > > > Frankly my major complaint with much of the modern world is that > when we > > > ignore the past > > > > "There are two kinds of fools. One says, 'This is old, therefore it is > good'; > > the other says, 'This is new, therefore it is better.'" -- Dean Inge, > quoted > > by John Brunner in "The Shockwave Rider". > > "Want to be a hero in the computer science world? Find a good paper from > 5 or 10 years ago and rewrite it. Everyone will think you're a genius" > > -- Ron Minnich > > I paraphrased, Ron, feel free to correct me. 
> -- > --- > Larry McVoy Retired to fishing > http://www.mcvoy.com/lm/boat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Mon Jun 17 13:57:20 2024 From: rminnich at gmail.com (ron minnich) Date: Sun, 16 Jun 2024 20:57:20 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> Message-ID: I'm curious, as to the original topic of this discussion: can anyone justify systemd-homed and how it works? Does that even look like 0% of a unix idea? On Sun, Jun 16, 2024 at 8:56 PM ron minnich wrote: > yeah, I was on a rant that day, it actually made it into the plan 9 > fortunes > 'You want to make your way in the CS field? Simple. Calculate rough time > of amnesia (hell, 10 years is plenty, probably 10 months is plenty), go to > the dusty archives, dig out something fun, and go for it. It's worked for > many people, and it can work for you. - Ron Minnich > ' > I'm sorry to say, it still seems to work that way. > > I just saw a talk from red hat about putting linux in flash to use as a > boot loader, for example, and the authors somehow ignored the last 25 years > and acted like they invented it. Amazing. Turns out 10 minutes is enough. > > On Sun, Jun 16, 2024 at 6:05 PM Larry McVoy wrote: > >> On Sun, Jun 16, 2024 at 08:48:16PM -0400, Noel Chiappa wrote: >> > > From: Clem Cole >> > >> > > Frankly my major complaint with much of the modern world is that >> when we >> > > ignore the past >> > >> > "There are two kinds of fools. One says, 'This is old, therefore it is >> good'; >> > the other says, 'This is new, therefore it is better.'" -- Dean Inge, >> quoted >> > by John Brunner in "The Shockwave Rider". >> >> "Want to be a hero in the computer science world? Find a good paper from >> 5 or 10 years ago and rewrite it. 
Everyone will think you're a genius" >> >> -- Ron Minnich >> >> I paraphrased, Ron, feel free to correct me. >> -- >> --- >> Larry McVoy Retired to fishing >> http://www.mcvoy.com/lm/boat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Mon Jun 17 15:41:32 2024 From: tuhs at tuhs.org (Bakul Shah via TUHS) Date: Sun, 16 Jun 2024 22:41:32 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> Message-ID: <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> On Jun 16, 2024, at 8:57 PM, ron minnich wrote: > > I'm curious, as to the original topic of this discussion: can anyone justify systemd-homed and how it works? Does that even look like 0% of a unix idea? I am not a fan of systemd (or linux) and don't follow their excesses/adventures but I am not a fan of how BSD does initialization & brings up services either. They don't quite get all the dependencies right for all the possible combinations of devices etc. Its /etc/rc.d/* system is pretty clunky -- I tend to think any time you are repeating more or less the same boilerplate code in many files, something worth abstracting is hiding in there. I like how launchd treats a service as an object (more than just a program but also the auxiliary files and scripts). For me it was a lightbulb moment (like realizing how a function operates in an environment!). Though I'd probably use s-expr or a simpler config format, not xml (as in launchd plist/SMF manifest). At the other extreme of complexity we have things like Kubernetes. Not a fan. 
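[A purely hypothetical sketch of the s-expr idea above: roughly the information a launchd plist carries, written as one s-expression per service. Every name and key here is invented for illustration; no real launchd or SMF schema is implied.]

```scheme
;; Hypothetical s-expr service manifest, launchd-plist-like in content.
(service "httpd"
  (program "/usr/local/sbin/httpd" "-f" "/usr/local/etc/httpd.conf")
  (requires network local-fs)      ; dependencies, not a boot order
  (keep-alive #t)                  ; restart on exit
  (auxiliary "/usr/local/etc/httpd.conf"
             "/var/log/httpd.log"))
;; The service as an object: the program plus its auxiliary files,
;; declared and managed together rather than scattered across rc scripts.
```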
What I want is to be able to map all my computers and compute clusters into a single virtual machine -- where storage, IO and computing resources may be added / removed without taking the whole VM down, and where each display/input user interface is a window on the same underlying VM and all sharing is under my control. Plan9 does a bit of this but that experiment ended too early. Apple is tending in this direction though not cleanly (+ I don't want to rely on a faceless behemoth corp that may trample on my data without even meaning to). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Mon Jun 17 15:51:09 2024 From: tuhs at tuhs.org (Bakul Shah via TUHS) Date: Sun, 16 Jun 2024 22:51:09 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> Message-ID: <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> On Jun 16, 2024, at 10:41 PM, Bakul Shah wrote: > > On Jun 16, 2024, at 8:57 PM, ron minnich wrote: >> >> I'm curious, as to the original topic of this discussion: can anyone justify systemd-homed and how it works? Does that even look like 0% of a unix idea? > > > I am not a fan of systemd (or linux) and don't follow their excesses/adventures but I am not a fan of how BSD does initialization & brings up services either. They don't quite get all the dependencies right for all the possible combinations of devices etc. Its /etc/rc.d/* system is pretty clunky -- I tend to think any time you are repeating more or less the same boilerplate code in many files, something worth abstracting is hiding in there. > > I like how launchd treats a service as an object (more than just a program but also the auxiliary files and scripts). 
For me it was a lightbulb moment (like realizing how a function operates in an environment!). Though I'd probably use s-expr or a simpler config format, not xml (as in launchd plist/SMF manifest). > > At the other extreme of complexity we have things like Kubernetes. Not a fan. > > What I want is to be able to map all my computers and compute clusters into a single virtual machine -- where storage, IO and computing resources may be added / removed without taking the whole VM down, and where each display/input user interface is a window on the same underlying VM and all sharing is under my control. Plan9 does a bit of this but that experiment ended too early. Apple is tending in this direction though not cleanly (+ I don't want to rely on a faceless behemoth corp that may trample on my data without even meaning to). Forgot to mention LOCUS, which was the only distributed Unix compatible OS I am aware of. To anyone who has user/implementer experience, I would love to hear what worked well, what didn't, what was easy to implement, what was very hard and what you wished was added to it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.salz at gmail.com Mon Jun 17 22:59:16 2024 From: rich.salz at gmail.com (Rich Salz) Date: Mon, 17 Jun 2024 08:59:16 -0400 Subject: [TUHS] Diff: The seminal water we swim in Message-ID: >From Mastodon, MHoye posting at https://mastodon.social/@mhoye/112615649865163136: I haven't seen anybody mentioning it or even noticing it, like it's just the water we swim in now, but this month marks the fiftieth anniversary of the release of what would become a seminal, and is arguably the single most important, piece of social software ever created. Written by Douglas McIlroy and James Hunt and released with the 5th Edition of Unix this month in 1974: diff. https://minnie.tuhs.org/cgi-bin/utree. .. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From douglas.mcilroy at dartmouth.edu Tue Jun 18 00:16:15 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Mon, 17 Jun 2024 10:16:15 -0400 Subject: [TUHS] Diff: The seminal water we swim in Message-ID: > this month marks the fiftieth anniversary of the release of what > would become a seminal, and is arguably the single most important, > piece of social software ever created. I'm flattered, but must point out that diff was just one of a sequence of more capable and robust versions of proof(1), which Mike Lesk contributed to Unix v3. It, in turn, copied a program written by Steve Johnson before Unix and general consciousness of software tools. Credit must also go to several people who studied and created algorithms for the "longest common subsequence" problem: Harold Stone (who invented the diff algorithm at a blackboard during a one-day visit to Bell Labs), Dan Hirschberg, Tom Szymanski, Al Aho, and Jeff Ullman. For a legal case in which I served as an expert witness, I found several examples of diff-type programs developed in the late 1960s specifically for preparing critical editions of ancient documents. However, Steve Johnson's unpublished program from the same era appears to be the first that was inspired as a general tool, and thus as "social software". Doug -------------- next part -------------- An HTML attachment was scrubbed...
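[The "longest common subsequence" connection above is easy to see from diff's own output: lines in the LCS of the two files survive untouched, and everything else becomes a deletion or an addition. A quick sketch using temporary files; the four-line inputs are arbitrary.]

```shell
# diff(1)'s edit script is driven by the LCS of the two inputs.
old=$(mktemp); new=$(mktemp)
printf 'a\nb\nc\nd\n' > "$old"   # old file: a b c d
printf 'a\nc\nd\ne\n' > "$new"   # new file: a c d e   (LCS: a c d)
# diff exits 1 when the files differ, so guard it for set -e shells.
script=$(diff "$old" "$new" || true)
echo "$script"
# Only the non-LCS lines appear: 'b' is deleted (2d1), 'e' added (4a4):
#   2d1
#   < b
#   4a4
#   > e
rm -f "$old" "$new"
```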
URL: From clemc at ccc.com Tue Jun 18 01:56:55 2024 From: clemc at ccc.com (Clem Cole) Date: Mon, 17 Jun 2024 11:56:55 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> Message-ID: On Mon, Jun 17, 2024 at 1:51 AM Bakul Shah via TUHS wrote: > Forgot to mention LOCUS, which was the only distributed Unix compatible OS > I am aware of. To anyone who has user/implementer experience, I would love > to hear what worked well, what didn't, what was easy to implement, what was > very hard and what you wished was added to it. > Jerry and Bruce's book is the complete reference: https://www.amazon.com/Distributed-System-Architecture-Computer-Systems/dp/0262161028 There were basically 3/4 versions... the original version on the PDP 11 which is the SOSP paper, which morphed to include a VAX at UCLA; IBM's AIX/370 and AIX/PS2 which included TCF (Transparent Computing Facility), and LCC's TNC Transparent Networking Computing "product" which were the 14 core technologies used to build it. Part of them landed in other systems from Tru64, HPUX, the Paragon and even later a Linux implementation (which sadly was done on the V2 kernel so was lost when Linus did not understand it). What worked well was different flavors of the DFS and the later core idea of the VPROCS layer which I sorely miss, which allowed process migration - which worked well and boy did I miss it later in my career. Admin of a Locus based system was a dream because it was just one system for up to 4096 nodes in a Paragon. It also means you could migrate processes off a node, take the node down, reboot/change and bring it back. Very cool. After the first system was installed, adding a node was trivial, by the way. 
You booted the node, "joined" the cluster, and were up. AIX used file replication to then build the local disks as needed. BTW: "checkpointing" was a freebie -- you just migrated the file to a disk. Mixing ISAs like the 370 and PS/2 was a mixed bag -- I'll let Charlie comment. With TNC we redid that model a bit, I'm not sure we ever got it 100% right. The HP-UX version was probably the best. The biggest implementation issue is that UNIX has too many different namespaces with all sorts of rules that are particular to each. For all of the concept of "everything is a file," - when you start to try to bring it together, you discover new and werid^H^H^H^H^Hintersting name spaces from System V IPC to signals to FIFOs and Named Pipes (similar but different). It seemed like everywhere we looked, we would find another NS we needed to handle, and when we started to try to look at non-UNIX process layers, it got even stranger. The original UNIX protection model is a tad weak, but most people had started to add ACLs, and POSIX was in the throes of standardizing them -- so we based it on an early POSIX proposal (mostly based on HP-UX since they had them before the others did). To be more specific, the virtual process layer (VPROC) attempted to do what VFS had done for the FS layer to the core kernel. If you look at both of the original 2 Locus schemes, process control was ad hoc and thus very messy. LCC realized if we were going to succeed, we needed to make that cleaner. But that still took major surgery - although, like the CFS layer, things were a lot clearer once done. Bruce, Roman, and I came up with VPROCs. BTW: one of the cool parts of VPROC is that, like VFS, it conceptually made it possible to have other process models. We did a prototype for OS/2 running inside of the OSF uK and were trying to get a contract from DEC to do it to Tru64 and adding VMS before we got sold (we had already developed CFS for DEC as part of Tru64 - which is TNC's Cluster File System). 
Truth is, cheap VMs killed the need for this idea, but it worked fairly well. After the core VPROCs layer, the hardest thing was distributed shared memory (DSM) and the distributed lock manager (DLM). DSM was an example that offered pure transparency in operation, *i.e.,* test and set worked (operationally) correctly across the DSM, but it was not "speed transparent." But if you rewrote to use the DLM, then you could get full transparency and speed. The DLM is one of the TNC technologies which lives on today. It ended up in a number of systems - Oracle wrote their own based on the specs for the DEC DLM we built for the CFS for Tru64 (which is from TNC). I believe a few other folks used it. It was in OSF's DCE, and ISTR Microsoft picked it up. So a good question is if TNC was so cool, why did Beowulf (a real hack in comparison) stick around and TNC die? Well, a few things. LCC/HP did not open-source the code until it was too late. So Beowulf, which was around, was what folks (like me) used to build big scientific clusters. And while Popek was "right," -- it takes something like Locus/TNC to make a cluster fully transparent. Beowulf ignored the seams and in the end, that was "good enough." But it makes setup and admin a PITA, and the program needs to be careful -- the dragons are all over the place. So, when I went to Intel, I was the Architect of Cluster Ready, which defined away many of those seams and then provided tools to test for them and help you admin. Tools like the Cluster Checker and the whole ClusterReady program would not be needed if TNC had "stuck," and I think clusters in general -- a cluster of small computers on a LAN, not just clusters on a high-speed/special interconnect like a supercomputer -- would be more available today. Clem -------------- next part -------------- An HTML attachment was scrubbed... 
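[The DSM-vs-DLM point above has a humble everyday analogue: rather than letting every process hammer shared state with its own test-and-set, updates are funneled through one mutual-exclusion primitive. This is emphatically not the TNC DLM -- all names below are invented -- just the classic atomic-mkdir lock idiom, sketched in shell to show the shape of lock-manager-arbitrated access.]

```shell
#!/bin/sh
# Toy illustration of lock-arbitrated updates, NOT the TNC DLM.
# mkdir(2) is atomic, so it can serve as the lock; demo names invented.
LOCK=/tmp/demo.lock
COUNTER=/tmp/demo.counter
rm -rf "$LOCK"
echo 0 > "$COUNTER"

bump() {
    # Acquire: mkdir succeeds for exactly one caller at a time.
    while ! mkdir "$LOCK" 2>/dev/null; do :; done  # busy-wait; fine for a toy
    n=$(cat "$COUNTER")
    echo $((n + 1)) > "$COUNTER"
    rmdir "$LOCK"                                  # release
}

# Ten concurrent updaters; without the lock, increments could be lost.
for i in 1 2 3 4 5 6 7 8 9 10; do bump & done
wait
cat "$COUNTER"    # 10 when mutual exclusion held
```

[The cluster-scale version replaces the mkdir with a lock-manager RPC, but the structure -- acquire, mutate, release -- is the same rewrite Clem describes moving DSM code onto the DLM.]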
URL: From clemc at ccc.com Tue Jun 18 02:00:13 2024 From: clemc at ccc.com (Clem Cole) Date: Mon, 17 Jun 2024 12:00:13 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> Message-ID: typo... like the VFS layer (not CFS layer)

On Mon, Jun 17, 2024 at 11:56 AM Clem Cole wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Tue Jun 18 02:43:56 2024 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 17 Jun 2024 09:43:56 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> Message-ID: <20240617164356.GA27039@mcvoy.com> On Mon, Jun 17, 2024 at 11:56:55AM -0400, Clem Cole wrote: > What worked well was different flavors of the DFS and the later core idea of the VPROCS layer which I sorely miss, which allowed process migration - which worked well and boy did I miss it later in my career. Admin of a Locus based system was a dream because it was just one system for up to 4096 nodes in a Paragon. It also means you could migrate processes off a node, take the node down, reboot/change and bring it back. Very cool. After the first system was installed, adding a node was trivial, by the way. You booted the node, "joined" the cluster, and were up.

I'm so bummed this didn't make it in the marketplace. I dreamed up my own version of this, very similar, actually started BitMover to build this but got sidetracked onto BitKeeper to help Linus. What I wanted, and sadly never got, was nodes that were small SMP machines. Maybe 4 way. And a tricked up C that had simplistic objects built in, locks would be automatic for the most part so when you accessed VOP_OPEN() the lock was taken automatically. 
Yes, I know that won't scale but that's fine. Scale it as far as it can go, which is not far, and then cluster those SMP machines to get further scaling. The vision was to maintain a simplistic OS, not the monstrosity that Linux has become. For most people, a simplistic SMP OS would work fine. Then introduce the clustering to go from 4 way to 4096 way. I gave a bunch of talks on it, I was pushing for it in the early 1990's. I gave the talk to some VP at SGI and they promptly hired me. Never got anywhere while I was there but I believe they did something like that after I left. Woulda, coulda, shoulda. From sauer at technologists.com Tue Jun 18 02:59:46 2024 From: sauer at technologists.com (Charles H Sauer (he/him)) Date: Mon, 17 Jun 2024 11:59:46 -0500 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> <85C11B5C-7AE0-40F6-A348-1771AB9F8B09@iitbombay.org> Message-ID: Clem suggests I comment on mixing ISA. I'm not sure how to respond. I saw Bruce and Jerry demo process migration many times, particularly during our dramatic Santa Monica meetings in October 1987, coincident with the Whittier earthquake. However, I never got a chance to work with this myself. (During the strongest aftershocks, Bruce and I would just stare and hold on to our chairs. Having us Austin IBM folks in Santa Monica to try to resolve the Austin/LCC disagreements seemed historic, but probably not the cause of the October 19 Wall Street crash.) In general, I was always impressed by what Bruce and Jerry did, but the assertions that LCC could do everything exacerbated the ongoing political challenges within IBM. 
To repeat from https://notes.technologists.com/notes/2017/03/08/lets-start-at-the-very-beginning-801-romp-rtpc-aix-versions/:

o "The former LCC person has mentioned that IBM then seemed like N competing companies. Actually, it was more like Mn competing factions within N competing companies."

o "The traditional product organizations, e.g., those associated with the 370 and the System 3x, saw little need for UNIX or a new hardware architecture. The renegade but surprisingly successful PC organizations looked askance for their own reasons. Even the Yorktown partners were partly detrimental because of disdain for UNIX." [To amplify on this, in 1984 CEO John Akers told a gathering of Austin IBM managers that he questioned the need for RISC processors and UNIX.]

o "Besides our technical concerns about distributed system issues, the implicit question seemed an all or nothing proposition of continuing AIX vs. IBM depending on LCC for UNIX."

And we could dwell on OSF, DCE, etc. On the day OSF was announced, with Akers on stage with Ken Olsen, Akers flew across the country to an awards event, where Glenn Henry, Larry Loucks and I received substantial checks in recognition of AIX. When Akers shook my hand, he told me how proud he was of what had happened that day. When I saw the Register article, I knew that systemd folks hadn't boasted '42% less Unix philosophy', that it was really someone on mas.to, but I felt like stirring up discussion. Seems to have worked...

Charlie

On 6/17/2024 11:00 AM, Clem Cole wrote:
> typo... like the VFS layer (not CFS layer)
> [...]

-- voice: +1.512.784.7526 e-mail:sauer at technologists.com fax: +1.512.346.5240 Web:https://technologists.com/sauer/ Facebook/Google/LinkedIn/Twitter: CharlesHSauer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stuff at riddermarkfarm.ca Tue Jun 18 05:21:47 2024 From: stuff at riddermarkfarm.ca (Stuff Received) Date: Mon, 17 Jun 2024 15:21:47 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617012531.GE12821@mcvoy.com> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> Message-ID: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> On 2024-06-16 21:25, Larry McVoy wrote (in part): [...] > *Every* time they used some bash-ism, it bit us in the ass. This is so true for a lot of OS projects (on Github, for example). Most -- sometimes all -- of the scripts start with /bin/sh but are full of bashisms because the authors run systems where /bin/sh is really bash. S. From lm at mcvoy.com Tue Jun 18 05:28:05 2024 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 17 Jun 2024 12:28:05 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: <20240617192805.GB27039@mcvoy.com> On Mon, Jun 17, 2024 at 03:21:47PM -0400, Stuff Received wrote: > On 2024-06-16 21:25, Larry McVoy wrote (in part): > [...] > >*Every* time they used some bash-ism, it bit us in the ass. > > This is so true for a lot of OS projects (on Github, for example). Most -- sometimes all -- of the scripts start with /bin/sh but are full of bashisms because the authors run systems where /bin/sh is really bash. I think it is less of an issue these days, it's mostly Windows, MacOS, Linux and bash is available there. That said, anything I care about is v7 compat. No need to get fancy, if I need fancy, there is C. 
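[The pitfall under discussion is easy to reproduce. A small hedged sketch (names invented): the commented-out test is bash-only syntax and fails under dash/ash, while the case statement has done the same glob match since the Bourne shell.]

```shell
#!/bin/sh
# A bashism and its portable replacement.  The commented-out line below is
# bash-only; under dash/ash (Debian's /bin/sh) it is a syntax error:
#
#   if [[ $name == lib* ]]; then echo match; fi
#
# The case statement is the portable spelling of the same glob match:
name=libfoo
case $name in
    lib*) echo "match: $name" ;;
    *)    echo "no match" ;;
esac
# prints: match: libfoo

# Similarly, `echo -n` is unportable (v7-style echo prints the -n
# literally); printf is the safe replacement:
printf '%s\n' 'done'
```

[Running a script under `dash` or `sh -o posix` before shipping is a cheap way to catch these before they bite someone whose /bin/sh is not bash.]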
-- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From steffen at sdaoden.eu Tue Jun 18 07:40:14 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 17 Jun 2024 23:40:14 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240616234654.GB12821@mcvoy.com> References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> <802b871c-2c5e-514c-f8d5-f3eef71d76d4@makerlisp.com> <20240616234654.GB12821@mcvoy.com> Message-ID: <20240617214014.-ipxbzuu@steffen%sdaoden.eu> Larry McVoy wrote in <20240616234654.GB12821 at mcvoy.com>: |On Sun, Jun 16, 2024 at 04:34:34PM -0700, Luther Johnson wrote: |> I think there's a parallel from the Unix/Linux systems that we think of ... |> For example: CMake vs. just learning how to write makefiles properly. |> You fiddle with CMake and you never really know why it does what it |> does, especially from one version to the next, "but you don't have to |> write makefiles". | |I could not agree more with this post, all of it, but especially the |Cmake stuff. Writing Makefiles isn't that hard, if you are a programmer |and can't do that, how good of a programmer are you? And is it really |easier to learn shiny-new-make-replacement-du-jour every year? It must be said that "thrillingly fast" is a key item all those (maybe ninja cmake ant) throw in. And that it takes quite a bit of (non-portability and) thought to empower "normal" makefiles to achieve full parallelism etc. I think you watch the FreeBSD hacker community, and there is "war" around the "meta-mode" (against cmake) to avoid recompilations etc. Multiple people are working on BSD make and the BSD makefile system. (In fact on NetBSD the last years even saw a tremendous run on overhauling BSD make, which then only got imported to FreeBSD.) 
The files are very dense after decades of engineering, and due to "clean namespace" paradigm there are long variable names that sometimes fill half of an eighty column screen alone; for (stupid first-see-and-take) things like

    INSTALL_DDIR= ${_INSTALL_DDIR:S://:/:g:C:/$::}

you need a clear head. This is not self-descriptive. (Not to talk about the fact that lines (may) become expanded by the shell after they have become expanded by make, ie, all the quoting, and the delayed or immediate macro expansion mechanism(s).) Original make did not have conditionals, or file inclusions, or dedicated control of parallelism (on file, on target level) via .NOTPARALLEL: and .WAIT:, so things like

    tangerine: config .WAIT build .WAIT test .WAIT install

are not portable. (In fact portability and parallelism is not possible unless you use a recursive approach, with all the pitfalls that then brings.) And then all the bugs everywhere, with quoting pitfalls, and this applies to helper tools like awk too (ie xpg4/bin/awk even documents "Notice that backslash escapes are interpreted twice"). I also remember (from the time i still gave money to journalists) terms like "the usual triad" for "./configure && make && make install" with that implied "grazy times, but that is how you do it" undertone maybe even. Now i see for example "cmake -D VAR1 .. && cmake --build build && cmake --install build" which is possibly easier to grasp when compiling a C compiler that is 1.2 GiB when installed. 
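[The "expanded twice" behavior mentioned above fits in a few lines. A hypothetical two-target demo (file and variable names invented, assuming a POSIX make(1) is installed): make expands its own macros first, then hands each recipe line to the shell, which is why shell variables need the doubled dollar sign.]

```shell
# Write a throwaway makefile; the printf keeps the required tab literal.
printf 'GREETING = hello\nshow:\n\t@echo $(GREETING)\n\t@echo $$HOME\n' > /tmp/twice.mk

make -s -f /tmp/twice.mk show
# prints "hello" (make expanded $(GREETING)), then your home directory
# (make reduced $$HOME to $HOME, which the shell then expanded)
```

[The same two-pass rule is what makes quoting awk one-liners inside recipes so painful: every backslash and dollar sign has to survive make before the shell ever sees it.]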
--End of <20240616234654.GB12821 at mcvoy.com> --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From usotsuki at buric.co Tue Jun 18 08:34:06 2024 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 17 Jun 2024 18:34:06 -0400 (EDT) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Mon, 17 Jun 2024, Stuff Received wrote: > On 2024-06-16 21:25, Larry McVoy wrote (in part): > [...] >> *Every* time they used some bash-ism, it bit us in the ass. > > This is so true for a lot of OS projects (on Github, for example). Most -- > sometimes all -- the scripts that start with /bin/sh but are full of bashisms > because the authors run systems where /bin/sh is really bash. Which is why I'm glad Debian's /bin/sh is dash (fork of ash) instead. -uso. From steffen at sdaoden.eu Tue Jun 18 08:49:31 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Tue, 18 Jun 2024 00:49:31 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> References: <20240617004816.C28BC18C098@mercury.lcs.mit.edu> <20240617010532.GC12821@mcvoy.com> <653E15D7-DD66-414C-94F3-A74B4EE3DD10@iitbombay.org> Message-ID: <20240617224931.bogV4e4V@steffen%sdaoden.eu> Bakul Shah via TUHS wrote in <653E15D7-DD66-414C-94F3-A74B4EE3DD10 at iitbombay.org>: |On Jun 16, 2024, at 8:57 PM, ron minnich wrote: |> I'm curious, as to the original topic of this discussion: can anyone \ |> justify systemd-homed and how it works? Does that even look like \ |> 0% of a unix idea? 
| |I am not a fan of systemd (or linux) and don't follow their excesses/adv\ |entures but I am not a fan of how BSD does initialization & brings \ |up services either. They don't quite get all the dependencies right \ |for all the possible combinations of devices etc. Its /etc/rc.d/* \ |system is pretty clunky -- I tend to think any time you are repeating \ Now even more since they started to add a "jail-this-service" variable, ie containerization by setting a variable. |more or less the same boilerplate code in many files, something worth \ |abstracting is hiding in there. | |I like how launchd treats a service as an object (more than just a \ |program but also the auxiliary files and scripts). For me it was a \ |lightbulb moment (like realizing how a function operates in an environme\ To me -- it turned off my light as i tried to "do something", but could not figure out how; i somehow managed to create the XML file necessary. I am happy to have forgotten what it was about. Ah, wait, voila:

$ v ~/arena/misc/macosx-plist-use.txt
    Label                                    com.bell-labs.plan9.u9fs
    Program                                  /usr/bin/u9fs
    ProgramArguments                         u9fs -l /var/log/u9fs.log -a p9any /opt/plan9
    Sockets / Listeners / SockServiceName    9pfs
    inetdCompatibility / Wait

To cause this to be run on system start, this should be installed as /Library/LaunchDaemons/9pfs.plist. Installing instead in /Library/LaunchAgents will cause it to be run only when a user is logged in, while $HOME/Library/LaunchAgents will cause it to be run only when that particular user is logged in. In order to start the listener it must first be ``loaded''

$ sudo launchctl load /path/to/9pfs.plist

If you are running the Mac OS X firewall you will need to add an entry to pass the 9pfs protocol in: SystemPreferences->Sharing->Firewall

I give you ten points for configuration lightbulb moments! So nice and easy, also for human consumption once written. ... 
|What I want is to be able to map all my computers and compute clusters \ |into a single virtual machine -- where storage, IO and computing resources \ |may be added / removed without taking the whole VM down, and where \ |each display/input user interface is a window on the same underlying \ |VM and all sharing is under my control. Plan9 does a bit of this but \ |that experiment ended too early. Apple is tending in this direction \ |though not cleanly (+ I don't want to rely on a faceless behemoth corp \ |that may trample on my data without even meaning to). I had that dream somewhen spoken out in a FreeBSD IRC channel a few years back. It *could* be that the new per-service-jails do it a bit like that, via nullfs mounting, that deep i have not looked into it yet. But my idea was that the base system is mounted partially, and you would specify the PKGs you want in the jail, and that only the files of the given pkgs are actually visible in the jail. I use something a bit similar for some boxed things here on Linux, with overlayfs; however, after

    mount -n -t overlay -o upperdir=${rundir}/storage,lowerdir=/,workdir=${rundir}/work \
        overlayfs ${rundir}/root || exit 21

i then start rm(1) removal of some files, eg

    rm -rf \
        ${rundir}/root/boot \
        ${rundir}/root/home \
        ${rundir}/root/media \
        ${rundir}/root/opt \
        ${rundir}/root/root \
        ${rundir}/root/run \
        ${rundir}/root/var \

plus over-mounting things like dev

    # devtmpfs fully populates instead, including log socket etc!!
    #mount -n -t devtmpfs dev ${rundir}/root/dev || exit 50
    mount -n -t tmpfs -o nosuid,noexec dev ${rundir}/root/dev || exit 50

etc etc etc. That is *not* what i meant. That idea is old, i have not yet managed it to create a shareable root system, which is then *per-se* overlay mounted, even by the system itself. (That is: the base is shared by the base system and containers, which then add onto what they want and need. It could even be mounted as a base for real VMs.) 
Regarding systemd my only hope is that Linux remains usable without it.  It seems more and more things require the systemd infrastructure in order to function, i have heard.

That "super / sudo / doas / [BSD] su -c" replacement that systemd just recently added: somehow i followed a link to the github or so issue where this was discussed, and i still hear the lead developer of AlpineLinux ask for a separate udev, a part that everyone needs; i think he did not even get an answer.  Which deterred me further.  (I think the AlpineLinux lead developer's name is pretty well-known in that society.)  Hmm.  It *could* be that it was in fact in another issue, the one that turned fixed linked-in libraries, like compressors like xz (the backdoor there, you know), into a dlopen()-managed mess, via kinda marshalling.  There.  Whatever.  Btw i did not send out another email last week.

Jim Capp wrote in <1403506.1536.1718310415450.JavaMail.root at zimbraanteil>:
 |https://nosystemd.org/

It is the pride and ignorance of the billion dollars that pay lots of developers on the Linux front that makes me sad.  For example, on ossec-, we repeatedly see an OpenSUSE employee, who seems to get paid for doing security audits, publish security advisories.  That is (mostly seems to be) a one man show.  Of course, many things happen behind the scenes, in bug trackers etc.  But i track fulldisclosure and ossec- for way over a decade.  And "the same is true" for the boot and daemon monitoring environment.

On FreeBSD, for example, one programmer is working to integrate "jail"ing (FreeBSD jails: twenty+ years ago it was a precondition in some system calls, looking for "is-jailed", plus dedicated network stack etc; usually (i hate it) also with its dedicated file system aka mounts etc etc) into daemon startup aka the rc system, so you (that seems to be the idea) just set a rc.conf variable and the service is "boxed" in a jail automatically.
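The rc.conf side of that looks roughly like the following; note the variable names here are my assumption from what i have read about that service-jails work, not something i have run, so verify against rc.conf(5) and rc.subr(8) on a current FreeBSD:

```shell
# /etc/rc.conf sketch: FreeBSD "service jails" (variable names are an
# assumption from list traffic; check rc.conf(5) on a current release
# before relying on them)
syslogd_enable="YES"
syslogd_svcj="YES"                # run syslogd auto-boxed in a service jail
syslogd_svcj_options="net_basic"  # which resources the jail may inherit
```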
If you look at all the rotten data in the Linux login etc (with PAM or not, etc etc): just a few programs in the startup process, but already too much to keep them in line.

Yeah!  [.] with modern achievements like PR_SET_CHILD_SUBREAPER to move entire process hierarchies to dedicated zombie collectors and such, that is to say, with capabilities and all that, which systemd makes easy, via easy text config files.  Ie, namespace containment (aka network stack etc isolation), at your fingertips.  But per se a stacked call like

  cd /
  ip netns exec ${netns} \
    /usr/bin/env -i AUTHDISPLAY=${AUTHDISPLAY} DISPLAY=${DISPLAY} \
      TERM=${TERM} XAUTHORITY=${XAUTHORITY} \
    /usr/bin/unshare --ipc --uts --pid --fork --mount --mount-proc \
      ${kill_child} ${rooter} ${prog} &
  pid=${!}
  [ -d /sys/fs/cgroup/_box_web ] &&
    printf '%s\n' ${pid} > /sys/fs/cgroup/_box_web/cgroup.procs
  # if [ ${netns} = secweb ] || [ ${netns} = priwse ]; then
    wait $pid
    cleanup_setup
  # fi
  exec 7>&-

can (add capabilities) do the very same thing.

Btw there is an init-on-steroids [1] from the guy who took maintainership of sysklogd quite some years ago (used by my Linux distro ever since); it can also supervise, use cgroups, etc.  I have been planning to try it out for years, but since i realized i have to go second line i have written some scripts, and then just call in via SysV or OpenRC, or what.

I mean, i have no problem with the notification stuff of systemd, it is now even included in OpenSSH (optionally, in openbsd-compat/port-linux.c); but i mean, the PID file things have been used for decades, and there is "daemons and zombies are reparented to a configured subreaper process" (instead of PID 1), and there is the fully capable but not systemd-integrated udev (all "looser" Linux distributions "have to use" eudev).  Ah!  Talking too much!!
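That PR_SET_CHILD_SUBREAPER bit is easy to demonstrate standalone; a minimal Linux-only sketch (Python, calling prctl(2) through ctypes; the constant 36 is taken from <linux/prctl.h>): once a process marks itself subreaper, orphaned descendants are reparented to it instead of to PID 1, so it can collect the whole hierarchy it started.

```python
# Minimal Linux-only sketch of PR_SET_CHILD_SUBREAPER (prctl(2),
# Linux >= 3.4): after marking ourselves subreaper, an orphaned
# grandchild is reparented to us instead of to PID 1, so we can
# reap the entire process hierarchy we started.
import ctypes
import os
import sys
import time

PR_SET_CHILD_SUBREAPER = 36  # value from <linux/prctl.h>

def reap_hierarchy():
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.prctl(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl failed (Linux-only)")
    child = os.fork()
    if child == 0:                # intermediate child
        grandchild = os.fork()
        if grandchild == 0:       # grandchild outlives its parent ...
            time.sleep(0.1)
            os._exit(0)
        os._exit(0)               # ... which exits at once, orphaning it
    reaped = 0
    try:
        while True:               # reap the child, then the adopted grandchild
            os.wait()
            reaped += 1
    except ChildProcessError:     # no children left
        pass
    return reaped

if __name__ == "__main__":
    print("reaped:", reap_hierarchy())  # 2 on Linux: child + grandchild
```

Without the prctl() call the grandchild would go to PID 1 (or to systemd's configured subreaper) and the parent would reap only one process.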
[1] https://github.com/troglobit/finit --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From woods at robohack.ca Sun Jun 16 17:57:42 2024 From: woods at robohack.ca (Greg A. Woods) Date: Sun, 16 Jun 2024 00:57:42 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: At Mon, 17 Jun 2024 18:34:06 -0400 (EDT), Steve Nickolas wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > Which is why I'm glad Debian's /bin/sh is dash (fork of ash) instead. Well, to be pedantic "dash" was a direct descendant of NetBSD's /bin/sh, which in turn was the shell from 4.4BSD, which was of course originally Kenneth Almquist's Ash. Quite a few changes were made to the shell in BSD between the time it was imported (1991), and the 4.4 release (1995). Unfortunately Dash now lags very far behind NetBSD's /bin/sh code. If they had just kept it as a port of the upstream code and continued to update it from upstream then "they" would now have a much better shell (as much development has occurred in NetBSD since 1997), but no it's a full-on fork that's basically ignored its upstream parent since day one. It is doomed now to need fixes for the same bugs again, often in incompatible ways, and probably inevitably new features will be added to it, also in incompatible ways. Then again OpenBSD and FreeBSD (and its derivatives) have also continued forked development of the 4.4BSD shell (and most of the rest of the system) with only very occasional sharing of code back and forth with NetBSD. 
I guess this forking of code is also somewhat a part of "Unix" practice, even if it goes against the strict tenets of Unix philosophy. I don't think it's as egregious as the N.I.H. "doctrine" (of which systemd could be the result of, and cmake is definitely the result of), but it is problematic. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From imp at bsdimp.com Tue Jun 18 09:44:40 2024 From: imp at bsdimp.com (Warner Losh) Date: Mon, 17 Jun 2024 17:44:40 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Mon, Jun 17, 2024, 5:30 PM Greg A. Woods wrote: > At Mon, 17 Jun 2024 18:34:06 -0400 (EDT), Steve Nickolas < > usotsuki at buric.co> wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > philosophy' The Register > > > > Which is why I'm glad Debian's /bin/sh is dash (fork of ash) instead. > > Well, to be pedantic "dash" was a direct descendant of NetBSD's /bin/sh, > which in turn was the shell from 4.4BSD, which was of course originally > Kenneth Almquist's Ash. Quite a few changes were made to the shell in > BSD between the time it was imported (1991), and the 4.4 release (1995). > > Unfortunately Dash now lags very far behind NetBSD's /bin/sh code. > > If they had just kept it as a port of the upstream code and continued to > update it from upstream then "they" would now have a much better shell > (as much development has occurred in NetBSD since 1997), but no it's a > full-on fork that's basically ignored its upstream parent since day one. 
> It is doomed now to need fixes for the same bugs again, often in
> incompatible ways, and probably inevitably new features will be added to
> it, also in incompatible ways.
>
> Then again OpenBSD and FreeBSD (and its derivatives) have also continued
> forked development of the 4.4BSD shell (and most of the rest of the
> system) with only very occasional sharing of code back and forth with
> NetBSD.

Yea. Personality squabbles trump common sense. Some areas have reconverged, and those are bright points.

> I guess this forking of code is also somewhat a part of "Unix" practice,
> even if it goes against the strict tenets of Unix philosophy.  I don't
> think it's as egregious as the N.I.H. "doctrine" (of which systemd could
> be the result of, and cmake is definitely the result of), but it is
> problematic.

Yea. It's more of a people problem, and for the first 15 or 20 years of 4.4BSD the tools to reconverge weren't up to the task, even if the political will had been there to bless it. Diff was just one part, the easy part. The hard part was knowing why things differed: what mattered, why it was different (often with only the log message "fix. Ok xxx" to go on). Once it morphed organically for even 5 years, going back was hard. There was no upstream anymore. CSRG was gone, and all successor BSD projects assumed they were the new upstream. It was rarely clear which project had the rights to that claim, as the answer was pathologically different for different parts of the system.

The NIH stuff sunk adopting jails, geom, smp, etc from FreeBSD and almost sunk make from unifying some years ago. Too much ego and wanting perfect code, so all that other code is junk... It's a hard problem because continuing engineering is actually hard and boring work nobody wants to do as their fun hobby... not least because it requires a lot of time to keep up and the skills of a diplomat, which precious few people have.. plus a perception that mere merging never advances the state of the art...
Warner

-- 
> Greg A. Woods
>
> Kelowna, BC +1 250 762-7675 RoboHack
> Planix, Inc. Avoncote Farms

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lm at mcvoy.com Tue Jun 18 10:06:30 2024
From: lm at mcvoy.com (Larry McVoy)
Date: Mon, 17 Jun 2024 17:06:30 -0700
Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register
In-Reply-To: 
References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca>
Message-ID: <20240618000630.GC32048@mcvoy.com>

On Mon, Jun 17, 2024 at 05:44:40PM -0600, Warner Losh wrote:
> Once it morphed organically for even 5 years, going back was hard.
> There was no upstream anymore.  CSRG was gone, and all successor BSD
> projects assumed they were the new upstream.  It was rarely clear
> which project had the rights to that claim, as the answer was
> pathologically different for different parts of the system.

I've said before, and I'll say it again.  The BSD community couldn't decide who was going to drive the big red fire truck.  Instead of doing that, they each took their own toy fire truck and we have the vision of grown men driving around on toy trucks.  All while the Linux community let Linus drive.

The results speak for themselves: we've got Android Linux on a zillion cell phones, I believe all of the top 500 supercomputers are Linux, Linux even has some use on the desktop.  BSD has what?  MacOS, but that's a closed-off fork; it's not helping BSD in any way other than marketing.

It's sad, I started out as a BSD/Sun guy and loved it.  But when the forks happened I could see the writing on the wall and went over to Linux.  Shoulda, coulda, woulda (again).
From usotsuki at buric.co Tue Jun 18 11:52:55 2024 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 17 Jun 2024 21:52:55 -0400 (EDT) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Sun, 16 Jun 2024, Greg A. Woods wrote: > At Mon, 17 Jun 2024 18:34:06 -0400 (EDT), Steve Nickolas wrote: > > Well, to be pedantic "dash" was a direct descendant of NetBSD's /bin/sh, > which in turn was the shell from 4.4BSD, which was of course originally > Kenneth Almquist's Ash. Quite a few changes were made to the shell in > BSD between the time it was imported (1991), and the 4.4 release (1995). > > Unfortunately Dash now lags very far behind NetBSD's /bin/sh code. > > If they had just kept it as a port of the upstream code and continued to > update it from upstream then "they" would now have a much better shell > (as much development has occurred in NetBSD since 1997), but no it's a > full-on fork that's basically ignored its upstream parent since day one. > It is doomed now to need fixes for the same bugs again, often in > incompatible ways, and probably inevitably new features will be added to > it, also in incompatible ways. It's still possible to port NetBSD's /bin/sh to Debian (I've done it, called it "nash", but don't have any official release because I don't really see a point). And it's basically the "sh" I'm currently using in my projects because I don't have the talent to write my own. :P -uso. 
From tuhs at tuhs.org Tue Jun 18 14:52:51 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 18 Jun 2024 04:52:51 +0000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Monday, June 17th, 2024 at 6:51 PM, Steve Nickolas wrote: > On Sun, 16 Jun 2024, Greg A. Woods wrote: > > > At Mon, 17 Jun 2024 18:34:06 -0400 (EDT), Steve Nickolas usotsuki at buric.co wrote: > > > > Well, to be pedantic "dash" was a direct descendant of NetBSD's /bin/sh, > > which in turn was the shell from 4.4BSD, which was of course originally > > Kenneth Almquist's Ash. Quite a few changes were made to the shell in > > BSD between the time it was imported (1991), and the 4.4 release (1995). > > > > Unfortunately Dash now lags very far behind NetBSD's /bin/sh code. > > > > If they had just kept it as a port of the upstream code and continued to > > update it from upstream then "they" would now have a much better shell > > (as much development has occurred in NetBSD since 1997), but no it's a > > full-on fork that's basically ignored its upstream parent since day one. > > It is doomed now to need fixes for the same bugs again, often in > > incompatible ways, and probably inevitably new features will be added to > > it, also in incompatible ways. > > > It's still possible to port NetBSD's /bin/sh to Debian (I've done it, > called it "nash", but don't have any official release because I don't > really see a point). > > And it's basically the "sh" I'm currently using in my projects because I > don't have the talent to write my own. :P > > -uso. Dash is my go-to /bin/sh on minimal Linux systems I prepare owing to its similar minimalism. I've considered that angle and hearing of your success has me tempted to pursue something along those lines. 
There are projects out there that have propped up a BSD userland over the Linux kernel too.  I've not really tinkered with such things myself, but I wonder if, given enough time, such a combo could gain more traction or fill a want/need not being met otherwise?  Technically it's another way to get Linux-sans-systemd.

Systemd does seem to cover a diverse spread of use-cases, some better than others.  For a personal system, it feels a bit much, but many folks have made valid points, particularly regarding systems you create and walk away from.  I think of things so often from the interactive, personal system angle, but many systems don't have one person sitting at them with a handful of xterms open.  I imagine the Linux world is steered a bit more in the server and enterprise directions insofar as there is money to be made, naturally.  Upstream wants to satisfy this crowd, so personal user systems dip into the same systemd pool.

My only major concern still is a sort of homogenization of the Linux userland, the same as exists in the marriage of Linux and GNU.  Much of the software out there assumes if you've got a Linux kernel, you've a GNU C library and some supporting bits, and vice versa.  That's not to diminish the real help of things like autotools and CMake, but if someone is liable to use a non-portable thing, it's probably a GNU extension or Linux-ism.  This isn't a critique of either one, but rather of the weight of their combined influence.  If systemd gains eminence in the overwhelming majority of Linux distros comparable to that of the GNU C library itself, one will similarly find themselves with daemons that may only behave themselves under systemd's piercing gaze.  Maybe it's only natural; systemd does seem to satisfy the needs of more than it offends.

They can't take my sysvinit away from me though.  It "just works", but my needs are also narrow.

- Matt G.
From flexibeast at gmail.com Tue Jun 18 15:55:08 2024
From: flexibeast at gmail.com (Alexis)
Date: Tue, 18 Jun 2024 15:55:08 +1000
Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register
In-Reply-To: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> (David Arnold's message of "Mon, 17 Jun 2024 07:56:20 +1000")
References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com>
Message-ID: <87jzim3idf.fsf@gmail.com>

David Arnold writes:

> Users reasonably want things to work, and Red Hat wants to satisfy
> those users, and they’ve chosen this way to do it.  Unfortunately,
> there’s been no real competition: this goes well beyond system
> startup, and well beyond service management, and citing s6 or launchd
> or whatever misses the war by focusing on the battle.

Good point.

The problem that D-Bus attempts to solve is communication between components and applications designed for different desktop environments. i wasn't paying particular attention at the time, so i don't know what more Unix-y alternatives were proposed, if any. Laurent Bercot, the developer of s6[a], has created a bare-bones proof-of-concept alternative:

  https://skarnet.org/software/skabus/

but this hasn't been taken further, as his priorities have been elsewhere.

Wayland is an attempt to solve various issues and limitations of X. It's not a project by people who don't understand X; as an example, Matthieu Herrb, an Xorg dev and a primary OpenBSD dev in this area, did a presentation last year in which he said "X11 is fading away" and "Wayland is the way to go for graphical desktop":

  https://2023.eurobsdcon.org/slides/eurobsdcon2023-matthieu_herrb-wayland-openbsd.pdf

The problem is, people who aren't facing the issues and limitations faced by others are unlikely to have any motivation to work on, or support, the development of alternatives.
This leaves the field of proposed solutions open to those with a different approach and/or a desire for résumé-driven development, regardless of the quality of the design and/or implementation.

But even when the people working on alternatives are people who understand the problem space, those for whom the existing solution is perfectly adequate are unlikely to provide input regarding the development of those alternatives - so when a particular alternative gains sufficient momentum that those people are forced to start dealing with it, they might find it unusable for their use-case(s).

In other words, the war is pretty much lost at the outset, and people are left fighting battles that, in the medium-to-long term, are likely to turn out to be quixotic.


Alexis.

[a] i would say s6 is very much in the spirit of "the Unix philosophy": a suite of small, focused programs that can be combined in various ways, "mechanism not policy", and utilising fundamental Unix features. As Laurent writes at the end of the page about the s6 approach to 'socket activation', to which i linked upthread:

> You don't have to use specific libraries or write complex unit
> files, you just need to understand how a command line works.
> This is Unix.

-- https://skarnet.org/software/s6/socket-activation.html

Nevertheless, he's also noted, back in 2020, the real-world issues that have been an obstacle to s6's uptake:

> The fact is that a full-featured init system *is* a complex beast,
> and the s6 stack does nothing more than is strictly needed, but it
> exposes all the tools, all the entrails, all the workings of the
> system, and that is a lot for non-specialists to handle. Integrated
> init systems, such as systemd, are significantly *more* complex than
> the s6 stack, but they do a better job of *hiding* the complexity,
> and presenting a relatively simple interface.
> That is why, despite being technically inferior (on numerous metrics:
> bugs, length of code paths, resource consumption, actual modularity,
> flexibility, portability, etc.), they are more easily accepted: they
> are just less intimidating.
>
> As a friend told me, and it was an enlightening moment: you are
> keeping the individual parts simple, but in doing so, you are moving
> the complexity to the *interactions* between the parts, and are
> burdening the user with that complexity. You are keeping the code
> simple, which has undeniable maintainability benefits, but you are
> making the administration more difficult, and the trade-off is not
> good enough for a lot of users.

-- https://skarnet.org/lists/supervision/2586.html

From e5655f30a07f at ewoof.net Tue Jun 18 16:39:01 2024
From: e5655f30a07f at ewoof.net (Michael Kjörling)
Date: Tue, 18 Jun 2024 06:39:01 +0000
Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register
In-Reply-To: <87jzim3idf.fsf@gmail.com>
References: <1841E020-8BDD-4997-A319-2FFEE75F84A5@pobox.com> <87jzim3idf.fsf@gmail.com>
Message-ID: <07b679f0-f0ce-4150-95d3-e6b5100ef83e@home.arpa>

On 18 Jun 2024 15:55 +1000, from flexibeast at gmail.com (Alexis):

>> As a friend told me, and it was an enlightening moment: you are keeping
>> the individual parts simple, but in doing so, you are moving the
>> complexity to the *interactions* between the parts, and are burdening
>> the user with that complexity. You are keeping the code simple, which
>> has undeniable maintainability benefits, but you are making the
>> administration more difficult, and the trade-off is not good enough for
>> a lot of users.
>
> -- https://skarnet.org/lists/supervision/2586.html

It used to be the case, not least before the widespread advent of microcomputers (let's say between the late 1970s and mid-1980s), that computing time was fairly expensive, and programmer time was fairly cheap (both grossly relatively speaking).
So it made sense to shift the burden from the computer to the programmer where reasonable, especially where the operation being automated would be performed many times: the computer would need to do the work every time, whereas the programmer only needed to do the corresponding work once.

Now computing time (as in the cost of completing a given operation; say, adding two 64-bit integers, or storing a given number of bytes, or even determining whether a given line of text appears in the output of a command) is for all intents and purposes dirt cheap, while programmer time is relatively expensive. Hence programming-level abstractions which save programmer time, even when those come at a performance cost. The same argument can be made for memory cost. It probably can also be made for system administrator time versus the computing cost of making the sysadmin's job easier. It's also pretty undeniably the case that there are many, many more sysadmins working on many, many, many more systems than there are programmers working on init systems and system state management services.

(Which is not to say that those abstractions are only a good thing. Hiding the complexity of what is actually going on opens up _its own_ set of pitfalls, both because it hides what's going on and because the abstractions can only go so far; and all that performance cost does eventually add up. If we wrote software for today's computers like we wrote it for an early-1980s computer, it could probably be blazingly fast; but could we reasonably write software at the level of today's software complexity that way? The jury might not have arrived yet.)
-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”

From tuhs at tuhs.org Tue Jun 18 22:02:26 2024
From: tuhs at tuhs.org (Arrigo Triulzi via TUHS)
Date: Tue, 18 Jun 2024 14:02:26 +0200
Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' • The Register
In-Reply-To: 
References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa>
Message-ID: <8CD173B8-1B87-44C6-BBE9-39842BDF21F1@alchemistowl.org>

From Mastodon (https://mastodon.social/@nixCraft/112637213238431183):

> FYI, there is a bug in systemd. So, running: "systemd-tmpfiles --purge"
> will delete your /home/ in systemd version 256.

and Twitter (https://x.com/DevuanOrg/status/1802997574695080067).

Could be Debian-specific but…

Arrigo

From beebe at math.utah.edu Wed Jun 19 01:41:40 2024
From: beebe at math.utah.edu (Nelson H. F. Beebe)
Date: Tue, 18 Jun 2024 09:41:40 -0600
Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX
Message-ID: 

This announcement just arrived on the ACM Bulletins list:

>> ...
>> Andrew S. Tanenbaum, Vrije Universiteit, receives the ACM Software
>> System Award (http://awards.acm.org/software-system) for MINIX, which
>> influenced the teaching of Operating Systems principles to multiple
>> generations of students and contributed to the design of widely used
>> operating systems, including Linux.
>> ...

-------------------------------------------------------------------------------
- Nelson H. F.
Beebe Tel: +1 801 581 5254 - - University of Utah - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From clemc at ccc.com Wed Jun 19 03:21:45 2024 From: clemc at ccc.com (Clem Cole) Date: Tue, 18 Jun 2024 13:21:45 -0400 Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: References: Message-ID: Wonderful Sent from a handheld expect more typos than usual On Tue, Jun 18, 2024 at 11:41 AM Nelson H. F. Beebe wrote: > This announcement just arrived on the ACM Bulletins list: > > >> ... > >> Andrew S. Tanenbaum, Vrije Universiteit, receives the ACM Software > >> System Award (http://awards.acm.org/software-system) for MINIX, which > >> influenced the teaching of Operating Systems principles to multiple > >> generations of students and contributed to the design of widely used > >> operating systems, including Linux. > >> ... > > > ------------------------------------------------------------------------------- > - Nelson H. F. Beebe Tel: +1 801 581 5254 > - > - University of Utah > - > - Department of Mathematics, 110 LCB Internet e-mail: > beebe at math.utah.edu - > - 155 S 1400 E RM 233 beebe at acm.org > beebe at computer.org - > - Salt Lake City, UT 84112-0090, USA URL: > http://www.math.utah.edu/~beebe/ - > > ------------------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Wed Jun 19 03:38:33 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 18 Jun 2024 17:38:33 +0000 Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: References: Message-ID: On Tuesday, June 18th, 2024 at 8:41 AM, Nelson H. F. 
Beebe wrote:

> This announcement just arrived on the ACM Bulletins list:
>
> >> ...
> >> Andrew S. Tanenbaum, Vrije Universiteit, receives the ACM Software
> >> System Award (http://awards.acm.org/software-system) for MINIX, which
> >> influenced the teaching of Operating Systems principles to multiple
> >> generations of students and contributed to the design of widely used
> >> operating systems, including Linux.
> >> ...
>
> -------------------------------------------------------------------------------
> - Nelson H. F. Beebe Tel: +1 801 581 5254 -
> - University of Utah -
> - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu -
> - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org -
> - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
> -------------------------------------------------------------------------------

I have Andy Tanenbaum to thank in part for my interest in turning up UNIX 4.0 information due to the quote:

"Whatever happened to System IV is one of the great unsolved mysteries of computer science."

From Modern Operating Systems.  I took this as an impudent challenge, and well, here I am.

- Matt G.

From dave at horsfall.org Wed Jun 19 06:59:49 2024
From: dave at horsfall.org (Dave Horsfall)
Date: Wed, 19 Jun 2024 06:59:49 +1000 (EST)
Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX
In-Reply-To: 
References: 
Message-ID: 

On Tue, 18 Jun 2024, segaloco via TUHS wrote:

> I have Andy Tanenbaum to thank in part for my interest in turning up
> UNIX 4.0 information due to the quote:
>
> "Whatever happened to System IV is one of the great unsolved mysteries
> of computer science."
>
> From Modern Operating Systems. I took this as an impudent challenge and
> well here I am.

Well, don't keep us in suspense; what happened to SysIV?  Not that I'm a fan of either SysIII or SysV...
-- Dave From tuhs at tuhs.org Wed Jun 19 07:15:32 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Tue, 18 Jun 2024 21:15:32 +0000 Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: References: Message-ID: On Tuesday, June 18th, 2024 at 1:59 PM, Dave Horsfall wrote: > On Tue, 18 Jun 2024, segaloco via TUHS wrote: > > > I have Andy Tanenbaum to thank in part for my interest in turning up > > UNIX 4.0 information due to the quote: > > > > "Whatever happened to System IV is one of the great unsolved mysteries > > of computer science." > > > > From Modern Operating Systems. I took this as an impudent challenge and > > well here I am. > > > Well, don't keep us in suspense; what happened to SysIV? Not that I'm a > fan of either SysIII or SysV... > > -- Dave It has left its droppings out there in the world, some of which were held onto by Arnold Robbins: https://www.tuhs.org/Archive/Documentation/Manuals/Unix_4.0/ And others found by myself on eBay and reconstructed: https://gitlab.com/segaloco/pwb4u_man The story I've gotten is AT&T policy was to release odd-numbered versions, so PWB 1.0, System III, and System V made it out into the world, PWB 2.0 and Release 4.0 stayed in the labs. In the most technical sense, System IV never existed, what could've become it remained a Bell System-only issue. - Matt G. From dave at horsfall.org Wed Jun 19 08:00:14 2024 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 19 Jun 2024 08:00:14 +1000 (EST) Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: References: Message-ID: On Tue, 18 Jun 2024, segaloco via TUHS wrote: [...] > The story I've gotten is AT&T policy was to release odd-numbered > versions, so PWB 1.0, System III, and System V made it out into the > world, PWB 2.0 and Release 4.0 stayed in the labs. In the most > technical sense, System IV never existed, what could've become it > remained a Bell System-only issue. 
Thanks; I'd forgotten about AT&T's "odd-only" policy.

-- Dave

From woods at robohack.ca Wed Jun 19 08:44:39 2024
From: woods at robohack.ca (Greg A. Woods)
Date: Tue, 18 Jun 2024 15:44:39 -0700
Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register
In-Reply-To: 
References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca>
Message-ID: 

At Mon, 17 Jun 2024 17:44:40 -0600, Warner Losh wrote:
Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register

> .... There was no upstream anymore.  CSRG was gone, and all successor
> BSD projects assumed they were the new upstream.

Hmmm....  I never really thought of it that way before!

I guess I had hoped everyone would come to realize a new one-size-fits-all upstream would be a "good thing", or perhaps even a necessity, and that none would automatically slip into thinking they were it without first reaching some consensus with other projects -- at least between NetBSD and FreeBSD for starters (and I suppose all hope of this had long evaporated before any of the other off-shoots formed).

> The NIH stuff sunk adopting jails, geom, smp, etc from FreeBSD and almost
> sunk make from unifying some years ago.  Too much ego and wanting perfect
> code so all that other code is junk...  It's a hard problem because
> continuing engineering is actually hard and boring work nobody wants to do
> as their fun hobby...  not least because it requires a lot of time to keep
> up and the skills of a diplomat, which precious few people have..  plus a
> perception that mere merging never advances the state of the art...

Indeed!

-- Greg A. Woods

Kelowna, BC +1 250 762-7675 RoboHack
Planix, Inc. Avoncote Farms

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From woods at robohack.ca Wed Jun 19 08:50:35 2024 From: woods at robohack.ca (Greg A. Woods) Date: Tue, 18 Jun 2024 15:50:35 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: At Tue, 18 Jun 2024 04:52:51 +0000, segaloco via TUHS wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > That's not > to diminish the real help of things like autotools and CMake, Oh, that strikes a nerve. CMake is the very antithesis of a good tool. It doesn't help. I think it is perhaps the worst abomination EVER in the world of software tools, and especially amongst software construction tools. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From imp at bsdimp.com Wed Jun 19 09:03:07 2024 From: imp at bsdimp.com (Warner Losh) Date: Tue, 18 Jun 2024 17:03:07 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Tue, Jun 18, 2024, 4:50 PM Greg A. Woods wrote: > At Tue, 18 Jun 2024 04:52:51 +0000, segaloco via TUHS > wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > philosophy' The Register > > > > That's not > > to diminish the real help of things like autotools and CMake, > > Oh, that strikes a nerve. > > CMake is the very antithesis of a good tool. 
It doesn't help. I think > it is perhaps the worst abomination EVER in the world of software tools, > and especially amongst software construction tools. > Someone clearly never used imake... Warner -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Wed Jun 19 09:27:42 2024 From: rminnich at gmail.com (ron minnich) Date: Tue, 18 Jun 2024 16:27:42 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: but it rhymes with mistake! On Tue, Jun 18, 2024 at 4:12 PM Warner Losh wrote: > > > On Tue, Jun 18, 2024, 4:50 PM Greg A. Woods wrote: > >> At Tue, 18 Jun 2024 04:52:51 +0000, segaloco via TUHS >> wrote: >> Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix >> philosophy' The Register >> > >> > That's not >> > to diminish the real help of things like autotools and CMake, >> >> Oh, that strikes a nerve. >> >> CMake is the very antithesis of a good tool. It doesn't help. I think >> it is perhaps the worst abomination EVER in the world of software tools, >> and especially amongst software construction tools. >> > > Someone clearly never used imake... > > Warner > > -- >> Greg A. Woods >> >> Kelowna, BC +1 250 762-7675 RoboHack >> Planix, Inc. Avoncote Farms >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luther.johnson at makerlisp.com Wed Jun 19 10:08:47 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Tue, 18 Jun 2024 17:08:47 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> On 06/18/2024 03:50 PM, Greg A. Woods wrote: > At Tue, 18 Jun 2024 04:52:51 +0000, segaloco via TUHS wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register >> That's not >> to diminish the real help of things like autotools and CMake, > Oh, that strikes a nerve. > > CMake is the very antithesis of a good tool. It doesn't help. I think > it is perhaps the worst abomination EVER in the world of software tools, > and especially amongst software construction tools. > > -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms I agree with Greg here. In fact even if it was well done, it is declaring something that wasn't really a problem, to be a problem, to insert itself as the solution, but I think it's just extra stuff and steps that ultimately obfuscates and creates yet more dependencies. Self-serving complexity. From nliber at gmail.com Wed Jun 19 10:46:15 2024 From: nliber at gmail.com (Nevin Liber) Date: Tue, 18 Jun 2024 19:46:15 -0500 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: On Tue, Jun 18, 2024 at 7:09 PM Luther Johnson wrote: > I agree with Greg here. 
In fact even if it was well done, it is > declaring something that wasn't really a problem, to be a problem, to > insert itself as the solution, but I think it's just extra stuff and > steps that ultimately obfuscates and creates yet more dependencies. > That's a really bold claim. You may not like the solution (I don't tend to comment on it because unlike some here, I recognize that build systems are a Hard Problem and I don't know how to make a better solution), but that doesn't mean it isn't solving real problems. But I'll bite. There was the claim by Larry McVoy that "Writing Makefiles isn't that hard". Please show these beautiful makefiles for a non-toy non-trivial product (say, something like gcc or llvm), which make it easy to change platforms, underlying compilers, works well with modern multicore processors, gets the dependencies right (one should never have to type "make clean" to get a build working correctly), etc. and doesn't require blindly running some 20K line shell script like "configure" to set it up. -- Nevin ":-)" Liber iber at gmail.com> +1-847-691-1404 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Wed Jun 19 11:00:10 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 19 Jun 2024 01:00:10 +0000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: On Tuesday, June 18th, 2024 at 5:46 PM, Nevin Liber wrote: > On Tue, Jun 18, 2024 at 7:09 PM Luther Johnson wrote: > > > I agree with Greg here. In fact even if it was well done, it is > > declaring something that wasn't really a problem, to be a problem, to > > insert itself as the solution, but I think it's just extra stuff and > > steps that ultimately obfuscates and creates yet more dependencies. > > > That's a really bold claim. 
You may not like the solution (I don't tend to comment on it because unlike some here, I recognize that build systems are a Hard Problem and I don't know how to make a better solution), but that doesn't mean it isn't solving real problems. > > But I'll bite. There was the claim by Larry McVoy that "Writing Makefiles isn't that hard". > > Please show these beautiful makefiles for a non-toy non-trivial product (say, something like gcc or llvm), which make it easy to change platforms, underlying compilers, works well with modern multicore processors, gets the dependencies right (one should never have to type "make clean" to get a build working correctly), etc. and doesn't require blindly running some 20K line shell script like "configure" to set it up. > -- > Nevin ":-)" Liber +1-847-691-1404 Not sure if this counts but technically the Linux kernel build itself is largely interfaced with via a makefile. The makefile itself may not necessarily be doing *all* the heavy lifting, but you don't start with a "./configure" this or "cmake ." that or "meson setup build" etc, you just run make to get at things like configuration, building, etc. Linux and its various configurators do point to an alternate, albeit also more-than-a-makefile, way to do things. POSIX make's lack of conditionals for me is the main sticking point, but one solution is separate "parent" makefiles for certain conditions, with common definitions and rules being put in an include file. Then you, say, have one makefile per combination of ASFLAGS/LDFLAGS for a specific build scenario, then you can call your script that interprets conditions and calls the right makefile. You've still got a script involved, but one consisting of a handful of lines you wrote and understand well rather than oodles of generated code. Like systemd though, I think it comes down to the use-case. 
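[Editor's note: the "parent makefile" arrangement described above can be sketched as follows. All file names, the CFLAGS values, and the scenario variable are invented for illustration; only widely supported make features are used.]

```shell
# Shared definitions and rules go in one include file; each build
# scenario gets a thin "parent" makefile that sets its flags and pulls
# the common rules in.  (A plain `include' line is GNU make and newer
# POSIX; old BSD make spells it `.include'.)
dir=$(mktemp -d) && cd "$dir"

# common.mk: the rules every scenario shares.  Recipe lines need a
# leading tab, written here as \t via printf.
printf 'all: out.txt\nout.txt:\n\techo "built with CFLAGS=$(CFLAGS)" > out.txt\nclean:\n\trm -f out.txt\n' > common.mk

# One tiny makefile per scenario instead of make-level conditionals.
printf 'CFLAGS = -g -O0\ninclude common.mk\n' > Makefile.debug
printf 'CFLAGS = -O2\ninclude common.mk\n' > Makefile.release

# The "handful of lines you wrote yourself" driver: pick the right
# parent makefile for the requested scenario.
scenario=release
make -f "Makefile.$scenario" all
cat out.txt    # -> built with CFLAGS=-O2
```

Adding another scenario is then just another three-line Makefile.name, and the selection logic stays in a short script rather than in generated code.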
Most folks aren't thinking about POSIX through and through in their work probably, if they can cross-compile for various targets using one type of host machine, they only have to worry about their application, not their build system, playing by the POSIX rules. I say none of this to defend anything, especially CMake, I'm also not a fan, but it plays to the unfortunate norm out there where the build system just has to work for the author and/or high-profile contributors to a project, focusing on ensuring something will build anywhere probably isn't in most folks' immediate interest. Truth be told, the main thing that does have me focus on it is most of the stuff I work on these days is producing disassemblies of 80s/90s video games, something where the focus of the work *is* the quality of code representation, buildability, ease of modification, etc. so keeping such a thing from being tightly coupled to a specific platform does play heavily into my recent interactions with least common denominators. That and I'm nomadic with operating environments, so I don't want to paint myself into a corner where a project I'm working on is suddenly out in the rain because I bumped back to FreeBSD from Linux or out into some non-UNIX environment entirely. Sticking to POSIX make et al. rules where possible minimizes that. - Matt G. From grog at lemis.com Wed Jun 19 11:38:29 2024 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Wed, 19 Jun 2024 11:38:29 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Tuesday, 18 June 2024 at 17:03:07 -0600, Warner Losh wrote: > On Tue, Jun 18, 2024, 4:50 PM Greg A. Woods wrote: >> >> CMake is the very antithesis of a good tool. It doesn't help. 
I think >> it is perhaps the worst abomination EVER in the world of software tools, >> and especially amongst software construction tools. > > Someone clearly never used imake... I've used both. I'm with Greg (Woods): cmake takes the cake. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA.php -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: not available URL: From imp at bsdimp.com Wed Jun 19 11:42:59 2024 From: imp at bsdimp.com (Warner Losh) Date: Tue, 18 Jun 2024 19:42:59 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On Tue, Jun 18, 2024, 7:38 PM Greg 'groggy' Lehey wrote: > On Tuesday, 18 June 2024 at 17:03:07 -0600, Warner Losh wrote: > > On Tue, Jun 18, 2024, 4:50 PM Greg A. Woods wrote: > >> > >> CMake is the very antithesis of a good tool. It doesn't help. I think > >> it is perhaps the worst abomination EVER in the world of software tools, > >> and especially amongst software construction tools. > > > > Someone clearly never used imake... > > I've used both. I'm with Greg (Woods): cmake takes the cake. > Cmake actually works though... Warner Greg > -- > Sent from my desktop computer. > Finger grog at lemis.com for PGP public key. > See complete headers for address and phone numbers. > This message is digitally signed. If your Microsoft mail program > reports problems, please read http://lemis.com/broken-MUA.php > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From davida at pobox.com Wed Jun 19 12:33:39 2024 From: davida at pobox.com (David Arnold) Date: Wed, 19 Jun 2024 12:33:39 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: > On 19 Jun 2024, at 08:44, Greg A. Woods wrote: > > At Mon, 17 Jun 2024 17:44:40 -0600, Warner Losh wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register >> >> .... There was no upstream >> anymore. Csrg was gone, and all successor BSD projects assumed they were >> the new upstream. > > Hmmm.... I never really thought of it that way before! > > I guess I had hoped everyone would come to realize a new > one-size-fits-all upstream would be a "good thing", or > perhaps even a necessity, and that none would automatically > slip into thinking they were it without first reaching some > consensus with other projects A somewhat similar thing has happened with Plan9. There are multiple “successor” projects, some basically single-person projects, others semi-official with legal structure, others larger groups of like-minded developers. There are high levels of acrimony between the groups, and no accepted processes for evolving together. Periodically, some naive passerby will suggest a common core repository, perhaps even using a popular technology, and they’ll get barbecued in the resulting flamefest. So it goes. d From davida at pobox.com Wed Jun 19 12:38:57 2024 From: davida at pobox.com (David Arnold) Date: Wed, 19 Jun 2024 12:38:57 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: > On 19 Jun 2024, at 09:03, Warner Losh wrote: > > Someone clearly never used imake... The authors of cmake? 
d From luther.johnson at makerlisp.com Wed Jun 19 13:07:00 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Tue, 18 Jun 2024 20:07:00 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: I don't think any makefiles I've written do all of that. I guess I don't expect all of that in one place. So I will have some makefiles that are really portable, because they are very compute-bound or their interface to the world is something else generic, like files. And then for more platform-specific parts I would have different makefiles for different platforms. One-button, one command-build (that seems) identical for all platforms, is not that important to me. And yes, sometimes I write scripts to do the parts of a build in sequence. And I don't consider any of this 'hard', but I'm not trying to make the builds look like they are the same, even if they are really quite different. The GNU ./configure, make model is one model. CMake and other makefile generators are another. But I have used several compilers or other general purpose tools that have more than one makefile or build script, depending on the platform, and I just take the tool for what it is, and use it. And when I have to debug or change something about the build, it's MUCH easier to work with makefiles and build scripts than it is to extend configure scripts, or extend a build-specification in a build-tool-specific language. In my experience, so far. But some people will get into configure and/or CMake or any of the others and learn how to be productive that way. More power to them, but I don't enjoy doing that. 
When I have had to use CMake, it seemed to require more specification on my part to generate all sorts of crufty state, so every build was not necessarily the same, unless I used the right commands or deleted all these extra directories full of persistence from the last CMake or build, to write all these weird, generated, unreadable makefiles calling makefiles, doing no more than I could easily do by hand in one makefile. No, my hand-written makefiles will not be absolutely universal, or appear to be, but they will work in a way I can predict, and that is of great value to me. On 06/18/2024 05:46 PM, Nevin Liber wrote: > On Tue, Jun 18, 2024 at 7:09 PM Luther Johnson > > > wrote: > > I agree with Greg here. In fact even if it was well done, it is > declaring something that wasn't really a problem, to be a problem, to > insert itself as the solution, but I think it's just extra stuff and > steps that ultimately obfuscates and creates yet more dependencies. > > > That's a really bold claim. You may not like the solution (I don't > tend to comment on it because unlike some here, I recognize that build > systems are a Hard Problem and I don't know how to make a better > solution), but that doesn't mean it isn't solving real problems. > > But I'll bite. There was the claim by Larry McVoy that "Writing > Makefiles isn't that hard". > > Please show these beautiful makefiles for a non-toy non-trivial > product (say, something like gcc or llvm), which make it easy to > change platforms, underlying compilers, works well with modern > multicore processors, gets the dependencies right (one should never > have to type "make clean" to get a build working correctly), etc. and > doesn't require blindly running some 20K line shell script like > "configure" to set it up. > -- > Nevin ":-)" Liber iber at gmail.com> > +1-847-691-1404 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luther.johnson at makerlisp.com Wed Jun 19 13:14:37 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Tue, 18 Jun 2024 20:14:37 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: To be fair, makefiles are specifications in a build-tool specific language. But it is one language I already know, and it is one that seems to be well-formed, translates to very definite actions on conditions, and I get to choose those actions. I guess it works for me if I do my part, and I can't really see what CMake does for me that I can't do for myself. On 06/18/2024 08:07 PM, Luther Johnson wrote: > > I don't think any makefiles I've written do all of that. I guess I > don't expect all of that in one place. So i will have some makefiles > that are really portable, because they are very compute-bound or their > interface to the world is something else generic, like files. And then > for more platform-specific parts I would have different makefiles for > different platforms. > > One-button, one command-build (that seems) identical for all > platforms, is not that important to me. And yes, sometimes I write > scripts to do the parts of a build in sequence. And I don't consider > any of this 'hard', but I'm not trying make the builds look like they > are the same, even if they are really quite different. The GNU > ./configure, make model is one model. CMake and other makefile > generators are another. But I have used several compilers or other > general purpose tools that have more than one makefile or build > script, depending on the platform, and I just take the tool for what > it is, and use it. 
And when I have to debug or change something about > the build, it's MUCH easier to work with makefiles and build scripts > than it is to extend configure scripts, or extend a > build-specification in a build-tool-specific language. In my > experience, so far. But some people will get into configure and/or > CMake or any of the others and learn how to be productive that way. > More power to them, but I don't enjoy doing that. When I have had to > use CMake, it seemed to require more specification on my part to > generate all sorts of crufty state, so every build was not necessarily > the same, unless I used the right commands or deleted all these extra > directories full of persistence from the last CMake or build, to write > all these weird, generated, unreadablemakefiles calling makefiles, > doing no more than I could easily do by hand in one makefile. No, my > hand-written makefiles will not be absolutely universal, or appear to > be, but they will work in a way I can predict, and that is of great > value to me. > > On 06/18/2024 05:46 PM, Nevin Liber wrote: >> On Tue, Jun 18, 2024 at 7:09 PM Luther Johnson >> > >> wrote: >> >> I agree with Greg here. In fact even if it was well done, it is >> declaring something that wasn't really a problem, to be a problem, to >> insert itself as the solution, but I think it's just extra stuff and >> steps that ultimately obfuscates and creates yet more dependencies. >> >> >> That's a really bold claim. You may not like the solution (I don't >> tend to comment on it because unlike some here, I recognize that >> build systems are a Hard Problem and I don't know how to make a >> better solution), but that doesn't mean it isn't solving real problems. >> >> But I'll bite. There was the claim by Larry McVoy that "Writing >> Makefiles isn't that hard". 
>> >> Please show these beautiful makefiles for a non-toy non-trivial >> product (say, something like gcc or llvm), which make it easy to >> change platforms, underlying compilers, works well with modern >> multicore processors, gets the dependencies right (one should never >> have to type "make clean" to get a build working correctly), etc. and >> doesn't require blindly running some 20K line shell script like >> "configure" to set it up. >> -- >> Nevin ":-)" Liber > iber at gmail.com >> > +1-847-691-1404 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luther.johnson at makerlisp.com Wed Jun 19 13:36:05 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Tue, 18 Jun 2024 20:36:05 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: I think there is a parallel to the systemd discussion here, again. Both CMake and systemd ask you to declare properties or qualities to be ingested into the abstract model of the build or init problem, that is their worldview, and then, the 'engine' will consume that and decide what to do and how to do it. Whereas init scripts and makefiles say exactly what to do when, and the abstract model of what is to be done is in the mind of the author of the build or the init process. Makefiles and init scripts are prescriptive, CMake and systemd input are descriptive. My problem with CMake has been that the abstract model that the CMake engine has in mind was not documented, to my satisfaction, or I couldn't find the answers to questions I had. The 'algorithm' was not published, so to speak, or I couldn't find it. Unless I read the CMake code and can understand it well enough to predict what it will do. 
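[Editor's note: the prescriptive/descriptive split described above can be made concrete. A makefile rule names the exact command that runs (a target line such as "prog: main.o util.o" followed by a tab-indented "cc -o prog main.o util.o"), while a CMake input only declares properties of a target and leaves command construction to the engine. A minimal sketch; the project and file names here are invented:]

```cmake
# Descriptive: states *what* the target is, not *how* to build it.
cmake_minimum_required(VERSION 3.10)
project(demo C)

# The engine derives compile commands, dependency scanning, and link
# lines from this one declaration -- and can emit them as makefiles,
# Visual Studio projects, or Xcode projects via -G <generator>.
add_executable(prog main.c util.c)
```

Nothing in the file says which compiler runs or in what order; that is exactly the "engine decides" character being criticized (and defended) in this thread.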
Maybe CMake aficionados do just that, I don't know. To me, both systemd and CMake seem much more opaque and mysterious. If I have to read the code for a tool to use it effectively, that seems wrong to me. Maybe I just haven't read the right books. Is there a 'nutshell' or similar book for CMake? These tools seem to have more complexity, and a different mission, than /etc/rc or sysvinit scripts, or make. They are designed to solve a problem that isn't a problem to me. I expect a little bit of human attention to maintenance is required, for the actual problems I face, not all possible problems, so that I could theoretically not ever know how to solve those problems, because the tool would have done that for me. If I could only learn the dark art of that tool. On 06/18/2024 08:14 PM, Luther Johnson wrote: > > To be fair, makefiles are specifications in a build-tool specific > language. But it is one language I already know, and it is one that > seems to be well-formed, translates to very definite actions on > conditions, and I get to choose those actions. I guess it works for me > if I do my part, and I can't really see what CMake does for me that I > can't do for myself. > > On 06/18/2024 08:07 PM, Luther Johnson wrote: >> >> I don't think any makefiles I've written do all of that. I guess I >> don't expect all of that in one place. So i will have some makefiles >> that are really portable, because they are very compute-bound or >> their interface to the world is something else generic, like files. >> And then for more platform-specific parts I would have different >> makefiles for different platforms. >> >> One-button, one command-build (that seems) identical for all >> platforms, is not that important to me. And yes, sometimes I write >> scripts to do the parts of a build in sequence. And I don't consider >> any of this 'hard', but I'm not trying make the builds look like they >> are the same, even if they are really quite different. 
The GNU >> ./configure, make model is one model. CMake and other makefile >> generators are another. But I have used several compilers or other >> general purpose tools that have more than one makefile or build >> script, depending on the platform, and I just take the tool for what >> it is, and use it. And when I have to debug or change something about >> the build, it's MUCH easier to work with makefiles and build scripts >> than it is to extend configure scripts, or extend a >> build-specification in a build-tool-specific language. In my >> experience, so far. But some people will get into configure and/or >> CMake or any of the others and learn how to be productive that way. >> More power to them, but I don't enjoy doing that. When I have had to >> use CMake, it seemed to require more specification on my part to >> generate all sorts of crufty state, so every build was not >> necessarily the same, unless I used the right commands or deleted all >> these extra directories full of persistence from the last CMake or >> build, to write all these weird, generated, unreadablemakefiles >> calling makefiles, doing no more than I could easily do by hand in >> one makefile. No, my hand-written makefiles will not be absolutely >> universal, or appear to be, but they will work in a way I can >> predict, and that is of great value to me. >> >> On 06/18/2024 05:46 PM, Nevin Liber wrote: >>> On Tue, Jun 18, 2024 at 7:09 PM Luther Johnson >>> > >>> wrote: >>> >>> I agree with Greg here. In fact even if it was well done, it is >>> declaring something that wasn't really a problem, to be a >>> problem, to >>> insert itself as the solution, but I think it's just extra stuff and >>> steps that ultimately obfuscates and creates yet more dependencies. >>> >>> >>> That's a really bold claim. 
You may not like the solution (I don't >>> tend to comment on it because unlike some here, I recognize that >>> build systems are a Hard Problem and I don't know how to make a >>> better solution), but that doesn't mean it isn't solving real problems. >>> >>> But I'll bite. There was the claim by Larry McVoy that "Writing >>> Makefiles isn't that hard". >>> >>> Please show these beautiful makefiles for a non-toy non-trivial >>> product (say, something like gcc or llvm), which make it easy to >>> change platforms, underlying compilers, works well with modern >>> multicore processors, gets the dependencies right (one should never >>> have to type "make clean" to get a build working correctly), etc. >>> and doesn't require blindly running some 20K line shell script like >>> "configure" to set it up. >>> -- >>> Nevin ":-)" Liber >> iber at gmail.com >>> > +1-847-691-1404 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Wed Jun 19 16:50:53 2024 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 19 Jun 2024 00:50:53 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: <202406190650.45J6orc7902066@freefriends.org> Luther Johnson wrote: > and I can't really see what CMake does for me that I > can't do for myself. I suspect that it's biggest advantage is that the same (set of) CMake input files can produce Makefiles, config files for Visual Studio, and also for Apple's IDE / build system. I also don't like it; it mixes system configuration with build dependencies and is ugly and hard to learn. But that's a separate issue. 
Arnold From arnold at skeeve.com Wed Jun 19 16:55:46 2024 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 19 Jun 2024 00:55:46 -0600 Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: References: Message-ID: <202406190655.45J6tkVP902384@freefriends.org> Dave Horsfall wrote: > On Tue, 18 Jun 2024, segaloco via TUHS wrote: > > [...] > > > The story I've gotten is AT&T policy was to release odd-numbered > > versions, so PWB 1.0, System III, and System V made it out into the > > world, PWB 2.0 and Release 4.0 stayed in the labs. In the most > > technical sense, System IV never existed, what could've become it > > remained a Bell System-only issue. > > Thanks; I'd forgotten about AT&T's "odd-only" policy. > > -- Dave In 1982 I did some contract C programming on Unix 4.0 on a PDP 11/70 at Southern Bell. At the time, C programmers were not so common. The "odd only" policy may be true, but it's not what I was told; I was told that the policy was to release externally one version behind what was being run internally. With the consent decree done and Divestiture in the works, AT&T was going to be allowed to get into the computer business. So at some point, someone decided that for System V, the current system would be released externally. I doubt we'll ever know the exact truth. Interestingly, there was no printed reference manual for Unix 4.0; I was given a 3.0 manual. The documents for Unix were for 4.0; these were the equivalent of the Volume 2 doc in the research releases. It seems that the major changes in 4.0 were kernel improvements. 
Arnold From ralph at inputplus.co.uk Wed Jun 19 19:00:00 2024 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Wed, 19 Jun 2024 10:00:00 +0100 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: <20240619090000.8732E22057@orac.inputplus.co.uk> Hi, Luther Johnson: > And when I have to debug or change something about the build, it's > MUCH easier to work with makefiles and build scripts than it is to > extend configure scripts, or extend a build-specification in > a build-tool-specific language. In my experience, so far. Eric Raymond recently started autodafe which chews over autotools' inputs and outputs to simplify its use or help move away from it to an editable makefile. https://gitlab.com/esr/autodafe/-/blob/master/README.adoc -- Cheers, Ralph. From sjenkin at canb.auug.org.au Wed Jun 19 21:28:16 2024 From: sjenkin at canb.auug.org.au (sjenkin at canb.auug.org.au) Date: Wed, 19 Jun 2024 21:28:16 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <202406190650.45J6orc7902066@freefriends.org> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <202406190650.45J6orc7902066@freefriends.org> Message-ID: Not responding to this email, apologies to Luther and Arnold. I’ve posted on COFF a related note on Unix Philosophy. Would appreciate comments & corrections. > On 19 Jun 2024, at 16:50, arnold at skeeve.com wrote: > > Luther Johnson wrote: > >> and I can't really see what CMake does for me that I >> can't do for myself. 
> > I suspect that it's biggest advantage is that the same (set of) > CMake input files can produce Makefiles, config files for Visual > Studio, and also for Apple's IDE / build system. > > I also don't like it; it mixes system configuration with build > dependencies and is ugly and hard to learn. But that's a separate > issue. > > Arnold -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Wed Jun 19 23:28:46 2024 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 19 Jun 2024 06:28:46 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> Message-ID: <20240619132846.GR32048@mcvoy.com> On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > But I'll bite. There was the claim by Larry McVoy that "Writing Makefiles > isn't that hard". > > Please show these beautiful makefiles for a non-toy non-trivial product Works on *BSD, MacOS, Windows, Linux on a bunch of different architectures, Solaris, HPUX, AIX, IRIX, Tru64, etc. # Copyright 1999-2016 BitMover, Inc # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Makefile for BitKeeper. 
# Bitmover makefiles try to provide the following targets: # # all build everything under the current directory # # clean remove all objects and programs # # clobber run clean plus 'bk -r. clean' # # srcs bk get all sources in current directory # # tags build ctags for all srcs (only needed in this (top) makefile) # # tags.local build ctags for srcs under current directory relative to top # #--- # Special make variables commonly used this makefile: # $@ target # $^ all sources # $< first source INSTALLED_BK ?= $(shell bash -c "cd / && command -v bk") INREPO ?= $(shell bash -c "test -d ../.bk && echo true || echo false") HERE := $(shell pwd) ROOT := $(shell dirname $(HERE)) REPO := $(notdir $(ROOT)) URL := $(shell echo bk://work/$(ROOT) | sed s,/home/bk/,,) LOG = $(shell echo LOG-`bk getuser`) OSTYPE := $(shell bash -c 'echo $$OSTYPE') include conf.mk ## Which hosts are used for producing nightly builds NIGHTLY_HOSTS := macos106 win7-vm debian40 debian40-64 ifeq "$(OSTYPE)" "msys" SYS=win32 EXE=.exe XTRA=win32 ifeq (,$(INSTALLED_BK)) # BINDIR should really be :C:/Program Files/BitKeeper # The shell can not handle space in pathname, so # we use the short name here BINDIR := "C:/PROGRA~1/BITKEE~1" else BINDIR := $(shell bk pwd -s "`bk _registry get 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion' ProgramFilesDir`/BitKeeper") endif INSTALL=installdir RESOURCE=bkres.o UWT_C=$(patsubst %,win32/uwtlib/%.c, wapi_intf wcrt_intf) BKGUI=bkg$(EXE) BKG_O=bkg.o else SYS=unix EXE= # You can set this to anywhere you like and do a # build production" and you'll have an installed BitKeeper. ifeq (,$(INSTALLED_BK)) BINDIR := /usr/local/bitkeeper else BINDIR := $(shell "$(INSTALLED_BK)" bin) endif INSTALL=install RESOURCE= endif # By default, we don't print verbose output. 
If you want to see # the full compiler command line, use 'make V=1' # The trick is to do "$(Q)$(CC)" instead of just "$(CC)" so that if # Q is not set, it's just "$(CC)" and if Q is set to @ it becomes # a quiet "@$(CC)". # For the verbose messages, gmake provides # $(if $(Q),,) # so we just conditionalize on Q. Empty is false. ifndef V Q=@ export Q endif BK=./bk$(EXE) G =-g TRIAL =0 IMGDIR =$(HERE)/tmp/bitkeeper # Handle warning arguments in GCC # # -Wall enables a bunch of warnings by default # -Wno-parentheses shuts up "suggest parentheses around assignment ...". # Unfortunately it also turns off dangling else warnings. # -Wno-char-subscripts shuts up "subscript has type char", which comes # up all the time with broken implementations. # (renabled in GCC3 since it supresses warnings in system files by default) # -Wno-format-y2k supresses complains about '%y' in strftime formats # -Wstrict-prototypes Don't allow non-ansi function declarations WARNINGS=-Wall -Wno-parentheses -Wno-char-subscripts -Wno-format-y2k \ -Wstrict-prototypes # Warnings enabled with GCC newer than 3.0 # # -Wredundant-decls Declaring same function twice # -Wmissing-declarations Functions without a prototype WARNINGS_GCC3=-Wchar-subscripts -Wredundant-decls -Wmissing-declarations # Warnings enabled with GCC newer than 4.0 # # -Wextra enable a bunch of random things (called -Wextra in newer gccs) # -Wno-pointer-sign Suppress warnings about changing the signs of pointers # -Wno-sign-compare Suppress warnings about comparing signed and unsigned vars # -Wno-unsed-parameter Support warnings about function parameters that are # no used # -Wno-missing-field-initializers # -Wdeclaration-after-statement Warn if someone does a C++ thing of declaring # a variable in the middle of a block WARNINGS_GCC4=-Wextra -Wno-pointer-sign -Wno-sign-compare \ -Wno-unused-parameter -Wno-missing-field-initializers \ -Wdeclaration-after-statement -Wpointer-arith # Warnings enabled with GCC newer than 5.0 # # 
-Wno-unusedr-esult Do not warn if a caller ignores return value WARNINGS_GCC5=-Wno-unused-result WARNINGS_GCC6= -Wno-misleading-indentation # XXX could not get -Wimplicit-fallthrough=3 to work WARNINGS_GCC7= -Wno-implicit-fallthrough # Other options to consider enabling in the future: # # -Wnested-externs Prototypes declared in a function # -Wwrite-string warn in string constant is passed to a char * # -Wmissing-prototypes # -Wunused-parameter # -Wold-style-definition Would be nice, but zlib falls all over here GCC_MAJOR_REV=$(shell $(CC) -dumpversion | sed 's/\..*//') GCC_MINOR_REV=$(shell $(CC) -dumpversion | sed 's/.*\.//') ifeq ($(GCC_MAJOR_REV),3) WARNINGS += $(WARNINGS_GCC3) endif ifeq ($(GCC_MAJOR_REV),4) WARNINGS += $(WARNINGS_GCC3) $(WARNINGS_GCC4) ifeq ($(shell expr $(GCC_MINOR_REV) \> 5), 1) WARNINGS += -Wno-unused-result endif endif ifeq ($(GCC_MAJOR_REV),5) WARNINGS += $(WARNINGS_GCC3) $(WARNINGS_GCC4) $(WARNINGS_GCC5) endif ifeq ($(GCC_MAJOR_REV),6) WARNINGS += $(WARNINGS_GCC3) $(WARNINGS_GCC4) $(WARNINGS_GCC5) \ $(WARNINGS_GCC6) endif ifeq ($(GCC_MAJOR_REV),7) WARNINGS += $(WARNINGS_GCC3) $(WARNINGS_GCC4) $(WARNINGS_GCC5) \ $(WARNINGS_GCC6) $(WARNINGS_GCC7) endif ifeq ($(GCC_MAJOR_REV),8) WARNINGS += $(WARNINGS_GCC3) $(WARNINGS_GCC4) $(WARNINGS_GCC5) \ $(WARNINGS_GCC6) $(WARNINGS_GCC7) $(WARNINGS_GCC8) endif TRACE = -DUSE_TRACE ifeq ($(shell uname -s), Darwin) XLIBS += -lresolv G += -DNOPROC endif ifeq (clang, $(findstring clang, $(shell $(CC) --version))) WARNINGS += -Wno-unused-value -Wno-empty-body -Wno-self-assign endif GCCOPTS= CC_DEBUG=$(GCCOPTS) $G $(WARNINGS) $(TRACE) CC_FAST_DEBUG=$(GCCOPTS) $G -O2 $(WARNINGS) $(TRACE) CC_FAST =$(CC_FAST_DEBUG) CC_WALL=$(GCCOPTS) $G -DLINT $(WARNINGS) $(TRACE) BINS = $(BK) $(BKGUI) # List of all objects in bk other than bk.o. Keep it sorted. # But put bkver.o/cmd.o first, they generate headers. 
OBJ = bkver.o cmd.o \ abort.o adler32.o alias.o admin.o annotate.o attributes.o \ bam.o bisect.o bkd.o bkd_bam.o bkd_cd.o \ bkd_changes.o bkd_client.o bkd_clone.o bkd_cmdtab.o \ bkd_findkey.o bkd_http.o \ bkd_id.o bkd_kill.o bkd_level.o bkd_misc.o bkd_nested.o \ bkd_partition.o bkd_pull.o bkd_push.o bkd_pwd.o \ bkd_r2c.o \ bkd_rclone.o bkd_rootkey.o bkd_status.o bkd_synckeys.o bkd_version.o \ bkverinfo.o \ cat.o cfile.o changes.o config.o \ check.o checksum.o clean.o cleanpath.o clone.o \ cmdlog.o \ collapse.o comment.o comments.o commit.o comps.o compress.o \ contrib/cat.o \ contrib/test.o \ converge.o \ cp.o \ crypto.o \ cset.o cset_inex.o csetprune.o csets.o cweave.o \ dataheap.o dbfile.o delta.o diff.o dspec.o \ export.o \ fast-import.o fast-export.o features.o findmerge.o \ find.o findcset.o fixtool.o fsl.o fslayer.o \ g2bk.o gca.o get.o gethelp.o \ gethost.o gettemp.o getuser.o gfiles.o glob.o \ gnupatch.o graft.o grep.o \ hash_nokey.o \ heapdump.o help.o here.o here_check.o hostme.o http.o \ idcache.o isascii.o info.o \ key2rev.o key2path.o kill.o kv.o \ libcommit.o libdiff.o libgraph.o librange.o \ libsfiles.o lines.o \ localtm.o lock.o locking.o \ mail.o merge.o mklock.o \ mailslot.o \ mtime.o mv.o names.o ndiff.o nested.o newroot.o \ opark.o \ parent.o park.o partition.o \ patch.o \ pending.o preference.o proj.o \ poly.o \ populate.o \ port/bkd_server.o \ port/check_rsh.o \ port/gethomedir.o \ port/gethost.o port/getinput.o \ port/getrealname.o port/getrusage.o port/globalroot.o port/gui.o \ port/hostColonPath.o port/http_proxy.o \ port/mail.o port/mnext.o port/networkfs.o \ port/notifier.o port/ns_sock_host2ip.o port/platforminit.o \ port/sccs_getuser.o port/sccs_lockfile.o \ port/startmenu.o \ port/svcinfo.o \ port/uninstall.o \ progress.o \ prs.o pull.o push.o pwd.o \ randombits.o randseed.o range.o rcheck.o rclone.o \ rcs2bk.o rcsparse.o \ receive.o redblack.o regex.o registry.o renumber.o \ remap.o remote.o \ repo.o repos.o repogca.o repostats.o 
repotype.o \ resolve.o resolve_binaries.o resolve_contents.o \ resolve_create.o resolve_filetypes.o \ resolve_flags.o resolve_generic.o resolve_modes.o \ resolve_renames.o resolve_tags.o restore.o review.o \ rm.o rmdel.o rmgone.o \ root.o rset.o sane.o scat.o sccs.o sccs2bk.o \ sccslog.o sccs_mv.o search.o sec2hms.o send.o sendbug.o \ set.o setup.o sfio.o shrink.o sinfo.o \ slib.o smerge.o sort.o startmenu.o \ stat.o stattest.o status.o stripdel.o synckeys.o \ tagmerge.o testcode.o tclsh.o takepatch.o \ testdates.o time.o timestamp.o touch.o trigger.o \ unbk.o undo.o undos.o unedit.o \ unique.o uninstall.o unlink.o unlock.o unpull.o unrm.o unwrap.o upgrade.o \ urlinfo.o \ utils.o uu.o what.o which.o \ xfile.o xflags.o \ zone.o SCRIPTS = bk.script import \ uuwrap unuuwrap gzip_uuwrap ungzip_uuwrap \ b64wrap unb64wrap gzip_b64wrap ungzip_b64wrap PSCR = t/doit t/guitest PROGS = libc/mtst$(EXE) LIBS = libc/libc.a DATA = bkmsg.txt bkhelp.txt version \ ../doc/bk_refcard.ps ../doc/bk_refcard.pdf ../RELEASE-NOTES.md \ dspec-changes dspec-changes-3.2 dspec-changes-4.0 dspec-changes-h \ dspec-changes-hv dspec-changes-json dspec-changes-json-v \ dspec-changes-vv dspec-log dspec-prs CONTRIB = gui/ide/emacs/vc-bk.el contrib/git2bk.l ALL = PCRE $(LIBS) $(BINS) $(SCRIPTS) $(PSCR) $(XTRA) \ $(PROGS) L-clean GUI L-doc $(DATA) CFLAGS = $(CC_DEBUG) export CFLAGS CPPFLAGS= -Ilibc $(TOMCRYPT_CPPFLAGS) $(TOMMATH_CPPFLAGS) \ $(PCRE_CPPFLAGS) $(LZ4_CPPFLAGS) $(ZLIB_CPPFLAGS) # Override this if you don't have it. 
RANLIB = ranlib # list of C sources in bk SRCS = bk.c $(OBJ:.o=.c) # list of headers in bk HDRS = bam.h bkd.h bk-features.h config.h configvars.def diff.h fsfuncs.h \ graph.h nested.h \ progress.h range.h rcs.h resolve.h sccs.h \ cmd.h poly.h proj.h redblack.h libc/system.h xfile.h # list of non-C sources in bk SCRSRCS = bk.sh import.sh kwextract.pl uuwrap.sh unuuwrap.sh \ port/unix_platform.sh port/win32_platform.sh \ gzip_uuwrap.sh ungzip_uuwrap.sh \ substvars.sh b64wrap.sh gzip_b64wrap.sh \ unb64wrap.sh ungzip_b64wrap.sh MISC = bkmsg.doc t/doit.sh default: $(MAKE) p SUBDIRS = libc $(shell ls -d tomcrypt tommath 2>/dev/null) all: $(ALL) prof: $(MAKE) CFLAGS="$G -pg -O2" LDFLAGS=-pg all gprof: $(MAKE) CFLAGS="$G -DPROFILE -pg -O2" LDFLAGS=-pg all ggprof: $(MAKE) CFLAGS="$G -DPROFILE -pg" LDFLAGS=-pg all # Debugging... d: $(MAKE) CFLAGS="$G -DDEBUG" all debug: $(MAKE) CFLAGS="$G -DDEBUG" all debug2: $(MAKE) CFLAGS="$G -DDEBUG2" all gWall Wall: $(MAKE) CFLAGS="$(CC_WALL)" all # production builds p: ## Build a production version of BitKeeper (no -g) $(MAKE) CFLAGS="$(CC_FAST) $(CF)" all trial: $(MAKE) TRIAL="3*WEEK" CFLAGS="$(CC_FAST) $(CF)" all trial3M: $(MAKE) TRIAL="3*MONTH" CFLAGS="$(CC_FAST) $(CF)" all g: ## Build a debug version of BitKeeper (-g) $(MAKE) CFLAGS="$(CC_DEBUG)" all gO: $(MAKE) CFLAGS="$(CC_FAST_DEBUG)" all gcov: $(MAKE) CFLAGS="$(CC_DEBUG) -fprofile-arcs -ftest-coverage" all clean: L-clean FORCE ## Remove object files and executables $(if $(Q),@echo Cleaning up,) $(Q)for sub in $(SUBDIRS) ../doc ../man gui utils win32 t t/win32; \ do $(MAKE) -C$$sub "CFLAGS=$(CFLAGS)" $@; \ done $(Q)$(RM) $(OBJ) bk.o $(BKG_O) $(BINS) $(SCRIPTS) \ $(PSRC) $(PROGS) $(Q)$(RM) tags TAGS tags.local cscope.out substvars a.out cmd.c cmd.h \ core *.bb *.bbg *.da *.gcov \ bk.ico \ bkmsg.txt bkhelp.txt bkver.c version \ t/doit t/guitest kw2val_lookup.c bkres.o svcmgr.exe \ conf.mk $(Q)$(RM) -r tmp ifeq "$(OSTYPE)" "msys" $(Q)$(RM) -rf gnu/bin gnu/doc gnu/etc gnu/share 
$(Q)$(RM) -f gnu/m.ico gnu/msys.bat gnu/msys.ico $(Q)-rmdir gnu/tmp $(Q)-rmdir gnu endif ifeq (true,$(INREPO)) ifneq (,$(INSTALLED_BK)) $(Q)EXTRALIST=`"$(INSTALLED_BK)" -Aax | \ grep -v '~$$\|conf-.*\.mk$$'` ; \ if [ "$$EXTRALIST" ]; then \ echo "Clean left behind the following files:" ; \ for file in $$EXTRALIST; do \ echo " $$file" ; \ done ; \ else \ echo Clean complete ; \ fi endif endif clobber: clean FORCE ## Same as 'clean' but also bk clean files -@$(BK) -A clean # XXX subdirs? (see tags) wc: $(HDRS) $(SRCS) $(SCRSRCS) $(MISC) wc -l $(SRCS) $(HDRS) $(SCRSRCS) $(MISC) get-e: FORCE -@$(BK) edit -qT `echo $(HDRS) $(SRCS) $(SCRSRCS) $(MISC) | fmt -1|sort -u` $(Q)$(MAKE) tags srcs: $(SRCS) $(HDRS) FORCE $(Q)for sub in $(SUBDIRS); do $(BK) -r$$sub co -q; done tags: $(patsubst %,%/tags.local, $(SUBDIRS)) tags.local @if [ -x $(BK) ]; \ then $(BK) get -Sq tags.skippats; \ $(BK) _sort -u $^ | grep -v -ftags.skippats > $@; \ else \ bk get -Sq tags.skippats; \ bk _sort -u $^ | grep -v -ftags.skippats > $@; \ fi @echo ctags completed tags.local: $(SRCS) $(HDRS) @ctags -f $@ --file-tags=yes --c-types=d+f+s+t $^ %/tags.local: FORCE $(Q)$(MAKE) -C $(dir $@) tags.local ssh sshtest: $(MAKE) realtest rsh rshtest: PREFER_RSH=YES $(MAKE) realtest test tests: DO_REMOTE=NO $(MAKE) -C t nonet nonet_test localtest: BK_NONET=YES PREFER_RSH=YES $(MAKE) realtest realtest: $(ALL) t/doit -cd gui/tcltk && $(MAKE) clobber -$(BK) get -qS t/setup t/win32/win32_common $(BK) -rt get -qTS 't.*' cd t && ./doit -f 5 guitest: $(ALL) t/doit -$(BK) get -qS t/SCCS/s.g.* t/setup t/win32/win32_common t/guitest.tcl cd t && ./doit -g -i t/doit: t/doit.sh substvars ./substvars t/doit.sh > t/doit chmod +x t/doit t/guitest: t/guitest.tcl cat < t/guitest.tcl > t/guitest .PHONY: FORCE FORCE: win32: FORCE cd win32 && $(MAKE) BINDIR=$(BINDIR) cd t/win32 && $(MAKE) # build libraries in sub directories %.a: FORCE $(Q)$(MAKE) -C $(dir $@) $(notdir $@) libc/mtst$(EXE): libc/libc.a FORCE $(Q)$(MAKE) -C libc 
mtst$(EXE) bkres.o: win32/data/bk.rc bk.ico windres -i win32/data/bk.rc -o bkres.o bk.ico: win32/data/bk.ico @cp -f win32/data/bk.ico . ifneq ($(TOMCRYPT_SYSTEM),1) # add dependency on building libraries first $(BK): $(TOMCRYPT_LDFLAGS) endif ifneq ($(TOMMATH_SYSTEM),1) # add dependency on building libraries first $(BK): $(TOMMATH_LDFLAGS) endif $(BK): $(LIBS) bk.o $(RESOURCE) $(OBJ) $(if $(Q),@echo LINKING $(BK),) $(Q)$(LD) $(LDFLAGS) -o $@ bk.o $(OBJ) $(RESOURCE) $(LIBS) \ $(TOMCRYPT_LDFLAGS) $(TOMMATH_LDFLAGS) \ $(PCRE_LDFLAGS) $(LZ4_LDFLAGS) $(ZLIB_LDFLAGS) $(XLIBS) # Windows only rule, BKGUI should be blank on other platforms $(BKGUI): bkg.o $(RESOURCE) $(if $(Q),@echo LINKING $(BKGUI),) $(Q)$(LD) $(LDFLAGS) -o $@ bkg.o $(RESOURCE) -Llibc -lc -mwindows $(XLIBS) bk.script: bk.sh port/$(SYS)_platform.sh cat port/$(SYS)_platform.sh bk.sh > bk.script chmod +x bk.script bkmsg.txt: bkmsg.doc cp -f $< $@ L-clean: FORCE @rm -f gui/share/doc/L/little.man ../man/man1/bk-little.1 @rm -f ../man/man2help/bk-little-1.fmt # has to run before bkhelp.txt but after GUI L-doc L-docs: GUI FORCE @test -f gui/share/doc/L/little.man || { \ echo Failed to build gui/share/doc/L/little.man; \ exit 1; \ } @if [ -s gui/share/doc/L/little.man ]; \ then cp gui/share/doc/L/little.man ../man/man1/bk-little.1; \ else cp ../man/man1/bk-little.1.pfmt ../man/man1/bk-little.1; \ fi; \ chmod +w ../man/man1/bk-little.1 bkhelp.txt: $(BK) version L-docs FORCE @rm -f ../man/man2help/bk-little.fmt @cd ../man/man2help && $(MAKE) BK=$(HERE)/bk$(EXE) helptxt @cp ../man/man2help/helptxt bkhelp.txt @rm -f ../man/man1/bk-little.1 html-docs: bkhelp.txt @cd ../man/man2html && $(MAKE) ../doc/bk_refcard.ps: $(BK) FORCE $(Q)echo building $@ $(Q)-$(BK) -r../doc co -qS $(Q)$(MAKE) -C ../doc BK=$(HERE)/bk$(EXE) all ../doc/bk_refcard.pdf: ../doc/bk_refcard.ps # This must be rebuilt every time because it includes the build time bkver.c: utils/bk_version FORCE $(if $(Q),@echo Building $@,) $(Q)echo "#include 
\"sccs.h\"" > bk.v $(Q)echo "char *bk_platform = \""`./utils/bk_version`"\";" >> bk.v $(Q)echo "int test_release = "$(TRIAL)";" >> bk.v $(Q)echo "time_t bk_build_timet = "`perl -e "print time"`";" >> bk.v $(Q)echo "char *bk_build_dir = \""`pwd`"\";" >> bk.v $(Q)mv -f bk.v bkver.c version: version.sh $(BK) utils/bk_version GUI FORCE bash version.sh > $@ %: %.sh $(if $(Q),@echo Building $@,) $(Q)$(RM) $@ $(Q)cp $< $@ $(Q)chmod +x $@ %: %.l $(if $(Q),@echo Not lexing $@,) import: import.sh port/$(SYS)_platform.sh cat port/$(SYS)_platform.sh import.sh > import.T chmod +x import.T mv -f import.T import # Quick and dirty target so we can make all the gui tools without the rest .PHONY: GUI GUI: PCRE $(BK) @$(MAKE) -Cgui BK=$(HERE)/bk$(EXE) gui install: installdir tmp/bitkeeper/bk _install -d -f $(DESTDIR)$(BINDIR) @echo BitKeeper is installed in $(BINDIR) @echo We suggest you run: @echo @echo sudo $(BINDIR)/bk links /usr/local/bin @echo @echo to create the bk symlink. installdir: utils/registry.tcl rm -rf $(IMGDIR) || exit 1 mkdir -p $(IMGDIR)/contrib mkdir -p $(IMGDIR)/lscripts -$(BK) -rwww get -S -cp -f -r www $(IMGDIR) -$(BK) get -S $(CONTRIB) tar cf - $(BINS) $(SCRIPTS) lscripts gui/bin gui/lib gui/images \ | (cd $(IMGDIR) && tar xf -) cp -f $(DATA) $(IMGDIR) cp -f $(CONTRIB) $(IMGDIR)/contrib (cd ../doc/nested && $(MAKE) install HTML=$(IMGDIR)/html) if [ $(SYS) = unix ]; \ then $(BK) get -S ../man/Makefile; \ cd ../man && $(MAKE) install BINDIR=$(IMGDIR) ;\ else \ (cd win32 && $(MAKE) BINDIR=$(IMGDIR) install); \ cp utils/registry.tcl $(IMGDIR)/gui/lib; \ fi cd $(IMGDIR); \ find . 
-type l | \ perl -ne 'chomp; $$a = readlink; print "$$a|$$_\n";'>symlinks; \ test -s symlinks || rm -f symlinks @true image: ## Build the installer (left in src/utils/bk-*) $(MAKE) p $(MAKE) _image _image: $(MAKE) installdir ${MAKE} -Cutils BINDIR=$(IMGDIR) "CC=$(CC)" "BK=$(HERE)/bk$(EXE)" "CFLAGS=$(CFLAGS)" image crankturn: crank.sh remote.sh ## Run a clean-build + regressions in cluster REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh cranksave: crank.sh remote.sh ## Run a crankturn but save the built images REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh save crankstatus: crank.sh remote.sh ## See how the crank is going REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh status crankrelease nightly: $(BK) crank.sh remote.sh ## Do a BitKeeper release (or nightly build) @(TAG=$(shell $(BK) changes -r+ -d:TAG:) ; \ test x$$TAG = x && { \ echo Cannot crankrelease with a non-tagged tip ; \ exit 1 ; \ } ; \ case $@ in \ crankrelease ) \ TYPE=release; DIR=/home/bk/images/$$TAG; \ ;; \ nightly ) \ TYPE=nightly; DIR=/home/bk/images/nightly; \ HOSTS="$(NIGHTLY_HOSTS)" ; \ ;; \ esac ; \ test -d $$DIR || mkdir -p $$DIR ; \ REPO=$(REPO) URL=$(URL) HOSTS=$$HOSTS REMOTE=remote.sh \ LOG=$(LOG) bash crank.sh $$TYPE ; \ $(BK) -R get -qS ../RELEASE-NOTES.md ; \ cp ../RELEASE-NOTES.md $$DIR ; \ SAVED_WD=$(shell pwd) ; \ cd $$DIR && chmod +rx bk-* >/dev/null 2>&1 ; \ rm -f MD5SUMS ; \ md5sum bk-* >> MD5SUMS ; \ echo "Your images are in $$DIR" ; \ case $@ in \ crankrelease ) \ echo "Run './mkrelease $$TAG' to release this version of bk."; \ ;; \ nightly ) \ # cd $$SAVED_WD ; \ # ./mkupgrades --nightly $$TAG ; \ ;; \ esac) crankclean: crank.sh remote.sh REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh clean # This target assumes a bk repository .PHONY: src-tar src-tar: $(BK) version ## build tar.gz image for the current tree ifeq (false,$(INREPO)) $(error This target only works in a BK source repository) else ./bk here 
add default TCLTK $(Q)-mkdir -p tmp/src $(Q)(DIR=bk-$(shell $(BK) version -s) ; \ TAR="$$DIR".tar.gz ; \ echo "Creating $$TAR in tmp/src..." ; \ cd tmp/src ; \ rm -rf "$$DIR" ; \ ../../bk export -tplain -kwr+ -sdefault -sTCLTK "$$DIR" ; \ cat ../../version > "$$DIR/src/bkvers.txt" ; \ tar -czf "$$TAR" "$$DIR" ; \ rm -rf "$$DIR" ; \ echo Done ; \ ) endif # only depend on conf.mk.local if it exists conf.mk: mkconf.sh $(wildcard conf.mk.local) sh mkconf.sh > $@ || { $(RM) $@; false; } %.o: %.c $(if $(Q),@echo CC $<,) $(Q)$(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $@ port/startmenu.o: port/startmenu.c $(HDRS) $(if $(Q),@echo CC $<,) $(Q)$(CC) $(CFLAGS) -fno-strict-aliasing $(CPPFLAGS) -c $< -o $@ depend: $(SRCS) $(CC) -MM -MG -D_DEPEND $(SRCS) > depends # for system.h we need to actually run libc's makefile because it includes # calculated header files libc/system.h: FORCE $(MAKE) -C libc system.h libc/libc.a: libc/system.h sccs.h: PCRE .PHONY: PCRE PCRE: ifneq ($(PCRE_SYSTEM),1) $(MAKE) -Cgui/tcltk pcre endif $(OBJ) bk.o: $(HDRS) cmd.c cmd.h: cmd.pl bk.sh $(filter bkd_%,$(SRCS)) $(if $(Q),@echo Building $@,) $(Q)perl cmd.pl || (rm -f cmd.c cmd.h; exit 1) # This parses slib.c and extracts the meta-data keywords expanded # by kw2val() and passes them to gperf to generate hash lookup code. 
slib.o: kw2val_lookup.c kw2val_lookup.c: slib.c kw2val.pl $(if $(Q),@echo Building $@,) $(Q)perl kw2val.pl slib.c || (rm -f kw2val_lookup.c; exit 1) check-syntax: $(CC) $(CFLAGS) $(CPPFLAGS) -c -S ${CHK_SOURCES} -o /dev/null # print a make variable 'make print-REPO' # http://www.cmcrossroads.com/article/printing-value-makefile-variable print-%: @echo $* = \"$($*)\" .PHONY: help help: @grep -E -h '^[-a-zA-Z_\ ]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' @echo Suggested: make -j12 image From imp at bsdimp.com Thu Jun 20 00:44:14 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 19 Jun 2024 08:44:14 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619132846.GR32048@mcvoy.com> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> Message-ID: On Wed, Jun 19, 2024, 7:28 AM Larry McVoy wrote: > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > But I'll bite. There was the claim by Larry McVoy that "Writing > Makefiles > > isn't that hard". > > > > Please show these beautiful makefiles for a non-toy non-trivial product > > Works on *BSD, MacOS, Windows, Linux on a bunch of different architectures, > Solaris, HPUX, AIX, IRIX, Tru64, etc. > The posted Makefile is not a strictly conforming POSIX Makefile, but uses gmake extensions extensively... And eyes of the beholder may vary... Warner 
t/doit t/guitest kw2val_lookup.c bkres.o svcmgr.exe \ > conf.mk > $(Q)$(RM) -r tmp > ifeq "$(OSTYPE)" "msys" > $(Q)$(RM) -rf gnu/bin gnu/doc gnu/etc gnu/share > $(Q)$(RM) -f gnu/m.ico gnu/msys.bat gnu/msys.ico > $(Q)-rmdir gnu/tmp > $(Q)-rmdir gnu > endif > ifeq (true,$(INREPO)) > ifneq (,$(INSTALLED_BK)) > $(Q)EXTRALIST=`"$(INSTALLED_BK)" -Aax | \ > grep -v '~$$\|conf-.*\.mk$$'` ; \ > if [ "$$EXTRALIST" ]; then \ > echo "Clean left behind the following files:" ; \ > for file in $$EXTRALIST; do \ > echo " $$file" ; \ > done ; \ > else \ > echo Clean complete ; \ > fi > endif > endif > > clobber: clean FORCE ## Same as 'clean' but also bk clean files > -@$(BK) -A clean > > # XXX subdirs? (see tags) > wc: $(HDRS) $(SRCS) $(SCRSRCS) $(MISC) > wc -l $(SRCS) $(HDRS) $(SCRSRCS) $(MISC) > > get-e: FORCE > -@$(BK) edit -qT `echo $(HDRS) $(SRCS) $(SCRSRCS) $(MISC) | fmt > -1|sort -u` > $(Q)$(MAKE) tags > > srcs: $(SRCS) $(HDRS) FORCE > $(Q)for sub in $(SUBDIRS); do $(BK) -r$$sub co -q; done > > tags: $(patsubst %,%/tags.local, $(SUBDIRS)) tags.local > @if [ -x $(BK) ]; \ > then $(BK) get -Sq tags.skippats; \ > $(BK) _sort -u $^ | grep -v -ftags.skippats > $@; \ > else \ > bk get -Sq tags.skippats; \ > bk _sort -u $^ | grep -v -ftags.skippats > $@; \ > fi > @echo ctags completed > > tags.local: $(SRCS) $(HDRS) > @ctags -f $@ --file-tags=yes --c-types=d+f+s+t $^ > > %/tags.local: FORCE > $(Q)$(MAKE) -C $(dir $@) tags.local > > ssh sshtest: > $(MAKE) realtest > > rsh rshtest: > PREFER_RSH=YES $(MAKE) realtest > > test tests: > DO_REMOTE=NO $(MAKE) -C t > > nonet nonet_test localtest: > BK_NONET=YES PREFER_RSH=YES $(MAKE) realtest > > realtest: $(ALL) t/doit > -cd gui/tcltk && $(MAKE) clobber > -$(BK) get -qS t/setup t/win32/win32_common > $(BK) -rt get -qTS 't.*' > cd t && ./doit -f 5 > > guitest: $(ALL) t/doit > -$(BK) get -qS t/SCCS/s.g.* t/setup t/win32/win32_common > t/guitest.tcl > cd t && ./doit -g -i > > t/doit: t/doit.sh substvars > ./substvars t/doit.sh > t/doit > 
chmod +x t/doit > > t/guitest: t/guitest.tcl > cat < t/guitest.tcl > t/guitest > > .PHONY: FORCE > FORCE: > > win32: FORCE > cd win32 && $(MAKE) BINDIR=$(BINDIR) > cd t/win32 && $(MAKE) > > # build libraries in sub directories > %.a: FORCE > $(Q)$(MAKE) -C $(dir $@) $(notdir $@) > > libc/mtst$(EXE): libc/libc.a FORCE > $(Q)$(MAKE) -C libc mtst$(EXE) > > bkres.o: win32/data/bk.rc bk.ico > windres -i win32/data/bk.rc -o bkres.o > > bk.ico: win32/data/bk.ico > @cp -f win32/data/bk.ico . > > ifneq ($(TOMCRYPT_SYSTEM),1) > # add dependency on building libraries first > $(BK): $(TOMCRYPT_LDFLAGS) > endif > ifneq ($(TOMMATH_SYSTEM),1) > # add dependency on building libraries first > $(BK): $(TOMMATH_LDFLAGS) > endif > > $(BK): $(LIBS) bk.o $(RESOURCE) $(OBJ) > $(if $(Q), at echo LINKING $(BK),) > $(Q)$(LD) $(LDFLAGS) -o $@ bk.o $(OBJ) $(RESOURCE) $(LIBS) \ > $(TOMCRYPT_LDFLAGS) $(TOMMATH_LDFLAGS) \ > $(PCRE_LDFLAGS) $(LZ4_LDFLAGS) $(ZLIB_LDFLAGS) $(XLIBS) > > # Windows only rule, BKGUI should be blank on other platforms > $(BKGUI): bkg.o $(RESOURCE) > $(if $(Q), at echo LINKING $(BKGUI),) > $(Q)$(LD) $(LDFLAGS) -o $@ bkg.o $(RESOURCE) -Llibc -lc -mwindows > $(XLIBS) > > bk.script: bk.sh port/$(SYS)_platform.sh > cat port/$(SYS)_platform.sh bk.sh > bk.script > chmod +x bk.script > > bkmsg.txt: bkmsg.doc > cp -f $< $@ > > L-clean: FORCE > @rm -f gui/share/doc/L/little.man ../man/man1/bk-little.1 > @rm -f ../man/man2help/bk-little-1.fmt > > # has to run before bkhelp.txt but after GUI > L-doc L-docs: GUI FORCE > @test -f gui/share/doc/L/little.man || { \ > echo Failed to build gui/share/doc/L/little.man; \ > exit 1; \ > } > @if [ -s gui/share/doc/L/little.man ]; \ > then cp gui/share/doc/L/little.man ../man/man1/bk-little.1; \ > else cp ../man/man1/bk-little.1.pfmt ../man/man1/bk-little.1; \ > fi; \ > chmod +w ../man/man1/bk-little.1 > > bkhelp.txt: $(BK) version L-docs FORCE > @rm -f ../man/man2help/bk-little.fmt > @cd ../man/man2help && $(MAKE) BK=$(HERE)/bk$(EXE) helptxt 
> @cp ../man/man2help/helptxt bkhelp.txt > @rm -f ../man/man1/bk-little.1 > > html-docs: bkhelp.txt > @cd ../man/man2html && $(MAKE) > > ../doc/bk_refcard.ps: $(BK) FORCE > $(Q)echo building $@ > $(Q)-$(BK) -r../doc co -qS > $(Q)$(MAKE) -C ../doc BK=$(HERE)/bk$(EXE) all > > ../doc/bk_refcard.pdf: ../doc/bk_refcard.ps > > # This must be rebuilt every time because it includes the build time > bkver.c: utils/bk_version FORCE > $(if $(Q), at echo Building $@,) > $(Q)echo "#include \"sccs.h\"" > bk.v > $(Q)echo "char *bk_platform = \""`./utils/bk_version`"\";" >> bk.v > $(Q)echo "int test_release = "$(TRIAL)";" >> bk.v > $(Q)echo "time_t bk_build_timet = "`perl -e "print time"`";" >> > bk.v > $(Q)echo "char *bk_build_dir = \""`pwd`"\";" >> bk.v > $(Q)mv -f bk.v bkver.c > > version: version.sh $(BK) utils/bk_version GUI FORCE > bash version.sh > $@ > > %: %.sh > $(if $(Q), at echo Building $@,) > $(Q)$(RM) $@ > $(Q)cp $< $@ > $(Q)chmod +x $@ > > %: %.l > $(if $(Q), at echo Not lexing $@,) > > import: import.sh port/$(SYS)_platform.sh > cat port/$(SYS)_platform.sh import.sh > import.T > chmod +x import.T > mv -f import.T import > > # Quick and dirty target so we can make all the gui tools without the rest > .PHONY: GUI > GUI: PCRE $(BK) > @$(MAKE) -Cgui BK=$(HERE)/bk$(EXE) gui > > install: installdir > tmp/bitkeeper/bk _install -d -f $(DESTDIR)$(BINDIR) > @echo BitKeeper is installed in $(BINDIR) > @echo We suggest you run: > @echo > @echo sudo $(BINDIR)/bk links /usr/local/bin > @echo > @echo to create the bk symlink. 
> > installdir: utils/registry.tcl > rm -rf $(IMGDIR) || exit 1 > mkdir -p $(IMGDIR)/contrib > mkdir -p $(IMGDIR)/lscripts > -$(BK) -rwww get -S > -cp -f -r www $(IMGDIR) > -$(BK) get -S $(CONTRIB) > tar cf - $(BINS) $(SCRIPTS) lscripts gui/bin gui/lib gui/images \ > | (cd $(IMGDIR) && tar xf -) > cp -f $(DATA) $(IMGDIR) > cp -f $(CONTRIB) $(IMGDIR)/contrib > (cd ../doc/nested && $(MAKE) install HTML=$(IMGDIR)/html) > if [ $(SYS) = unix ]; \ > then $(BK) get -S ../man/Makefile; \ > cd ../man && $(MAKE) install BINDIR=$(IMGDIR) ;\ > else \ > (cd win32 && $(MAKE) BINDIR=$(IMGDIR) install); \ > cp utils/registry.tcl $(IMGDIR)/gui/lib; \ > fi > cd $(IMGDIR); \ > find . -type l | \ > perl -ne 'chomp; $$a = readlink; print > "$$a|$$_\n";'>symlinks; \ > test -s symlinks || rm -f symlinks > @true > > image: ## Build the installer (left in src/utils/bk-*) > $(MAKE) p > $(MAKE) _image > > _image: > $(MAKE) installdir > ${MAKE} -Cutils BINDIR=$(IMGDIR) "CC=$(CC)" "BK=$(HERE)/bk$(EXE)" > "CFLAGS=$(CFLAGS)" image > > crankturn: crank.sh remote.sh ## Run a clean-build + regressions in > cluster > REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh > > cranksave: crank.sh remote.sh ## Run a crankturn but save the built images > REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh > save > > crankstatus: crank.sh remote.sh ## See how the crank is going > REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh > status > > crankrelease nightly: $(BK) crank.sh remote.sh ## Do a BitKeeper release > (or nightly build) > @(TAG=$(shell $(BK) changes -r+ -d:TAG:) ; \ > test x$$TAG = x && { \ > echo Cannot crankrelease with a non-tagged tip ; \ > exit 1 ; \ > } ; \ > case $@ in \ > crankrelease ) \ > TYPE=release; DIR=/home/bk/images/$$TAG; \ > ;; \ > nightly ) \ > TYPE=nightly; DIR=/home/bk/images/nightly; \ > HOSTS="$(NIGHTLY_HOSTS)" ; \ > ;; \ > esac ; \ > test -d $$DIR || mkdir -p $$DIR ; \ > REPO=$(REPO) URL=$(URL) HOSTS=$$HOSTS REMOTE=remote.sh \ 
> LOG=$(LOG) bash crank.sh $$TYPE ; \ > $(BK) -R get -qS ../RELEASE-NOTES.md ; \ > cp ../RELEASE-NOTES.md $$DIR ; \ > SAVED_WD=$(shell pwd) ; \ > cd $$DIR && chmod +rx bk-* >/dev/null 2>&1 ; \ > rm -f MD5SUMS ; \ > md5sum bk-* >> MD5SUMS ; \ > echo "Your images are in $$DIR" ; \ > case $@ in \ > crankrelease ) \ > echo "Run './mkrelease $$TAG' to release this version of > bk."; \ > ;; \ > nightly ) \ > # cd $$SAVED_WD ; \ > # ./mkupgrades --nightly $$TAG ; \ > ;; \ > esac) > > crankclean: crank.sh remote.sh > REPO=$(REPO) URL=$(URL) REMOTE=remote.sh LOG=$(LOG) bash crank.sh > clean > > # This target assumes a bk repository > .PHONY: src-tar > src-tar: $(BK) version ## build tar.gz image for the current tree > ifeq (false,$(INREPO)) > $(error This target only works in a BK source repository) > else > ./bk here add default TCLTK > $(Q)-mkdir -p tmp/src > $(Q)(DIR=bk-$(shell $(BK) version -s) ; \ > TAR="$$DIR".tar.gz ; \ > echo "Creating $$TAR in tmp/src..." ; \ > cd tmp/src ; \ > rm -rf "$$DIR" ; \ > ../../bk export -tplain -kwr+ -sdefault -sTCLTK "$$DIR" ; \ > cat ../../version > "$$DIR/src/bkvers.txt" ; \ > tar -czf "$$TAR" "$$DIR" ; \ > rm -rf "$$DIR" ; \ > echo Done ; \ > ) > endif > > # only depend on conf.mk.local if it exists > conf.mk: mkconf.sh $(wildcard conf.mk.local) > sh mkconf.sh > $@ || { $(RM) $@; false; } > > %.o: %.c > $(if $(Q), at echo CC $<,) > $(Q)$(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $@ > > port/startmenu.o: port/startmenu.c $(HDRS) > $(if $(Q), at echo CC $<,) > $(Q)$(CC) $(CFLAGS) -fno-strict-aliasing $(CPPFLAGS) -c $< -o $@ > > depend: $(SRCS) > $(CC) -MM -MG -D_DEPEND $(SRCS) > depends > > # for system.h we need to actually run libc's makefile because it includes > # calculated header files > libc/system.h: FORCE > $(MAKE) -C libc system.h > > libc/libc.a: libc/system.h > > sccs.h: PCRE > .PHONY: PCRE > PCRE: > ifneq ($(PCRE_SYSTEM),1) > $(MAKE) -Cgui/tcltk pcre > endif > > $(OBJ) bk.o: $(HDRS) > > cmd.c cmd.h: cmd.pl bk.sh $(filter 
bkd_%,$(SRCS)) > $(if $(Q), at echo Building $@,) > $(Q)perl cmd.pl || (rm -f cmd.c cmd.h; exit 1) > > # This parses slib.c and extracts the meta-data keywords expanded > # by kw2val() and passes them to gperf to generate hash lookup code. > slib.o: kw2val_lookup.c > kw2val_lookup.c: slib.c kw2val.pl > $(if $(Q), at echo Building $@,) > $(Q)perl kw2val.pl slib.c || (rm -f kw2val_lookup.c; exit 1) > > check-syntax: > $(CC) $(CFLAGS) $(CPPFLAGS) -c -S ${CHK_SOURCES} -o /dev/null > > # print a make variable 'make print-REPO' > # http://www.cmcrossroads.com/article/printing-value-makefile-variable > print-%: > @echo $* = \"$($*)\" > > .PHONY: help > > help: > @grep -E -h '^[-a-zA-Z_\ ]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | > awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' > @echo Suggested: make -j12 image > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Thu Jun 20 00:53:59 2024 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 19 Jun 2024 07:53:59 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> Message-ID: <20240619145359.GA24884@mcvoy.com> On Wed, Jun 19, 2024 at 08:44:14AM -0600, Warner Losh wrote: > On Wed, Jun 19, 2024, 7:28???AM Larry McVoy wrote: > > > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > > But I'll bite. There was the claim by Larry McVoy that "Writing > > Makefiles > > > isn't that hard". > > > > > > Please show these beautiful makefiles for a non-toy non-trivial product > > > > Works on *BSD, MacOS, Windows, Linux on a bunch of different architectures, > > Solaris, HPUX, AIX, IRIX, Tru64, etc. > > > > The posted Makefile is no a strictly conforming POSIX Makefile, but uses > gmake extensions extensively... 
And eyes of the beholder may vary... Yeah, I lost that battle. I prefer, and carry around the sources to, a make from Unix. It's simple and does what I need. But my guys convinced me there was enough value in gmake that we used it. I tried to keep the craziness to a minimum. And I think I succeeded, I can fix bugs in that Makefile. From imp at bsdimp.com Thu Jun 20 01:08:04 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 19 Jun 2024 09:08:04 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619145359.GA24884@mcvoy.com> References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> <20240619145359.GA24884@mcvoy.com> Message-ID: On Wed, Jun 19, 2024, 8:54 AM Larry McVoy wrote: > On Wed, Jun 19, 2024 at 08:44:14AM -0600, Warner Losh wrote: > > On Wed, Jun 19, 2024, 7:28 AM Larry McVoy wrote: > > > > > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > > > But I'll bite. There was the claim by Larry McVoy that "Writing > > > Makefiles > > > > isn't that hard". > > > > > > > > Please show these beautiful makefiles for a non-toy non-trivial > product > > > > > > Works on *BSD, MacOS, Windows, Linux on a bunch of different > architectures, > > > Solaris, HPUX, AIX, IRIX, Tru64, etc. > > > > > > > The posted Makefile is not a strictly conforming POSIX Makefile, but uses > > gmake extensions extensively... And eyes of the beholder may vary... > > Yeah, I lost that battle. I prefer, and carry around the sources to, a > make from Unix. It's simple and does what I need. But my guys convinced > me there was enough value in gmake that we used it. I tried to keep > the craziness to a minimum. And I think I succeeded, I can fix bugs in > that Makefile. > I thought the ask was for a POSIX one that did that, hence my comment. I agree that is a fool's errand for anything non-trivial.
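[Editor's note: to make the gmake-vs-POSIX contrast concrete, here is a small invented fragment (illustrative only, not taken from the posted Makefile) showing the kind of construct that only GNU make accepts, next to the traditional portable dodge, which the posted Makefile itself uses for its generated conf.mk.]

```make
# GNU make only: conditional directives and $(shell ...) are gmake
# extensions, not POSIX make syntax.
ifeq ($(shell uname -s),Darwin)
XLIBS += -lresolv
endif

# The classic portable alternative: compute platform settings once in a
# shell script and read the result back in, as the posted Makefile does
# with its "conf.mk: mkconf.sh" rule running "sh mkconf.sh > $@".
# (Strictly, even "include" was an extension until POSIX Issue 8
# standardized it; recipe lines must be tab-indented in a real makefile.)
include conf.mk
```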
I can do way better using BSD's make since I can hide almost all the ugliness behind the scenes... Though what's hidden has rightfully been criticized already (I did disagree with some of it, but the main points still stand with my quibbles so I let it go). In many ways I really like cmake's declarative approach. I like bmake's include macros that try to do similar, but more constrained, things. I like that cmake figures things out, though I've done too much battle to control how it does things, but I digress. It's useful to have a tool that can do dependencies. However, you need a higher level tool to generate input to that tool, like meson with ninja or cmake. Combining them like gmake or bmake creates a nice macro assembler that can be made to work, but the pain winds up being in all the wrong places and the need to constantly refactor is high to try to retain the simplicity. Warner From g.branden.robinson at gmail.com Thu Jun 20 01:11:24 2024 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Wed, 19 Jun 2024 10:11:24 -0500 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619145359.GA24884@mcvoy.com> References: <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> <20240619145359.GA24884@mcvoy.com> Message-ID: <20240619151124.duulxrxw7r7v2p5f@illithid> At 2024-06-19T07:53:59-0700, Larry McVoy wrote: > On Wed, Jun 19, 2024 at 08:44:14AM -0600, Warner Losh wrote: > > The posted Makefile is not a strictly conforming POSIX Makefile, but > > uses gmake extensions extensively... And eyes of the beholder may > > vary... > > Yeah, I lost that battle. I prefer, and carry around the sources to, > a make from Unix. It's simple and does what I need. But my guys > convinced me there was enough value in gmake that we used it. I tried > to keep the craziness to a minimum.
And I think I succeeded, I can > fix bugs in that Makefile. As of POSIX 2024, that Makefile is less GNUish than it used to be. I excerpted the list of changes to POSIX make for Issue 8. Here it is.

Changes in POSIX 2024 make Issue 8

Austin Group Defect 251 is applied, encouraging implementations to disallow the creation of filenames containing any bytes that have the encoded value of a <newline> character.
Austin Group Defects 330, 1417, 1422, 1709, and 1710 are applied, adding new forms of macro assignment using the "::=", "?=", and "+=" operators.
Austin Group Defect 333 is applied, adding support for “silent includes” using −include.
Austin Group Defects 336 and 1711 are applied, specifying the behavior when string1 in a macro expansion contains a macro expansion.
Austin Group Defect 337 is applied, adding a new form of macro assignment using the "!=" operator.
Austin Group Defects 373 and 1417 are applied, changing the set of characters that portable applications can use in macro names to the entire portable filename character set (thus adding to the set that could previously be used).
Austin Group Defects 514 and 1520 are applied, adding the $+ and $^ internal macros.
Austin Group Defect 518 is applied, allowing multiple files to be specified on an include line.
Austin Group Defects 519, 1712, and 1715 are applied, adding support for pattern macro expansions.
Austin Group Defects 523, 1708, and 1749 are applied, adding the .PHONY special target.
Austin Group Defect 875 is applied, clarifying the requirements for inference rules.
Austin Group Defect 1104 is applied, changing “s2.a” to “.s2.a”.
Austin Group Defect 1122 is applied, changing the description of NLSPATH.
Austin Group Defect 1141 is applied, changing “core files” to “a file named core”.
Austin Group Defect 1155 is applied, clarifying the handling of the MAKE macro.
Austin Group Defect 1325 is applied, adding requirements relating to the creation of include files.
Austin Group Defect 1330 is applied, removing obsolescent interfaces.
Austin Group Defect 1419 is applied, updating the .SCCS_GET default rule.
Austin Group Defect 1420 is applied, clarifying where internal macros can be used.
Austin Group Defect 1421 is applied, changing the APPLICATION USAGE section.
Austin Group Defects 1424, 1658, 1690, 1701, 1702, 1703, 1704, 1707, 1719, 1720, 1721, 1722, and 1750 are applied, making various minor editorial wording changes.
Austin Group Defects 1436, 1437, 1652, 1660, 1661, and 1733 are applied, adding the −j maxjobs option and the .NOTPARALLEL and .WAIT special targets, and changing the −n option.
Austin Group Defects 1471 and 1513 are applied, adding a new form of macro assignment using the ":::=" operator.
Austin Group Defect 1479 is applied, clarifying the requirements for default rules and macro values.
Austin Group Defect 1492 is applied, changing the EXIT STATUS section.
Austin Group Defect 1505 is applied, clarifying the requirements for expansion of macros that do not exist.
Austin Group Defect 1510 is applied, correcting a typographic error in the RATIONALE section.
Austin Group Defect 1549 is applied, clarifying the requirements for an escaped <newline> in a command line.
Austin Group Defect 1615 is applied, allowing target names to contain slashes and hyphens.
Austin Group Defect 1626 is applied, adding the CURDIR macro.
Austin Group Defect 1631 is applied, adding information about use of the −j option with the .c.a default rule to the APPLICATION USAGE and EXAMPLES sections.
Austin Group Defect 1650 is applied, changing the few occurrences of “dependencies” to use the more common “prerequisites”.
Austin Group Defect 1653 is applied, clarifying the difference between how MAKEFLAGS is parsed compared to shell commands that use the make utility.
Austin Group Defects 1654 and 1655 are applied, changing the APPLICATION USAGE section.
Austin Group Defect 1656 is applied, changing the NAME section.
Austin Group Defect 1657 is applied, moving some requirements unrelated to makefile syntax from the Makefile Syntax subsection to the beginning of the EXTENDED DESCRIPTION section.
Austin Group Defect 1689 is applied, removing some redundant wording from the DESCRIPTION section.
Austin Group Defect 1692 is applied, allowing make, when invoked with the −q or −t option, to execute command lines (without a <plus-sign> prefix) that expand the MAKE macro.
Austin Group Defect 1693 is applied, changing “command lines” to “execution lines” in the description of the −s option.
Austin Group Defect 1694 is applied, changing “in the order they appear” to “in the order specified” in the OPERANDS section.
Austin Group Defect 1696 is applied, changing the STDOUT section.
Austin Group Defect 1697 is applied, changing the RATIONALE and FUTURE DIRECTIONS sections.
Austin Group Defect 1698 is applied, changing “of a target” to “of the target” in the EXTENDED DESCRIPTION section.
Austin Group Defect 1699 is applied, addressing some inconsistencies in the use of the term “rules”.
Austin Group Defect 1706 is applied, removing a line from the format specified for target rules.
Austin Group Defect 1714 is applied, changing “beginning of the line” to “beginning of the value”.
Austin Group Defect 1716 is applied, changing the typographic convention used for variable elements within target names, in particular the inference rule suffixes s1 and s2.
Austin Group Defect 1723 is applied, adding historical context to a paragraph in the RATIONALE section.
Austin Group Defect 1772 is applied, clarifying the ASYNCHRONOUS EVENTS section.

"Well, I'm not even sure that's a crime anymore--there've been a lot of changes in the law." -- Irwin Fletcher

Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed...
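[Editor's note: for a feel of what the change list above adds up to, here is a sketch of a makefile that would be valid under Issue 8 but not under older POSIX make. It is written against the defect descriptions above, not tested against a shipping implementation (few makes implement Issue 8 yet), and the target and file names are invented.]

```make
CC     ?= cc              # assign only if not already set (Defects 330 et al.)
CFLAGS += -O2             # append assignment
STAMP  != date +%Y%m%d    # shell assignment, as in BSD make (Defect 337)

-include local.mk         # "silent include": no error if missing (Defect 333)

.PHONY: all clean         # .PHONY is now standard (Defects 523, 1708, 1749)

all: prog

prog: main.o util.o
	$(CC) $(CFLAGS) -o $@ $^   # $^ and $+ internal macros (Defects 514, 1520)

clean:
	rm -f prog main.o util.o
```

Parallel builds also become portable: make −j maxjobs together with the .NOTPARALLEL and .WAIT special targets is standardized by the defects cited above.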
From rminnich at gmail.com Thu Jun 20 01:16:21 2024 From: rminnich at gmail.com (ron minnich) Date: Wed, 19 Jun 2024 08:16:21 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619145359.GA24884@mcvoy.com> References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> <20240619145359.GA24884@mcvoy.com> Message-ID: somewhere in this zillion-thread discussion there was a comment about Plan 9 and its multi-headed community. While that comment was probably accurate a few years back, the last two years of Plan 9 workshops saw a lot of us, representing many different Plan 9 code bases, get together and converge on where we want to go. Once you meet someone in person, and go get a cheesesteak together, arguments seem to resolve. I would say, don't take too many impressions from 9fans, a famously argumentative list. The folks who write Plan 9 code are in broad agreement about moving forward and leaving hatchets buried. Progress is never as rapid as we all would like, but I'm optimistic. On Wed, Jun 19, 2024 at 7:54 AM Larry McVoy wrote: > On Wed, Jun 19, 2024 at 08:44:14AM -0600, Warner Losh wrote: > > On Wed, Jun 19, 2024, 7:28 AM Larry McVoy wrote: > > > > > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > > > But I'll bite. There was the claim by Larry McVoy that "Writing > > > Makefiles > > > > isn't that hard". > > > > > > > > Please show these beautiful makefiles for a non-toy non-trivial > product > > > > > > Works on *BSD, MacOS, Windows, Linux on a bunch of different > architectures, > > > Solaris, HPUX, AIX, IRIX, Tru64, etc. > > > > > > > The posted Makefile is not a strictly conforming POSIX Makefile, but uses > > gmake extensions extensively... And eyes of the beholder may vary...
> > Yeah, I lost that battle. I prefer, and carry around the sources to, a > make from Unix. It's simple and does what I need. But my guys convinced > me there was enough value in gmake that we used it. I tried to keep > the craziness to a minimum. And I think I succeeded, I can fix bugs in > that Makefile. From clemc at ccc.com Thu Jun 20 01:47:44 2024 From: clemc at ccc.com (Clem Cole) Date: Wed, 19 Jun 2024 11:47:44 -0400 Subject: [TUHS] ACM Software System Award to Andrew S. Tanenbaum for MINIX In-Reply-To: <202406190655.45J6tkVP902384@freefriends.org> References: <202406190655.45J6tkVP902384@freefriends.org> Message-ID: 👍 On Wed, Jun 19, 2024 at 2:56 AM wrote: > The "odd only" policy may be true, but it's not what I was told; I > was told that the policy was to release externally one version behind > what was being run internally. > That's how I remember Otis Wilson explaining it to us as commercial licensees at a licensing meeting in the early 1980s. We had finally completed the PWB 3.0 license to replace the V7 commercial license (AT&T would rename this System III - but we knew it as PWB 3.) during the negotiations, Summit had already moved on to the next version - PWB 4.0. IMO: Otis was not ready to start that process again. > With the consent decree done and Divestiture in the works, AT&T was > going to be allowed to get into the computer business. Exactly, and Charlie Brown wanted to compete with IBM in particular—which was an issue; by the time of Judge Green, the microprocessor-based workstations had started to make huge inroads against the mini-computers. AT&T management (Brown *et al*) still equated the "computer business" with mainframes running Wall Street. > So at some point, someone decided that for System V, the current system > would be released externally.
> Right—that would have had to have been someone(s) in AT&T UNIX marketing in North Carolina—the folks that gave us the "*Consider it Standard*" campaign. > I doubt we'll ever know the exact truth. > I agree. I'll take a WAG, though. I >>suspect<< it was linked to the attempt to sell the 3B20S against the DEC Vax family and the then IBM model 140 (which was the "minicomputer" size IBM mainframe system). By that time, System V was the OS Summit had supplied for it. If they were going to be in the commercial hardware business, the OS SW had to match what the HW used. Clem From tytso at mit.edu Thu Jun 20 01:59:31 2024 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 19 Jun 2024 11:59:31 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619132846.GR32048@mcvoy.com> References: <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> Message-ID: <20240619155931.GA1513615@mit.edu> On Wed, Jun 19, 2024 at 06:28:46AM -0700, Larry McVoy wrote: > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > But I'll bite. There was the claim by Larry McVoy that "Writing Makefiles > > isn't that hard". > > > > Please show these beautiful makefiles for a non-toy non-trivial product > > Works on *BSD, MacOS, Windows, Linux on a bunch of different architectures, > Solaris, HPUX, AIX, IRIX, Tru64, etc. True, but it uses multiple GNU make features, include file inclusions, conditionals, pattern substitutions, etc. That probably worked for Bitkeeper because you controlled the build environment for the product, as you were primarily distributing binaries.
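[Editor's note: the build style Ted describes next, plain autoconf substituting into hand-written portable makefiles with no automake or libtool, can be sketched roughly as follows. This is a minimal hypothetical example, not the actual e2fsprogs build files.]

```make
# Makefile.in -- hand-written; ./configure rewrites the @...@ values.
# Deliberately restricted to portable make syntax (no GNU-isms), so the
# same file drives GNU make, BSD pmake, and the old vendor makes.
CC = @CC@
CFLAGS = @CFLAGS@
LIBS = @LIBS@

all: frob

frob: frob.o
	$(CC) $(CFLAGS) -o frob frob.o $(LIBS)

# Suffix (inference) rule: portable, unlike gmake's %-pattern rules.
.c.o:
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f frob frob.o
```

The matching configure.ac needs only AC_INIT, AC_PROG_CC, whatever AC_CHECK_* probes are wanted (pthreads, say), AC_CONFIG_FILES([Makefile]), and AC_OUTPUT; a user building from the tarball then needs nothing beyond sh, awk, sed, and make.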
From a portability perspective for e2fsprogs, I wanted to make sure I could build it using the native build environment (e.g., suncc and later clang, not just gcc, and the default make distributed by Sun, AIX, Irix, HPUX, and later NetBSD/FreeBSD). I also wanted to support shared libraries, and I didn't want to deal with the horrific performance attributes of libtool and the inscrutability of automake. Since my primary distribution channel was the source tarball (and later, a git checkout), another high priority requirement for me was that I didn't want to require people to download some custom build infrastructure. This rules out cmake, imake, gmake, and blaze (especially since blaze/bazel requires installing a Java runtime). And since I did want to use various advanced features (optionally, if they exist on the system) such as POSIX Threads (which back then I couldn't take for granted as existing on all of the OS's that I supported) and Thread Local Storage, as opposed to just restricting myself to the BSD v4.4 feature subset, I needed to use autoconf anyway, and from a runtime perspective, it only requires m4 / awk / sed which is available everywhere. So I did everything using (only) autoconf, including building and using shared libraries, with some optional build features that require GNU make, but the same makefiles will also work on FreeBSD's pmake. I do agree with your basic premise, though, which is there's really no need to use fancy/complicated build infrastructure such as cmake or imake. - Ted From aek at bitsavers.org Thu Jun 20 02:00:44 2024 From: aek at bitsavers.org (Al Kossow) Date: Wed, 19 Jun 2024 09:00:44 -0700 Subject: [TUHS] Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S.
Tanenbaum for MINIX) In-Reply-To: References: <202406190655.45J6tkVP902384@freefriends.org> Message-ID: <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> On 6/19/24 8:47 AM, Clem Cole wrote: > That's how I remember Otis Wilson explaining it to us as commercial licensees at a licensing meeting in the early 1980s. > We had finally completed the PWB 3.0 license to replace the V7 commercial license (AT&T would rename this System III - but we knew it as PWB > 3.); during the negotiations, Summit had already moved on to the next version - PWB 4.0.  IMO: Otis was not ready to start that process again. Is the really early history of Unix licensing documented anywhere? The work on reviving a Plexus P20 prompted me to put up the history of Onyx and Plexus at http://bitsavers.org/pdf/plexus/history and a long time ago someone who worked at Fortune told me we can all thank Onyx in 1980 for working out the single machine licensing with AT&T From aek at bitsavers.org Thu Jun 20 02:07:50 2024 From: aek at bitsavers.org (Al Kossow) Date: Wed, 19 Jun 2024 09:07:50 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: References: Message-ID: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> FYI, for people like me that care about 80s 68K Unix systems, there is a pretty serious multi-purpose preservation effort that started a few weeks ago around Plexus systems as a result of a series of YouTube videos https://youtu.be/iltZYXg5hZw https://github.com/misterblack1/plexus-p20/ From jnc at mercury.lcs.mit.edu Thu Jun 20 02:17:20 2024 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 19 Jun 2024 12:17:20 -0400 (EDT) Subject: [TUHS] [COFF] Re: Supervisor mode on ye olde PDP-11 Message-ID: <20240619161720.B301E18C088@mercury.lcs.mit.edu> > From: Warner Losh > 2.11BSD used a mode between kernel and user for the TCP stack to get > more effective address space... Is there a document for 2.11 which explains in detail why they did that? 
I suspect it's actually a little more complicated than just "more address space". The thing is that PDP-11 Unix had been using overlays in the kernel for quite a while to provide more address space. I forget where they first came in (I suspect there were a number of local hacks, before everyone started using the BSD approach), but by 2.9 BSD they were a standard part of the system. (See: https://minnie.tuhs.org/cgi-bin/utree.pl?file=2.9BSD/usr/src/sys/conf/Ovmakefile for some clues about how this works. There is unfortunately no documentation that I know of which explains clearly how it works; if anyone knows of any, can you please let me know? Otherwise you'll have to read the sources.) I can think of two possible reasons they started using supervisor mode: i) There were a limited number of the 2.9-type overlays, and they were not large; trying to support all the networking code with the existing overlay system may have been too hard. ii) I think this one is unlikely, but I'll list it as a possibility. Switching overlays took a certain amount of overhead (since mapping registers had to be re-loaded); if all the networking code ran in supervisor mode, the supervisor mode mapping registers could be loaded with the right thing and just left. Noel From usotsuki at buric.co Thu Jun 20 02:20:28 2024 From: usotsuki at buric.co (Steve Nickolas) Date: Wed, 19 Jun 2024 12:20:28 -0400 (EDT) Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? 
In-Reply-To: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> Message-ID: On Wed, 19 Jun 2024, Al Kossow wrote: > > > FYI, for people like me that care about 80s 68K Unix systemf > There is a pretty serious multi-purpose preservation effort that started a > few weeks ago > around Plexus systems as a result of a series of YouTube videos > > https://youtu.be/iltZYXg5hZw > https://github.com/misterblack1/plexus-p20/ > Adrian Black - same person who kicked off the Nabu craze (and yes, I've got one of those). Apparently with the help of Usagi Electric, who made his name reverse-engineering a couple Centurion minis. -uso. From clemc at ccc.com Thu Jun 20 02:44:53 2024 From: clemc at ccc.com (Clem Cole) Date: Wed, 19 Jun 2024 12:44:53 -0400 Subject: [TUHS] Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S. Tanenbaum for MINIX) In-Reply-To: <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> References: <202406190655.45J6tkVP902384@freefriends.org> <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> Message-ID: On Wed, Jun 19, 2024 at 12:00 PM Al Kossow wrote: > On 6/19/24 8:47 AM, Clem Cole wrote: > > > That's how I remember Otis Wilson explaining it to us as > commercial licensees at a licensing meeting in the early 1980s. > > We had finally completed the PWB 3.0 license to replace the V7 > commercial license (AT&T would rename this System III - but we knew it as > PWB > > 3.) during the negociations Summit had already moved on to the next > version - PWB 4.0. IMO: Otis was not ready to start that process again. > > Is the really early history of Unix licensing documented anywhere? > Not to my knowledge -- I probably know much/most of it as I lived it as part of a couple of the negotiation teams. 
The work on reviving a Plexus P20 prompted me to put up the history of Onyx > and Plexus at > http://bitsavers.org/pdf/plexus/history and a long time ago someone who > worked at Fortune > told me we can all thank Onyx in 1980 for working out the single machine > licensing with AT&T > Hmm, I'm not sure —but I don't think it is wholly clear—although Onyx was early and certainly would have been a part. They were not the only firm that wanted redistribution rights. Numerous vendors asked for the V7 redistribution license, with HP (Fred Clegg), Microsoft (Bob Greenberg/Bill Gates), and Tektronix (me) being three I am aware of. It is quite possible Onyx signed the original V7 license first, but I know there was great unhappiness with the terms that AT&T initially set up. When the folks from AT&T Patents and Licensing (Al Arms at that point) talked to us individually, it was sort of "this is what we are offering" - mind you, this all started >>pre-Judge Green<< and the concept of negotiation was somewhat one-sided as AT&T was not allowed in the computer business. There was also a bit of gnashing of teeth as PWB 2.0 was not on the price list. At the time, Al's position was they could license the research, but since AT&T was not in the commercial computer business, anything done for the operating companies *(i.e.*, USG output) was not allowed to be discussed. The desire to redistribute UNIX (particularly on microprocessors) came up at one of the earlier Asilomar Microprocessor workshops (which just held its 50th in April, BTW). Prof Dennis Allison of Stanford was consulting for most of us at the time and recognized we had a common problem. He set up a meeting for the approx 10 firms, introduced us, and left us alone. Thus began the meetings at Ricky's Hyatt (of which I was a part). This all *eventually* begat the replacement license for what would be PWB 3.0. I've mentioned those meetings a few times in this forum. 
As I said, it was the only time I was ever in a small meeting with Gates. When we were discussing the price for binary copies, starting at $5K and getting down to $1K seemed reasonable for a $25K-$125K computer, which was most of our price points. Microsoft wanted to pay $25/copy. He said to the rest of us, "You guys don't get it. *The only thing that matters is volume*." -------------- next part -------------- An HTML attachment was scrubbed... URL: From phil at ultimate.com Thu Jun 20 02:55:34 2024 From: phil at ultimate.com (Phil Budne) Date: Wed, 19 Jun 2024 12:55:34 -0400 Subject: [TUHS] [COFF] Re: Supervisor mode on ye olde PDP-11 In-Reply-To: <20240619161720.B301E18C088@mercury.lcs.mit.edu> References: <20240619161720.B301E18C088@mercury.lcs.mit.edu> Message-ID: <202406191655.45JGtYjI008727@ultimate.com> JNC wrote: > Is there a document for 2.11 which explains in detail why they did that? I > suspect it's actually a little more complicated than just "more address > space". ... > ... Switching overlays took a certain amount of > overhead (since mapping registers had to be re-loaded); if all the networking > code ran in supervisor mode, the supervisor mode mapping registers could be > loaded with the right thing and just left. That's my understanding... It allows mbufs to be mapped only in supervisor mode... https://minnie.tuhs.org/PUPS/Setup/2.11bsd_setup.html says: The networking in 2.11BSD, runs in supervisor mode, separate from the mainstream kernel. There is room without overlaying to hold both a SL/IP and ethernet driver. This is a major win, as it allows the networking to maintain its mbufs in normal data space, among other things. The networking portion of the kernel resides in ``/netnix'', and is loaded after the kernel is running. Since the kernel only looks for the file ``/netnix'', it will not run if it is unable to load ``/netnix'' , sites should build and keep a non-networking kernel in ``/'' at all times, as a backup. 
NOTE: The ``/unix'' and ``/netnix'' images must have been created at the same time, do not attempt to use mismatched images. The ability to have boot tell the kernel which network image to load is on the wish list (had to have something take the place of wishing for disklabels ;-)). https://wfjm.github.io/home/ouxr/ shows the code path for the socket(2) syscall From imp at bsdimp.com Thu Jun 20 03:20:04 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 19 Jun 2024 11:20:04 -0600 Subject: [TUHS] [COFF] Re: Supervisor mode on ye olde PDP-11 In-Reply-To: <202406191655.45JGtYjI008727@ultimate.com> References: <20240619161720.B301E18C088@mercury.lcs.mit.edu> <202406191655.45JGtYjI008727@ultimate.com> Message-ID: On Wed, Jun 19, 2024 at 10:56 AM Phil Budne wrote: > JNC wrote: > > Is there a document for 2.11 which explains in detail why they did that? > I > > suspect it's actually a little more complicated than just "more address > > space". > ... > > ... Switching overlays took a certain amount of > > overhead (since mapping registers had to be re-loaded); if all the > networking > > code ran in supervisor mode, the supervisor mode mapping registers could > be > > loaded with the right thing and just left. > > That's my understanding... It allows mbufs to be mapped only > in supervisor mode... > Yea, that's a much better explanation than my glossed-over 'more address space'. It's both to get more text space (overlays weren't infinite, and being a separate image allowed more selective communication across the interface boundary) and to provide some separation and allow for more data to be around easily (the BSD kernel didn't overlay data, which is technically possible on PDP-11, but the linker didn't support it). > https://minnie.tuhs.org/PUPS/Setup/2.11bsd_setup.html says: > > The networking in 2.11BSD, runs in supervisor mode, separate > from the mainstream kernel. There is room without overlaying to > hold both a SL/IP and ethernet driver. 
This is a major win, as > it allows the networking to maintain its mbufs in normal data > space, among other things. The networking portion of the kernel > resides in ``/netnix'', and is loaded after the kernel is > running. Since the kernel only looks for the file ``/netnix'', > it will not run if it is unable to load ``/netnix'' , sites > should build and keep a non-networking kernel in ``/'' at all > times, as a backup. NOTE: The ``/unix'' and ``/netnix'' > imagines must have been created at the same time, do not > attempt to use mismatched images. The ability to have boot tell > the kernel which network image to load is on the wish list (had > to have something take the place of wishing for disklabels > ;-)). > > https://wfjm.github.io/home/ouxr/ shows the code path for the socket(2) > syscall > Oh, that's nice. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From aek at bitsavers.org Thu Jun 20 03:22:35 2024 From: aek at bitsavers.org (Al Kossow) Date: Wed, 19 Jun 2024 10:22:35 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> Message-ID: <17bfad61-ad44-8c77-7592-bd6fd291f60d@bitsavers.org> On 6/19/24 9:20 AM, Steve Nickolas wrote: > Adrian Black - same person who kicked off the Nabu craze (and yes, I've got one of those).  Apparently with the help of Usagi Electric, who > made his name reverse-engineering a couple Centurion minis. > > -uso. He does seem to have the ability to geek snipe and marshal a lot of people From tuhs at tuhs.org Thu Jun 20 05:10:59 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Wed, 19 Jun 2024 19:10:59 +0000 Subject: [TUHS] Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S. 
Tanenbaum for MINIX) In-Reply-To: <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> References: <202406190655.45J6tkVP902384@freefriends.org> <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> Message-ID: On Wednesday, June 19th, 2024 at 9:00 AM, Al Kossow wrote: > On 6/19/24 8:47 AM, Clem Cole wrote: > > > That's how I remember Otis Wilson explaining it to us as commercial licensees at a licensing meeting in the early 1980s. > > We had finally completed the PWB 3.0 license to replace the V7 commercial license (AT&T would rename this System III - but we knew it as PWB > > 3.) during the negociations Summit had already moved on to the next version - PWB 4.0. IMO: Otis was not ready to start that process again. > > > Is the really early history of Unix licensing documented anywhere? > The work on reviving a Plexus P20 prompted me to put up the history of Onyx and Plexus at > http://bitsavers.org/pdf/plexus/history and a long time ago someone who worked at Fortune > told me we can all thank Onyx in 1980 for working out the single machine licensing with > AT&T I've got a stack of license specimens as well as a bit of correspondence between MIT Lincoln Laboratory, Raytheon, and Western Electric discussing UNIX licenses for single CPUs. The correspondence (circa 1980) concerns V7 licenses for a PDP-11/44 (MIT LL) and PDP-11/45 (Raytheon). The license specimens are in two groups, one set that has blanks and/or generic language describing "Licensed Software" and a second set specifically issued for UNIX System III. The licenses have document codes: - Software-Corp.-020173-020182-2 - Software Agreement between AT&T and - Software-Customer CPU-052776-090180-2 - Customer CPU Agreement between and - Supp. Ag.-Time Sharing-020178-010180-2 - Supplemental Agreement (Time Sharing) between Western Electric Company, Incorporated and - Supp. Ag.-Customer CPU-020178-010180-2 - Supplemental Agreement (Customer CPU) between Western Electric Company, Incorporated and - Supp. 
Ag.-Cust. Spec.-020181-2 - Supplemental Agreement (Customer Software, Specified Number of Users) between Western Electric Company, Incorporated and - Cont. CPU-060181-1 - Contractor CPU Agreement between and - Sys. III-Corp.-110181-040182-2 - Software Agreement between AT&T and for UNIX System III - Sys. III-Cust.-010182-041582-2 - Supplemental Agreement (Customer Provisions) between AT&T and for UNIX System III Would scans of these documents help? The licenses at least should be fine as they're specimen copies with no PII. Regarding the correspondence, there is one letter on DARPA letterhead (from MIT LL to WECo), two on WECo letterhead (one back to MIT LL, the other to Raytheon) and then one on AT&T letterhead responding generically to an inquiry regarding UNIX System III licensing. Does anyone foresee issues with scanning the correspondence, or is that the sort of thing that might get me shipped off to some black site? - Matt G. From dougj at iastate.edu Thu Jun 20 05:35:16 2024 From: dougj at iastate.edu (Jacobson, Doug W [E CPE]) Date: Wed, 19 Jun 2024 19:35:16 +0000 Subject: [TUHS] Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S. 
Tanenbaum for MINIX) In-Reply-To: References: <202406190655.45J6tkVP902384@freefriends.org> <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> Message-ID: I have an original signed copy of: V6 Mini Unix "Software-Univ 020173-020179-2" (Feb 1, 1980) Licensed to a single CPU PDP 11/34 S# AG 00720 V7 Unix "software-Univ 020173-090178-7" (Feb 1, 1980) Licensed to a single CPU PDP 11/34 S# AG 00720 (same machine) UNIX/32V Version 1.0 "Software-UNIV 020173-090178-7" (Feb 1, 1980) Licensed to a single CPU VAX 11/780 S# 780675580 A copy of a signed agreement for V7 Unix "software-Univ 020173-120176-5" (March 1, 1977) Licensed to a single CPU PDP 11/34 S# AG 00720 (same machine) I also have various other agreements (SYS V Versions 2 and 3) and several software packages (like troff) Doug -----Original Message----- From: segaloco via TUHS Sent: Wednesday, June 19, 2024 2:11 PM To: tuhs at tuhs.org Subject: [TUHS] Re: Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S. Tanenbaum for MINIX) On Wednesday, June 19th, 2024 at 9:00 AM, Al Kossow wrote: > On 6/19/24 8:47 AM, Clem Cole wrote: > > > That's how I remember Otis Wilson explaining it to us as commercial licensees at a licensing meeting in the early 1980s. > > We had finally completed the PWB 3.0 license to replace the V7 > > commercial license (AT&T would rename this System III - but we knew > > it as PWB > > 3.) during the negociations Summit had already moved on to the next version - PWB 4.0. IMO: Otis was not ready to start that process again. > > > Is the really early history of Unix licensing documented anywhere? 
> The work on reviving a Plexus P20 prompted me to put up the history of > Onyx and Plexus at http://bitsavers.org/pdf/plexus/history and a long > time ago someone who worked at Fortune told me we can all thank Onyx > in 1980 for working out the single machine licensing with AT&T I've got a stack of license specimens as well as a bit of correspondence between MIT Lincoln Laboratory, Raytheon, and Western Electric discussing UNIX licenses for single CPUs. The correspondence (circa 1980) concerns V7 licenses for a PDP-11/44 (MIT LL) and PDP-11/45 (Raytheon). The license specimens are in two groups, one set that has blanks and/or generic language describing "Licensed Software" and a second set specifically issued for UNIX System III. The licenses have document codes: - Software-Corp.-020173-020182-2 - Software Agreement between AT&T and - Software-Customer CPU-052776-090180-2 - Customer CPU Agreement between and - Supp. Ag.-Time Sharing-020178-010180-2 - Supplemental Agreement (Time Sharing) between Western Electric Company, Incorporated and - Supp. Ag.-Customer CPU-020178-010180-2 - Supplemental Agreement (Customer CPU) between Western Electric Company, Incorporated and - Supp. Ag.-Cust. Spec.-020181-2 - Supplemental Agreement (Customer Software, Specified Number of Users) between Western Electric Company, Incorporated and - Cont. CPU-060181-1 - Contractor CPU Agreement between and - Sys. III-Corp.-110181-040182-2 - Software Agreement between AT&T and for UNIX System III - Sys. III-Cust.-010182-041582-2 - Supplemental Agreement (Customer Provisions) between AT&T and for UNIX System III Would scans of these documents help? The licenses at least should be fine as they're specimen copies with no PII. Regarding the correspondence, there is one letter on DARPA letterhead (from MIT LL to WECo), two on WECo letterhead (one back to MIT LL, the other to Raytheon) and then one on AT&T letterhead responding generically to an uinquiry regarding UNIX System III licensing. 
Does anyone foresee issues with scanning the correspondence, or is that the sort of thing that might get me shipped off to some black site? - Matt G. From tuhs at tuhs.org Thu Jun 20 08:13:03 2024 From: tuhs at tuhs.org (Grant Taylor via TUHS) Date: Wed, 19 Jun 2024 17:13:03 -0500 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: <17bfad61-ad44-8c77-7592-bd6fd291f60d@bitsavers.org> References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <17bfad61-ad44-8c77-7592-bd6fd291f60d@bitsavers.org> Message-ID: <409e35c4-53cb-4fa5-9b1e-f6eaba699e7d@tnetconsulting.net> On 6/19/24 12:22, Al Kossow wrote: > He does seem to have the ability to geek snipe and marshal a lot > of people Please clarify, are you referring to Adrian and / or David (?) a.k.a. Usagi? I'm also enjoying Caleb (?) a.k.a. clabretro quite a bit. Aside: I think I have those names correct. Adrian has long been a quick watch. David and Caleb have surpassed Adrian only because their content is more aligned with parts and pieces that I have in my lab. }:-) The Plexus P/20 mini-series is VERY interesting. :-D I also really like the emphasis that Adrian is making on collecting what's necessary for someone to create an emulator for the Plexus P/20. The (forthcoming) 5th video (I'm a patron with early access) has some interesting bits about SCSI IDs and LUNs that I may bring up for discussion on COFF as they aren't super on topic for TUHS. -- Grant. . . . -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4033 bytes Desc: S/MIME Cryptographic Signature URL: From kevin.bowling at kev009.com Thu Jun 20 08:25:10 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Wed, 19 Jun 2024 15:25:10 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? 
In-Reply-To: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> Message-ID: On Thu, Jun 20, 2024 at 12:07 AM Al Kossow wrote: > > > FYI, for people like me that care about 80s 68K Unix systemf > There is a pretty serious multi-purpose preservation effort that started a > few weeks ago > around Plexus systems as a result of a series of YouTube videos > > https://youtu.be/iltZYXg5hZw > https://github.com/misterblack1/plexus-p20/ Seems like yet another mediocre UNIX port at first, but then they happen to find the system schematic netlists and PAL equations in addition to partial source code on the hard disk. If those are published that will make a quite wholesome thing to study. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aek at bitsavers.org Thu Jun 20 08:46:41 2024 From: aek at bitsavers.org (Al Kossow) Date: Wed, 19 Jun 2024 15:46:41 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> Message-ID: <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> On 6/19/24 3:25 PM, Kevin Bowling wrote: > Seems like yet another mediocre UNIX port at first, but then they happen to find the system schematic netlists and PAL equations in addition > to partial source code on the hard disk.  If those are published that will make a quite wholesome thing to study. I tried to stress on their discord that finding things like that 40 years out is a very unusual occurrence. I've spent decades digging through people's garages trying to find stuff like that for bitsavers. 
From kevin.bowling at kev009.com Thu Jun 20 08:48:29 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Wed, 19 Jun 2024 15:48:29 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240619155931.GA1513615@mit.edu> References: <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <9f9db0d2-8a6a-26cc-a0ba-b6fc5d6474cb@makerlisp.com> <20240619132846.GR32048@mcvoy.com> <20240619155931.GA1513615@mit.edu> Message-ID: On Wed, Jun 19, 2024 at 11:59 PM Theodore Ts'o wrote: > On Wed, Jun 19, 2024 at 06:28:46AM -0700, Larry McVoy wrote: > > On Tue, Jun 18, 2024 at 07:46:15PM -0500, Nevin Liber wrote: > > > But I'll bite. There was the claim by Larry McVoy that "Writing > Makefiles > > > isn't that hard". > > > > > > Please show these beautiful makefiles for a non-toy non-trivial product > > > > Works on *BSD, MacOS, Windows, Linux on a bunch of different > architectures, > > Solaris, HPUX, AIX, IRIX, Tru64, etc. > > True, but it uses multiple GNU make features, include file inclusions, > conditionals, pattern substitutions, etc. That probably worked for > Bitkeeper because you controlled the build envirnment for the product, > as you were primarily distributing binaries. > > From portability perspective for e2fsprogs, I wanted to make sure I > could build it using the native build environment (e.g., suncc and > later clang, not just gcc, and the default make distributed by Sun, > AIX, Irix, HPUX, and later NetBSD/FreeBSD). I also wanted to support > shared library support, and I didn't want to deal the horrific > performance attributes of libtool and the inscrutibility of automake. > > Since my primary distribution channel was the source tarball (and > later, a git checkout), and other high priority requirement for me is > that I didn't want to require that people need to download some custom > build infratrture. 
This rules out cmake, imake, gmake, and blaze > (especially since blaze/bazel requires installing a Java > runtime). > > And since I did want to use various advanced features (optionally, if > they exist on the system) such as Poix Threads (which back then I > couldn't take for granted as existing on all of the OS's that I > supported) and Thread Local Storage, as opposed to just restricting > myself to the BSD v4.4 feature subset, I needed to use autoconf anyway, > and from a runtime perspective, it only requires m4 / awk / sed which > is available everywhere. > > So I did everything using (only) autoconf, including building and > using shared libraries, This is The Way if you really care about portability. Autoconf, once you get your head around what, why, and when it was created, makes for nice Makefiles and projects that are easy to include in the 100 Linux distributions with their own take on packaging the world. > with some optional build features that require > GNU make, but the same makefiles will also work on FreeBSD's pmake. I > do agree with your basic premise, though, which is there's realy no > need to use fancy/complicated build infrastructure such as cmake or > imake. > > - Ted > -------------- next part -------------- An HTML attachment was scrubbed... URL: From woods at robohack.ca Thu Jun 20 08:52:06 2024 From: woods at robohack.ca (Greg A. Woods) Date: Wed, 19 Jun 2024 15:52:06 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: At Tue, 18 Jun 2024 17:03:07 -0600, Warner Losh wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > [1 ] > On Tue, Jun 18, 2024, 4:50 PM Greg A. 
Woods wrote: > > At Tue, 18 Jun 2024 04:52:51 +0000, segaloco via TUHS > > wrote: > > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > > philosophy' The Register > > > > > > That's not > > > to diminish the real help of things like autotools and CMake, > > > > Oh, that strikes a nerve. > > > > CMake is the very antithesis of a good tool. It doesn't help. I think > > it is perhaps the worst abomination EVER in the world of software tools, > > and especially amongst software construction tools. > > > > Someone clearly never used imake... Heh heh! I've grovelled deeply in the innards of X11's use of imake, but I assert that its atrocities pale in comparison to those of cmake. At least there were real parsers and proper syntax for imake, and indeed it even built on and used other existing well known tools! -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From kevin.bowling at kev009.com Thu Jun 20 08:58:29 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Wed, 19 Jun 2024 15:58:29 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On Thu, Jun 20, 2024 at 6:46 AM Al Kossow wrote: > On 6/19/24 3:25 PM, Kevin Bowling wrote: > > > Seems like yet another mediocre UNIX port at first, but then they happen > to find the system schematic netlists and PAL equations in addition > > to partial source code on the hard disk. If those are published that > will make a quite wholesome thing to study. > > I tried to stress on their discord that finding things like that 40 years > out is a very unusual occurence. 
> I've spent decades digging through people's garages trying to find stuff > like that for bitsavers. I’m weak in my DEC history and collection but I’ve heard some in person stories that VMS source was not impossible to find on microfiche at certain institutions. They might have had at least board level schematics too. That situation would be roughly equivalent to IBM mainframes through the 1980s but you’d still be missing a lot of microcode and logic equations to zoom in at any level for these two examples. Are you aware of anything else where the whole kit and caboodle is available? Particularly keen on microcode and logic equations since that stuff is indeed hard to come by. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at rikfarrow.com Thu Jun 20 09:03:46 2024 From: rik at rikfarrow.com (Rik Farrow) Date: Wed, 19 Jun 2024 16:03:46 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On Wed, Jun 19, 2024 at 3:58 PM Kevin Bowling wrote: > > > I’m weak in my DEC history and collection but I’ve heard some in person > stories that VMS source was not impossible to find on microfiche at certain > institutions. They might have had at least board level schematics too. > That situation would be roughly equivalent to IBM mainframes through the > 1980s but you’d still be missing a lot of microcode and logic equations to > zoom in at any level for these two examples. > > Someone who had previously worked at Lawrence Livermore Labs offered me the VMS source on microfiche, sometime around 1990. I turned him down... Rik -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aek at bitsavers.org Thu Jun 20 09:10:05 2024 From: aek at bitsavers.org (Al Kossow) Date: Wed, 19 Jun 2024 16:10:05 -0700 Subject: [TUHS] Fwd: Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On 6/19/24 4:03 PM, Rik Farrow wrote: > Someone who had previously worked at Lawrence Livermore Labs offered me the VMS source on microfiche, sometime around 1990. I turned him > down... > > Rik > I have thousands of sheets of DEC Microfiche. Base VMS sources up through version 5-ish aren't rare. Companies have regular purges of engineering documentation for obsolete products. We were lucky at CHM to have a friend as the archivist at HP in the 00s and we were able to save lots of material from a few of their computer lines (HP1000, 68K 9000, and some Apollo) and the DEC corporate paper archive. From peter.martin.yardley at gmail.com Thu Jun 20 09:21:27 2024 From: peter.martin.yardley at gmail.com (Peter Yardley) Date: Thu, 20 Jun 2024 09:21:27 +1000 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: We had VMS source on microfiche as part of the support contract. If you paid more you could have got it as machine readable. > On 20 Jun 2024, at 8:58 AM, Kevin Bowling wrote: > > On Thu, Jun 20, 2024 at 6:46 AM Al Kossow wrote: > On 6/19/24 3:25 PM, Kevin Bowling wrote: > > > Seems like yet another mediocre UNIX port at first, but then they happen to find the system schematic netlists and PAL equations in addition > > to partial source code on the hard disk. If those are published that will make a quite wholesome thing to study. > > I tried to stress on their discord that finding things like that 40 years out is a very unusual occurence. 
> I've spent decades digging through people's garages trying to find stuff like that for bitsavers. > > I’m weak in my DEC history and collection but I’ve heard some in person stories that VMS source was not impossible to find on microfiche at certain institutions. They might have had at least board level schematics too. That situation would be roughly equivalent to IBM mainframes through the 1980s but you’d still be missing a lot of microcode and logic equations to zoom in at any level for these two examples. > > Are you aware of anything else where the whole kit and caboodle is available? Particularly keen on microcode and logic equations since that stuff is indeed hard to come by. Peter Yardley peter.martin.yardley at gmail.com From ads at salewski.email Thu Jun 20 09:20:24 2024 From: ads at salewski.email (Alan D. Salewski) Date: Wed, 19 Jun 2024 19:20:24 -0400 Subject: [TUHS] Be there a "remote diff" utility? In-Reply-To: <20240516195309.GB287325@mit.edu> References: <20240516195309.GB287325@mit.edu> Message-ID: On Thu, May 16, 2024, at 15:53, Theodore Ts'o wrote: > On Thu, May 16, 2024 at 04:34:54PM +1000, Dave Horsfall wrote: >> Every so often I want to compare files on remote machines, but all I can >> do is to fetch them first (usually into /tmp); I'd like to do something >> like: >> >> rdiff host1:file1 host2:file2 >> >> Breathes there such a beast? I see that Penguin/OS has already taken >> "rdiff" which doesn't seem to do what I want. [...] > In any case, the way I'd suggest that you do this that works as an > extention to the Unix philosohy of "Everything looks like a file" is > to use FUSE: > > sshfs host1:/ ~/mnt/host1 > sshfs host2:/ ~/mnt/host2 > diff ~/mnt/host1/file1 ~/mnt/host2/file2 > > Cheers, > > - Ted Mentioning this since I just came across it today: Eric S. Raymond (ESR) has a 'netdiff' tool available here: http://www.catb.org/~esr/netdiff/ https://gitlab.com/esr/netdiff Perform diff across network links. 
Takes two arguments which may be host:path pairs in the style of ssh/scp, make temporary local copies as needed, applies diff(1). All options are passed to diff(1). Usage: netdiff [diff-options] [host1:]path1 [host2:]path2 It's 56 lines of POSIX shell script that invokes 'scp' behind the scenes and creates temp files that are fed to 'diff', with some file name post-processing. Compared to the one line of shell script mentioned by Arnold: diff -u <(ssh host1 cat file1) <(ssh host2 cat file2) the main benefits I see to 'netdiff' in its current form are that it encapsulates the mechanism behind a descriptive name that can be found in the file system, and it has a man page. So it can be found, used and/or studied by those with larval shell fu. However, the comments in the 'netdiff' source suggest openness to supporting different transport mechanisms, which (if added) might allow using the same command line interface regardless of the underlying transport being used (scp, ssh, rsync, curl, wget, whatever), possibly with different transports for each of the requested files. Of course, that would not offer a superior user experience to the '9import', 'sshfs', and NFS approaches also mentioned. But if there were an 'rf' ("remote file") tool that did just the transport abstraction portion, then tools such as 'netdiff' could use it internally, or folks could use it directly: diff -u <(rf user at host1:/path/to/file1) <(rf user at host2:/path/to/file2) The presumption here is that the user has control over some local configuration that maps user at host pairs to transports (maybe defaulting to 'ssh' invoking 'cat' on the remote host). Maybe somebody who needs more than 'ssh' would find value in such a thing... -Al -- a l a n d. s a l e w s k i ads at salewski.email salewski at att.net https://github.com/salewski From woods at robohack.ca Thu Jun 20 09:28:42 2024 From: woods at robohack.ca (Greg A. 
Woods) Date: Wed, 19 Jun 2024 16:28:42 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: At Tue, 18 Jun 2024 19:42:59 -0600, Warner Losh wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > On Tue, Jun 18, 2024, 7:38 PM Greg 'groggy' Lehey wrote: > > > On Tuesday, 18 June 2024 at 17:03:07 -0600, Warner Losh wrote: > > > On Tue, Jun 18, 2024, 4:50 PM Greg A. Woods wrote: > > >> > > >> CMake is the very antithesis of a good tool. It doesn't help. I think > > >> it is perhaps the worst abomination EVER in the world of software tools, > > >> and especially amongst software construction tools. > > > > > > Someone clearly never used imake... > > > > I've used both. I'm with Greg (Woods): cmake takes the cake. > > > > Cmake actually works though... Well, maybe, sometimes, for some limited set of pre-tested targets. Which was true for imake as well, but.... Last time I had to use cmake it took significantly longer to build the monster than it did to build the entire current-at-the-time GCC release. When it runs it usually chews through far more CPU cycles and requires far more RAM than the equivalent Autotools configure script, even including running autoconf et al to build the script first. Its design and implementation ignores the entire history and legacy and knowledge bank of existing tools and lore used to build portable software and throws the one or two existing tools it does pay homage to in your face to spite you. It couldn't ignore Unix philosophy harder and more completely than it does. For example it can't generate a working makefile or script that can then be used without it! At least I couldn't convince it to do so. 
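(To make that complaint concrete: the property being asked for is a build artifact that keeps working with nothing but make(1) itself. The sketch below uses made-up targets and a throwaway directory; it is only an illustration of the idea, not anyone's actual project.)

```shell
# A sketch of the property in question: once a makefile exists, plain
# make(1) -- with no generator, no cache directory, and no configure
# re-run -- is enough to build. Targets here are made up.
demo=$(mktemp -d)
cd "$demo" || exit 1

# printf is used so the recipe lines get real tab characters.
printf 'hello.txt:\n\techo hello > $@\n\nclean:\n\trm -f hello.txt\n' > Makefile

make hello.txt   # nothing but make itself is needed
cat hello.txt
```

The point is only that the artifact is self-sufficient: once generated (or written by hand), make alone rebuilds the project.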
At the time I was hacking on a project that claimed to require it, it couldn't even properly parse a string and had trouble with parentheses. I don't remember the details, but it was very ugly and required stupidly obtuse and self-limiting workarounds. I used to think libtool was the worst software construction tool imagined (well worse than the rest of Autotools), but cmake is many orders of magnitude worse. I will not ever allow cmake to run, or even exist, on the machines I control. Sometimes some tools are just too dangerous to have in the house or workshop, no matter how handy others claim they might be. Using cmake is like _forcing_ you to use a flame thrower to light your portable gas barbecue at a camp site in a tinder-dry forest. Inevitably someone or something is going to get hurt/destroyed. Sorry, cmake is a hot button for me, as you can see. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From imp at bsdimp.com Thu Jun 20 10:20:04 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 19 Jun 2024 18:20:04 -0600 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: At one point, a google search could find most of the VMS source, at least through VMS 5 or maybe 6. I didn't download it at the time and haven't searched since... Given the ubiquity of vms on microfiche I wasn't surprised at the time. A quick search today, though, comes up dry. Warner On Wed, Jun 19, 2024, 5:21 PM Peter Yardley wrote: > We had VMS source on microfiche as part of the support contract. If you > paid more you could have got it as machine readable.
> > > On 20 Jun 2024, at 8:58 AM, Kevin Bowling > wrote: > > > > On Thu, Jun 20, 2024 at 6:46 AM Al Kossow wrote: > > On 6/19/24 3:25 PM, Kevin Bowling wrote: > > > > > Seems like yet another mediocre UNIX port at first, but then they > happen to find the system schematic netlists and PAL equations in addition > > > to partial source code on the hard disk. If those are published that > will make a quite wholesome thing to study. > > > > I tried to stress on their discord that finding things like that 40 > years out is a very unusual occurence. > > I've spent decades digging through people's garages trying to find stuff > like that for bitsavers. > > > > I’m weak in my DEC history and collection but I’ve heard some in person > stories that VMS source was not impossible to find on microfiche at certain > institutions. They might have had at least board level schematics too. > That situation would be roughly equivalent to IBM mainframes through the > 1980s but you’d still be missing a lot of microcode and logic equations to > zoom in at any level for these two examples. > > > > Are you aware of anything else where the whole kit and caboodle is > available? Particularly keen on microcode and logic equations since that > stuff is indeed hard to come by. > > Peter Yardley > peter.martin.yardley at gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.martin.yardley at gmail.com Thu Jun 20 11:18:24 2024 From: peter.martin.yardley at gmail.com (Peter Yardley) Date: Thu, 20 Jun 2024 11:18:24 +1000 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: <04086DAD-2267-45D8-80AA-8A9A4D38D125@gmail.com> There is still OpenVMS. 
> On 20 Jun 2024, at 10:20 AM, Warner Losh wrote: > > At one point, a google search could find most of the VMS source, at least through VMS 5 or maybe 6. I didn't download it at the time and haven't searched since... Given the ubiquity of vms on microfiche I wasn't surprised at the time. A quick search today, though, comes up dry. > > Warner > > On Wed, Jun 19, 2024, 5:21 PM Peter Yardley wrote: > We had VMS source on microfiche as part of he support contract. If you payed more you could have got it as machine readable. > > > On 20 Jun 2024, at 8:58 AM, Kevin Bowling wrote: > > > > On Thu, Jun 20, 2024 at 6:46 AM Al Kossow wrote: > > On 6/19/24 3:25 PM, Kevin Bowling wrote: > > > > > Seems like yet another mediocre UNIX port at first, but then they happen to find the system schematic netlists and PAL equations in addition > > > to partial source code on the hard disk. If those are published that will make a quite wholesome thing to study. > > > > I tried to stress on their discord that finding things like that 40 years out is a very unusual occurence. > > I've spent decades digging through people's garages trying to find stuff like that for bitsavers. > > > > I’m weak in my DEC history and collection but I’ve heard some in person stories that VMS source was not impossible to find on microfiche at certain institutions. They might have had at least board level schematics too. That situation would be roughly equivalent to IBM mainframes through the 1980s but you’d still be missing a lot of microcode and logic equations to zoom in at any level for these two examples. > > > > Are you aware of anything else where the whole kit and caboodle is available? Particularly keen on microcode and logic equations since that stuff is indeed hard to come by. 
> > Peter Yardley > peter.martin.yardley at gmail.com > Peter Yardley peter.martin.yardley at gmail.com From henry.r.bent at gmail.com Thu Jun 20 12:30:56 2024 From: henry.r.bent at gmail.com (Henry Bent) Date: Wed, 19 Jun 2024 22:30:56 -0400 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On Wed, 19 Jun 2024 at 20:20, Warner Losh wrote: > At one point, a google search could find most of the VMS source, at least > through VMS 5 or maybe 6. I didn't download it at the time and haven't > searched since... Given the ubiquity of vms on microfiche I wasn't > surprised at the time. A quick search today, though, comes up dry. > Someone uploaded what looks to be a full set of VMS V4.0 microfiche to archive.org, but they did it in such a way that it is hundreds of separate documents instead of one cohesive collection. I guess it's in keeping with the "here it is, but not in a particularly easy to use format" idea of the microfiche distribution... I'm not a VMS person but I do look through a lot of DEC archives in more and less above-board places and I've only ever seen VMS source up to V4.0. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Thu Jun 20 12:48:47 2024 From: imp at bsdimp.com (Warner Losh) Date: Wed, 19 Jun 2024 20:48:47 -0600 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On Wed, Jun 19, 2024, 8:31 PM Henry Bent wrote: > On Wed, 19 Jun 2024 at 20:20, Warner Losh wrote: > >> At one point, a google search could find most of the VMS source, at least >> through VMS 5 or maybe 6. I didn't download it at the time and haven't >> searched since... 
Given the ubiquity of vms on microfiche I wasn't >> surprised at the time. A quick search today, though, comes up dry. >> > > Someone uploaded what looks to be a full set of VMS V4.0 microfiche to > archive.org, but they did it in such a way that it is hundreds of > separate documents instead of one cohesive collection. I guess it's in > keeping with the "here it is, but not in a particularly easy to use format" > idea of the microfiche distribution... > Pictures of printouts of listings of compiler output with various source directives that control the output. It's hard to beat retyping by hand for recovering the original source.... And these are what I found... and was part of an effort to recover the original sources.... can't recall what happened to that... Warner I'm not a VMS person but I do look through a lot of DEC archives in more > and less above-board places and I've only ever seen VMS source up to V4.0. > > -Henry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Thu Jun 20 14:08:45 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Thu, 20 Jun 2024 04:08:45 +0000 Subject: [TUHS] Unix single-machine licensing (was Re: Re: ACM Software System Award to Andrew S. Tanenbaum for MINIX) In-Reply-To: References: <202406190655.45J6tkVP902384@freefriends.org> <1c8abcf6-5997-3c9e-7d08-b941b34089db@bitsavers.org> Message-ID: On Wednesday, June 19th, 2024 at 12:10 PM, segaloco via TUHS wrote: > On Wednesday, June 19th, 2024 at 9:00 AM, Al Kossow aek at bitsavers.org wrote: > > > On 6/19/24 8:47 AM, Clem Cole wrote: > > > > > That's how I remember Otis Wilson explaining it to us as commercial licensees at a licensing meeting in the early 1980s. > > > We had finally completed the PWB 3.0 license to replace the V7 commercial license (AT&T would rename this System III - but we knew it as PWB > > > 3.) during the negotiations, Summit had already moved on to the next version - PWB 4.0.
IMO: Otis was not ready to start that process again. > > > > Is the really early history of Unix licensing documented anywhere? > > The work on reviving a Plexus P20 prompted me to put up the history of Onyx and Plexus at > > http://bitsavers.org/pdf/plexus/history and a long time ago someone who worked at Fortune > > told me we can all thank Onyx in 1980 for working out the single machine licensing with > > AT&T > > > I've got a stack of license specimens as well as a bit of correspondence between > MIT Lincoln Laboratory, Raytheon, and Western Electric discussing UNIX licenses > for single CPUs. The correspondence (circa 1980) concerns V7 licenses for a > PDP-11/44 (MIT LL) and PDP-11/45 (Raytheon). The license specimens are in two > groups, one set that has blanks and/or generic language describing > "Licensed Software" and a second set specifically issued for UNIX System III. > > The licenses have document codes: > > - Software-Corp.-020173-020182-2 - Software Agreement between AT&T and > > - Software-Customer CPU-052776-090180-2 - Customer CPU Agreement between and > > - Supp. Ag.-Time Sharing-020178-010180-2 - Supplemental Agreement (Time Sharing) between Western Electric Company, Incorporated and > > - Supp. Ag.-Customer CPU-020178-010180-2 - Supplemental Agreement (Customer CPU) between Western Electric Company, Incorporated and > > - Supp. Ag.-Cust. Spec.-020181-2 - Supplemental Agreement (Customer Software, Specified Number of Users) between Western Electric Company, Incorporated and > > - Cont. CPU-060181-1 - Contractor CPU Agreement between and > > - Sys. III-Corp.-110181-040182-2 - Software Agreement between AT&T and for UNIX System III > > - Sys. III-Cust.-010182-041582-2 - Supplemental Agreement (Customer Provisions) between AT&T and for UNIX System III > > > Would scans of these documents help? The licenses at least should be fine as they're specimen copies with no PII. 
Regarding the correspondence, there is one letter on DARPA letterhead (from MIT LL to WECo), two on WECo letterhead (one back to MIT LL, the other to Raytheon) and then one on AT&T letterhead responding generically to an inquiry regarding UNIX System III licensing. Does anyone foresee issues with scanning the correspondence, or is that the sort of thing that might get me shipped off to some black site? > > - Matt G. And now these are up here: https://archive.org/details/att_unix_licenses_1982 Included is misc.pdf, which is the couple letters as well as a packing slip from Bell Laboratories for a shipment of V7. Also a revision to my last listing of their document codes, the "-2" on the end is a page number...whoops...the document names in the archive posting reflect the document codes sans this oversight. I labeled the posting 1982 as that's the date on the latest of the license specimens, the letters were in the same batch of documents but not adjacent. - Matt G. From tuhs at tuhs.org Thu Jun 20 14:12:27 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Thu, 20 Jun 2024 04:12:27 +0000 Subject: [TUHS] Anyone have Plexus docs/software squirreled away? In-Reply-To: References: <58cc45a9-d1b0-9dba-c3c0-a2970c4d7b3d@bitsavers.org> <477b0ad9-1dc0-11bc-bab0-5a0fb3134639@bitsavers.org> Message-ID: On Wednesday, June 19th, 2024 at 7:48 PM, Warner Losh wrote: > > > On Wed, Jun 19, 2024, 8:31 PM Henry Bent wrote: > > > On Wed, 19 Jun 2024 at 20:20, Warner Losh wrote: > > > > > At one point, a google search could find most of the VMS source, at least through VMS 5 or maybe 6. I didn't download it at the time and haven't searched since... Given the ubiquity of vms on microfiche I wasn't surprised at the time. A quick search today, though, comes up dry. > > > > > > Someone uploaded what looks to be a full set of VMS V4.0 microfiche to archive.org, but they did it in such a way that it is hundreds of separate documents instead of one cohesive collection.
I guess it's in keeping with the "here it is, but not in a particularly easy to use format" idea of the microfiche distribution... > > > Pictures of printouts of listing of compiler output with various source directives that control the ouput. It's hard to beat retyping by hand for recovering the original source.... > > And these are what i found... and was part of an effort to recover the original sources.... can't recall what hapoened to that... > > Warner > > > > I'm not a VMS person but I do look through a lot of DEC archives in more and less above-board places and I've only ever seen VMS source up to V4.0. > > > > -Henry It's a shame, a VMS source microfiche set popped up on eBay some time in the past year, I can't recall why I didn't hop on it. I do keep an eye out though. If I ever find another I may be fiching around for some help :) - Matt G. From tuhs at tuhs.org Thu Jun 20 15:01:01 2024 From: tuhs at tuhs.org (Scot Jenkins via TUHS) Date: Thu, 20 Jun 2024 01:01:01 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: <202406200501.45K5118a028500@sdf.org> "Greg A. Woods" wrote: > I will not ever allow cmake to run, or even exist, on the machines I > control... I'm not a fan of cmake either. How do you deal with software that only builds with cmake (or meson, scons, ... whatever the developer decided to use as the build tool)? What alternatives exist short of reimplementing the build process in a standard makefile by hand, which is obviously very time consuming, error prone, and will probably break the next time you want to update a given package? If there is some great alternative, I would like to know about it. 
scot From luther.johnson at makerlisp.com Thu Jun 20 15:09:11 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Wed, 19 Jun 2024 22:09:11 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <202406200501.45K5118a028500@sdf.org> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: <5d991bcb-0bac-50f1-ba8e-4c9c561499c9@makerlisp.com> I just avoid tools that build with CMake altogether, I look for alternative tools. The tool has already told me, what I can expect from a continued relationship, by its use of CMake ... On 06/19/2024 10:01 PM, Scot Jenkins via TUHS wrote: > "Greg A. Woods" wrote: > >> I will not ever allow cmake to run, or even exist, on the machines I >> control... > I'm not a fan of cmake either. > > How do you deal with software that only builds with cmake (or meson, > scons, ... whatever the developer decided to use as the build tool)? > What alternatives exist short of reimplementing the build process in > a standard makefile by hand, which is obviously very time consuming, > error prone, and will probably break the next time you want to update > a given package? > > If there is some great alternative, I would like to know about it. > > scot > From davida at pobox.com Thu Jun 20 15:14:57 2024 From: davida at pobox.com (David Arnold) Date: Thu, 20 Jun 2024 15:14:57 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... 
URL: From luther.johnson at makerlisp.com Thu Jun 20 15:18:02 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Wed, 19 Jun 2024 22:18:02 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <5d991bcb-0bac-50f1-ba8e-4c9c561499c9@makerlisp.com> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <5d991bcb-0bac-50f1-ba8e-4c9c561499c9@makerlisp.com> Message-ID: That being said, I must confess there is a very small number of tools I use, which I build from source, keep up with patches and updates for, etc. I don't have the same problems as a sysadmin for a large community using many open source projects, those issues are real but since I'm only supporting my opinionated self, I have the luxury of choosing software that meets my approval. On 06/19/2024 10:09 PM, Luther Johnson wrote: > I just avoid tools that build with CMake altogether, I look for > alternative tools. The tool has already told me, what I can expect from > a continued relationship, by its use of CMake ... > > On 06/19/2024 10:01 PM, Scot Jenkins via TUHS wrote: >> "Greg A. Woods" wrote: >> >>> I will not ever allow cmake to run, or even exist, on the machines I >>> control... >> I'm not a fan of cmake either. >> >> How do you deal with software that only builds with cmake (or meson, >> scons, ... whatever the developer decided to use as the build tool)? >> What alternatives exist short of reimplementing the build process in >> a standard makefile by hand, which is obviously very time consuming, >> error prone, and will probably break the next time you want to update >> a given package? >> >> If there is some great alternative, I would like to know about it.
>> >> scot >> > > From ggm at algebras.org Thu Jun 20 15:32:17 2024 From: ggm at algebras.org (George Michaelson) Date: Thu, 20 Jun 2024 15:32:17 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: we used to argue about that. I disliked autoconf because I felt 99% of the work could be precomputed, which is what MIT X11 Makefiles did: they had recipes for the common architectures. -G On Thu, Jun 20, 2024 at 3:15 PM David Arnold wrote: > > > On 20 Jun 2024, at 08:48, Kevin Bowling wrote: > >  > On Wed, Jun 19, 2024 at 11:59 PM Theodore Ts'o wrote: > > > <…> > >> So I did everything using (only) autoconf, including building and >> using shared libraries, > > > This is The Way if you really care about portability. Autoconf, once you get your head around what, why, and when it was created, makes for nice Makefiles and projects that are easy to include in the 100 Linux distributions with their own take on packaging the world. > > > For those of a certain era, autoconf was both useful and relatively simple to use. > > In an era of many, divergent Unices, with different compilers, shared library implementations, and varying degrees of adherence to standards, it made using FOSS a matter of ‘./configure && make && make install’ which was massively easier than what had been required previously unless you happened to have exactly the same platform as the author. > > And to use it, you needed to understand shell, make, and m4, and learn a few dozen macros (at most). m4 was perhaps the least likely skill, but since it was used by sendmail(.mc), twmrc, X11 app defaults and various other stuff, most people already had a basic understanding of it. > > In my view the modern rejection of autoconf as “incomprehensible” mostly suggests that the speaker comes from a generation that never used the original Unix toolset. 
> > > > > d From flexibeast at gmail.com Thu Jun 20 16:37:35 2024 From: flexibeast at gmail.com (Alexis) Date: Thu, 20 Jun 2024 16:37:35 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (George Michaelson's message of "Thu, 20 Jun 2024 15:32:17 +1000") References: Message-ID: <87jzikt900.fsf@gmail.com> George Michaelson writes: > we used to argue about that. I disliked autoconf because I felt > 99% of > the work could be precomputed, which is what MIT X11 Makefiles > did: > they had recipes for the common architectures. A point still being made: > So, okay, fine, at some point it made sense to run programs to > empirically determine what was supported on a given system. What > I don't understand is why we kept running those stupid little > shell snippets and little bits of C code over and over. It's > like, okay, we established that this particular system does > with two args, not three. So why the > hell are we constantly testing for it over and over? > > Why didn't we end up with a situation where it was just a > standard thing that had a small number of possible values, and > it would just be set for you somewhere? Whoever was responsible > for building your system (OS company, distribution packagers, > whatever) could leave something in /etc that says "X = flavor 1, > Y = flavor 2" and so on down the line. > > And, okay, fine, I get that there would have been all kinds of > "real OS companies" that wouldn't have wanted to stoop to the > level of the dirty free software hippies. Whatever. Those same > hippies could have run the tests ONCE per platform/OS combo, put > the results into /etc themselves, and then been done with it. > > Then instead of testing all of that shit every time we built > something from source, we'd just drag in the pre-existing > results and go from there. It's not like the results were going > to change on us. 
They were a reflection of the way the kernel, C > libraries, APIs and userspace happened to work. Short of that > changing, the results wouldn't change either. --https://rachelbythebay.com/w/2024/04/02/autoconf/ Alexis. From davida at pobox.com Thu Jun 20 17:07:51 2024 From: davida at pobox.com (David Arnold) Date: Thu, 20 Jun 2024 17:07:51 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87jzikt900.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> Message-ID: <8806139B-266F-4B8C-B504-AB4F2FD2460E@pobox.com> > On 20 Jun 2024, at 16:37, Alexis wrote: > > George Michaelson writes: > >> we used to argue about that. I disliked autoconf because I felt 99% of >> the work could be precomputed, which is what MIT X11 Makefiles did: >> they had recipes for the common architectures. > > A point still being made: > >> So, okay, fine, at some point it made sense to run programs to empirically determine what was supported on a given system. What I don't understand is why we kept running those stupid little shell snippets and little bits of C code over and over. It's like, okay, we established that this particular system does with two args, not three. So why the hell are we constantly testing for it over and over? >> >> Why didn't we end up with a situation where it was just a standard thing that had a small number of possible values, and it would just be set for you somewhere? Whoever was responsible for building your system (OS company, distribution packagers, whatever) could leave something in /etc that says "X = flavor 1, Y = flavor 2" and so on down the line. >> >> And, okay, fine, I get that there would have been all kinds of "real OS companies" that wouldn't have wanted to stoop to the level of the dirty free software hippies. Whatever. Those same hippies could have run the tests ONCE per platform/OS combo, put the results into /etc themselves, and then been done with it. 
>> >> Then instead of testing all of that shit every time we built something from source, we'd just drag in the pre-existing results and go from there. It's not like the results were going to change on us. They were a reflection of the way the kernel, C libraries, APIs and userspace happened to work. Short of that changing, the results wouldn't change either. > > --https://rachelbythebay.com/w/2024/04/02/autoconf/ Which brings us back to imake (at least in xmkmf form), where if the pre-prepared settings matched your system, you were good and if not, you had a heap of work to set all the magic variables to have it build correctly. On classic MacOS, otoh, you’d compile against an SDK, but for each ROM/library symbol you wanted to use you were expected to check at runtime if it existed, and if not, switch to some alternative behaviour. Autoconf was somewhat of a middle path: check once for each installation. It also made more sense when there was less uniformity in the platforms in use. That said: autoconf never really worked outside of Unix. You could make other platforms Unix-like (eg. Cygwin, or even BeOS), but its claim to portability was always fairly narrow. U d From usotsuki at buric.co Thu Jun 20 18:05:13 2024 From: usotsuki at buric.co (Steve Nickolas) Date: Thu, 20 Jun 2024 04:05:13 -0400 (EDT) Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: So I have one program that relies on stuff that might vary from system to system. I just make use of functionality common to gmake and bmake, and expect the system to be in a reasonable state that the various detection scripts provided by the libraries work, and have "#ifdef" take care of the rest, so for most systems building the program is just "make". Works on Linux, OSX and a few other unices. 
I have a separate build script I use to build the Windows version. -uso. From lyndon at orthanc.ca Fri Jun 21 02:45:27 2024 From: lyndon at orthanc.ca (Lyndon Nerenberg (VE7TFX/VE6BBM)) Date: Thu, 20 Jun 2024 09:45:27 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register Message-ID: <517f74db45c3b1af@orthanc.ca> > This is The Way if you really care about portability. Autoconf, > once you get your head around what, why, and when it was created, > makes for nice Makefiles and projects that are easy to include in > the 100 Linux distributions with their own take on packaging the > world. This is outright claptrap and nonsense. In the latter half of the 90s I was responsible for writing installers and generating platform-native packages for about a dozen different commercial UNIX platforms (AIX, Solaris, Irix, HP/UX, OSF, BSD/OS, ...). Each of these package systems was as different as could be from the others. (HP/UX didn't even have one.) That entire process was driven by not very many lines of make recipes, with the assistance of some awk glue that read a template file from which it generated the native packages. And these were not trivial software distributions. We were shipping complex IMAP, X.400 and X.500 servers, along with a couple of MTAs. Our installers didn't just dump the files onto the system and point you at a README; we coded a lot of the site setup into the installers, so the end user mostly just had to edit a single config file to finish up. --lyndon From kevin.bowling at kev009.com Fri Jun 21 04:32:18 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Thu, 20 Jun 2024 11:32:18 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <517f74db45c3b1af@orthanc.ca> References: <517f74db45c3b1af@orthanc.ca> Message-ID: On Thu, Jun 20, 2024 at 9:45 AM Lyndon Nerenberg (VE7TFX/VE6BBM) wrote: > > > This is The Way if you really care about portability. 
Autoconf, > > once you get your head around what, why, and when it was created, > > makes for nice Makefiles and projects that are easy to include in > > the 100 Linux distributions with their own take on packaging the > > world. > > This is outright claptrap and nonsense. In the latter half of the > 90s I was responsible for writing installers and generating > platform-native packages for about a dozen different commercial > UNIX platforms (AIX, Solaris, Irix, HP/UX, OSF, BSD/OS, ...). Each > of these package systems was as different as could be from the > others. (HP/UX didn't even have one.) Strong language for something that is easily measured by looking at a contemporary package collection. That's great for you and whatever this was but it is a simple fact that autoconf is the most common tool for Linux and rejecting it or something like cmake that has widespread adoption makes life more difficult for distributions. Go look at a random deb or rpm spec or ebuild or apk or whatever you wish, these all have inbox support for autoconf and have to impedance mismatch your clever custom jobs. > That entire process was driven by not very many lines of make > recipes, with the assistance of some awk glue that read a template > file from which it generated the native packages. And these were > not trivial software distributions. We were shipping complex IMAP, > X.400 and X.500 servers, along with a couple of MTAs. Our installers > didn't just dump the files onto the system and point you at a README; > we coded a lot of the site setup into the installers, so the end > user mostly just had to edit a single config file to finish up. The set of people that interact with make is minimal in relation to the userbase of contemporary unix. Binary distribution is the norm and has been for decades. > --lyndon From woods at robohack.ca Fri Jun 21 04:34:04 2024 From: woods at robohack.ca (Greg A. 
Woods) Date: Thu, 20 Jun 2024 11:34:04 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <202406200501.45K5118a028500@sdf.org> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS wrote: Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > "Greg A. Woods" wrote: > > > I will not ever allow cmake to run, or even exist, on the machines I > > control... > > I'm not a fan of cmake either. > > How do you deal with software that only builds with cmake (or meson, > scons, ... whatever the developer decided to use as the build tool)? > What alternatives exist short of reimplementing the build process in > a standard makefile by hand, which is obviously very time consuming, > error prone, and will probably break the next time you want to update > a given package? The alternative _is_ to reimplement the build process. For example, see: https://github.com/robohack/yajl/ This example is a far more comprehensive rewrite than is usually necessary as I wanted a complete and portable example that could be used as the basis for further projects. An example of a much simpler reimplementation: http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From athornton at gmail.com Fri Jun 21 04:41:50 2024 From: athornton at gmail.com (Adam Thornton) Date: Thu, 20 Jun 2024 11:41:50 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: Someone clearly never used imake... There's a reason that the xmkmf command ends in the two letters it does, and I'm never going to believe it's "make file". Adam On Thu, Jun 20, 2024 at 11:34 AM Greg A. Woods wrote: > At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS > wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix > philosophy' The Register > > > > "Greg A. Woods" wrote: > > > > > I will not ever allow cmake to run, or even exist, on the machines I > > > control... > > > > I'm not a fan of cmake either. > > > > How do you deal with software that only builds with cmake (or meson, > > scons, ... whatever the developer decided to use as the build tool)? > > What alternatives exist short of reimplementing the build process in > > a standard makefile by hand, which is obviously very time consuming, > > error prone, and will probably break the next time you want to update > > a given package? > > The alternative _is_ to reimplement the build process. > > For example, see: > > https://github.com/robohack/yajl/ > > This example is a far more comprehensive rewrite than is usually > necessary as I wanted a complete and portable example that could be used > as the basis for further projects. > > An example of a much simpler reimplementation: > > > http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN > > -- > Greg A. 
Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. Avoncote Farms > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcapp at anteil.com Fri Jun 21 05:57:03 2024 From: jcapp at anteil.com (Jim Capp) Date: Thu, 20 Jun 2024 15:57:03 -0400 (EDT) Subject: [TUHS] Version 256.1: Now slightly less likely to delete /home In-Reply-To: Message-ID: <11321078.2005.1718913423943.JavaMail.root@zimbraanteil> Enjoy. https://www.theregister.com/2024/06/20/systemd_2561_data_wipe_fix/ " Following closely after the release of version 256, version 256.1 fixes a handful of bugs. One of these is emphatically not systemd-tmpfiles recursively deleting your entire home directory. That's a feature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Fri Jun 21 05:59:20 2024 From: imp at bsdimp.com (Warner Losh) Date: Thu, 20 Jun 2024 13:59:20 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: For me, precomputing an environment is the same as a wysiwyg editor: what you see is all you get. If it works for you, and the environment that's inferred from predefined CPP symbols is correct, then it's an easy solution. When it's not, and for me it often wasn't, it's nothing but pain and suffering and saying MF all the time (also not Make File).... I was serious when I've said I've had more positive cmake experiences (which haven't been all that impressive: I'm more impressed with meson in this space, for example) than I ever had with IMakefiles, imake, xmkmf, etc... But It's also clear that different people have lived through different hassles, and I respect that... 
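[The "precompute once, reuse forever" scheme from the Rachel-by-the-bay quote upthread would amount to little more than a handful of shell assignments, written once per platform and sourced forever after. A sketch; the file name and the feature variables are invented for illustration, and no real distribution ships such a file.]

```shell
#!/bin/sh
# Hypothetical per-platform answers file, written once by whoever built
# the OS. (The thread's suggestion was /etc; the current directory is
# used here so the sketch is self-contained.)
cat > build-features.sh <<'EOF'
feature_foo_args=2      # "this system does foo with two args, not three"
feature_have_mmap=yes
EOF

# Every later build sources the answers instead of re-running the
# little test programs over and over.
. ./build-features.sh
echo "foo args: $feature_foo_args, mmap: $feature_have_mmap"
```

[The catch, per the paragraph above, is that the file is only as correct as whoever wrote it: when it is wrong, what you see is all you get.]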
I've noticed too that we're relatively homogeneous these days: Everybody is a Linux box or Windows Box or MacOS, except for a few weird people on the fringes (like me). It's a lot easier to get things right enough w/o autotools, scons, meson, etc than it was in The Bad Old Days of the Unix Wars and the Innovation Famine that followed from the late 80s to the mid 2000s.... In that environment, there's one of two reactions: Test Everything or Least Common Denominator. And we've seen both represented in this thread. As well as the 'There's so few environments, can't you precompute them all?' sentiment from newbies that never bloodied their knuckles with some of the less like Research Unix machines out there like AIX and HP/UX... Or worse, Eunice... Warner On Thu, Jun 20, 2024 at 12:42 PM Adam Thornton wrote: > > > Someone clearly never used imake... > > > There's a reason that the xmkmf command ends in the two letters it does, > and I'm never going to believe it's "make file". > > Adam > > On Thu, Jun 20, 2024 at 11:34 AM Greg A. Woods wrote: > >> At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS >> wrote: >> Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix >> philosophy' The Register >> > >> > "Greg A. Woods" wrote: >> > >> > > I will not ever allow cmake to run, or even exist, on the machines I >> > > control... >> > >> > I'm not a fan of cmake either. >> > >> > How do you deal with software that only builds with cmake (or meson, >> > scons, ... whatever the developer decided to use as the build tool)? >> > What alternatives exist short of reimplementing the build process in >> > a standard makefile by hand, which is obviously very time consuming, >> > error prone, and will probably break the next time you want to update >> > a given package? >> >> The alternative _is_ to reimplement the build process. 
>> >> For example, see: >> >> https://github.com/robohack/yajl/ >> >> This example is a far more comprehensive rewrite than is usually >> necessary as I wanted a complete and portable example that could be used >> as the basis for further projects. >> >> An example of a much simpler reimplementation: >> >> >> http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN >> >> -- >> Greg A. Woods >> >> Kelowna, BC +1 250 762-7675 RoboHack >> Planix, Inc. Avoncote Farms >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jun 21 06:12:45 2024 From: rminnich at gmail.com (ron minnich) Date: Thu, 20 Jun 2024 13:12:45 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: The slurm configure script, generated by autoconf, is 1 mbyte, 30,000 lines of impenetrable script. For each .c file in slurm, an 11960-line libtool script is invoked. autoconf uses m4. When I told Eric Grosse, back in 2015, that *anything* still used m4, he would not believe it was "the m4 from the 70s" until I showed him. He was speechless. He had assumed m4 died in the 80s. Personally, the autoconf process does not fill me with confidence, and it was recently responsible for a very serious security problem. And, autoconf doesn't work: I've lost track of how many times autoconf has failed for me. In general, in my experience, autoconf makes for less portability, not more. For a good example of a very portable system with a very clean, human-readable make setup, I'd recommend a look at plan9ports. 
It includes 2 window managers, 2 graphical editors, and all the Plan 9 tools, and somehow manages to be at least as portable as the autoconf mechanism. On Thu, Jun 20, 2024 at 12:59 PM Warner Losh wrote: > For me, precomputing an environment is the same as a wysiwyg editor: what > you see is all you get. > [...] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From als at thangorodrim.ch Fri Jun 21 06:14:45 2024 From: als at thangorodrim.ch (Alexander Schreiber) Date: Thu, 20 Jun 2024 22:14:45 +0200 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240617012531.GE12821@mcvoy.com> References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> Message-ID: On Sun, Jun 16, 2024 at 06:25:31PM -0700, Larry McVoy wrote: > On Mon, Jun 17, 2024 at 11:01:40AM +1000, Alexis wrote: > > > > The issue isn't about learning shell scripting _per se_. It's about the > > extent to which _volunteers_ have to go beyond the _basics_ of shell > > scripting to learn about the _complexities_ and _subtle issues_ involved in > > using it to provide _robust_ service management. Including learning, for > > example, that certain functionality one takes for granted in a given shell > > isn't actually POSIX, and can't be assumed to be present in the shell one is > > working with (not to mention that POSIX-compatibility might need to be > > actively enabled, as in the case of e.g. ksh, via POSIXLY_CORRECT). > > This is sort of off topic but maybe relevant. > > When I was running my company, my engineers joked that if it were invented > after 1980 I wouldn't let them use it. Which wasn't true, we used mmap(). > > But the underlying sentiment sort of was true. Even though they were > all used to bash, I tried very hard to not use bash specific stuff. > And it paid off, in our hey day, we supported SCO, AIX, HPUX, SunOS, > Solaris, Tru64, Linux on every architecture from tin to IBM mainframes, > Windows, Macos on PPC and x86, etc. And probably a bunch of other > platforms I've forgotten. > > *Every* time they used some bash-ism, it bit us in the ass. I kept > telling them "our build environment is not our deployment environment". > We had a bunch of /bin/sh stuff that we shipped so we had to go for > the common denominator. 
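[The bash-isms that bite are rarely exotic. An illustrative example (hypothetical, not any particular project's breakage): uppercasing a string in a configure-style script.]

```shell
#!/bin/sh
# Portable uppercasing: runs under any POSIX sh (dash, ash, ksh, bash).
name="netbsd"
NAME=$(printf '%s' "$name" | tr '[:lower:]' '[:upper:]')
echo "$NAME"    # NETBSD

# The tempting one-liner, NAME=${name^^}, is bash 4+ only and dies with
# a "bad substitution" error wherever /bin/sh is not bash.
```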
My latest brush with someone using bash in the wrong place was when I saw the configure scripts for GlusterFS break on NetBSD. Because someone had used bash 4 syntax in the configure scripts ... presumably on a Linux variant where /bin/sh == /bin/bash. While that was easy to fix (and the PR accepted and patched in) I shouldn't have had to fix that in the first place ... Kind regards, Alex. -- "Opportunity is missed by most people because it is dressed in overalls and looks like work." -- Thomas A. Edison From clemc at ccc.com Fri Jun 21 06:19:06 2024 From: clemc at ccc.com (Clem Cole) Date: Thu, 20 Jun 2024 16:19:06 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: On Thu, Jun 20, 2024 at 3:59 PM Warner Losh wrote: > As well as the 'There's so few environments, can't you precompute them > all?' sentiment from newbies that never bloodied their knuckles with some > of the less like Research Unix machines out there like AIX and HP/UX... Or > worse, Eunice... > Remember Henry's 10 commandments [maybe I can LA to post them in all their universities] the 10th rings harsh here: - *Thou shalt forswear, renounce, and abjure the vile heresy which claimeth that All the world's a VAX, and have no commerce with the benighted heathens who cling to this barbarous belief, that the days of thy program may be long even though the days of thy current machine be short.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From athornton at gmail.com Fri Jun 21 06:22:15 2024 From: athornton at gmail.com (Adam Thornton) Date: Thu, 20 Jun 2024 13:22:15 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: > > In general, in my experience, autoconf makes for less portability, not > more. As an anecdotal data point, Jay Maynard has forked Hercules (S360/370/390/z Emulator) and, among other things, is ripping out 25 years of increasingly-baroque autoconf, which will probably make the build process (notoriously finicky) a whole lot clearer and easier. Adam On Thu, Jun 20, 2024 at 1:12 PM ron minnich wrote: > The slurm configure, produced by the configure script, is 1 mbyte, 30,000 > lines of impenetrable script. For each .c file in slurm, an 11960 line > libtool script is invoked. autoconfig uses m4. When I told Eric Grosse, > back in 2015, that *anything* still used m4, he would not believe it was > "the m4 from the 70s" until I showed him. He was speechless. He had assumed > m4 died in the 80s. > > Personally, the autoconfig process does not fill me with confidence, and > it was recently responsible for a very serious security problem. And, > autoconfig doesn't work: I've lost track of how many times autoconf has > failed for me. In general, in my experience, autoconf makes for less > portability, not more. > > For a good example of a very portable system with a very clean, > human-readable make setup, I'd recommend a look at plan9ports. It includes > 2 window managers, 2 graphical editors, and all the Plan 9 tools, and > somehow manages to be at least as portable as the autoconf mechanism. > > On Thu, Jun 20, 2024 at 12:59 PM Warner Losh wrote: > >> For me, precomputing an environment is the same as a wysiwyg editor: what >> you see is all you get. 
>> [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jun 21 06:29:51 2024 From: rminnich at gmail.com (ron minnich) Date: Thu, 20 Jun 2024 13:29:51 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: and, this just happened in slurm: autoreconf at top level of slurm fails with an m4 error complete with the usual incomprehensible error message. This gets old. As the saying goes, autoconf tools may be slow and buggy, but at least they're hard to use. 
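[For what it's worth, autoconf has long shipped a partial answer to the "run the tests once" complaint: every check caches its result in an ac_cv_* shell variable, and configure sources the file named by $CONFIG_SITE (and, with ./configure -C, a config.cache) before probing. Almost nobody uses it. The variable naming and the CONFIG_SITE hook below are the real mechanism; the mini-configure is just a stand-in for the generated script.]

```shell
#!/bin/sh
# A site file of precomputed answers, in autoconf's own ac_cv_* naming.
cat > config.site <<'EOF'
ac_cv_header_stdlib_h=yes
ac_cv_func_mmap_fixed_mapped=yes
EOF

# What a real configure does early on: source the file named by
# $CONFIG_SITE, then skip any test whose ac_cv_* answer is present.
CONFIG_SITE=./config.site
[ -n "$CONFIG_SITE" ] && . "$CONFIG_SITE"

if [ "${ac_cv_header_stdlib_h+set}" = "set" ]; then
    echo "checking for stdlib.h... (cached) $ac_cv_header_stdlib_h"
else
    echo "checking for stdlib.h... (would compile a test program here)"
fi
```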
one of the things I am grateful to Go for is that it does not suffer from this sort of nonsense. On Thu, Jun 20, 2024 at 1:12 PM ron minnich wrote: > The slurm configure, produced by the configure script, is 1 mbyte, 30,000 > lines of impenetrable script. For each .c file in slurm, an 11960 line > libtool script is invoked. autoconfig uses m4. When I told Eric Grosse, > back in 2015, that *anything* still used m4, he would not believe it was > "the m4 from the 70s" until I showed him. He was speechless. He had assumed > m4 died in the 80s. > > Personally, the autoconfig process does not fill me with confidence, and > it was recently responsible for a very serious security problem. And, > autoconfig doesn't work: I've lost track of how many times autoconf has > failed for me. In general, in my experience, autoconf makes for less > portability, not more. > > For a good example of a very portable system with a very clean, > human-readable make setup, I'd recommend a look at plan9ports. It includes > 2 window managers, 2 graphical editors, and all the Plan 9 tools, and > somehow manages to be at least as portable as the autoconf mechanism. > > On Thu, Jun 20, 2024 at 12:59 PM Warner Losh wrote: > >> For me, precomputing an environment is the same as a wysiwyg editor: what >> you see is all you get. If it works for you, and the environment that's >> inferred from predefined CPP symbols is correct, then it's an easy >> solution. When it's not, and for me it often wasn't, it's nothing but pain >> and suffering and saying MF all the time (also not Make File).... I was >> serious when I've said I've had more positive cmake experiences (which >> haven't been all that impressive: I'm more impressed with meson in this >> space, for example) than I ever had with IMakefiles, imake, xmkmf, etc... >> But It's also clear that different people have lived through different >> hassles, and I respect that... 
>> [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From luther.johnson at makerlisp.com Fri Jun 21 06:34:56 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Thu, 20 Jun 2024 13:34:56 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: <2a834aef-2b52-6b16-b79a-7f321585a4b8@makerlisp.com> I agree that there are certainly times when CMake's leverage has solved problems for people. My most visceral reactions were mostly based on cases where no tool like CMake was really required at all, but CMake had wormed its way into the consciousness of new programmers who never learned make, and thought CMake was doing them a great service. Bugged the hell out of me, this dumbing-down of the general programming population. My bad experiences were all as a consultant to teams that needed a lot of expert help, when they had thrown CMake along with a lot of other unnecessary complexity into their half-working solutions. So I guess it was all tarred by the same flavor of badly conceived work. 
But then as I tried to make my peace with the CMake build as it was, I got a deeper understanding of how intrinsically irrational CMake is (and again, behavior changing on the same builds depending on CMake release versions). So there certainly are times when something a little more comprehensive, outside of make, is required. ./configure && make is not so bad, it's not irrational, sometimes it's overkill, but it works ... but only if the system is kind of Unix-y. If not you may wind up doing a lot of work to pretend it's more Unix-y, so instead of porting your software, you're porting it to a common Unix-like subset, then emulating that Unix-like subset on your platform, both ends against the middle. That can be ultimately counter-productive too. I have an emotional reaction when I see the porting problem become transformed into adherence to the "one true way", be it Unix, or one build system or another. Because you're now just re-casting the problem into acceptance of that other tool or OS core as the way it should be. Instead of getting your thing to work on the other platform, by translating from what your application wants, into how to do it on whatever system, you're changing your application to be more like what the "one true system" wants to see. You've given up control of your idea of your app's core OS requirements, you've decided to "just give in and be UNiX (or Windows, or whatever)". To me, that's backwards. On 06/20/2024 12:59 PM, Warner Losh wrote: > For me, precomputing an environment is the same as a wysiwyg editor: > what you see is all you get. If it works for you, and the environment > that's inferred from predefined CPP symbols is correct, then it's an > easy solution. When it's not, and for me it often wasn't, it's nothing > but pain and suffering and saying MF all the time (also not Make > File)....
I was serious when I've said I've had more positive cmake > experiences (which haven't been all that impressive: I'm more > impressed with meson in this space, for example) than I ever had with > IMakefiles, imake, xmkmf, etc... But It's also clear that different > people have lived through different hassles, and I respect that... > > I've noticed too that we're relatively homogeneous these days: > Everybody is a Linux box or Windows Box or MacOS, except for a few > weird people on the fringes (like me). It's a lot easier to get things > right enough w/o autotools, scons, meson, etc than it was in The Bad > Old Days of the Unix Wars and the Innovation Famine that followed from > the late 80s to the mid 2000s.... In that environment, there's one of > two reactions: Test Everything or Least Common Denominator. And we've > seen both represented in this thread. As well as the 'There's so few > environments, can't you precompute them all?' sentiment from newbies > that never bloodied their knuckles with some of the less like Research > Unix machines out there like AIX and HP/UX... Or worse, Eunice... > > Warner > > On Thu, Jun 20, 2024 at 12:42 PM Adam Thornton > wrote: > > > > Someone clearly never used imake... > > > There's a reason that the xmkmf command ends in the two letters it > does, and I'm never going to believe it's "make file". > > Adam > > On Thu, Jun 20, 2024 at 11:34 AM Greg A. Woods > wrote: > > At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS > > wrote: > Subject: [TUHS] Re: Version 256 of systemd boasts '42% less > Unix philosophy' The Register > > > > "Greg A. Woods" > wrote: > > > > > I will not ever allow cmake to run, or even exist, on the > machines I > > > control... > > > > I'm not a fan of cmake either. > > > > How do you deal with software that only builds with cmake > (or meson, > > scons, ... whatever the developer decided to use as the > build tool)? 
> > What alternatives exist short of reimplementing the build > process in > > a standard makefile by hand, which is obviously very time > consuming, > > error prone, and will probably break the next time you want > to update > > a given package? > > The alternative _is_ to reimplement the build process. > > For example, see: > > https://github.com/robohack/yajl/ > > This example is a far more comprehensive rewrite than is usually > necessary as I wanted a complete and portable example that > could be used > as the basis for further projects. > > An example of a much simpler reimplementation: > > http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN > > -- > Greg A. Woods > > > > Kelowna, BC +1 250 762-7675 RoboHack > > > Planix, Inc. > > Avoncote Farms > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Fri Jun 21 07:00:10 2024 From: rminnich at gmail.com (ron minnich) Date: Thu, 20 Jun 2024 14:00:10 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <2a834aef-2b52-6b16-b79a-7f321585a4b8@makerlisp.com> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <2a834aef-2b52-6b16-b79a-7f321585a4b8@makerlisp.com> Message-ID: here's where I'd have to disagree: "./configure && make is not so bad, it's not irrational, sometimes it's overkill, but it works" because it doesn't. Work, that is. At least, for me. I'm amazed: the autoreconf step on slurm permanently broke the build, such that I am having to do a full git reset --hard and clean and start over. Even simple things fail with autoconf: see the sad story here, of how I pulled down a few simple programs, and they all fail to build. 
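[Editor's sketch, hedged: for a sense of scale, each individual probe a configure script performs is itself tiny — the bulk is the machinery wrapping thousands of them. A hand-rolled version of one such check, hypothetical and not taken from the slides, assuming only a `cc` on PATH:]

```shell
#!/bin/sh
# One autoconf-style feature probe, by hand: try to compile a
# one-liner, record the answer in config.h either way.
set -e
dir=$(mktemp -d); cd "$dir"

cat > probe.c <<'EOF'
#include <strings.h>
int main(void) { return strcasecmp("a", "A"); }
EOF

if cc -o probe probe.c 2>/dev/null
then
	echo '#define HAVE_STRCASECMP 1' > config.h
else
	echo '/* #undef HAVE_STRCASECMP */' > config.h
fi
cat config.h
```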
I replaced them ALL with a single Go program that was smaller, by far, than a single one of the configure scripts. https://docs.google.com/presentation/d/1d0yK7g-J6oITgE-B_odadSw3nlBWGbMK7clt_TmXo7c/edit?usp=sharing On Thu, Jun 20, 2024 at 1:35 PM Luther Johnson wrote: > I agree that there are certainly times when CMake's leverage has solved > problems for people. My most visceral reactions were mostly based on cases > where no tool like CMake was really required at all, but CMake had wormed > its way into the consciousness of new programmers who never learned make, > and thought CMake was doing them a great service. Bugged the hell out of > me, this dumbing-down of the general programming population. My bad > experiences were all as a consultant to teams that needed a lot of expert > help, when they had thrown CMake along with a lot of other unnecessary > complexity into their half-working solutions. So I guess it was all tarred > by the same flavor of badly conceived work. But then as I tried to make my > peace with the CMake build as it was, I got a deeper understanding of how > intrinsically irrational CMake is (and again, behavior changing on the same > builds depending on CMake release versions. > So there certainly are times when something a little more comprehensive, > outside of make, is required. ./configure && make is not so bad, it's not > irrational, sometimes it's overkill, but it works ... but only if the > system is kind of Unix-y. If not you may wind up doing a lot of work to > pretend it's more Unix-y, so instead of porting your software, you're > porting it to a common Unix-like subset, then emulating that Unix-like > subset on your platform, both ends against the middle. That can be > ultimately counter-productive too. > > I have an emotional reaction when I see the porting problem become > transformed into adherence to the "one true way", be it Unix, or one build > system or another. 
Because you're now just re-casting the problem into > acceptance of that other tool or OS core as the way it should be. Instead > of getting your thing to work on the other platform, by translating from > what your application wants, into how to do it on whatever system, you're > changing your application to be more like what the "one true system" wants > to see. You've given up control of your idea of your app's core OS > requirements, you've decided to "just give in and be UNiX (or Windows, or > whatever)". To me, that's backwards. > > On 06/20/2024 12:59 PM, Warner Losh wrote: > > For me, precomputing an environment is the same as a wysiwyg editor: what > you see is all you get. If it works for you, and the environment that's > inferred from predefined CPP symbols is correct, then it's an easy > solution. When it's not, and for me it often wasn't, it's nothing but pain > and suffering and saying MF all the time (also not Make File).... I was > serious when I've said I've had more positive cmake experiences (which > haven't been all that impressive: I'm more impressed with meson in this > space, for example) than I ever had with IMakefiles, imake, xmkmf, etc... > But It's also clear that different people have lived through different > hassles, and I respect that... > > I've noticed too that we're relatively homogeneous these days: Everybody > is a Linux box or Windows Box or MacOS, except for a few weird people on > the fringes (like me). It's a lot easier to get things right enough w/o > autotools, scons, meson, etc than it was in The Bad Old Days of the Unix > Wars and the Innovation Famine that followed from the late 80s to the mid > 2000s.... In that environment, there's one of two reactions: Test > Everything or Least Common Denominator. And we've seen both represented in > this thread. As well as the 'There's so few environments, can't you > precompute them all?' 
sentiment from newbies that never bloodied their > knuckles with some of the less like Research Unix machines out there like > AIX and HP/UX... Or worse, Eunice... > > Warner > > On Thu, Jun 20, 2024 at 12:42 PM Adam Thornton > wrote: > >> >> >> Someone clearly never used imake... >> >> >> There's a reason that the xmkmf command ends in the two letters it does, >> and I'm never going to believe it's "make file". >> >> Adam >> >> On Thu, Jun 20, 2024 at 11:34 AM Greg A. Woods wrote: >> >>> At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS >>> wrote: >>> Subject: [TUHS] Re: Version 256 of systemd boasts '42% less Unix >>> philosophy' The Register >>> > >>> > "Greg A. Woods" wrote: >>> > >>> > > I will not ever allow cmake to run, or even exist, on the machines I >>> > > control... >>> > >>> > I'm not a fan of cmake either. >>> > >>> > How do you deal with software that only builds with cmake (or meson, >>> > scons, ... whatever the developer decided to use as the build tool)? >>> > What alternatives exist short of reimplementing the build process in >>> > a standard makefile by hand, which is obviously very time consuming, >>> > error prone, and will probably break the next time you want to update >>> > a given package? >>> >>> The alternative _is_ to reimplement the build process. >>> >>> For example, see: >>> >>> https://github.com/robohack/yajl/ >>> >>> This example is a far more comprehensive rewrite than is usually >>> necessary as I wanted a complete and portable example that could be used >>> as the basis for further projects. >>> >>> An example of a much simpler reimplementation: >>> >>> >>> http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN >>> >>> -- >>> Greg A. Woods >>> >>> Kelowna, BC +1 250 762-7675 RoboHack >>> Planix, Inc. Avoncote Farms >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tuhs at tuhs.org Fri Jun 21 07:07:54 2024 From: tuhs at tuhs.org (Bakul Shah via TUHS) Date: Thu, 20 Jun 2024 14:07:54 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87jzikt900.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> Message-ID: >> Then instead of testing all of that shit every time we built something from source, we'd just drag in the pre-existing results and go from there. It's not like the results were going to change on us. They were a reflection of the way the kernel, C libraries, APIs and userspace happened to work. Short of that changing, the results wouldn't change either. To build a set of objects you need to worry about at least the following: - build recipes for each of them (which may also depend on other things) - configuration parameters - dealing with differences on each platform - third party libraries & alternatives - toolchains (& may be cross-platform builds) - supporting/navigating different versions of the last 3 above You can't really precompute all this as there are far too many combinations and they keep changing. Though you may be able to train a program porting AI model :-) From davida at pobox.com Fri Jun 21 07:53:26 2024 From: davida at pobox.com (David Arnold) Date: Fri, 21 Jun 2024 07:53:26 +1000 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: > On 21 Jun 2024, at 07:00, ron minnich wrote: > > here's where I'd have to disagree: "./configure && make is not so bad, it's not irrational, sometimes it's overkill, but it works" > > because it doesn't. Work, that is. At least, for me. Never? Any tool can be misused (perhaps there’s an issue with slurm’s implementation here?) I think the quality of autoconf usage (by project authors) has declined, perhaps as building from source has been overtaken by the use of binary packages. 
I’d argue that autotools (incl automake and libtool) can be a decent solution in the hands of devs who care. At one time, I think it was the best compromise, although I’m open to argument that this time has passed. It was certainly never useful for general portability to Windows, for instance, and more recent tools manage that better. d From rminnich at gmail.com Fri Jun 21 08:00:52 2024 From: rminnich at gmail.com (ron minnich) Date: Thu, 20 Jun 2024 15:00:52 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: You're right. It's not that autoconf never works, it's that it fails so frequently that I can't trust it to work. Case in point, I just had a bunch of trouble this morning with it, with the most trivial command, and had to reset the repo to ground state to get it to build again. but compared to my experience with Go, autoconf does not compare well. On Thu, Jun 20, 2024 at 2:53 PM David Arnold wrote: > > > On 21 Jun 2024, at 07:00, ron minnich wrote: > > > > here's where I'd have to disagree: "./configure && make is not so bad, > it's not irrational, sometimes it's overkill, but it works" > > > > because it doesn't. Work, that is. At least, for me. > > Never? > > Any tool can be misused (perhaps there’s an issue with slurm’s > implementation here?) > > I think the quality of autoconf usage (by project authors) has declined, > perhaps as building from source has been overtaken by the use of binary > packages. > > I’d argue that autotools (incl automake and libtool) can be a decent > solution in the hands of devs who care. At one time, I think it was the > best compromise, although I’m open to argument that this time has passed. > > It was certainly never useful for general portability to Windows, for > instance, and more recent tools manage that better. > > > > > d > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lm at mcvoy.com Fri Jun 21 08:11:24 2024 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 20 Jun 2024 15:11:24 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: Message-ID: <20240620221124.GE24884@mcvoy.com> On Thu, Jun 20, 2024 at 03:00:52PM -0700, ron minnich wrote: > You're right. It's not that autoconf never works, it's that it fails so > frequently that I can't trust it to work. Case in point, I just had a bunch > of trouble this morning with it, with the most trivial command, and had to > reset the repo to ground state to get it to build again. > > but compared to my experience with Go, autoconf does not compare well. This is BitKeeper's build shell. Not a lot to it.

#!/bin/sh
orig_args="$@"
ms_env() {
	unset JOBS
	test "$MSYSBUILDENV" || {
		echo running in wrong environment, respawning...
		rm -f conf*.mk
		bk get -S ./update_buildenv
		BK_USEMSYS=1 bk sh ./update_buildenv
		export HOME=`bk pwd`
		test -d R:/build/buildenv/bin &&
		    exec R:/build/buildenv/bin/sh --login $0 $orig_args
		exec C:/build/buildenv/bin/sh --login $0 $orig_args
	}
	gcc --version | grep -q cyg && {
		echo No Mingw GCC found, I quit.
		exit 1
	}
}
JOBS=-j4
while getopts j: opt
do	case "$opt" in
	j) JOBS=-j$OPTARG;;
	esac
done
shift `expr $OPTIND - 1`

# ccache stuff
CCLINKS=/build/cclinks
CCACHEBIN=`which ccache 2>/dev/null`
if [ $? = 0 -a "X$BK_NO_CCACHE" = X ]
then
	test -d $CCLINKS || {
		mkdir -p $CCLINKS
		ln -s "$CCACHEBIN" $CCLINKS/cc
		ln -s "$CCACHEBIN" $CCLINKS/gcc
	}
	CCACHE_DIR=/build/.ccache
	# Seems like a good idea but if cache and
	# source are on different filesystems, setting
	# CCACHE_HARDLINK seems to have the same
	# effect as disabling the cache altogether
	#CCACHE_HARDLINK=1
	CCACHE_UMASK=002
	export CCACHE_DIR CCACHE_HARDLINK CCACHE_UMASK
	export PATH=$CCLINKS:$PATH
else
	CCACHE_DISABLE=1
	export CCACHE_DISABLE
fi
case "X`uname -s`" in
XCYGWIN*|XMINGW*) ms_env; ;;
esac
test "$MAKE" || MAKE=`which gmake 2>/dev/null`
test "$MAKE" || MAKE=make
test "x$BK_VERBOSE_BUILD" != "x" && { V="V=1"; }
"$MAKE" --no-print-directory $JOBS $V "$@"

From luther.johnson at makerlisp.com Fri Jun 21 08:35:50 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Thu, 20 Jun 2024 15:35:50 -0700 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <2a834aef-2b52-6b16-b79a-7f321585a4b8@makerlisp.com> Message-ID: <532fcb28-74a2-896d-be1b-97d16f38f9bf@makerlisp.com> I defer to your greater experience than mine. I guess I've run into ./configure && make in very vanilla situations, a few Gnu or Gnu-ish applications. If it has times when it doesn't work, or doesn't behave well, then I don't doubt your experience. I first entered this thread in pointing out some similarities in the style of opaque artificially pseudo intelligence behind systemd and CMake, namely, don't you decide what to do, tell about these qualities of these modules and we will decide what to do, don't worry your newbie little head. I think autoconf and configure are kind of halfway between user-decides-what to do (straight make) and user-decides-nothing, is-kept-in the-dark (CMake). So in that way, it's only half as bad.
If it falls over sometimes when it shouldn't, I think you know more about that than me. I will be wary. For my own code, I stick with straight make, and the occasional script. On 06/20/2024 02:00 PM, ron minnich wrote: > here's where I'd have to disagree: "./configure && make is not so bad, > it's not irrational, sometimes it's overkill, but it works" > > because it doesn't. Work, that is. At least, for me. > > I'm amazed: the autoreconf step on slurm permanently broke the build, > such that I am having to do a full git reset --hard and clean and > start over. > > Even simple things fail with autoconf: see the sad story here, of how > I pulled down a few simple programs, and they all fail to build. I > replaced them ALL with a single Go program that was smaller, by far, > than a single one of the configure scripts. > https://docs.google.com/presentation/d/1d0yK7g-J6oITgE-B_odadSw3nlBWGbMK7clt_TmXo7c/edit?usp=sharing > > On Thu, Jun 20, 2024 at 1:35 PM Luther Johnson > > > wrote: > > I agree that there are certainly times when CMake's leverage has > solved problems for people. My most visceral reactions were mostly > based on cases where no tool like CMake was really required at > all, but CMake had wormed its way into the consciousness of new > programmers who never learned make, and thought CMake was doing > them a great service. Bugged the hell out of me, this dumbing-down > of the general programming population. My bad experiences were all > as a consultant to teams that needed a lot of expert help, when > they had thrown CMake along with a lot of other unnecessary > complexity into their half-working solutions. So I guess it was > all tarred by the same flavor of badly conceived work. But then as > I tried to make my peace with the CMake build as it was, I got a > deeper understanding of how intrinsically irrational CMake is (and > again, behavior changing on the same builds depending on CMake > release versions.
> > So there certainly are times when something a little more > comprehensive, outside of make, is required. ./configure && make > is not so bad, it's not irrational, sometimes it's overkill, but > it works ... but only if the system is kind of Unix-y. If not you > may wind up doing a lot of work to pretend it's more Unix-y, so > instead of porting your software, you're porting it to a common > Unix-like subset, then emulating that Unix-like subset on your > platform, both ends against the middle. That can be ultimately > counter-productive too. > > I have an emotional reaction when I see the porting problem become > transformed into adherence to the "one true way", be it Unix, or > one build system or another. Because you're now just re-casting > the problem into acceptance of that other tool or OS core as the > way it should be. Instead of getting your thing to work on the > other platform, by translating from what your application wants, > into how to do it on whatever system, you're changing your > application to be more like what the "one true system" wants to > see. You've given up control of your idea of your app's core OS > requirements, you've decided to "just give in and be UNiX (or > Windows, or whatever)". To me, that's backwards. > > On 06/20/2024 12:59 PM, Warner Losh wrote: >> For me, precomputing an environment is the same as a wysiwyg >> editor: what you see is all you get. If it works for you, and the >> environment that's inferred from predefined CPP symbols is >> correct, then it's an easy solution. When it's not, and for me it >> often wasn't, it's nothing but pain and suffering and saying MF >> all the time (also not Make File).... I was serious when I've >> said I've had more positive cmake experiences (which haven't been >> all that impressive: I'm more impressed with meson in this space, >> for example) than I ever had with IMakefiles, imake, xmkmf, >> etc... 
But It's also clear that different people have lived >> through different hassles, and I respect that... >> >> I've noticed too that we're relatively homogeneous these days: >> Everybody is a Linux box or Windows Box or MacOS, except for a >> few weird people on the fringes (like me). It's a lot easier to >> get things right enough w/o autotools, scons, meson, etc than it >> was in The Bad Old Days of the Unix Wars and the Innovation >> Famine that followed from the late 80s to the mid 2000s.... In >> that environment, there's one of two reactions: Test Everything >> or Least Common Denominator. And we've seen both represented in >> this thread. As well as the 'There's so few environments, can't >> you precompute them all?' sentiment from newbies that never >> bloodied their knuckles with some of the less like Research Unix >> machines out there like AIX and HP/UX... Or worse, Eunice... >> >> Warner >> >> On Thu, Jun 20, 2024 at 12:42 PM Adam Thornton >> > wrote: >> >> >> >> Someone clearly never used imake... >> >> >> There's a reason that the xmkmf command ends in the two >> letters it does, and I'm never going to believe it's "make file". >> >> Adam >> >> On Thu, Jun 20, 2024 at 11:34 AM Greg A. Woods >> > wrote: >> >> At Thu, 20 Jun 2024 01:01:01 -0400, Scot Jenkins via TUHS >> > wrote: >> Subject: [TUHS] Re: Version 256 of systemd boasts '42% >> less Unix philosophy' The Register >> > >> > "Greg A. Woods" > > wrote: >> > >> > > I will not ever allow cmake to run, or even exist, on >> the machines I >> > > control... >> > >> > I'm not a fan of cmake either. >> > >> > How do you deal with software that only builds with >> cmake (or meson, >> > scons, ... whatever the developer decided to use as the >> build tool)? 
>> > What alternatives exist short of reimplementing the >> build process in >> > a standard makefile by hand, which is obviously very >> time consuming, >> > error prone, and will probably break the next time you >> want to update >> > a given package? >> >> The alternative _is_ to reimplement the build process. >> >> For example, see: >> >> https://github.com/robohack/yajl/ >> >> This example is a far more comprehensive rewrite than is >> usually >> necessary as I wanted a complete and portable example >> that could be used >> as the basis for further projects. >> >> An example of a much simpler reimplementation: >> >> http://cvsweb.NetBSD.org/bsdweb.cgi/src/external/mit/ctwm/bin/ctwm/Makefile?rev=1.12&content-type=text/x-cvsweb-markup&only_with_tag=MAIN >> >> -- >> Greg A. Woods >> > >> >> Kelowna, BC +1 250 762-7675 RoboHack >> > >> Planix, Inc. > >> Avoncote Farms > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flexibeast at gmail.com Fri Jun 21 09:35:18 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 21 Jun 2024 09:35:18 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Bakul Shah via TUHS's message of "Thu, 20 Jun 2024 14:07:54 -0700") References: <87jzikt900.fsf@gmail.com> Message-ID: <877cej5gsp.fsf@gmail.com> Bakul Shah via TUHS writes: > To build a set of objects you need to worry about at least the > following: > - build recipes for each of them (which may also depend on other > things) > - configuration parameters > - dealing with differences on each platform > - third party libraries & alternatives > - toolchains (& may be cross-platform builds) > - supporting/navigating different versions of the last 3 above > > You can't really precompute all this as there are far too many > combinations and they keep changing. Both the blog author (who is a long-time sysadmin with many 'war stories') and myself understand all that. 
i believe the idea is not for precomputing to be done by _builds_, but to be done on and for a given machine and its configuration, independent of any specific piece of software, which is then _queried_ by builds. That precomputation would only need to be re-run when one of the things under its purview changes. If i compile something on one of my OpenBSD boxen in the morning, and then compile some other thing in the afternoon, without an OS upgrade in-between, autoconf isn't going to find that libc.so has changed in-between. If i did the same thing on my Gentoo box, it's theoretically possible that e.g. i've moved from glibc to musl in-between, but in that case, precomputation could be done in postinst (i.e. as part of the post-installation-of-musl process). Alexis. From imp at bsdimp.com Fri Jun 21 10:05:29 2024 From: imp at bsdimp.com (Warner Losh) Date: Thu, 20 Jun 2024 18:05:29 -0600 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <877cej5gsp.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: On Thu, Jun 20, 2024, 5:35 PM Alexis wrote: > Bakul Shah via TUHS writes: > > > To build a set of objects you need to worry about at least the > > following: > > - build recipes for each of them (which may also depend on other > > things) > > - configuration parameters > > - dealing with differences on each platform > > - third party libraries & alternatives > > - toolchains (& may be cross-platform builds) > > - supporting/navigating different versions of the last 3 above > > > > You can't really precompute all this as there are far too many > > combinations and they keep changing. > > Both the blog author (who is a long-time sysadmin with many 'war > stories') and myself understand all that. 
> > i believe the idea is not for precomputing to be done by _builds_, > but to be done on and for a given machine and its configuration, > independent of any specific piece of software, which is then > _queried_ by builds. That precomputation would only need to be > re-run when one of the things under its purview changes. > > If i compile something on one of my OpenBSD boxen in the morning, > and then compile some other thing in the afternoon, without an OS > upgrade in-between, autoconf isn't going to find that libc.so has > changed in-between. If i did the same thing on my Gentoo box, it's > theoretically possible that e.g. i've moved from glibc to musl > in-between, but in that case, precomputation could be done in > postinst (i.e. as part of the post-installation-of-musl process). > Isn't that what the autoconf cache is for? Warner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flexibeast at gmail.com Fri Jun 21 10:34:46 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 21 Jun 2024 10:34:46 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Warner Losh's message of "Thu, 20 Jun 2024 18:05:29 -0600") References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: <87jzijf80p.fsf@gmail.com> Warner Losh writes: > Isn't that what the autoconf cache is for? There's a cross-project cache? That is, a cache not just for the project for which autoconf was run, but for _all_ software built on that machine? (Over the decades i've regularly observed instances where autoconf doesn't seem to be making use of results of its previous runs for a particular project; i don't know if that's because the build maintainer didn't configure autoconf correctly. My own dev/doc work hasn't required me to wrestle with autoconf.) Alexis.
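[Editor's note: autoconf does have a per-machine mechanism pointing in this direction — a site file, sourced via $CONFIG_SITE by every generated configure script before it runs its checks, alongside the per-tree config.cache written by `./configure -C`. A minimal sketch of the site-file mechanism, using a fake one-check "configure" stand-in rather than a real generated script:]

```shell
#!/bin/sh
# Sketch of autoconf's machine-wide site file. A real generated
# configure sources $CONFIG_SITE before running its checks; the fake
# one-check "configure" here just makes the mechanism visible.
set -e
dir=$(mktemp -d); cd "$dir"

cat > config.site <<'EOF'
# Precomputed per-machine answers (names follow autoconf's ac_cv_* style)
ac_cv_func_fork_works=yes
EOF
CONFIG_SITE=$dir/config.site; export CONFIG_SITE

cat > configure <<'EOF'
#!/bin/sh
test -r "$CONFIG_SITE" && . "$CONFIG_SITE"
if test -n "$ac_cv_func_fork_works"
then	echo "checking for working fork... (cached) $ac_cv_func_fork_works"
else	echo "checking for working fork... (would compile a test program here)"
fi
EOF
chmod +x configure

./configure    # prints: checking for working fork... (cached) yes
```

Whether such precomputed answers stay valid as libraries and toolchains change is exactly the objection raised elsewhere in the thread.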
From lm at mcvoy.com Fri Jun 21 10:35:32 2024 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 20 Jun 2024 17:35:32 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <877cej5gsp.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: <20240621003532.GA13079@mcvoy.com> Are we into bike shed territory? It seems like cmake and autoconf are hated but then they have their fans. I posted a makefile that was pretty portable but that was not OK because it was GNU make? Huh? I've been up since 12:22am (psyched for fishing, couldn't sleep) so maybe I'm not on point, but what is the problem that this discussion is trying to solve? On Fri, Jun 21, 2024 at 09:35:18AM +1000, Alexis wrote: > Bakul Shah via TUHS writes: > > >To build a set of objects you need to worry about at least the > >following: > >- build recipes for each of them (which may also depend on other > >things) > >- configuration parameters > >- dealing with differences on each platform > >- third party libraries & alternatives > >- toolchains (& may be cross-platform builds) > >- supporting/navigating different versions of the last 3 above > > > >You can't really precompute all this as there are far too many > >combinations and they keep changing. > > Both the blog author (who is a long-time sysadmin with many 'war stories') > and myself understand all that. > > i believe the idea is not for precomputing to be done by _builds_, but to be > done on and for a given machine and its configuration, independent of any > specific piece of software, which is then _queried_ by builds. That > precomputation would only need to be re-run when one of the things under its > purview changes. > > If i compile something on one of my OpenBSD boxen in the morning, and then > compile some other thing in the afternoon, without an OS upgrade in-between, > autoconf isn't going to find that libc.so has changed in-between. 
If i did > the same thing on my Gentoo box, it's theoretically possible that e.g. i've > moved from glibc to musl in-between, but in that case, precomputation could > be done in postinst (i.e. as part of the post-installation-of-musl process). > > > Alexis. -- --- Larry McVoy Retired to fishing http://www.mcvoy.com/lm/boat From tuhs at tuhs.org Fri Jun 21 10:35:28 2024 From: tuhs at tuhs.org (Bakul Shah via TUHS) Date: Thu, 20 Jun 2024 17:35:28 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <877cej5gsp.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: > On Jun 20, 2024, at 4:35 PM, Alexis wrote: > > If i compile something on one of my OpenBSD boxen in the morning, and then compile some other thing in the afternoon, without an OS upgrade in-between, autoconf isn't going to find that libc.so has changed in-between. If i did the same thing on my Gentoo box, it's theoretically possible that e.g. i've moved from glibc to musl in-between, but in that case, precomputation could be done in postinst (i.e. as part of the post-installation-of-musl process). But the overlap between two different programs or their assumptions will be only partial (except for some very basic things) which likely means the cache won't quite work. For example, you may find that program A and B depend on different versions of some library C. And how does the configure or whatever tool find out that no dependency has changed? I don't think you can factor out a cache of such data to a global place that will work for every ported program. 
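[Editor's note: one existing answer to Bakul's per-library-version question, which also comes up later in the thread, is pkg-config: each installed library describes itself in a small .pc file, so a build queries the library's own metadata rather than a shared cache of guesses. A hypothetical libfoo.pc, as a sketch (all names and paths illustrative):]

```text
# libfoo.pc -- installed alongside the library,
# e.g. in /usr/local/lib/pkgconfig/
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: libfoo
Description: illustrative library
Version: 1.4.2
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```

[A build then asks `pkg-config --atleast-version=1.4 libfoo` for a yes/no exit status, or `pkg-config --cflags --libs libfoo` for flags; two programs wanting different versions of library C each get an exact, current answer at build time.]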
From flexibeast at gmail.com Fri Jun 21 10:49:34 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 21 Jun 2024 10:49:34 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240621003532.GA13079@mcvoy.com> (Larry McVoy's message of "Thu, 20 Jun 2024 17:35:32 -0700") References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <20240621003532.GA13079@mcvoy.com> Message-ID: <87frt7f7c1.fsf@gmail.com> Larry McVoy writes: > I've been up since 12:22am (psyched for fishing, couldn't sleep) > so > maybe I'm not on point, but what is the problem that this > discussion > is trying to solve? The complexity of the autoconf-based build process contributed to the xz-utils backdoor attempt. (Here's Russ Cox's writeup: https://research.swtch.com/xz-script) So, to what extent is the complexity of autoconf _needed_ nowadays? For some cases, it's not needed (and might never have been needed). For others, it seems like it might still be needed. What about the in-between cases? Can we do something different that gets us 90% of what autoconf provides in those cases, but with only 10% of the complexity (to use those commonly-provided figures)? Alexis. From woods at robohack.ca Fri Jun 21 10:54:22 2024 From: woods at robohack.ca (Greg A. Woods) Date: Thu, 20 Jun 2024 17:54:22 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87jzijf80p.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87jzijf80p.fsf@gmail.com> Message-ID: At Fri, 21 Jun 2024 10:34:46 +1000, Alexis wrote: Subject: [TUHS] Re: Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > Warner Losh writes: > > > Isn't that what the autoconf cache is for? > > There's a cross-project cache? 
That is, a > cache not just for the project for which > autoconf was run, but for _all_ software > built on that machine? Indeed there is. Nothing new about it either. It's been around for two decades or more. From autoconf.info (with variants going back at least as far as 2002): 7.4.2 Cache Files .... The site initialization script can specify a site-wide cache file to use, instead of the usual per-program cache. In this case, the cache file gradually accumulates information whenever someone runs a new ‘configure’ script. There's a pkgsrc.org package, pkgtools/autoswc, that makes it all work cleanly for NetBSD and other platforms using pkgsrc, caching just the stuff that's known to be invariant (by pre-filling a static cache using a big monster "fake" configure script that covers most of the generic tests) and letting other stuff be handled at runtime. It can be a bit fragile, especially in the "gradually accumulates" way of using it (which is why pkgsrc avoids that), but usually the fault lies squarely on the shoulders of developers who either don't read the Autoconf documentation, or think that somehow they're smarter than Autoconf and the many decades of lore it encodes. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From g.branden.robinson at gmail.com Fri Jun 21 11:06:46 2024 From: g.branden.robinson at gmail.com (G. Branden Robinson) Date: Thu, 20 Jun 2024 20:06:46 -0500 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87jzijf80p.fsf@gmail.com> Message-ID: <20240621010646.rht53umd5u736gct@illithid> At 2024-06-20T17:54:22-0700, Greg A. 
Woods wrote: > At Fri, 21 Jun 2024 10:34:46 +1000, Alexis wrote: > > There's a cross-project [autoconf] cache? That is, a cache not just > > for the project for which autoconf was run, but for _all_ software > > built on that machine? > > Indeed there is. Nothing new about it either. It's been around for two > decades or more. [...] > It can be a bit fragile, especially in the "gradually accumulates" way > of using it (which is why pkgsrc avoids that), but usually the fault > lies squarely on the shoulders of developers who either don't read the > Autoconf documentation, or think that somehow they're smarter than > Autoconf and the many decades of lore it encodes. Hard to believe such people exist! Regards, Branden -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From flexibeast at gmail.com Fri Jun 21 11:15:29 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 21 Jun 2024 11:15:29 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Bakul Shah's message of "Thu, 20 Jun 2024 17:35:28 -0700") References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: <87bk3vf64u.fsf@gmail.com> Bakul Shah writes: > But the overlap between two different programs or their > assumptions > will be > only partial (except for some very basic things) which likely > means > the cache > won't quite work. For example, you may find that program A and B > depend on different versions of some library C. The basic things are, in fact, a significant part of what autoconf is being used to check for. "Does this platform provide this function?" 
And that doesn't significantly change between versions of the specific libc used by the system (glibc, musl, the *BSD libcs, etc.); due to their nature as a fundamental part of the system, they're relatively conservative (compared to higher-level libraries and applications) as to the rate at which they change, and what they add and remove. i've only rarely encountered libc version issues when compiling many pieces of software for my use over the years. For higher-level libraries, there's pkg-config and its reimplementations, such as pkgconf: > pkgconf is a program which helps to configure compiler and > linker flags for development libraries. This allows build > systems to detect other dependencies and use them with the > system toolchain. -- pkgconf(1), https://www.mankier.com/1/pkgconf which can (and does) get _used_ by autoconf, but is a standalone project. (Which is not to say i'm endorsing the system, just that it exists, and is independent of the autoconf system.) > And how does the > configure or whatever > tool find out that no dependency has changed? I don't think you > can > factor > out a cache of such data to a global place that will work for > every > ported > program. You're right: not for _every_ ported program. But even if the cache worked for _most_, and simplified the build processes of _most_ programs, thus reducing the complexity needing to be understood by build maintainers, and reducing the complexity available for malfeasants to hide backdoors, that would still be a significant win, in my opinion. Don't let the perfect be the enemy of the good, etc. Alexis. From woods at robohack.ca Fri Jun 21 11:22:42 2024 From: woods at robohack.ca (Greg A. 
Woods) Date: Thu, 20 Jun 2024 18:22:42 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87frt7f7c1.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <20240621003532.GA13079@mcvoy.com> <87frt7f7c1.fsf@gmail.com> Message-ID: At Fri, 21 Jun 2024 10:49:34 +1000, Alexis wrote: Subject: [TUHS] Re: Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > So, > to what extent is the complexity of > autoconf _needed_ nowadays? For xz in particular Autoconf is _probably_ not necessary, but I haven't examined it in detail. For a vast amount of application C code and libraries no build-time configuration beyond what's provided by the system headers and the C compiler is necessary. This is especially true if the code is designed to require a slightly "modern" version of C, such as iso9899:1999 (perhaps with GNU CC extensions). There are however some things in more systems level programs and libraries that are more difficult to handle even with well written code, but compared to the number of tests offered by the likes of Autoconf, well those things are actually very few. One thing that Autoconf gets used for, but for which it is not really necessary, is for choosing build-time options. Indeed its feature set for doing so is complex and error-prone to use! Too much weird mixing of shell scripts and M4 macros, with all the quoting nightmare this brings. Just try to read XZ's configure.ac! A simple declarative configuration file, such as a Makefile fragment, is sufficient. When you wander into the realm of non-C code things might also be a bit more complex, unless of course it's Go code, where this problem simply doesn't exist in the first place. -- Greg A. Woods Kelowna, BC +1 250 762-7675 RoboHack Planix, Inc. Avoncote Farms -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 195 bytes Desc: OpenPGP Digital Signature URL: From flexibeast at gmail.com Fri Jun 21 11:32:09 2024 From: flexibeast at gmail.com (Alexis) Date: Fri, 21 Jun 2024 11:32:09 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Greg A. Woods's message of "Thu, 20 Jun 2024 17:54:22 -0700") References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87jzijf80p.fsf@gmail.com> Message-ID: <877cejf5d2.fsf@gmail.com> "Greg A. Woods" writes: > Indeed there is. Nothing new about it either. It's been around > for two > decades or more. TIL - thank you! i've never seen this mentioned before. (Perhaps because i only use autoconf as an end-user, rather than as a dev.) Looking at section 15.8 of that manual, it looks like i could specify that `-C` / `--config-cache` be passed to configure by default site-wide. So i might do so on my Gentoo system - given most things on that system are locally compiled, it might be an interesting stress-test data-point regarding configuration caching. Alexis. From imp at bsdimp.com Fri Jun 21 11:43:23 2024 From: imp at bsdimp.com (Warner Losh) Date: Thu, 20 Jun 2024 19:43:23 -0600 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <877cejf5d2.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87jzijf80p.fsf@gmail.com> <877cejf5d2.fsf@gmail.com> Message-ID: On Thu, Jun 20, 2024, 7:32 PM Alexis wrote: > "Greg A. Woods" writes: > > > Indeed there is. Nothing new about it either. It's been around > > for two > > decades or more. > > TIL - thank you! i've never seen this mentioned before. (Perhaps > because i only use autoconf as an end-user, rather than as a dev.) > I used it on OpenBSD in the 90s to make ports builds go much faster in the days before pkgsrc was a going concern. 
It made a huge difference on my arc machine that was about P75 speed, the speed of a pentium clocked at 75MHz.... it was an R4000PC running at 100MHz iirc. Warner > Looking at section 15.8 of that manual, it looks like i could > specify that `-C` / `--config-cache` be passed to configure by > default site-wide. So i might do so on my Gentoo system - given > most things on that system are locally compiled, it might be an > interesting stress-test data-point regarding configuration > caching. > > > Alexis. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Fri Jun 21 11:43:54 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Fri, 21 Jun 2024 01:43:54 +0000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87bk3vf64u.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87bk3vf64u.fsf@gmail.com> Message-ID: On Thursday, June 20th, 2024 at 6:15 PM, Alexis wrote: > Bakul Shah bakul at iitbombay.org writes: > > > But the overlap between two different programs or their > > assumptions > > will be > > only partial (except for some very basic things) which likely > > means > > the cache > > won't quite work. For example, you may find that program A and B > > depend on different versions of some library C. > > > The basic things are, in fact, a significant part of what autoconf > is being used to check for. "Does this platform provide this > function?" > > ... > > > Alexis. This aspect of things I have found a bit perplexing. On one hand, sure, it's nice to have some scripted system spit out a: "Dependency xyz not found" But where it falls apart in my head is what that tells me that, for instance, cpp's error diagnostic about a missing include or ld saying a symbol or library wasn't found does. It's in my mind a minor convenience but one that doesn't justify all the machinery between one's self and make just to provide. 
Granted that's not all autotools does, so a poor example in practice, but in theory gets at one of my irks, packaging something that you are already going to discover some other way. That and "does my target platform list support xyz" isn't necessarily a matter I'd wait until I've created a whole build package around my software to settle... Just a small part of the puzzle but one of the parts that gives me more headaches than not. Now I don't get to respond to a compiler or linker asking for something by putting it where it asked for it, now I also have to figure out how to do the extra work to ensure that it's put somewhere and in a way that all this machinery between myself and the compiler can also verify in its own magical way component is present. I'd be willing to wager that half the time autotools, cmake, etc has made me want to put my head through a wall is not that some needed thing isn't there, it's just that it's not there according to whatever extra expectations or formulas come into play just to satisfy the build machinery. These tools can be helpful in the face of extreme complexity, but I feel silly when most of the work I put into compiling some package that's like 6 source files is making sure the environment on my system can satisfy the expectations of the build tools. It has been said already that part of the problem too with the uptake of these tools and their growing ubiquity is new folks who don't know any better think that's just "how it is" and then wind up spinning an autotools or cmake build for a <1000 line tool written in ANSI C. I've done the same in less experienced times, one of my first attempts at a game engine uses an autotools build. I quickly grew frustrated with it and everything since has used a flat makefile and has been just fine. Granted I'm not building a triple A game, but that gets at the root of one of my gripes, I think these sorts of frameworks are overused. 
They have their areas that they shine, or they wouldn't have reached the critical mass they have, but as consequence folks will use them haphazardly regardless of the need. Long story short, maybe gcc needs a configure script, but does GNU ed? Maybe KDE Plasma needs CMake files, but does libtiff? I make no claims regarding the complexity of these actual codebases...but one does have to wonder... - Matt G. From kevin.bowling at kev009.com Fri Jun 21 11:44:14 2024 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Thu, 20 Jun 2024 18:44:14 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <20240621003532.GA13079@mcvoy.com> <87frt7f7c1.fsf@gmail.com> Message-ID: On Thu, Jun 20, 2024 at 6:22 PM Greg A. Woods wrote: > > At Fri, 21 Jun 2024 10:49:34 +1000, Alexis wrote: > Subject: [TUHS] Re: Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register > > > > So, > > to what extent is the complexity of > > autoconf _needed_ nowadays? > > For xz in particular Autoconf is _probably_ not necessary, but I haven't > examined it in detail. > > For a vast amount of application C code and libraries no build-time > configuration beyond what's provided by the system headers and the C > compiler is necessary. This is especially true if the code is designed > to require a slightly "modern" version of C, such as iso9899:1999 > (perhaps with GNU CC extensions). > > There are however some things in more systems level programs and > libraries that are more difficult to handle even with well written code, > but compared to the number of tests offered by the likes of Autoconf, > well those things are actually very few. Warner sufficiently summed up the philosophical reason for the probe-the-world-after-slumber approach. 
It seems foolish at first; a waste of time/cycles; but the deeper you dig into the problem space the less foolish it becomes. As an example, autoconf builds easily handle cross toolchains or shimmed toolchains where a mix of native and emulated tools are used to speed up cross builds. I agree with the chorus that the implementation of autoconf is awful. But the audience that assume that autoconf is also bad, and the bits and bobs of shell and make are somehow equivalent, is living in a state of willful ignorance. > One thing that Autoconf gets used for, but for which it is not really > necessary, is for choosing build-time options. Indeed its feature set > for doing so is complex and error-prone to use! Too much weird mixing > of shell scripts and M4 macros, with all the quoting nightmare this > brings. Just try to read XZ's configure.ac! A simple declarative > configuration file, such as a Makefile fragment, is sufficient. > > When you wander into the realm of non-C code things might also be a bit > more complex, unless of course it's Go code, where this problem simply > doesn't exist in the first place. Go's approach was the same as Java, if you control the level of abstraction sufficiently you eliminate much of the complexity. > > -- > Greg A. Woods > > Kelowna, BC +1 250 762-7675 RoboHack > Planix, Inc. 
Avoncote Farms From stuff at riddermarkfarm.ca Fri Jun 21 23:57:23 2024 From: stuff at riddermarkfarm.ca (Stuff Received) Date: Fri, 21 Jun 2024 09:57:23 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <2a834aef-2b52-6b16-b79a-7f321585a4b8@makerlisp.com> Message-ID: <5f1771cb-1d63-c3dc-9677-9289de1cf5cb@riddermarkfarm.ca> On 2024-06-20 17:00, ron minnich wrote (in part): > here's where I'd have to disagree: "./configure && make is not so bad, > it's not irrational, sometimes it's overkill, but it works" > > because it doesn't. Work, that is. At least, for me. And me. (Many configure+gmake scripts fail on Solaris 11.) S. From ads at salewski.email Fri Jun 21 23:58:36 2024 From: ads at salewski.email (Alan D. Salewski) Date: Fri, 21 Jun 2024 09:58:36 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87bk3vf64u.fsf@gmail.com> Message-ID: On Thu, Jun 20, 2024, at 21:43, segaloco via TUHS wrote: > On Thursday, June 20th, 2024 at 6:15 PM, Alexis wrote: > [...] >> The basic things are, in fact, a significant part of what autoconf >> is being used to check for. "Does this platform provide this >> function?" >> >> ... >> >> >> Alexis. > > This aspect of things I have found a bit perplexing. On one hand, > sure, it's nice to have some scripted system spit out a: > > "Dependency xyz not found" > > But where it falls apart in my head is what that tells me that, for > instance, cpp's error diagnostic about a missing include or ld saying a > symbol or library wasn't found does. It's in my mind a minor > convenience but one that doesn't justify all the machinery between > one's self and make just to provide. 
Granted that's not all autotools > does, so a poor example in practice, but in theory gets at one of my > irks, packaging something that you are already going to discover some > other way. That and "does my target platform list support xyz" isn't > necessarily a matter I'd wait until I've created a whole build package > around my software to settle... > > Just a small part of the puzzle but one of the parts that gives me more > headaches than not. Now I don't get to respond to a compiler or linker > asking for something by putting it where it asked for it, now I also > have to figure out how to do the extra work to ensure that it's put > somewhere and in a way that all this machinery between myself and the > compiler can also verify in its own magical way component is > present. [...] The thing about the autotools is that there are two different audiences for different aspects of the system: consumers of the package (sysadmins, porters, distro packagers...) and developers. In general, the developers need to work a little harder to make life of the consumers easier. While some consumers might be fine with (to use the example above) a cpp error diagnostic about a missing include, I imagine most would prefer a "configure time" diagnostic explaining that dependency package foo needs to be installed before they try to build it. Those who are not C or C++ developers wouldn't necessarily know what cpp is, or how to control it.[0] I, for one, appreciate that pretty much any build tool can be integrated into the autotools framework, in part because it acts as a barrier between the consumers and the underlying language-specific build tooling, etc. The same could be (and has been) said of portable Makefiles, but the level of effort would be quite high to achieve what the autotools-generated Makefiles produce out of the box. 
E.g., the 'make distcheck' target not only drives a full configure and build in a temporary directory, but also verifies that a VPATH build (building separately from the source tree) works. Language-specific build mechanisms are fine as far as it goes. But the more of them you encounter and need to interact with directly, the more friction there is. The audience for such tools is primarily the developer. Python's pip, JavaScript's npm (or yarn, or ...), golang, Rust's cargo, Java's mvn (or ant, or ivy, or ...), Clojure's lein, Perl's ExtUtils::MakeMaker (or Module::Build, or ...), Ruby's gem, ... And tooling to produce documentation is even worse: DocBook, AsciiDoc, reStructuredText. Even driving LaTeX has become complicated (PDFLaTeX, XeTeX, LuaTeX, ...). The developer who necessarily understands such things can, with some effort, integrate them into an autotools build, making life much easier for the consumers of the package (assuming the integration has been done well). When the audience or consumer of the package is sysadmins (including end-users acting as their own sysadmin), porters, and distro packagers, I think "the usual dance" of "./configure && make" is nice because it is consistent across underlying languages, compilers, and whatever auxiliary tools are needed to produce documentation, etc. And I happen to like m4 :-) *ducks* -Al [0] Sadly, the days when C was the lingua franca amongst programmers seem to be behind us. -- a l a n d. 
s a l e w s k i ads at salewski.email salewski at att.net https://github.com/salewski From tuhs at tuhs.org Sat Jun 22 01:38:13 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 11:38:13 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: On 6/17/24 6:34 PM, Steve Nickolas wrote: > On Mon, 17 Jun 2024, Stuff Received wrote: > >> On 2024-06-16 21:25, Larry McVoy wrote (in part): >> [...] >>> *Every* time they used some bash-ism, it bit us in the ass. >> >> This is so true for a lot of OS projects (on Github, for example).  Most >> -- sometimes all -- the scripts that start with /bin/sh but are full of >> bashisms because the authors run systems where /bin/sh is really bash. > > Which is why I'm glad Debian's /bin/sh is dash (fork of ash) instead. Like everything else, it depends on your goals. If portability across different OSs is a goal, and you can't be guaranteed that bash will be available, it's best to stick with POSIX features. If you want to run places where you can be guaranteed that bash exists, use whatever `bashisms' you like and use `#! /bin/bash'. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From tuhs at tuhs.org Sat Jun 22 01:41:08 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 11:41:08 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87msnl4ew0.fsf@gmail.com> <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> Message-ID: <0deb58a4-0153-4143-985c-d32b8b683c74@case.edu> On 6/17/24 9:52 PM, Steve Nickolas wrote: > It's still possible to port NetBSD's /bin/sh to Debian (I've done it, > called it "nash", but don't have any official release because I don't > really see a point). I ported it to macOS to use as a testing variant. You might contact kre and see if he's interested in your work; he and I talked a year or two ago about his possible future plans to release a portable version. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From steve at quintile.net Sat Jun 22 01:42:29 2024 From: steve at quintile.net (Steve Simon) Date: Fri, 21 Jun 2024 16:42:29 +0100 Subject: [TUHS] autotools In-Reply-To: <171892043335.3009365.5212614698282084592@minnie.tuhs.org> References: <171892043335.3009365.5212614698282084592@minnie.tuhs.org> Message-ID: my personal frustration with autotools was trying to port code to plan9. 
i wish autotools had an intermediate file format which described the packages requirements, that way i could have written my own backend to create my config.h and makefiles (or mkfiles) in the end i wrote my own tool which crudely parses a directory of C or F77 sourcecode and uses heuristics to create a config.h and a plan9 mkfile, it was named mkmk(1) it was almost _never_ completely correct, but usually got close enough that the files only needed a little manual hacking. it also took great pains to generate mkfiles that looked hand written; if you are going to auto generate files, make them look nice. -Steve From tuhs at tuhs.org Sat Jun 22 01:46:55 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 11:46:55 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> Message-ID: <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> On 6/20/24 4:12 PM, ron minnich wrote: > Personally, the autoconfig process does not fill me with confidence, and it > was recently responsible for a very serious security problem. And, > autoconfig doesn't work: I've lost track of how many times autoconf has > failed for me. In general, in my experience, autoconf makes for less > portability, not more. I'd be interested in some examples of this. I've had pretty decent success with autoconf-based portability. The one issue is cross-compiling between systems with different versions of libc (glibc vs. musl, for instance). Tools that run natively on the build platform have to be very portable, since they can't use config.h (which is for the target system). 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From tuhs at tuhs.org Sat Jun 22 01:57:44 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 11:57:44 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <877cej5gsp.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: On 6/20/24 7:35 PM, Alexis wrote: > i believe the idea is not for precomputing to be done by _builds_, but to > be done on and for a given machine and its configuration, independent of > any specific piece of software, which is then _queried_ by builds. That > precomputation would only need to be re-run when one of the things under > its purview changes. This is the rationale behind local (package-specific) and global (site-specific) versions of config.cache and `configure -C'. I use them all the time; they reduce configuration time considerably. (But then, I am probably building more often than someone who just downloads a source tarball, builds it, and installs the result.) > If i compile something on one of my OpenBSD boxen in the morning, and then > compile some other thing in the afternoon, without an OS upgrade > in-between, autoconf isn't going to find that libc.so has changed > in-between. configure and config.cache compute the results for a given build environment. If you change that, whose responsibility is it to update the dependencies? 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From henry.r.bent at gmail.com Sat Jun 22 02:06:09 2024 From: henry.r.bent at gmail.com (Henry Bent) Date: Fri, 21 Jun 2024 12:06:09 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> References: <87iky84c23.fsf@gmail.com> <20240617012531.GE12821@mcvoy.com> <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> Message-ID: On Fri, 21 Jun 2024 at 11:47, Chet Ramey via TUHS wrote: > On 6/20/24 4:12 PM, ron minnich wrote: > > > Personally, the autoconfig process does not fill me with confidence, and > it > > was recently responsible for a very serious security problem. And, > > autoconfig doesn't work: I've lost track of how many times autoconf has > > failed for me. In general, in my experience, autoconf makes for less > > portability, not more. > > I'd be interested in some examples of this. I've had pretty decent success > with autoconf-based portability. > I think it's important to make a distinction between autotools not working and the actual software distribution not being buildable. For example, I've recently been working with Ultrix V4.5. Most configure scripts are able to complete successfully with ksh or sh5, so I don't absolutely need bash (even though I do have it and use it). The difficulties begin when trying to compile the actual code; for example, Ultrix doesn't have strdup(). Almost every autotools-based package I've used doesn't bother checking if I have strdup() and/or providing a replacement. 
This isn't the fault of autotools, this is the fault of the code author not considering whether a lack of strdup() is a possibility. The end result, however, is the same - I don't have a buildable release as-is. I know that Ultrix is incredibly out of date, but I use it to illustrate that while there are corner cases that autotools won't catch, that isn't the fault of autotools. It would be no different with cmake or imake or meson or handwritten makefiles or anything else - if the software author doesn't bother checking for and coding around the corner case that comes up on your particular system, you're stuck unless you can fix the code. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Sat Jun 22 02:07:27 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 12:07:27 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87jzijf80p.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87jzijf80p.fsf@gmail.com> Message-ID: <44f0f159-5d7c-4603-8075-c9a69af8b76a@case.edu> On 6/20/24 8:34 PM, Alexis wrote: > Warner Losh writes: > >> Isn't that what the autoconf cache is for? > > There's a cross-project cache? That is, a cache not just for the project > for which autoconf was run, but for _all_ software built on that machine? Well, `all' is pretty broad. You can set some site-specific defaults: https://www.gnu.org/software/autoconf/manual/autoconf-2.63/html_node/Site-Defaults.html and use a site-specific cache file: https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Cache-Files.html This cache file can be written by every `configure' run, if you like. It's just a shell script.
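(Since the cache file is just a shell script, a site file can be read — and written — by hand. Variable names follow autoconf's ac_cv_* convention; the values here are illustrative:)

```shell
# /usr/local/share/config.site -- loaded when CONFIG_SITE points here;
# by default configure also looks in $prefix/share/config.site.
ac_cv_func_strdup=yes          # this system has strdup(3)
ac_cv_header_stdlib_h=yes      # <stdlib.h> is present
ac_cv_c_bigendian=no           # little-endian host
```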
> (Over the decades i've regularly observed instances where autoconf doesn't > seem to be making use of results of its previous runs for a particular > project; i don't know if that's because the build maintainer didn't > configure autoconf correctly. You have to tell configure to use the cache file. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From tuhs at tuhs.org Sat Jun 22 02:24:06 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 12:24:06 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> Message-ID: <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> On 6/21/24 12:06 PM, Henry Bent wrote: > On Fri, 21 Jun 2024 at 11:47, Chet Ramey via TUHS > wrote: > > On 6/20/24 4:12 PM, ron minnich wrote: > > > Personally, the autoconfig process does not fill me with confidence, > and it > > was recently responsible for a very serious security problem. And, > > autoconfig doesn't work: I've lost track of how many times autoconf has > > failed for me. In general, in my experience, autoconf makes for less > > portability, not more. > > I'd be interested in some examples of this. I've had pretty decent success > with autoconf-based portability. > > > I think it's important to make a distinction between autotools not working > and  the actual software distribution not being buildable. > > For example, I've recently been working with Ultrix V4.5.  
Most configure > scripts are able to complete successfully with ksh or sh5, so I don't > absolutely need bash (even though I do have it and use it).  The > difficulties begin when trying to compile the actual code; for example, > Ultrix doesn't have strdup(). For most projects, OS releases that ancient are not supported. It's the code author using some base minimum for assumptions -- OSs from the past 35 years or so should be safe (dating from the 4.4 BSD release, to use the strdup() example). Maybe that's the "code author not considering," but I'd say that's the result of the author simply not being interested in something that old. Bash ran on 4.3 BSD for a long time (and may still, I haven't checked with that project maintainer in a while), and I ran bash-5.0 on OPENSTEP 4.2 because I like it, but I'd say those are exceptions. I guess what I'm saying is that it's not the author's fault for not wanting to support OS versions released, for a significant percentage, before they were born. They have different priorities. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From henry.r.bent at gmail.com Sat Jun 22 02:40:23 2024 From: henry.r.bent at gmail.com (Henry Bent) Date: Fri, 21 Jun 2024 12:40:23 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> Message-ID: On Fri, 21 Jun 2024 at 12:24, Chet Ramey wrote: > > For most projects, OS releases that ancient are not supported. It's the > code author using some base minimum for assumptions -- OSs from the past > 35 years or so should be safe (dating from the 4.4 BSD release, to use > the strdup() example). Maybe that's the "code author not considering," > but I'd say that's the result of the author simply not being interested > in something that old. > > Bash ran on 4.3 BSD for a long time (and may still, I haven't checked with > that project maintainer in a while), and I ran bash-5.0 on OPENSTEP 4.2 > because I like it, but I'd say those are exceptions. > > I guess what I'm saying is that it's not the author's fault for not wanting > to support OS versions released, for a significant percentage, before they > were born. They have different priorities. > > Sure, and I don't disagree. I was just using an old OS to make a point about corner cases; it would be just as applicable if I had a modern OS that for whatever reason lacked strdup(), or your personal favorite "but everyone has this!" function. You're not going to be able to cover all bases all the time, and I'm sure that there are plenty of code authors who aren't interested in formally supporting anything outside of the most common operating systems. 
If their autotools-based projects work on my other OS that's great, but it isn't the fault of autotools if the project isn't coded with my OS in mind. -Henry -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Sat Jun 22 02:52:49 2024 From: imp at bsdimp.com (Warner Losh) Date: Fri, 21 Jun 2024 10:52:49 -0600 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> Message-ID: On Fri, Jun 21, 2024 at 10:40 AM Henry Bent wrote: > Sure, and I don't disagree. I was just using an old OS to make a point > about corner cases; it would be just as applicable if I had a modern OS > that for whatever reason lacked strdup(), or your personal favorite "but > everyone has this!" function. You're not going to be able to cover all > bases all the time, and I'm sure that there are plenty of code authors who > aren't interested in formally supporting anything outside of the most > common operating systems. If their autotools-based projects work on my > other OS that's great, but it isn't the fault of autotools if the project > isn't coded with my OS in mind. > Normally in modern software, "has it or not" is controlled by some pre-processor variable you can check. The problem comes in when you have under-conformant systems that claim conformance with POSIX.1-20xx, but that lack that one interface mandated by it (and one that's not controlled by some other thing... POSIX is super complex, for good and for ill). And you also have the edge case of "newly defined in C11" say, and the base compiler doesn't claim C11 conformance, but this function is nonetheless available. It's really really hard to know if it's there or not w/o testing for it.
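(Keying on a probe result rather than an OS or standards-version macro, as described above, is the usual way a package carries its own strdup() for a system like Ultrix — this is what autoconf's AC_REPLACE_FUNCS arranges. A minimal sketch; HAVE_STRDUP is assumed to come from a configure-style probe via config.h:)

```c
#include <stdlib.h>
#include <string.h>

/* Compiled into the package only when the probe failed to find
   strdup(); the decision is the probe's result, not an OS test. */
#ifndef HAVE_STRDUP
char *strdup(const char *s)
{
    size_t len = strlen(s) + 1;     /* include the terminating NUL */
    char *copy = malloc(len);

    if (copy != NULL)
        memcpy(copy, s, len);
    return copy;                    /* NULL on allocation failure */
}
#endif
```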
That even goes for "it's a linux box" since musl vs glibc has variations that you won't know about until you check. So it doesn't have to be something that should be as ubiquitous as strdup to run into issues. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Sat Jun 22 03:25:13 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 13:25:13 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> Message-ID: <6be4bbef-c303-4f93-bdb9-f879a96343cf@case.edu> On 6/21/24 12:40 PM, Henry Bent wrote: > If their autotools-based projects work on my > other OS that's great, but it isn't the fault of autotools if the project > isn't coded with my OS in mind. Yep, we agree on that. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From phil at ultimate.com Sat Jun 22 03:31:59 2024 From: phil at ultimate.com (Phil Budne) Date: Fri, 21 Jun 2024 13:31:59 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <0e6792ed-65b0-e2e1-8159-6426a7f15a8d@riddermarkfarm.ca> <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> Message-ID: <202406211731.45LHVxvP066621@ultimate.com> Henry Bent wrote: > On Fri, 21 Jun 2024 at 12:24, Chet Ramey wrote: > > I guess what I'm saying is that it's not the author's fault for not wanting > > to support OS versions released, for a significant percentage, before they > > were born. They have different priorities. > If their autotools-based projects work on my > other OS that's great, but it isn't the fault of autotools if the project > isn't coded with my OS in mind. autotools isn't a magic wand for making portable apps, it's a toolkit for probing the environment to discover which alternatives are available. I'm sure it's a mysterious waste of time to those who didn't live through the Un*x wars, a time when there were multiple major things called Un*x, multiple major vendors, and available features changed frequently. Code can only be kept portable by building and testing and fixing it across all the platforms, which is time intensive, and given the closer alignment of systems nowadays doesn't get much effort. For a look at the effort I once invested in making/keeping a pet project portable look at the range of platforms I had to test on at http://ftp.regressive.org/finger/00README For a current project: https://www.regressive.org/snobol4/csnobol4/2.3/stats.html (I tested builds on Red Hat (not Enterprise!) 
7.1 and FreeBSD 3.2) and https://www.regressive.org/snobol4/csnobol4/2.2/stats.html (includes OpenIndiana, a Solaris based distribution). I've never been an autotools fan; I've used it once, when I wanted to be able to build shared libraries across arbitrary platforms. I once used cpp for conditionalizing Makefiles, but that became unreliable once ANSI C hit the streets (ISTR throwing up my hands when I discovered "pee pee nums"). The REAL evil of autotools is that it builds on the premise that all problems can be solved using #ifdef. I drank a different drink mix three decades ago: https://www.usenix.org/legacy/publications/library/proceedings/sa92/spencer.pdf But you're still stuck with having to discover what's available. Apache Portable Runtime or Boost are environments that try to smooth over some platform differences (for varying values of some, depending on your needs), but that's a whole 'nuther belief system to buy into. Maybe this discussion needs to move to autoconf-haters? On the original topic, I keep imagining the systemd authors are trying to build a monolithic system; an operating system inside an operating system that someday systemd will appear inside of. Then it will be "systemd all the way down". Followups to systemd-haters? P.S. Some earlier post complained about the complexity of writing rc.d scripts for FreeBSD; the current system only requires a script setting some variables.
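(For the curious, a modern FreeBSD rc.d script really is little more than variable settings plus two rc.subr calls; "mydaemon" here is a hypothetical service:)

```shell
#!/bin/sh

# PROVIDE: mydaemon
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="mydaemon"
rcvar="mydaemon_enable"
command="/usr/local/sbin/${name}"

load_rc_config $name
: ${mydaemon_enable:="NO"}      # enabled via rc.conf

run_rc_command "$1"
```

rc.subr supplies the start/stop/status machinery; the PROVIDE/REQUIRE comments drive rcorder's dependency ordering.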
From tuhs at tuhs.org Sat Jun 22 03:55:24 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Fri, 21 Jun 2024 13:55:24 -0400 Subject: [TUHS] Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <202406211731.45LHVxvP066621@ultimate.com> References: <202406200501.45K5118a028500@sdf.org> <69f23275-0853-47ac-8c22-9be6fedff13c@case.edu> <773a7b19-279f-4caa-b0a8-44382870118c@case.edu> <202406211731.45LHVxvP066621@ultimate.com> Message-ID: <8de72821-51be-4cae-b9e7-5a5afc573046@case.edu> On 6/21/24 1:31 PM, Phil Budne wrote: > The REAL evil of autotools is that it builds on the premise that > all problems can be solved using #ifdef. That's certainly the most common idiom, and one that most autotools users (including me) predominantly use, but it's not the only way. If you want to provide a larger distribution, you can use AC_REPLACE_FUNCS for things that a particular target doesn't provide and isolate the complexity there. gnulib is a big help here. Or you can provide a compatibility layer and push the complexity down to it. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From flexibeast at gmail.com Sat Jun 22 10:04:29 2024 From: flexibeast at gmail.com (Alexis) Date: Sat, 22 Jun 2024 10:04:29 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: (Chet Ramey via TUHS's message of "Fri, 21 Jun 2024 11:57:44 -0400") References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> Message-ID: <87le2xvo4y.fsf@gmail.com> Chet Ramey via TUHS writes: > On 6/20/24 7:35 PM, Alexis wrote: >> If i compile something on one of my OpenBSD boxen in the >> morning, >> and then compile some other thing in the afternoon, without an >> OS >> upgrade in-between, autoconf isn't going to find that libc.so >> has >> changed in-between. > > configure and config.cache compute the results for a given build > environment. If you change that, whose responsibility is it to > update > the > dependencies? Sorry, i'm not sure i understand your question, particularly given that i was writing about the situation where a fundamental part of the build environment _hasn't_ changed .... Could you please elaborate or rephrase? Alexis. From tuhs at tuhs.org Sat Jun 22 10:33:38 2024 From: tuhs at tuhs.org (Warren Toomey via TUHS) Date: Sat, 22 Jun 2024 10:33:38 +1000 Subject: [TUHS] Thank you all for your civility Message-ID: All, recently I saw on Bruce Schneier "Cryptogram" blog that he has had to change the moderation policy due to toxic comments: https://www.schneier.com/blog/archives/2024/06/new-blog-moderation-policy.html So I want to take this opportunity to thank you all for your civility and respect for others on the TUHS and COFF lists. The recent systemd and make discussions have highlighted significant differences between people's experiences and opinions. Nonetheless, apart from a few pointed comments, the discussions have been polite and informative. 
These lists have been in use for decades now and, thankfully, I've only had to unsubscribe a handful of people for offensive behaviour. That's a testament to the calibre of people who are on the lists. Cheers and thank you again, Warren P.S. I'm a happy Devuan (non-systemd) user for many years now. From tuhs at tuhs.org Sun Jun 23 03:53:48 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Sat, 22 Jun 2024 13:53:48 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <87le2xvo4y.fsf@gmail.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> Message-ID: <76644602-7257-4050-b625-050966280e1c@case.edu> On 6/21/24 5:04 PM, Alexis wrote: > Chet Ramey via TUHS writes: > >> On 6/20/24 7:35 PM, Alexis wrote: > >>> If i compile something on one of my OpenBSD boxen in the morning, >>> and then compile some other thing in the afternoon, without an OS >>> upgrade in-between, autoconf isn't going to find that libc.so has >>> changed in-between. >> >> configure and config.cache compute the results for a given build >> environment. If you change that, whose responsibility is it to update >> the >> dependencies? > > Sorry, i'm not sure i understand your question, particularly given that i > was writing about the situation where a fundamental part of the build > environment _hasn't_ changed .... Could you please elaborate or rephrase? I think we're kind of saying the same thing. The autotools can't update dependencies if something in the environment changes, regardless of whether an OS update has occurred or not. That responsibility has to fall on the admin or other tools. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From luther.johnson at makerlisp.com Sun Jun 23 04:15:39 2024 From: luther.johnson at makerlisp.com (Luther Johnson) Date: Sat, 22 Jun 2024 11:15:39 -0700 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <76644602-7257-4050-b625-050966280e1c@case.edu> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> Message-ID: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> If I could say something a little more meta, and echoing an earlier comment - autotools, configure, etc, don't do the port for you - it's up to the author to decide and test what OS features are required, and if something hasn't been too implicitly assumed, if a "needs this" hasn't been left out, then the "configure && make" process will give you the right build for a system that is indeed, already supported. If it doesn't build, we can interpret that as "not supported", or that the author did not sufficiently adjust input to the build process, or test similar-enough configurations, to get the right build for that system. But that is not an indictment of this whole way of doing things, or the tools themselves, necessarily. It may just mean that someone made some fairly ordinary mistake along the way in setting up the build. Or that the system on which we are trying to build is different in a way that the author did not imagine. On 06/22/2024 10:53 AM, Chet Ramey via TUHS wrote: > On 6/21/24 5:04 PM, Alexis wrote: >> Chet Ramey via TUHS writes: >> >>> On 6/20/24 7:35 PM, Alexis wrote: >> >>>> If i compile something on one of my OpenBSD boxen in the morning, >>>> and then compile some other thing in the afternoon, without an OS >>>> upgrade in-between, autoconf isn't going to find that libc.so has >>>> changed in-between. 
>>> >>> configure and config.cache compute the results for a given build >>> environment. If you change that, whose responsibility is it to update >>> the >>> dependencies? >> >> Sorry, i'm not sure i understand your question, particularly given >> that i was writing about the situation where a fundamental part of >> the build environment _hasn't_ changed .... Could you please >> elaborate or rephrase? > > I think we're kind of saying the same thing. The autotools can't update > dependencies if something in the environment changes, regardless of > whether > an OS update has occurred or not. That responsibility has to fall on the > admin or other tools. > From davida at pobox.com Sun Jun 23 07:16:35 2024 From: davida at pobox.com (David Arnold) Date: Sun, 23 Jun 2024 07:16:35 +1000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> References: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> Message-ID: <1F747ECB-3BC2-405C-8EDF-96F655571B29@pobox.com> > On 23 Jun 2024, at 04:16, Luther Johnson wrote: > > If I could say something a little more meta, and echoing an earlier > comment - autotools, configure, etc, don't do the port for you - it's up > to the author to decide and test what OS features are required, and if > something hasn't been too implicitly assumed, if a "needs this" hasn't > been left out, then the "configure && make" process will give you the > right build for a system that is indeed, already supported. If it > doesn't build, we can interpret that as "not supported", or that the > author did not sufficiently adjust input to the build process, or test > similar-enough configurations, to get the right build for that system. 
The author thus ends up searching for a sweet spot: test too many things, and people complain that you’re wasting time checking something that is always true; test too few, and it will break on relatively common platforms. As an example, mentioned up-thread, building on Ultrix in 2024: you need to test and work around a bunch of things that have been fixed on anything updated since the mid-90’s to get a clean build on Ultrix, SunOS-4.x, etc. Your average Linux or macOS user sees this as pointless time wasting. There’s no right answer here: someone is always annoyed at you. d From dave at horsfall.org Sun Jun 23 10:13:14 2024 From: dave at horsfall.org (Dave Horsfall) Date: Sun, 23 Jun 2024 10:13:14 +1000 (EST) Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> Message-ID: On Fri, 14 Jun 2024, Michael Kjörling wrote: [...] > journalctl -f -u 'smtpd' > > or whatever else might map to the SMTP server software you're running. > (Sure, it gets slightly more complicated if you don't know what SMTP > server software is in use, but in that case I think a case can be made > for why do you even care about its logs?) My server runs Sendmail, and I have no idea what "journalctl" is (it sounds Penguin-ish, which I definitely don't run). And I care so that I can firewall the buggers on the spot... 
-- Dave From tuhs at tuhs.org Sun Jun 23 10:29:29 2024 From: tuhs at tuhs.org (segaloco via TUHS) Date: Sun, 23 Jun 2024 00:29:29 +0000 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1F747ECB-3BC2-405C-8EDF-96F655571B29@pobox.com> References: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <1F747ECB-3BC2-405C-8EDF-96F655571B29@pobox.com> Message-ID: On Saturday, June 22nd, 2024 at 2:16 PM, David Arnold wrote: > > On 23 Jun 2024, at 04:16, Luther Johnson luther.johnson at makerlisp.com wrote: > > > > If I could say something a little more meta, and echoing an earlier > > comment - autotools, configure, etc, don't do the port for you - it's up > > to the author to decide and test what OS features are required, and if > > something hasn't been too implicitly assumed, if a "needs this" hasn't > > been left out, then the "configure && make" process will give you the > > right build for a system that is indeed, already supported. If it > > doesn't build, we can interpret that as "not supported", or that the > > author did not sufficiently adjust input to the build process, or test > > similar-enough configurations, to get the right build for that system. > > > The author thus ends up searching for a sweet spot: test too many things, and people complain that you’re wasting time checking something that is always true; test too few, and it will break on relatively common platforms. > > As an example, mentioned up-thread, building on Ultrix in 2024: you need to test and work around a bunch of things that have been fixed on anything updated since the mid-90’s to get a clean build on Ultrix, SunOS-4.x, etc. Your average Linux or macOS user sees this as pointless time wasting. > > There’s no right answer here: someone is always annoyed at you. > > > > > d Well and part of that is indeed being intentional about platform support, fancy toolkit or not. 
If you're specifically intending to support just a particular vendor's platform, you make programming choices, not build machinery choices, towards that end. Similarly, if you truly expect something you write to work everywhere with minimal modification, no amount of build machinery is a substitute for pulling up and adhering as closely to POSIX as possible, same with non-UNIX platforms and being intentional about ANSI C. Now, if you're trying to use things these least common denominators don't include, like for instance ANSI C+thread support, build machinery still isn't going to be what makes your program work, you have to write or otherwise incorporate an abstraction layer over the available system services, be it pthreads, Win32 threads, etc. That all said, one excellent point I must agree with up thread is that a build system will be used by a small handful of devs but then countless consumers who aren't experts in programming and who expect things to "just work". We as programmers may be fine with an "alias cc=" to avoid some lengthy cross-compiler detection mechanism in our own development process, but tell every consumer they have to set a bunch of environment variables and/or aliases to get a makefile to work and they'll throw their hands up and look for something with a configure script *even if* the former really is an easier and more dependable way to get exactly what you want. I say this because the former is how I've handled personal projects for a bit now, if I do need any environmental difference from the flat makefile in my project, I set up that environment just-in-time by hand, which usually just amounts to aliasing a command or two, maybe adding to LD_LIBRARY_PATH, that sort of thing, or if it's frequent enough, just do this with a teeny tiny script.
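(Such a teeny tiny just-in-time setup script can be a few lines sourced before building; the compiler name and paths here are hypothetical examples:)

```shell
# env.sh -- source by hand before building:  . ./env.sh
# Hypothetical cross compiler for this project:
CC="arm-none-eabi-gcc"
# Prepend a private library directory, preserving any existing path:
LD_LIBRARY_PATH="$HOME/opt/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export CC LD_LIBRARY_PATH
```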
I like the control and terseness of the setup, but throw that at someone who is used to using a package manager or at most using a configure script or CMake, and yeah, they'd probably balk at the lack of "sophistication" in your distributable. Looking at it as an accessibility matter rather than a developmental necessity certainly gives me different feelings about all this stuff. I may not need it and indeed would likely be mildly inconvenienced in some situations, but for someone else, it's a crucial piece of their experience. These words may ring true about the original subject of systemd as well... - Matt G. From flexibeast at gmail.com Sun Jun 23 11:47:52 2024 From: flexibeast at gmail.com (Alexis) Date: Sun, 23 Jun 2024 11:47:52 +1000 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: (Dave Horsfall's message of "Sun, 23 Jun 2024 10:13:14 +1000 (EST)") References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> Message-ID: <87o77s77lj.fsf@gmail.com> Dave Horsfall writes: > My server runs Sendmail, and I have no idea what "journalctl" is > (it > sounds Penguin-ish, which I definitely don't run). It's systemd's program for accessing the binary logs it generates. So, yes, it's Penguin, in the sense that systemd is explicitly not supported on anything other than Linux. Alexis. 
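(For readers who, like Dave, have never met it: the query side looks like the commands below. The unit name depends on which MTA the system actually runs:)

```shell
# Follow the SMTP server's log, like tail -f on a logfile:
journalctl -f -u smtpd.service

# Everything the unit has logged since the current boot:
journalctl -b -u smtpd.service

# Export entries as JSON for scripting, e.g. to feed a firewall rule:
journalctl -u smtpd.service -o json --since today
```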
From tytso at mit.edu Mon Jun 24 04:50:57 2024 From: tytso at mit.edu (Theodore Ts'o) Date: Sun, 23 Jun 2024 14:50:57 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1F747ECB-3BC2-405C-8EDF-96F655571B29@pobox.com> References: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <1F747ECB-3BC2-405C-8EDF-96F655571B29@pobox.com> Message-ID: <20240623185057.GA7185@mit.edu> On Sun, Jun 23, 2024 at 07:16:35AM +1000, David Arnold wrote: > > As an example, mentioned up-thread, building on Ultrix in 2024: you > need to test and work around a bunch of things that have been fixed > on anything updated since the mid-90’s to get a clean build on > Ultrix, SunOS-4.x, etc. Your average Linux or macOS user sees this > as pointless time wasting. In practice, the set of OS's which a particular software package using autotools supports will change over time. For me and e2fsprogs, just to give a concrete example, there is the set of platforms to which I personally have access. Many years ago this included Ultrix, OSF/1, AIX, Irix, and Solaris; and when I did have easy access to those machines, it wasn't hard for me to do a test compile, and when things broke, it was easy enough to add (or write) an autoconf test, and fix that particular build or test breakage. There would also be OS's for which I did not have direct access --- for example HPUX, where (a) sometimes the portability work that I did for AIX or Solaris would address some portability issue on HPUX, or (b) someone building the package on HPUX would send me a bug report, and in general, it would be really easy to fix up the sources so that things worked on HPUX. So surprise, Autotools is not magic. It will not magically make your code portable. However, for someone who *wants* to make portable code, I've found autotools to be the most developer friendly way of supporting portable code.
Certainly, it's ***way*** more user-friendly than imake, which I have had the misfortune of having to use, and because autotools tests features, and not OS's, it's much easier to support than needing to have a manually curated set of #define's for each OS that you want to support. (I can't believe people think that's a good idea; I would find that incredibly painful and it would involve much more work.) What does happen over time, though, is that when the maintainer, or the development community at large, loses access to a hardware and/or OS combination, support for that platform will start to rot, and things will gradually start breaking as new feature development accidentally adds some dependency which can't be guaranteed everywhere. Yes, you can have someone strictly trying to require that no advanced feature beyond that which was available in BSD 4.3 be used, in the name of portability, but sometimes there are real functional and performance sacrifices that might get made if you do that. > There’s no right answer here: someone is always annoyed at you. Indeed, and there will always be people who want to backseat drive and tell you that you should do things their way. I find there's a lot of religion wrapped up here, much like the choice of ed, vi or emacs.... - Ted From tuhs at tuhs.org Mon Jun 24 04:56:49 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Sun, 23 Jun 2024 14:56:49 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> Message-ID: On 6/22/24 2:15 PM, Luther Johnson wrote: > It may just mean that someone made some > fairly ordinary mistake along the way in setting up the build.
Or that > the system on which we are trying to build is different in a way that > the author did not imagine. A third possibility is that the author or authors decided not to test for and support a set of (in this case) older systems. Not a lack of imagination, but a development priority decision. -- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From tytso at mit.edu Mon Jun 24 05:00:02 2024 From: tytso at mit.edu (Theodore Ts'o) Date: Sun, 23 Jun 2024 15:00:02 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <87o77s77lj.fsf@gmail.com> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> <87o77s77lj.fsf@gmail.com> Message-ID: <20240623190002.GB7185@mit.edu> On Sun, Jun 23, 2024 at 11:47:52AM +1000, Alexis wrote: > Dave Horsfall writes: > > > My server runs Sendmail, and I have no idea what "journalctl" is (it > > sounds Penguin-ish, which I definitely don't run). > > It's systemd's program for accessing the binary logs it generates. So, yes, > it's Penguin, in the sense that systemd is explicitly not supported on > anything other than Linux. Systemd certainly isn't a pioneer in terms of binary log files. The first such "innovation" that I can think of is Ultrix's (and later OSF/1 and Tru64)'s uerf (Ultrix error report formatter). AIX also had binary error logs that needed to be decoded using the errpt command. And Solaris's audit logs are also stored in a binary format. 
All of these "innovations" consider it a Feature that it becomes easier to store and filter on structured data, instead of trying to write complex regex's to pull out events that match some particular query. - Ted From als at thangorodrim.ch Mon Jun 24 06:04:22 2024 From: als at thangorodrim.ch (Alexander Schreiber) Date: Sun, 23 Jun 2024 22:04:22 +0200 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <20240623190002.GB7185@mit.edu> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> <87o77s77lj.fsf@gmail.com> <20240623190002.GB7185@mit.edu> Message-ID: On Sun, Jun 23, 2024 at 03:00:02PM -0400, Theodore Ts'o wrote: > On Sun, Jun 23, 2024 at 11:47:52AM +1000, Alexis wrote: > > Dave Horsfall writes: > > > > > My server runs Sendmail, and I have no idea what "journalctl" is (it > > > sounds Penguin-ish, which I definitely don't run). > > > > It's systemd's program for accessing the binary logs it generates. So, yes, > > it's Penguin, in the sense that systemd is explicitly not supported on > > anything other than Linux. > > Systemd certainly isn't a pioneer in terms of binary log files. The > first such "innovation" that I can think of is Ultrix's (and later > OSF/1 and Tru64)'s uerf (Ultrix error report formatter). AIX also had > binary error logs that needed to be decoded using the errpt command. > And Solaris's audit logs are also stored in a binary format. AIX sort of gets a pass here on account of being on the weird side to begin with and bonus points for not using DB/2 for primary log storage ;-) > All of these "innovations" consider it a Feature that it becomes > easier to store and filter on structured data, instead of trying to > write complex regex's to pull out events that match some particular > query. 
Except you now have to do the additional step of extracting the data from the binary logs and _then_ apply the regex filter you were going to use in the first place, which makes the logs less accessible. All of my systemd running machines still get rsyslog plugged into it so it can deliver the logs to my central log host (which then dumps them into PostgreSQL) - and to enable a quick rummage in the local logs via less & grep. Kind regards, Alex. -- "Opportunity is missed by most people because it is dressed in overalls and looks like work." -- Thomas A. Edison From stuff at riddermarkfarm.ca Mon Jun 24 06:15:40 2024 From: stuff at riddermarkfarm.ca (Stuff Received) Date: Sun, 23 Jun 2024 16:15:40 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> Message-ID: <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> On 2024-06-23 14:56, Chet Ramey via TUHS wrote: > On 6/22/24 2:15 PM, Luther Johnson wrote: >> It may just mean that someone made some >> fairly ordinary mistake along the way in setting up the build. Or that >> the system on which we are trying to build is different in a way that >> the author did not imagine. > > A third possibility is that the author or authors decided not to test for > and support a set of (in this case) older systems. Not a lack of > imagination, but a development priority decision. My opinion is that the authors simply did not have access to other systems or were not interested. Sometimes, one finds a disclaimer to that effect. I understand that but I am irked when they claim POSIX compliance. S. 
From tytso at mit.edu Mon Jun 24 23:50:49 2024 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 24 Jun 2024 09:50:49 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> <87o77s77lj.fsf@gmail.com> <20240623190002.GB7185@mit.edu> Message-ID: <20240624135049.GA280025@mit.edu> On Sun, Jun 23, 2024 at 10:04:22PM +0200, Alexander Schreiber wrote: > Except you now have to do the additional step of extracting the > data from the binary logs and _then_ apply the regex filter you > were going to use in the first place, which makes the logs less > accessible. All of my systemd running machines still get rsyslog > plugged into it so it can deliver the logs to my central log host > (which then dumps them into PostgreSQL) - and to enable a quick > rummage in the local logs via less & grep. Well, no, not necessarily. You *could* just query the structured data directly, which avoids needing to do complex, error-prone parsing of the data using complex (and potentially easy-to-fool) regex's. If this is being done to trigger automation to handle certain exception conditions, this could potentially be a security vulnerability if the attacker could use /usr/ucb/logger to insert arbitrary text into the system logs which could then fool the regex parser (in the worst case, if things aren't properly escaped when handling the parsed text, there might be a Bobby Tables style attack[1]). [1] https://xkcd.com/327/ Now, you could say that there could be two separate event notifications; one sent via the old-fashioned text-based syslog system, and a different, structured one better suited for large-scale automation.
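[A toy sketch of the point about structured data being easier to filter: a query over typed fields is a field comparison, not a regex over free-form text, so attacker-controlled bytes in the message field can't change what matches. The record layout here is invented for the sketch; real formats (errpt's, journald's, the fsnotify schema) are of course richer.]

```c
/* Filtering structured log records by field.  Injected text in the
 * free-form msg field (e.g. via logger(1)) cannot masquerade as a
 * different severity or subsystem, because those live in typed fields. */
#include <assert.h>
#include <string.h>

enum severity { SEV_INFO, SEV_WARN, SEV_ERR };

struct logrec {
    enum severity sev;
    const char *unit;   /* originating subsystem */
    const char *msg;    /* free-form, possibly attacker-influenced */
};

/* Count records matching a (severity, unit) query: two field
 * comparisons, no text parsing involved. */
static int count_matches(const struct logrec *recs, int n,
                         enum severity sev, const char *unit)
{
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (recs[i].sev == sev && strcmp(recs[i].unit, unit) == 0)
            hits++;
    return hits;
}
```

[Contrast with grepping a flat text log, where a message body containing "fs: I/O error" would satisfy the same regex as a genuine filesystem error report.]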
So for example, at $WORK we *used* to grovel through the system logs looking for file system corruption reports which were sent to the console logs. But we've since added an fsnotify schema to the upstream Linux kernel which sends a structured event notification which isn't nearly as error-prone, and which doesn't require parsing to fetch the device name, etc. We didn't remove the old-style "print to the console log", for backwards compatibility reasons, so this didn't break people who were used to parsing the console log by hand. Linux kernel developers are a lot more about backwards compatibility than some userspace projects (including, unfortunately, systemd). However, this dysfunction isn't limited to Linux userspace. Unfortunately, in the case of (I think, but I'll let Digital folks correct me if I'm wrong) Digital's uerf and AIX's unspeakable horror ("AIX --- it *reminds* you of Unix"), backwards compatibility wasn't considered as important by the Product Managers who make these sorts of decisions. Back in Linux-land, for a while, a number of distributions had rsyslog installed in parallel to the systemd logging system precisely to provide that backwards compatibility. A stable release later, because this meant logging data was being written twice to stable storage, rsyslog was dropped as the default for some distributions, but of course, you can just install rsyslog by hand if you still want to use it. The bottom line is that while people seem to be ranting and raving about systemd --- and there are a lot of things that are terrible about systemd, don't get me wrong --- I find it interesting that legacy Unix systems get viewed with kind of a rosy-eyed set of glasses in the past, when in fact, the "good old days" weren't necessarily all that good --- and there *are* reasons why some engineers have considered plain text ala the 1970's Unix philosophy to not necessarily be the final word in systems design.
Cheers, - Ted From tytso at mit.edu Tue Jun 25 00:03:37 2024 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 24 Jun 2024 10:03:37 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> Message-ID: <20240624140337.GB280025@mit.edu> On Sun, Jun 23, 2024 at 04:15:40PM -0400, Stuff Received wrote: > My opinion is that the authors simply did not have access to other > systems or were not interested. Sometimes, one finds a disclaimer to > that effect. I understand that but I am irked when they claim POSIX > compliance. I get irked because Posix compliance applies to OS's (a specific binary release of the kernel plus userspace runtime environment), and not to applications. Also, compliance implies that it has passed a specific test process, after paying $$$$ to a Posix Test Compliance Lab, and said compliance certificate gets revoked the moment you fix a security bug, until you go and pay additional $$$ to the Posix compliance lab. Basically, it's a racket that generally only companies who need to sell into the US or European government market were willing to play. (e.g., at one point there were Red Hat and SuSE distributions which were POSIX certified, but Fedora or Debian never were.)
A project or vendor could claim that their product was a "strictly conforming POSIX application"[1], but that's hard to actually prove (which is why there is no compliance testing for it), since not only do you have to limit yourself to only those interfaces guaranteed to be present by POSIX, but you must also not depend on any behavior which is specified to be "implementation defined" (and very often many traditional Unix behaviors are technically "implementation defined", so that VMS and Windows could claim to be a "POSIX compliant implementation".) So a strictly POSIX conforming application was likely only relevant for very simple "toy" applications that didn't need to do anything fancy, like say, networking. (Berkeley sockets couldn't be required because AT&T Streams. Oh, joy.) [1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap02.html#tag_02_02_01 Can you tell I'm a bit jaded and cynical about the whole Posix compliance/conformance thing? :-) Cheers, - Ted From crossd at gmail.com Tue Jun 25 00:21:27 2024 From: crossd at gmail.com (Dan Cross) Date: Mon, 24 Jun 2024 10:21:27 -0400 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <20240624135049.GA280025@mit.edu> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> <87o77s77lj.fsf@gmail.com> <20240623190002.GB7185@mit.edu> <20240624135049.GA280025@mit.edu> Message-ID: On Mon, Jun 24, 2024 at 9:51 AM Theodore Ts'o wrote: >[snip] > > The bottom line is that while people seem to be ranting and raving > about systemd --- and there are a lot of things that are terrible > about systemd, don't get me wrong --- I find it interesting that > legacy Unix systems get viewed with kind of a rosy-eyed set of glasses > in the past, when in fact, the "good old days" weren't necessary all > that good --- and there *are* reasons why some engineers have > considered
plain text ala the 1970's Unix philosophy to not > necessarily be the final word in systems design. I must concur here. To bring this back to history, I think it's useful to consider the context in which, "use text, as it's a universal interchange format" arose. We _are_ talking about the 1970s here, where there was a lot more variation between computers than nowadays; back in that era, you still had a lot of word-oriented machines with non-power-of-2 word sizes, one's complement machines, and the world had not yet coalesced around the 8-bit byte (much of networking is _still_ defined in terms of "octets" because of this). In that era, yeah, it was just easier to move text between programs: transporting a program from a 16-bit machine to a 32-bit machine didn't mean changing parsing routines, for example. Contrast this to today, where things are much more homogenized, even between different ISAs. Most ISAs are little endian, and for general purpose machines 8 bit bytes, power-of-two integer widths, and 2's complement are pretty much universal (I'm aware that there are some embedded and special purpose processors --- like some types of DSPs --- for which this is not true. But I'm not trying to run Unix on those). Furthermore, we have robust serialization formats that allow us to move binary data between dissimilar machines in a well-defined manner; things like XDR -- dating back almost 40 years now -- paved the way for Protobuf and all the rest of them. In this environment, the argument for "text first!" isn't as strong as it was in the 70s. Something that I think also gets lost here is that we also have well-defined, text-based serialization formats for structured data. Things like sexprs, JSON, have all been employed to good effect here. You can have your textual cake and eat your structured data, too! I think what irks people more is that the traditional, line-oriented tools we all know and love are no longer prioritized. 
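[The role of XDR that Dan mentions can be sketched in a few lines: it pinned down the wire format (big-endian, 4-byte units), so a 16-bit and a 64-bit host, of either endianness, agree on the bytes. This shows only the wire convention XDR defined, not the actual xdr(3) API.]

```c
/* XDR-style integer serialization: a 32-bit value is always written
 * as 4 bytes, most significant first, regardless of the host's
 * native byte order or word size. */
#include <assert.h>
#include <stdint.h>

static void xdr_put_u32(uint8_t buf[4], uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)v;
}

static uint32_t xdr_get_u32(const uint8_t buf[4])
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}
```

[Because encode and decode are defined in terms of bytes, not memory layout, the round trip is exact between dissimilar machines; this is the property Protobuf and its successors inherited.]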
But to me that's an invitation to ask "why?" The default assumption seems to be that the people who don't are just ignorant, or worse, stupid. But could it be that they have actual, real-world problems that are not well served by those tools? So it is with systemd. I don't like it, and the recent, "deletes your homedir lol you're holding it wrong lmao" thing solidifies that opinion, but in some ways it's actually _more_ Unix-y than some of the alternatives. Take smf, where nothing screams "UNIX!!!" at me more than XML-based config files consumed by giant libraries. Systemd, at least, is broken into a bunch of little programs that each do one thing (sorta...) well, and it uses somewhat-readable text-based configuration files and symlinks. Indeed, we look at what we consider "real Unix" with some very rosy glasses. Perhaps that's why we overlook un-Unix-like functionality like Solaris's "profile" facilities, where the kernel does an upcall to a userspace daemon to determine what privileges a program should have? Or how about the IP management daemon, in.ndpd, or the rest of the libipadm.so stuff? Unix hasn't been Unix for a very long time now. - Dan C. From crossd at gmail.com Tue Jun 25 00:33:08 2024 From: crossd at gmail.com (Dan Cross) Date: Mon, 24 Jun 2024 10:33:08 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <20240624140337.GB280025@mit.edu> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> <20240624140337.GB280025@mit.edu> Message-ID: On Mon, Jun 24, 2024 at 10:12 AM Theodore Ts'o wrote: > On Sun, Jun 23, 2024 at 04:15:40PM -0400, Stuff Received wrote: > > My opinion is that the authors simply did not have access to other > > systems or were not interested. 
Sometimes, one finds a disclaimer to > > that effect. I understand that but I am irked when they claim POSIX > > compliance. > > I get irked because Posix compliance applies to OS's (a specific > binary release of the kernel plus userspace runtime environment), and > not to applications. > > Also, compliance implies that it has passed a specific test process, > after paying $$$$ to a Posix Test Compliance Lab, and said compliance > certificate gets revoked the moment you fix a security bug, until you > go and you pay additional $$$ to the Posix compliance lab. Basically, > it's racket that generally only companies who need to sell into the US > or European government market were willing to play. (e.g., at one > point there were Red Hat and SuSE distributions which were POSIX > certified, but Fedora or Debian never were.) > > A project or vendor could claim that there product was a "strictly > conforming POSIX application[1], but that's hard to actually prove > (which is why there is no compliance testing for it), since not only > do you have to limit yourself to only those interface guaranted to be > present by POSIX, but you must also not depend on any behavior which > specified to be "implementation defined" (and very often many > traditional Unix behaviors are technically "implementation defined", > so that VMS and Windows could claim to be be "POSIX compliant > implementation".) So a strictly POSIX conforming application was > likely only relevant for very simple "toy" applications that didn't > need to do anything fancy, like say, networking. Also, what is "POSIX" changes over time: new things are added, and occasionally something is removed. Indeed, a new version was just released a couple of weeks ago. So what does it mean to say that some OS conforms to POSIX? Which version? 
For some very old systems, particularly those that are no longer being substantially updated but that may have conformed to an older version of the standard, they may have credibly claimed "POSIX compliant" at some point in the past, but time has left them behind. It is unreasonable to constrain program authors to ancient versions of standards just because some tiny fraction of people want to use an old system. Consider 4.3BSD, for example: it shipped with a compiler that predated the ANSI C standard, and doesn't understand the ANSI-style function declaration syntax. Should one restrict oneself to the traditional C dialect forever? If so, one loses out on the substantial benefits of stronger type checking. Or consider better string handling functions that came later (`snprintf` is an obvious example, but I would argue `strlcpy` and `strlcat` as well). Should we restrict ourselves to laborious and error-prone shenanigans with `strlen` and `strcpy` just to keep code running on a Sun4c machine under SunOS 4? I really don't think so. - Dan C. > (Berkeley sockets couldn't be required because AT&T Streams. Oh, > joy.) > > [1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap02.html#tag_02_02_01 > > Can you tell I'm a bit jaded and cynical about the whole Posix > compliance/conformance thing? :-) > > Cheers, > > - Ted From crossd at gmail.com Tue Jun 25 00:46:14 2024 From: crossd at gmail.com (Dan Cross) Date: Mon, 24 Jun 2024 10:46:14 -0400 Subject: [TUHS] Fwd: [multicians] Dennis Ritchie's 1993 Usenet posting "BTL Leaves Multics" In-Reply-To: References: Message-ID: FYI, Tom Van Vleck just passed this on the Multicians list; DMR's recollections of the end of Multics at BTL. I can't resist asking about the nugget buried in here about Ken writing a small kernel for the 645. Is that in the archives anywhere? - Dan C. 
---------- Forwarded message --------- From: Tom Van Vleck via groups.io Date: Mon, Jun 24, 2024 at 10:38 AM Subject: [multicians] Dennis Ritchie's 1993 Usenet posting "BTL Leaves Multics" To: in "alt.os.multics" about Unix, CTSS, Multics, BTL, qed, and mail https://groups.google.com/g/alt.os.multics/c/1iHfrDJkyyE Comments by DMR, me, RMF, PAG, PAK, BSG, PWB, JJL, AE, MAP, EHR, DMW Covers many issues. (I feel like we should save this thread somehow. hard to trust Google any more. the posting ends with a heading of a response by JWG but no content.) -- sent via multicians at groups.io -- more Multics info at https://multicians.org/ From steffen at sdaoden.eu Tue Jun 25 01:03:50 2024 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 24 Jun 2024 17:03:50 +0200 Subject: [TUHS] =?utf-8?q?Version_256_of_systemd_boasts_=2742=25_less_Uni?= =?utf-8?q?x_philosophy=27_=E2=80=A2_The_Register?= In-Reply-To: <20240624135049.GA280025@mit.edu> References: <73819d1a-395a-4b74-a20c-0123fbed56bd@technologists.com> <22508b22-db5f-491e-bc02-2d4ab4d33cd9@home.arpa> <87o77s77lj.fsf@gmail.com> <20240623190002.GB7185@mit.edu> <20240624135049.GA280025@mit.edu> Message-ID: <20240624150350.t0d8Mq_H@steffen%sdaoden.eu> Theodore Ts'o wrote in <20240624135049.GA280025 at mit.edu>: |On Sun, Jun 23, 2024 at 10:04:22PM +0200, Alexander Schreiber wrote: |> Except you now have to do the additional step of extracting the |> data from the binary logs and _then_ apply the regex filter you |> were going to use in the first place, which makes the logs less |> accessible.
All of my systemd running machines still get rsyslog |> plugged into it so it can deliver the logs to my central log host |> (which then dumps them into PostgreSQL) - and to enable a quick |> rummage in the local logs via less & grep. |Well, no, not necessarily. You *could* just query the structured data |directly, which avoids needing to do complex, and error-prone parsing |of the data using complex (and potentially easy to fool regex's). If ... |console logs. But we've since added an fsnotify schema to the |upstream Linux kernel which sends a structured event notification |which isn't nearly as error-prone, and which doens't require parsing |to fetch the device name, etc. We didn't remove the old-style "print ... |The bottom line is that while people seem to be ranting and raving |about systemd --- and there are a lot of things that are terrible |about systemd, don't get me wrong --- I find it interesting that |legacy Unix systems get viewed with kind of a rosy-eyed set of glasses |in the past, when in fact, the "good old days" weren't necessary all |that good --- and there *are* reasons why some engineers have |considered plain text ala the 1970's Unix philosophy to not |necessarily be the final word in systems design. But that is the thing, and it has nothing to do with systemd vs a normal syslog. I always wondered why things like fail2ban are used to make active system decisions, including active firewall setup, where daemons which *know* (to their extent) a state transform it to string data, send it to syslog (or files), which then gets parsed again. Christos Zoulas of NetBSD then came over with blacklistd about a decade ago, with patches for OpenSSH and postfix and some more, where at least authentication (failure) events are collected at the core where they happen, and then sent to blacklistd, which has backends for certain firewalls. The newest OpenSSH has something internal built-in that does not report the event, as far as i know.
All those do not help against nonsense connections, like premature forced breaks, or ditto with hanging on the port until the server (or OS even) closes the connection, TLS setup failures, downloading the same file a thousand times, etc etc. A pity it is. The server knows, the firewall or a controller needs to know. All that deep inspection shit of the past, and all the guessing on connection count, interpolating connectivity bandwidth, and what not, "done in the firewall", blind flight. No one jumped on the train that blacklistd started, even when asked for such possibilities. Sorry, i do not see why a binary "journal" log is any better than a plain text file except that possibly programs can be addressed more easily ... by their name. Btw i hope for that new capability-for-namespaces thing that lwn reported, that would be cool! (Btw with 6.1.93 i could not "cryptsetup close" unmounted mediums that were mounted before kvm driven virtual machines moved to inside cgroup namespaces were started. Off-topic, of course.) --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From lars at nocrew.org Tue Jun 25 01:10:07 2024 From: lars at nocrew.org (Lars Brinkhoff) Date: Mon, 24 Jun 2024 15:10:07 +0000 Subject: [TUHS] Fwd: [multicians] Dennis Ritchie's 1993 Usenet posting "BTL Leaves Multics" In-Reply-To: (Dan Cross's message of "Mon, 24 Jun 2024 10:46:14 -0400") References: Message-ID: <7wfrt2cr74.fsf@junk.nocrew.org> Dan Cross wrote: > I can't resist asking about the nugget buried in here about Ken > writing a small kernel for the 645. Is that in the archives anywhere? As far as I understand, the code that is available is for the DPS-8M. I have asked around for older code. I don't remember exactly, but possibly 6180 era code might be around, but unfortunately not 645.
From imp at bsdimp.com Tue Jun 25 01:17:49 2024 From: imp at bsdimp.com (Warner Losh) Date: Mon, 24 Jun 2024 09:17:49 -0600 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> <20240624140337.GB280025@mit.edu> Message-ID: On Mon, Jun 24, 2024 at 8:33 AM Dan Cross wrote: > On Mon, Jun 24, 2024 at 10:12 AM Theodore Ts'o wrote: > > On Sun, Jun 23, 2024 at 04:15:40PM -0400, Stuff Received wrote: > > > My opinion is that the authors simply did not have access to other > > > systems or were not interested. Sometimes, one finds a disclaimer to > > > that effect. I understand that but I am irked when they claim POSIX > > > compliance. > > > > I get irked because Posix compliance applies to OS's (a specific > > binary release of the kernel plus userspace runtime environment), and > > not to applications. > > > > Also, compliance implies that it has passed a specific test process, > > after paying $$$$ to a Posix Test Compliance Lab, and said compliance > > certificate gets revoked the moment you fix a security bug, until you > > go and you pay additional $$$ to the Posix compliance lab. Basically, > > it's racket that generally only companies who need to sell into the US > > or European government market were willing to play. (e.g., at one > > point there were Red Hat and SuSE distributions which were POSIX > > certified, but Fedora or Debian never were.) 
> > > > A project or vendor could claim that there product was a "strictly > > conforming POSIX application[1], but that's hard to actually prove > > (which is why there is no compliance testing for it), since not only > > do you have to limit yourself to only those interface guaranted to be > > present by POSIX, but you must also not depend on any behavior which > > specified to be "implementation defined" (and very often many > > traditional Unix behaviors are technically "implementation defined", > > so that VMS and Windows could claim to be be "POSIX compliant > > implementation".) So a strictly POSIX conforming application was > > likely only relevant for very simple "toy" applications that didn't > > need to do anything fancy, like say, networking. > > Also, what is "POSIX" changes over time: new things are added, and > occasionally something is removed. Indeed, a new version was just > released a couple of weeks ago. So what does it mean to say that some > OS conforms to POSIX? Which version? For some very old systems, > particularly those that are no longer being substantially updated but > that may have conformed to an older version of the standard, they may > have credibly claimed "POSIX compliant" at some point in the past, but > time has left them behind. > Certification only lasts a certain amount of time... And the compliance isn't with POSIX, but POSIX.1-2008 or POSIX.1-2024. The advantage of POSIX, though, is that it tries to keep up with the changes in interfaces, tastes, etc. So aiming for it is a useful "fuzzy cloud" of what's currently most likely to be portable to most modern (read: released in the last decade or less) systems. > It is unreasonable to constrain program authors to ancient versions of > standards just because some tiny fraction of people want to use an old > system. > Indeed. Ancient systems in general are best dealt with by some common sense build hacks. 
libposix can handle the missing functionality for people that care about these ancient systems, and "layered" include systems work for systems that are at least new enough to have #include_next (and #include "/usr/include/stdio.h" for those that don't). Pushing that job to a thousand package writers is a loser. I've done this for various older systems that I've dabbled with and it becomes a question of how much is enough... I do similar things to build a few Linux applications on FreeBSD w/o bothering the authors too much (I fix bugs that don't matter on Linux but that make my shim layer smaller, though). But that's mostly modern -> modern, so the C dialect is identical (enough), and the only troublesome interfaces are the Linux-specific ones, which I map to FreeBSD functionality. I've never formalized this since I only have a few I care about that are a bit resistant to accepting FreeBSD patches... > Consider 4.3BSD, for example: it shipped with a compiler that predated > the ANSI C standard, and doesn't understand the ANSI-style function > declaration syntax. Should one restrict oneself to the traditional C > dialect forever? If so, one loses out on the substantial benefits of > stronger type checking. Or consider better string handling functions > that came later (`snprintf` is an obvious example, but I would argue > `strlcpy` and `strlcat` as well). Should we restrict ourselves to > laborious and error-prone shenanigans with `strlen` and `strcpy` just > to keep code running on a Sun4c machine under SunOS 4? I really don't > think so. > Yea. > - Dan C. > > > > (Berkeley sockets couldn't be required because AT&T Streams. Oh, > > joy.) > Sockets are standardized these days in POSIX, though. IPv6 is optional, though if you support it, you have to support it with the interfaces defined. Same with Raw Sockets. 
But most of that's there (and been there since 2008 or earlier) > > [1] > https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap02.html#tag_02_02_01 > > > > Can you tell I'm a bit jaded and cynical about the whole Posix > > compliance/conformance thing? :-) > Yea, "just because POSIX says so" has been a terrible excuse for doing something silly. FreeBSD has long recognized it. However, in moderation, POSIX is a very good thing. Exact, pedantic, 100% conformance with no flexibility... isn't. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuhs at tuhs.org Tue Jun 25 01:23:17 2024 From: tuhs at tuhs.org (Chet Ramey via TUHS) Date: Mon, 24 Jun 2024 11:23:17 -0400 Subject: [TUHS] Building programs (Re: Version 256 of systemd boasts '42% less Unix philosophy' The Register In-Reply-To: <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> References: <87jzikt900.fsf@gmail.com> <877cej5gsp.fsf@gmail.com> <87le2xvo4y.fsf@gmail.com> <76644602-7257-4050-b625-050966280e1c@case.edu> <1d0f3aa8-af0c-1ee2-7625-7dc1a825c457@makerlisp.com> <6e98901c-16cd-9a46-105e-8e694353c666@riddermarkfarm.ca> Message-ID: <83434e69-792b-4284-bf3d-8f5e2cc49394@case.edu> On 6/23/24 4:15 PM, Stuff Received wrote: > My opinion is that the authors simply did not have access to other systems > or were not interested. Sometimes, one finds a disclaimer to that effect. > I understand that but I am irked when they claim POSIX compliance. These are not at all the same. "POSIX compliance," from an application's perspective, means that the application behaves the way POSIX says it should for the behaviors POSIX standardizes. That doesn't have much to do with the author's implementation choices or whether or not the application runs on arbitrary systems. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 203 bytes Desc: OpenPGP digital signature URL: From crossd at gmail.com Tue Jun 25 04:17:08 2024 From: crossd at gmail.com (Dan Cross) Date: Mon, 24 Jun 2024 14:17:08 -0400 Subject: [TUHS] Fwd: [multicians] Dennis Ritchie's 1993 Usenet posting "BTL Leaves Multics" In-Reply-To: <7wfrt2cr74.fsf@junk.nocrew.org> References: <7wfrt2cr74.fsf@junk.nocrew.org> Message-ID: On Mon, Jun 24, 2024 at 11:10 AM Lars Brinkhoff wrote: > Dan Cross wrote: > > I can't resist asking about the nugget buried in here about Ken > > writing a small kernel for the 645. Is that in the archives anywhere? > > As far as I understand, the code that is available is for the DPS-8M. > I have asked around for older code. I don't remember exactly, but > possibly 6180 era code might be around, but unfortunately not 645. This is, as I understand it, true for Multics. But I think that Dennis was referring to something that wasn't Multics, or only tangentially related, done by Ken; this would have definitely preceded the 6180 (Tom's link here: https://groups.google.com/g/alt.os.multics/c/1iHfrDJkyyE/m/Nhmar1sFRBgJ). To quote: > Ken even created a tiny kernel of a system for the 645, which actually printed out an equivalent of `hello world.' This is the thing I'm curious about. - Dan C. 
From douglas.mcilroy at dartmouth.edu Tue Jun 25 22:51:39 2024 From: douglas.mcilroy at dartmouth.edu (Douglas McIlroy) Date: Tue, 25 Jun 2024 08:51:39 -0400 Subject: [TUHS] Documenting a set of functions with -man Message-ID: > The lack of a monospaced font is, I suspect, due either to > physical limitations of the C/A/T phototypesetter[1] or fiscal > limitations--no budget in that department to buy photographic > plates for Courier. Since the C/A/T held only four fonts, there was no room for Courier. But when we moved beyond that typesetter, inertia kept the old ways. Finally, in v9, I introduced the fixed-width "literal font", L, in -man and said goodbye to boldface in synopses. By then, though, Research Unix was merely a local branch of the Unix evolutionary tree, so the literal-font gene never spread. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: