Ns System Admin Log

From DisNCord Community Wiki

NCommander (2022-11-13)

@Wiki Volunteer @N's Life currently thinking about system admin stuff, and SoylentNews, including Kerberos and Hesiod

NCommander (2022-11-13)

Biggest thought right now is that system administration is something of a lost art
By and large, you can get away with basically having a small team of sysadmins with a set of root passwords these days, especially with things like SaaS setups
One of the big aspects of SoylentNews was that user access was federated
I actually copied how both Ubuntu and Debian's infrastructure had been built, since it was essentially the same group of people
mcasadevall@helium:~$ id
uid=2500(mcasadevall) gid=2500(firefighters) groups=2500(firefighters),2501(sysops),2503(dev_team),2504(prod_access)
This is the ID line from my account on SN's backend
and uh, if you know anything about Linux, you know that's not what id output normally looks like
To understand what this means, I need to explain how SSH worked at SN
There was a single "people" box which handled logins directly; you could sign in with an SSH key, or with a Kerberos credential
Kerberos is what handles network authentication in Active Directory
Basically it's a way of being able to sign on to one place, then another, and then another, and not have to sign on again
And all credentials are stored in a single place, making it incredibly easy to do user add/remove
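The client-side OpenSSH pieces for that look roughly like this; a sketch, not SN's actual config, and the hostnames are made up:

```
# ~/.ssh/config -- Kerberos single sign-on for SSH
Host *.example.org
    GSSAPIAuthentication yes       # log in with your Kerberos ticket
    GSSAPIDelegateCredentials yes  # carry the ticket along for further hops
# Workflow: run kinit once to get a ticket from the KDC, then ssh
# from box to box in the realm without typing a password again.
```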
You still have to keep user accounts in a shared place
Originally, we used LDAP, but running LDAP on Linux is an incredible bit of pain
By and large, the idea of having a large UNIX or even Linux network deployment where you share credentials is becoming increasingly rare
The historical solution for the problem was NIS; but NIS has rather distinct pain points that make it difficult to use over the Internet
(NIS uses portmap like NFS does)
LDAP is, for want of a better word, a very overcomplicated nosql database
(LDAP predates the concept of NoSQL, but it's the easiest way I can summarize working w/ it)
(please ask questions if anyone has any)
To say we had problems with LDAP was very much an understatement
and OpenLDAP was designed by a possessed madman who, in a fit of rage, decided that all the other madmen who needed an LDAP solution for Linux that didn't require Java were going to suffer for all of time
The bloody thing stores its own network and TLS configuration inside its own database, which means you can accidentally misconfigure it and then not be able to reconfigure it again
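To illustrate that config-in-the-database design: in OpenLDAP's cn=config scheme, even pointing slapd at a TLS certificate is an LDIF modification against the live server (file paths here are hypothetical):

```
# ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif
dn: cn=config
changetype: modify
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ldap/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ldap/server.key
```

Get one attribute wrong and slapd can come back up unreachable, leaving you to edit its database offline.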
its command-line tools are horrid
and I really pray that if you have to work with LDAP, you have phpMyLDAP admin or similar somewhere nearby
brainfart there
I do know Debian and Canonical both used LDAP as well
We did replace LDAP with Hesiod, which is uh, a story
but I'll get to it
Every user account on SN had a set of groups that basically said what machines you could and couldn't log into
everyone had firefighters
That was the staff box; while rehash had some file upload capabilities, they had bitrotted out of the codebase when we got it up and running
so we had people.soylentnews.org and then each account was ~username which mapped to their own public_html folder
it basically became a littering ground for whatever staff needed to get hosted
Again, directly modeled on people.debian.org, and people.canonical.com
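That ~username mapping is just Apache's mod_userdir; a minimal sketch, not SN's actual vhost (names invented):

```
<VirtualHost *:80>
    ServerName people.example.org
    # http://people.example.org/~mcasadevall/ -> /home/mcasadevall/public_html/
    UserDir public_html
    <Directory "/home/*/public_html">
        Require all granted
    </Directory>
</VirtualHost>
```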
Nowadays, you have Dropbox and shit
And you did then too, but really, most people would basically upload to S3 or similar if you really needed an HTTP endpoint
Like that's an absolute lost art, just helping get files from one person to another; very much a living-off-the-land sorta thing
you know? I had that infrastructure setup in a week
Less than
I had gotten Slash mostly working in a VM in maybe a day?
Like, it wasn't easy, but I knew pretty much what I was doing from start to finish
I had done literal years of package management at this point

sadmac356 [emotional support mom friend] (2022-11-13)

I mean you modeled it on something you knew worked
And that

NCommander (2022-11-13)

Well, yeah, I knew it was possible
Most Linux sysadmin is focused around server ops; it might talk about access.conf in the notes, but
like how many people actually know that file exists
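For anyone who hasn't met it: access.conf is pam_access's rule file, and it's how you gate logins by group the way SN's machine groups worked. A hypothetical fragment, reusing the group names from the id output above:

```
# /etc/security/access.conf -- read by pam_access
# permission : users/groups : origins
+ : (sysops)   : ALL                # admins can log in from anywhere
+ : (dev_team) : ALL
- : ALL        : ALL EXCEPT LOCAL   # everyone else: console only
```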
@BlackCoffeeDrinker (recounting here)
I didn't really think about that at the time, I had assumed what I saw was standard
But, how many organizations really dogfood to that level
I can't speak to Red Hat, although I assume being part of IBM, they're almost certainly on Active Directory
At Canonical, there was an expectation to dogfood
Uh, context: dogfooding in industry is that you use your own products to do something
i.e., if you build an program to send emails, you should be using it to send emails
Canonical was very much taking on the Linux Desktop battle
You can use RHEL, or CentOS as a daily driver, sure, but who really does?
By and large, the GUI is basically the same as CDE in modern AIX: no one is going to ever see it except those who have legacy apps
OpenCDE users are basically folks who either need it for something, or just love seeing CDE (I personally think it's a pretty awesome DE, although dated)
returns
There are probably a few places that still operate at that level, likely holdovers of the Sun era. SDF.org being probably the only other one left?
Back in the day, there were very specific types of service providers called shell accounts
Basically, you'd dial a dedicated number, and get a restricted login prompt
sh -r exists for this reason
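You can still get a feel for it with bash's restricted mode, which behaves along the same lines as the historical sh -r; a runnable sketch:

```shell
#!/bin/sh
# Probe what a restricted shell refuses: changing directory, changing
# PATH, running commands by absolute path, and redirecting to files.
for attempt in 'cd /tmp' 'PATH=/tmp' '/bin/ls' 'echo hi > /tmp/f'; do
    if bash -r -c "$attempt" >/dev/null 2>&1; then
        echo "allowed: $attempt"
    else
        echo "refused: $attempt"
    fi
done
bash -r -c 'echo plain commands still work'
```

The probes come back refused; the provider decided what was on your PATH, and that was all you got.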

sadmac356 [emotional support mom friend] (2022-11-13)

That makes sense

NCommander (2022-11-13)

You see it in scripting, but by and large, this feature was designed for interactive use
The very lucky few got unrestricted accounts
Which could do anything a non-root user could
My dad worked as a researcher at Albert Einstein College of Medicine; and well, I was a kid, probably no more than 7-8?, and he gave me the dial-in number for his workplace
and uh, I knew his password. Like he knew I was doing it
His account could access a machine called brenner
It was an SGI running IRIX, I believe. Had to be IRIX 4 or 5
I think there was a system announcement that GCC or something was available?
Either way, early 90s man
Different era
Like, my dad probably assumed that I couldn't actually do anything w/ it
Anyway, I learned to use the system because I had read "UNIX for Dummies"
... I read a lot of reference books
like jfc, I used to consider it a success story if we went to the book store as a kid and I walked away with a book that had a pack-in CD

sadmac356 [emotional support mom friend] (2022-11-13)

Mood though…

NCommander (2022-11-13)

like Windows 95 Resource Kit

sadmac356 [emotional support mom friend] (2022-11-13)

I just read a lot in general though

NCommander (2022-11-13)

There was no always-on Internet, or VPSes, or emulators back then
Sleep disorder thing for me; not a lot you can do as an 8-year-old at 1am when you can't sleep

sadmac356 [emotional support mom friend] (2022-11-13)

Yeah that's fair

NCommander (2022-11-13)

Like they tried
but meh
I remember reading about mail, elm, and pine in that book
I had seen and even used pine; others at my dad's workplace used it via telnet from Windows
https://en.wikipedia.org/wiki/Pine_(email_client)
That had to be one of the last of its kind
Like normally, at his office, he was using Eudora mail over POP3
Exchange was always MAPI based
I think it gained IMAP support at some point? but it was very much a proprietary, must-use-Outlook thing at that point
or well, MS Mail for that era
Well, that makes sense, Dad was an academic, he basically was with an organization that was probably part of CSNET
Or had an indirect connection to the original ARPANET
... it was entirely possible that it was a UUCP site at one point. He had a T1 connection in his office for the building ...
In an era where 56k was nice
I think that's by and large why I'm always networking my old PCs
I even have a network card, a real NE1000, for the Compaq Portable
As far as I know, RetroSpectator27 is the only one who really does on camera, and I don't think he does it regularly
I mean, that ThinkPad 380D I have on camera is the one from when I was a kid
And it never had an ethernet card until I started YouTubing
I did get a home networking kit as a gift at some point; I can't remember what I networked
it had a 10Mbps hub
I'm getting lost in my own memories
But ... even in retro computing how often do you setup full actual networking?
Even I don't setup NIS, despite having all the hardware and enough machines to do it
I guess that's why I say its a lost art
Kubernetes is essentially an extension of Ansible and Puppet
Modern cloud based systems basically script rebuilding and deploying machines because cost has come to the point that you can easily afford to throw hardware at the problem for basically peanuts
It's not hard to justify throwing money at Amazon, Logstash, and such, and literally just deploy things like an HTTP fleet in a few commands
Instead of having a box run multiple roles
That's largely why devops replaced sysadmin as a role
Even for customers that want on-premise stuff, it's basically handled through IT; it's something you're expected to do as part of another role, and by and large, IT departments don't really tend to isolate access much
Like, at a nation state, or Fortune 500 level? Oh hell yes, but that's almost universally Active Directory these days
Like, unless you were a Mac-only shop, which does exist, it's going to be AD
And Apple's own "macOS Server" offerings are pretty weak these days
But going back to SoylentNews
mcasadevall@helium:~$ id
uid=2500(mcasadevall) gid=2500(firefighters) groups=2500(firefighters),2501(sysops),2503(dev_team),2504(prod_access)
Each group represents a set of machines you could access
All users were in firefighters; it was my take on Canonical's internal warthogs id; I think Debian just used people
sysops was essentially the domain admin of the network; you could go to any machine and sudo
sudoers was set to look at each of these groups, and use that to determine if you need to elevate
prod_access granted access to the web frontends, and to parts of the rehash infrastructure
dev_team was root access on lithium
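The sudoers side of that pattern looks roughly like this; reconstructed from memory of the idea, not the actual SN file (hostnames partly invented):

```
# /etc/sudoers fragment: %group rules key off the Unix groups above
Host_Alias  DEVBOX = lithium
Host_Alias  PROD   = helium, web1, web2

%sysops      ALL    = (ALL) ALL        # domain admins: sudo anywhere
%dev_team    DEVBOX = (ALL) ALL        # root, but only on the dev VM
%prod_access PROD   = (www-data) ALL   # act as the web user in prod
```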
GitHub was a thing in those days, but rehash was so difficult to set up, it was easier to have a single VM to share
That's actually not uncommon in kernel engineering; Canonical used to have a way for the public to apply for a machine account to participate in kernel development
The Linux kernel is basically the single most collaboratively developed piece of software on the planet
since it is the common thread to everything
GitHub has better PRs and such, but PRs don't work as well if you want to do pair programming in a single repo across the Internet
which does happen w/ the kernel
It's probably less common now that Linus uses GitHub, but originally, the master Linux tree was on kernel.org
So you had to git clone from Linus's master, create local forks, and then if necessary, submit for inclusion upstream
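That loop can be sketched end-to-end with a throwaway local repo standing in for Linus's tree (repo names and commit subjects invented for illustration):

```shell
#!/bin/sh
# Sketch of the pre-GitHub kernel flow: clone, branch locally, then
# mail the change to a maintainer as a patch.
set -e
git init -q demo && cd demo
git config user.email you@example.org
git config user.name "You"
echo 'obj-y += widget.o' > Makefile
git add Makefile
git commit -qm 'build: initial tree'           # stands in for Linus's master
git checkout -qb widget-debug                  # your local fork of the work
echo '# CONFIG_WIDGET_DEBUG=y' >> Makefile
git commit -qam 'widget: enable debug output'
# Turn the branch tip into an emailable patch for the subsystem
# maintainer (in real life you would git send-email it to a list):
git format-patch -1
ls 0001-*.patch
```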
Upstreaming means taking a feature for Linux from an organization like Canonical, and then submitting it for inclusion in Linus's tree
Usually, you don't apply to Linus directly
That's rare
Instead, each part of the Linux kernel has a specific maintainer, some are small, some are big
For instance, whoever submits an individual driver is the maintainer for that driver, see: https://github.com/torvalds/linux/blob/master/MAINTAINERS
This can continue up to large parts; for instance, Russell King is the head of all ARM ports
By and large, how code gets into Linus's branches is by going through one of the maintainers
and then each maintainer merges with Linus
I never landed code on the main tree; I usually was in the bootloader and infrastructure part of the stack
I have worked on RedBoot, u-boot, and TianoCore
and wrote one of, if not the first ports of Tiano to KVM
TianoCore, incidentally, is what Intel called the codebase for its reference implementation of UEFI
like, Tiano is a fucking beast, and very early boot is ooga booga
but that's what git was very much adopted for; to allow everyone to collaborate
System engineering is a pretty mature field at this point; by and large, Windows as a platform doesn't have the staying power it once did
For the most part, the Win32 API is becoming an increasing liability, since web is an open standard
Back in the 90s, and 00s, when computing and bandwidth were so limited, native apps were considered superior to web apps
Yahoo was probably the first real exception to this, but it's always been a mark of pride to not have a generic domain.
Everyone offered mail; it was the main reason to get online, and your email address was always username@domain.com
CompuServe used to use numbers; it was like 11075521@compuserve.com, while older accounts had a 7 in front of it, part of the TOPS infrastructure they used

BlackCoffeeDrinker (2022-11-13)

I just checked, what I committed to the kernel has been erased 😦 oh well, it was a weird ARM SBC that probably 5 ppl used

NCommander (2022-11-13)

By and large, early forums used to show your email or email domain
and the absolute worst questions came from @aol.com
The Eternal September basically hit USENET like a bomb
That was before my time (I was 3, and honestly, I was probably too young to be understanding all of this but ...)

BlackCoffeeDrinker (2022-11-13)

The vast majority of AOL members did not understand the internet or computers

NCommander (2022-11-13)

Those who fled from AOL users mostly went to mailing lists, which basically kept the old USENET culture alive
Basically all web standards, and old software development, is on a mailing list of some sort
Like
I have to wonder what the total number of words sent and received through LISTSERV and Mailman probably add up to
It has to be a stupidly large percentage of all printed human text
The domains changed over time, but basically, in different eras, there were domains you'd watch out for
Yahoo was regarded as worse than AOL, because it was basically the first free webmail. Everyone got an address from their ISP, but accessing that on the go was kinda a pain, and Yahoo, then a web directory (since the Internet was small enough you could make a directory of all the endpoints), managed to get a leg up on the competition
There was Lycos, AltaVista, and a few others
They offered webmail as a response to Yahoo offering it
It was a very clunky thing. We didn't have reactive web back then, and loading webpages was pretty slow, even in the text based days
They might have supported POP3, but TBH, I don't remember
Like ... you know, the more I think about it, the less you really need the command line in web development
It's still very common, but I actually have to wonder offhand how many people would be able to simply do an in-place reinstall of Ubuntu like I did on stream; I basically speedran that shit
The trend in recent years is automate, automate, automate
SourceForge, and Savannah (the latter I was involved with), were all sysadmin stuff on Linux
Basically, to run those things, you had to set it up, configure it, with multiple user accounts, and so forth
Huh, weird thought: I realize a lot of this is that I loved setting up these types of systems. When I was a kid, I used to set up the copy of Sendmail included with Mac OS X, and use it to listen to the system notifications, and even send a couple of emails on the web
Spam wasn't as much of a problem as it is now; it wasn't uncommon for people to run their own mailservers. Originally, everyone used port 25
For SMTP
(SMTP is the thing that makes email go, it's as critical to the Internet as DNS is)
Furthermore, we hadn't run out of Internet
IPv4 exhaustion hadn't occurred yet
And realistically, most schools that had fixed lines were likely assigned class A or Bs in this era
Maybe Cs? RFC 1918 wasn't a big thing
so if you ran a service on a laptop, you could actually get a real IP address
even if you were on wifi
This meant if you could figure out how to run sendmail, you were your own postmaster
Spam filtering was very much not a thing in those days, so if you just made up a domain name, by and large, it would get delivered
It was very cool seeing an email I typed out with mail show up in my ISP email
But that actually became a mark of competence. By and large, if you're going to be active on a mailing list, you're going to use a dedicated reader
How dedicated determines how much of your life you're going to lose
laughs nervously
Thunderbird, but also mutt, will show the send path
So if I have the domain restless.systems, but Gmail is hosting my email, it might say
NCommander <n@restless.systems> received from: googlemail.com
so ... well, I don't know if it was just me, but I used to notice things, and it did say quite a bit about the user who wrote them
There were actual plugins for Thunderbird that would add an icon next to the email showing what client they used
i.e., Mail.app, Thunderbird for {PLATFORM}, etc, since every email typically includes the mail client in the header
(it's a de facto part of the standard; since everyone broadcasts a user agent, it's expected to be there, and a lot of spam didn't)
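Roughly what those plugins were reading, as a raw header block (hosts, IP, and date invented for illustration):

```
Received: from mail.example-isp.net (mail.example-isp.net [192.0.2.7])
        by mx.restless.systems; Sun, 13 Nov 2022 21:04:18 +0000
From: NCommander <n@restless.systems>
User-Agent: Mozilla Thunderbird
```

Older clients advertised themselves in an X-Mailer: header instead, but either way the client names itself, and the Received: chain shows which servers actually relayed the message.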
I know when I really had to do power mailing list discussion, I would use alpine or mutt
Basically, when you have to argue a standard, or other long discussion via email, threading is an immensely useful feature
But threading is only useful on mailing lists; if you have a long-running email exchange between two people, that's just an ever-growing line, which is why Gmail doesn't thread.
nor is threading the default even on clients that support it
So, what happens is how serious you are about email is going to show in your client
GMail has basically destroyed this, but for a long time, you would never survive without a proper client on a high-traffic mailing list that got 50-70 emails per day and expected you to participate daily
The absolutely dedicated ran offlineimap or ran their own mailserver
The basic standard for remote email is IMAP4, and let me tell you, it sucks.
You have to wait for emails to load, or download, which can be a major problem when you're collectively getting a few thousand, and a few hundred you might need to actually thumb through
This is why fetchmail, offlineimap, and procmail exist
These let you dump your entire mailbox in a single go
you could then use any mail client of your choice locally
Either by running UW-IMAP locally, or with a client that could just read mbox format
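An offlineimap sketch of that setup, mirroring a remote IMAP account into a local Maildir (host and account names hypothetical):

```
# ~/.offlineimaprc
[general]
accounts = lists

[Account lists]
localrepository = lists-local
remoterepository = lists-remote

[Repository lists-local]
type = Maildir
localfolders = ~/Mail/lists

[Repository lists-remote]
type = IMAP
remotehost = imap.example.org
remoteuser = n
ssl = yes
```

Run it from cron, point mutt or alpine at ~/Mail/lists, and the client never waits on the network.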
It was possible to do this on Windows, but by and large, I remember doing this on Mac OS X
Windows and Mac clients did have automatic downloads, and it did get better over time. Thunderbird was pretty much a universal step forward
It was based on the old Netscape Mail client, which was a very solid workhorse, one of the first graphical mail readers, and it basically decided to match the old command-line readers of old.
@N's Life @Wiki Volunteer Right now, talking about email clients, I think this is all going to need to be written up under multiple pages, but probably "system administration", "email culture" and more. Probably worth talking about how it was all done.
... oh right, it pulls everyone into the thread. That's probably a feature

sadmac356 [emotional support mom friend] (2022-11-13)

Probably yeah

amarioguy (2022-11-13)

You’re good in my view lol I’ll try to update the wiki with whatever insights I can - the issue is life has been particularly ridiculous wrt schedule

NCommander (2022-11-13)

@amarioguy don't worry about it, it's mostly that I don't think anyone has ever written any of this out before

mineman [mineMAN] (2022-11-13)

Avengers, assemble

NCommander (2022-11-13)

But basically, the big advantage to having your mailbox downloaded was speed
The mail client would never stutter
You'd never have to wait for it to load from server, and you could use it when you were offline; hotspots weren't universal
There was an entire cottage industry to this actually
Like, AOL used to have FlashMail
Which would let you download all your email from their servers, and then read it in the client whenever you wanted
You could then write it offline, and schedule it to go next time you dialed in
since most households had only one phone line
it was often tied up by the computer
... or at least in my household
I was usually up late cause sleep disorder, so I pretty much had free rein of the phone after 9PM, unlimited local calling 🙂
.... oh god, I just realized that the vast majority who read this won't even know what that means ...
gah
Ok, phone numbers under the North American Numbering Plan have a specific format
1, and area code, and then a phone number
The phone number is split into the exchange, and the subscriber number
so 1-555-212-5151 has a country code of 1, an area code of 555, an exchange of 212, and subscriber number 5151
By the 90s, this basically boiled down to local and long distance
Numbers within the same area code were generally free at this point, due to deregulation of the Bell System
For example, the area code 212 covered all of New York, while 914 was the area code for Westchester
917 was the area code for New York cellphone numbers
Phone calls were charged by distance; calls within your own area code or nearby ones were local (such as within the New York City area, which at that point had area codes 212, 917, and 718)
AOL became immensely popular because they had a stupid amount of local phone numbers; you could basically call AOL without having to pay a long-distance fee
It, and Prodigy basically were the first ISPs that were basically free to make a phonecall to
Instead, the ISP picked up most of the tab with points-of-presence circuits licensed from your local Bell, allowing them to run a phone array
This basically made going online only cost the subscription fee, and AOL was relatively cheap
20/USD per month for all you could use. And AOL federated with USENET
Hence, the Eternal September
Actually, this is an important bit of lore
Which I don't think I've seen documented
After Eternal September, a lot of organizations ran private newsservers
Basically, there were two ways to approach handling discussions. You could either use mailing list software, like LISTSERV or Mailman, or you could host an independent news site with something like INN or C-News
This was basically the continuation of USENET past Eternal September, primarily used and run by those who were deeply involved with high-volume discussions like standards
The two I actually remember using were news.microsoft.com and news.mozilla.org; the latter I think is still up
Microsoft Newsgroups were basically the early Web's version of Windows Questions pages

NCommander (2022-11-13)

but holy hell
I don't remember the last time I saw a framed website on the open web
@BlackCoffeeDrinker when did frames finally go extinct? 2005ish?
Geeklog didn't use frames

BlackCoffeeDrinker (2022-11-13)

CSS killed it

NCommander (2022-11-13)

Yeah but I can't say when that would have been popular
No, it had to be earlier
They were basically gone by Netscape Communicator
We didn't have tabbed browsing yet
Or it wasn't common

BlackCoffeeDrinker (2022-11-13)

Opera was kinda new
Yea

NCommander (2022-11-13)

I remember being so happy to get rid of popover ads by going into Netscape's advanced settings and unchecking the permission for JavaScript apps to do that

BlackCoffeeDrinker (2022-11-13)

Oh god yes
I did that too

NCommander (2022-11-13)

I think Firefox was the first one to have tabs? That was commonly adopted
iCab had tabs, but it was a shitty renderer
Internet Explorer for Mac was by far the best browser >.>;

BlackCoffeeDrinker (2022-11-13)

😂
It was good

NCommander (2022-11-13)

Tasman do things
... oh god Trident
I mean, I referenced Trident in IE for UNIX
Since VBScript worked on Solaris/IE
But MSHTML.DLL be the devil

BlackCoffeeDrinker (2022-11-13)

Beginning 2000s was when frames went out of date

Krabs [enolAHPLA] (2022-11-13)

icab still exists, btw

NCommander (2022-11-13)

yeah but this was Mac OS 8/9 era
So the main problem with Netscape was it was bloated
Like, it was really bloated

BlackCoffeeDrinker (2022-11-13)

Very bloated

Krabs [enolAHPLA] (2022-11-13)

oh I know. I just find it funny it survived long enough to have 68k, powerpc, intel, and apple silicon builds

BlackCoffeeDrinker (2022-11-13)

It had a wysiwyg html editor

NCommander (2022-11-13)

Microsoft had coded MSHTML to do things like render webpages in Office documents, and that formed Internet Explorer 3
the first two were basically rebadged Spyglass Mosaic
So Microsoft specific HTML code used to crop up everywhere
Then MS shipped Internet Explorer with Windows starting with Win98
so MSHTML was basically always in memory
Since Active Desktop required it

BlackCoffeeDrinker (2022-11-13)

Explorer did need it too

NCommander (2022-11-13)

Well, the 98 explorer

BlackCoffeeDrinker (2022-11-13)

The whole display as webpage

NCommander (2022-11-13)

or if you install Active Desktop from IE4 into NT4

BlackCoffeeDrinker (2022-11-13)

Yes

NCommander (2022-11-13)

MSHTML is a COM object
you could pull it into anything
and you could mix THAT with Visual Basic to do a very quick and dirty kiosk system
God those were everywhere
IExplore object making shit easier
Microsoft was actually sued and convicted on this
It was what got them declared a monopoly
Embrace, extend, extinguish
but basically, Netscape was a .com boom survivor
By and large, people saw the need that Microsoft not be uncontested on the web
And folks in Netscape, probably as a F-U to Microsoft, made sure the codebase to the last versions, Netscape Communicator, was released as Free and Open Source Software
that's how Mozilla got going
Mostly held together by Netscape employees
and Netscape is a big name
Netscape pioneered basically everything we consider the modern Internet
If you wanted the biggest and baddest web experience, you ran Netscape

BlackCoffeeDrinker (2022-11-13)

If you could

NCommander (2022-11-13)

that was a huge selling point, that Netscape was the browser of the UNIXes
if you were a power user, you'd have a home computer that could run it

BlackCoffeeDrinker (2022-11-13)

Netscape Communicator was easy to crash

NCommander (2022-11-13)

So was Internet Explorer
I remember both of them going down in flames
it often took the system with them

BlackCoffeeDrinker (2022-11-13)

Yeah. But Netscape 3 was much more stable

NCommander (2022-11-13)

yeah, but Netscape 3 still is in the right time period for UNIX to be a valid selling point
I mean, Microsoft felt the need to have IE on UNIX
They might have had a WISE company help, but you saw it in the video, that was a polished product
With Motif integration even
UNIX really didn't stop being relevant until after the .com boom
the IBM RS/6000 43p was basically a viable product for that segment of the market

BlackCoffeeDrinker (2022-11-13)

Desktop Unix you mean

NCommander (2022-11-13)

Yeah
Really, RHEL1 was the flag day
by RHEL3, desktop UNIX was gone, and it was disappearing fast off Netcraft
Even earlier actually, RHL was the first desktop Linux I considered usable, but really it was anything that ran GNOME
or KDE
By and large, Linux was viable as a network kernel even by 95-96
SDF was on NetBSD throughout all the 90s
the BSDs were basically on par if not better than Linux on the desktop right up until the mid 2000s
Ubuntu was pretty much the peak of the Linux desktop experience for a long ass time, until it got displaced by Mint
and Pop!OS
but basically, over the 2000s, there was a steady move away from manual system administration
A lot of companies basically would ship a .NET app that would tie into Windows' IIS, or would ship JBoss or Tomcat as an installation app
With a premade config
Add the entry to Active Directory, and you're done
System administration became less and less important
Especially in the era of Bring Your Own Device

BlackCoffeeDrinker (2022-11-13)

BYOD is another big topic

NCommander (2022-11-13)

Originally, you'd basically have to compile and set up this shit yourself, or trust your vendor to have it. Solaris was generally adored for this because Sun was the platform for Java
it's what kept them relevant

BlackCoffeeDrinker (2022-11-13)

This style of administration is probably due to regulations and security

NCommander (2022-11-13)

Essentially
Hardware got to the point you could throw more at the problem for a fraction of the cost
and by and large, internal network standards are not anything near what eCommerce needs
That's part of why Launchpad was such a joke
`waiting for launchpad.net ...` was a meme
but since Canonical developed Ubuntu on Launchpad
A lot of what got Canonical's foot in the door was that Mark hired away a bunch of the core Debian Developers (and Mark was a DD himself) right from the word go
that was in 2004
There was also the race to create the next major development tool
By and large, software development was focused around CVS, and Subversion
CVS was all degrees of awful
It was basically a bunch of hacks built on top of an older source control system called RCS
RCS was one of the earliest version control systems (a la git) for UNIX; it was common on BSD systems
AT&T had their own, the Source Code Control System, or SCCS
I'm reminded I need to refix my backups
Ugh
Anyway
CVS was basically a networked version of RCS
And was written for local area networks, similar to Visual SourceSafe
Originally, source control was never something made public; it was done on shared boxes, the same way zinc (the kernel development box for Ubuntu kernel developers) was open to the public
You'd generally apply, and may or may not get in
NetBSD was the first to create what was known as pservers
or public CVS
Which let anyone see the code as it was being developed and comment on it
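For reference, a pserver checkout looked like this (host and module invented; the BSD projects ran theirs under an anoncvs user):

```
$ cvs -d :pserver:anoncvs@anoncvs.example.org:/cvs login
$ cvs -d :pserver:anoncvs@anoncvs.example.org:/cvs checkout src
```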
No
wait
I'm getting this wrong
holy shit, there's no page on Wikipedia about pservers
What the hell?
@BlackCoffeeDrinker holy shit do you remember this?
The Internet has seemingly forgotten the Net/OpenBSD pservers
WHAT THE HELL?!

BlackCoffeeDrinker (2022-11-13)

Vaguely
Anoncvs of openbsd should have it

NCommander (2022-11-13)

No, I mean which one was first
that was why the project became famous

BlackCoffeeDrinker (2022-11-13)

I remember it from openbsd but… that doesn’t mean much

NCommander (2022-11-13)

Well, OpenBSD came from making the development process famous
Ok, I remember this now
Originally, when you wanted to be part of the Free Software Foundation, or a GNU thing, you needed access to their machines; usually through CSNET, or the early Internet

mineman [mineMAN] (2022-11-13)

Now I have the avengers theme stuck in my head…

NCommander (2022-11-13)

This was fencepost for the GNU stuff
It was a machine with CVS access and shared files, plus the FSF offered free mailing lists and more if you were a GNU project
It's what very much made them relevant
By and large, code repos weren't open

BlackCoffeeDrinker (2022-11-13)

I thought gcc was their claim-to-fame

NCommander (2022-11-13)

The FSF didn't publish in-development source code; you'd just get the tarballs off ftp.gnu.org

NCommander (2022-11-13)

they provided hosting for everything for the "GNU operating system"
It's basically why they had enough people in one place to actually sit and write a retargetable compiler

BlackCoffeeDrinker (2022-11-13)

Ahh right

NCommander (2022-11-13)

Like, the FSF was Stallman's vision for a free (as in speech) computer
directly as an outgrowth of the MIT AI Lab, part of the culture involving ITS (hi @suzuran)
(this is basically me recounting history as I remember it, since a lot of things have been lost)
Stallman formed the FSF, and basically couch surfed his way to making it work
it's why he's so incredibly well known
and the FSF has cult like zeal in certain circles
Now, to be perfectly fair, the FSF very much is why we have a non-commercial alternative to Windows
and it was the non-commercial alternative to UNIX

BlackCoffeeDrinker (2022-11-13)

Or even to old Unix
Yeah.

NCommander (2022-11-13)

Yeah, like GCC fucking sucked in its early versions

BlackCoffeeDrinker (2022-11-13)

Everything was proprietary

NCommander (2022-11-13)

But it was very accepting of code, and by and large, you could assume other people had GCC access
DJGPP goes brrr
like DJGPP was very much the poor man's development tool on Windows
You only used it if you were desperate
or even DOS
But fencepost was basically the mold that people.debian.org/people.canonical.com/people.soylentnews.org were formed in
Very much an artifact of a bygone era
... correction ... fencepost might have been the mailing list, but I think the term has been used in other places to refer to developer machines the FSF gave access to
It was also common to have porting machines, Debian Developers would offer remote access to anyone who was willing to work on a given port
I got my start in Linux development by contributing to the Motorola 680x0 port, even though I owned no hardware at the time
But I also proved that building packages in an emulator was viable
Full computer emulation, by and large, was not an advanced field at the time
MAME was a thing, but the overall quality was very low
I mean, Virtual PC was a commercial product for full PC emulation on Mac
There were others
Debian-m68k was very much the last real attempt to keep anything going on those clunkers
It was already becoming a meme, and I think for those of us who were left, it was kinda an exercise on how much hardware we were going to throw at the problem
cause the beautiful thing about building a Linux distribution is its a highly distributable problem.
The usual standard for Debian buildds was two of each type, with i386, the most popular port, having a few more
amd64 had two, powerpc had two, mips had two
ARM was an exception, they had 4 or 5
Since Debian can't be cross-compiled
(it's gotten better, but by and large, you're going to suffer if you cross-compile Linux)
@sirocyl -^ you might want to weigh in on the state of 2000s emulation, you probably remember it better than I do
Anyway, for m68k we had basically 12-16 real hardware machines thrown at it
and we're dealing with week long build times
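The "highly distributable problem" above can be sketched as a toy wanna-build-style coordinator dealing pending source packages out across build machines. This is a simplified illustration, not Debian's actual wanna-build logic, and the machine and package names are invented:

```python
from collections import deque

def assign_builds(packages, buildds):
    """Round-robin pending source packages across build machines,
    the way a wanna-build-style coordinator hands out work."""
    queue = deque(packages)
    assignments = {name: [] for name in buildds}
    while queue:
        for name in buildds:
            if not queue:
                break
            assignments[name].append(queue.popleft())
    return assignments

# Invented machine names and package list, purely illustrative
work = assign_builds(["glibc", "gcc", "perl", "apache", "mutt"],
                     ["m68k-box1", "m68k-box2"])
print(work["m68k-box1"])  # ['glibc', 'perl', 'mutt']
```

In practice wanna-build tracked per-package state (needs-build, building, uploaded) rather than dealing out a static list, but the shape of the problem is the same: any idle box can take any pending package.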

sirocyl (2022-11-13)

I have more 2000s emulation in my left nesticle than some of us do in our whole life.

NCommander (2022-11-13)

god I hate that I get that joke

sirocyl (2022-11-13)

lmao

NCommander (2022-11-13)

It was just an absurd thing
I wanted to be a Debian Developer, and I felt that getting involved with debian-m68k was basically an easy way in; there were basically four people doing it

Krabs [enolAHPLA] (2022-11-13)

If I were making a pioneering emulator everyone would have to use for a while, you bet your ass the icon would be a pair of hairy balls
I'm like 8 mentally

sirocyl (2022-11-13)

but yeah, if we're talking systems emulation and not lol roms nintendo shit - then yeah, emulation in general was pisspoor until about 2007

NCommander (2022-11-13)

Only problem is I had no m68k hardware
at all
So, being broke college student
I just installed Aranym
which is an Atari TOS emulator
You can run Linux in Aranym
Thus I could run Debian-m68k in emulation on an old PC
... and some not so old
I wasn't a Debian Developer, but the way Debian uploads work is that someone has to manually sign them with a GPG key into the archive
(this is how it worked at the time)
It was allowed that a buildd could be run by someone who wasn't a DD
(this probably is still allowed TBH, its just immensely uncommon)
(context: Debian uses a tool known as sbuild to compile packages for upload, while buildd handled supervising; it had to connect to what is known as the wanna-build daemon, which coordinates the whole thing)
Canonical modernized it with Launchpad Soyuz, although only Ubuntu has ever used that
But that's how I eventually became a DD, since I was contributing to Debian
I eventually started getting involved with Debian ARM since the NSLU2 was popular at that time, and you could probably find early commit messages from me in the uNSLug repo
Which got replaced with BitBake
Which then forked five times
I think yocto is descended from that mess
Maybe it was buildroot

Krabs [enolAHPLA] (2022-11-13)

Yes, yocto uses bitbake
Well, openembedded does, and yocto uses that

NCommander (2022-11-13)

Its a very messy lineage involving inbreeding
I used those tools when they were first made, and even I couldn't tell you
What I vaguely remember is basically everyone was using custom Makefiles

Krabs [enolAHPLA] (2022-11-13)

I just give it a kconfig and don't look at the horrors beneath

NCommander (2022-11-13)

Or forks of the uclibc/ugcc stuff
There was nothing worse than a uclibc embedded device
if you wanted to make it do something beyond route packets
You wanted something that could run the full, bloated glibc
The largest problem is uclibc didn't support C++
You needed GCC
(this is incidentally why you never saw shit like dnsmasq; it being even viable on embedded is a relatively recent invention)
uclibc also was missing things like unicode support, so while you could get Samba to run against it
You'd likely not enjoy the end results
Debian became rather popular in the embedded space because 1. it had an ARM port
the m68k effort very much pioneered the concept of slow compilation
oh god did we
oh, remind me to recount the mailserver horror from my college days
hint: I had to configure sendmail (or maybe postfix) to run on a network that would ban you for being on port 25, before VPSes existed

BlackCoffeeDrinker (2022-11-13)

I remember when you could have port 25 opened on your home line

NCommander (2022-11-13)

I remember doing a happy dance when I got sendmail to send and receive from the open web from my laptop at school

BlackCoffeeDrinker (2022-11-13)

😂

NCommander (2022-11-13)

sendmail's config eats children
Weaponized M4
I have enough heartburn with postfix
sendmail was worse in every way
Anyway, I actually got involved with dak
The Debian Account Manager was very slow at approving accounts
oh holy shit
am I still in there
 % grep NCommander -r *
dak/update_db.py:# <NCommander> Ganneff, after you make my debian account
Holy shit I am
https://github.com/Debian/dak/blob/master/dak/update_db.py#L6
I see myself
# <Ganneff> when do you have it written?
# <NCommander> Ganneff, after you make my debian account
# <Ganneff> blackmail wont work
# <NCommander> damn it
https://github.com/Debian/dak/blob/master/daklib/config.py#L25
also there
Yeah, so I got kinda impatient dealing with the DAM making my account
And I really was fascinated by the entire process of building multiarch stuff
Cause of Debian-m68k
I actually went through the trouble of setting up dak
Which is basically the bit of magic software that powers the Debian archive. It's this entirely bespoke tool
That's tied to debian's backend
holy shit, I just realized I'm probably in the commit logs
that would have been the git send-email era
There wasn't github or the like

BlackCoffeeDrinker (2022-11-13)

Send patch
The pre pr

NCommander (2022-11-13)

did I just cause a PTSD flashback?
send-email was the reason to run your own mailserver
You're still expected to use it in many places

BlackCoffeeDrinker (2022-11-13)

Send patch
Arghh
Aaaahhhhrhrhrhgrrh

NCommander (2022-11-13)

^- proof that git ruins lives
Having the ability to host a git repo was rare
there were a couple of free git providers, but they pretty much all sucked
and couldn't handle something like the kernel, cause it was too fat
The solution was git send-email
If you set up a local mailserver, git could use the system mail command and spray a patch series to a mailing list
This is why mutt was so popular
(I preferred alpine, but there were times I had to use mutt)
You could send an entire thread through | git am, and replay the other guy's branch
and since git was decentralized, you could just keep it locally
It was astonishingly awesome at that
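The thread-replay flow can be sketched in miniature: a mailed series arrives in whatever order, and the "[PATCH n/m]" subject counters let you put it back in commit order before piping it through git am. A hypothetical sketch (the subjects are invented):

```python
import re

def order_patch_thread(mails):
    """Sort a mailed patch series by the n in '[PATCH n/m]' so the
    whole thread can be replayed in commit order, which is the order
    `git am` expects to apply it in."""
    def key(raw):
        m = re.search(r"\[PATCH (\d+)/\d+\]", raw)
        return int(m.group(1)) if m else 0
    return sorted(mails, key=key)

thread = [  # invented subjects, purely illustrative
    "Subject: [PATCH 2/2] fix build\n\n...",
    "Subject: [PATCH 1/2] add feature\n\n...",
]
print(order_patch_thread(thread)[0].splitlines()[0])
# Subject: [PATCH 1/2] add feature
```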
Canonical tried to compete with bzr and Launchpad; free code hosting if you used bazaar, and Atlassian did the same for hg

BlackCoffeeDrinker (2022-11-13)

And we know how that went

NCommander (2022-11-13)

git became usable
I remember having to use cogito, git was horrid
But git was an emergency replacement for bitkeeper
(oh look, we're back on VCS)
commit f71ac27c75a8ab5185508491e97bc6f237772aa6
Merge: aa0907e8 2288ab34
Author: Mike O'Connor <stew@vireo.org>
Date: Tue Jan 27 21:09:57 2009 -0500
Merge branch 'content_generation' of http://kernel.ubuntu.com/~mcasadevall/dak into content_generation
Conflicts:
dak/dakdb/update2.py
dak/update_db.py
This updates the "required_schema" to 4, moving mcasadevall's update2 to
update4
Signed-off-by: Mike O'Connor <stew@vireo.org>
Holy shit
Wow, that's literally what I was talking about
I was a volunteer Ubuntu developer, and I was a MOTU (Masters of the Universe) at that point, meaning I could upload to the universe and multiverse archives
There was also "Core Developer" which could upload to main and restricted
God I remember this, my start date at canonical was the day before I finally got my DD upload
but because I was actively involved in Ubuntu development, I was able to request access to zinc, which was Canonical's git server
That was kinda an open secret; Canonical was pushing bzr real hard, but the kernel used git
and Canonical had to deal with git
So they had one of those git tree viewing things ... I don't remember what, and if you were active in Ubuntu development, they'd give you a shell account
Jan 27th ... that was my first month still at Canonical, I was likely just finishing some stuff up, although my earlier career was rough
I mean, I was 20
I was the youngest one in the room by a long stretch, and I ended up ARM tech lead by my second year
Primarily because I volunteered to do the paperwork
Basically, because I was the FNG, I wrote the meeting minutes
I also took it upon myself to write the blueprints for each development cycle for the team
I reported to David (my boss), and while no one reported to me, I basically made sure we were moving on cycle
I was probably closer to an executive assistant to some degree
but since I knew exactly what business needs had to be met, I could write the blueprints
We never missed a deadline, although we had some close calls.
The problems in Canonical didn't become evident until later
Part of a culture of constant crunch I think.
... I don't think I want to go into this story tonight, but someone remind me to talk about the Linux/ARM days
But, having your own domain on the Internet was very much a mark of success
Domain names were basically only in reach for people with money, and the technical skill to host them
There were domain hosting companies like DynDNS, who hosted DNS servers, but to even use it, you needed a permanently connected host
Most likely, you'd either have a GeoCities or Angelfire page
some used ISP space, like there was myspace.aol.com, which gave members 2 MiB of file storage
I used that to host an avatar on a Sonic the Hedgehog forum ....
Like, I remember I used to host what was SegaSonic News Network, and the Sonic 2 Beta forums, it was on Invision's software, a PHP webforum which was basically a better and faster version of Ikonboard
We had a FTP account, no shell, and maybe CPanel

BlackCoffeeDrinker (2022-11-13)

Oh cpanel
Perl mess

NCommander (2022-11-13)

F
But it was originally on something like pnemesis.ibforums.com
Like, there was a lot of forums, there was invision board which was originally free up to 2.0, vBulletin, UBB
the free hosting ones like ezBoard and Anyforum
YaBB SE
Setting up a webforum was hard

BlackCoffeeDrinker (2022-11-13)

Php made it easier

NCommander (2022-11-13)

I ended up becoming server admin because I actually could host something that wasn't ezBoard (or ezBake as we called it)
Easier, not easy
Permissions were tricky, you had to deal with php.ini

BlackCoffeeDrinker (2022-11-13)

Yeah

NCommander (2022-11-13)

Geeklog and Invision Board are what I remember
Then you had the oldschool shit like Slash for the big discussion sites
there was MacSlash for instance

BlackCoffeeDrinker (2022-11-13)

I used a lot of vBulletin, lots of custom things

NCommander (2022-11-13)

Geeklog was the thing we used before WordPress became common, and it had about as good a security track record
Geeklog required register_globals
Incidently, Sonic 2 Beta would merge with another forum called Area 51 to form Sonic Retro
Sonic 2 Beta eventually became Sonic Classic
Then that merged to form Sonic Retro; Scarred Sun basically wanted to buy me out, I said no, and she then formed that
I still have an oldbie account on the forums I think
you know, Hidden Palace is actually another community that I ran with or its ancestors that now has a wiki
... I feel like there's a trend in my life

BlackCoffeeDrinker (2022-11-13)

😂

NCommander (2022-11-13)

Like, I didn't have anything to compare it to, but I realize that was a big fucking deal in like 2003
I basically had to get money from folks to fund this little venture, it was like $30 USD/month
and I had to sysadmin it
We had a geeklog, and two invision board forums, one for the geeklog site, and one for Sonic Classic
All sharing a single MySQL database, because that's what CPanel limited you to
I had to keep things like mysql and apache built on Mac OS locally because it was the only way to test anything
like I had to have a full local Apache-MySQL-PHP, and that was back when you had to customize PHP plugins for a given site
Which is actually why soylentnews has its own apache/perl in /srv/soylentnews.org/

BlackCoffeeDrinker (2022-11-13)

See, I dual booted to linux, I had a clone of the prod env on my local machine

NCommander (2022-11-13)

Very much lessons learned from late 90s/early2000s sysadmining
I was still in middle school
No, had to be later
No ... can't be
I was at boarding school my last year of middle school, and first year of high school
god was it really in middle school?
It might have been
https://en.wikipedia.org/wiki/Ikonboard#Ikonboard_3.0 - ikonboard 3 just came out
Ikonboard
Ikonboard was a free online forum or Bulletin Board System developed in Perl, PHP for use on MySQL, PostgreSQL, Oracle, as well as flat file databases.
I remember that
clearly
https://en.wikipedia.org/wiki/Invision_Community#Version_1.x.x - and then we all went to invision
Invision Community
Invision Community (previously known as IPS Community Suite) is primarily an Internet community software produced by Invision Power Services, Inc. It is written in PHP and uses MySQL as a database management system. Invision Power Services sell applications that each can be bought and installed separately in addition to the Suite, the most widel...
2.0 was briefly under a free license, then changed
I started at Canonical in 2009
That was my junior year
-3 puts me back at Beekman
Before I left NYC
so it was my second year of high school
secondish
its complicated
Yes, ok its coming back to me
At that point, my parents had divorced
My mom was in Manhattan in a 1 br apartment, and my dad still had the house we had together.
I typically stayed w/ my father in those days, and my PowerMac G4 was in the attic
(both my parents are doctors, for context)
I used to leave the G4 on permanently, and we had Optimum Online
When I was in middle school, I had an iBook, and used to telnet into my PowerMac; I usually was running Mac OS X on it, but I also had Linux on it
I also used to keep an embedded box at my mom's
my NSLU I think
fucking hell, this is memories
So I ran sendmail on the G4, and then had a dyndns free domain pointed at the G4
I didn't use it for anything serious, and I once configured sendmail as an open relay cause man, it just made everything work
cough
Anyway, it ran for about a week filled with spam
god what a memory trip
I had basically figured out how to set up Ikonboard and Invision on my local system, on the G4
So I ended up hosting it on ibforum's own web hosting which had PHP and MySQL
and was basically how much you could fit on a single cpanel account
it was a lot
I honestly wonder how many people really know how to set up something like postfix anymore
Exchange makes it real damn easy
But the mail admin is one of the very few things that by and large doesn't get containerized (yet)
You either get a service to do it, or have a grizzled admin doing it
(although a person who runs their own mailserver has a fool of a mailmaster ...)
 % host casadevall.pro
casadevall.pro has address 96.126.124.51
casadevall.pro has IPv6 address 2600:3c00::f03c:91ff:fe69:753d
casadevall.pro mail is handled by 0 pathfinder.casadevall.pro.
I be that fool
I think it very much goes with the living off the land mindset
For simple webpages, a VPS is going to be dead cheap
But you have to admin it yourself
Otherwise, you need to do something like have a wix site
honestly
I'm wondering if a mailman site for doing stream announces isn't a bad idea
Because so many people complain about notifications
... god, I'd probably be the first YouTuber with a mailman instance
TT has me beat on the wiki
@graphickal you might want to see this

mineman [mineMAN] (2022-11-13)

so because I wasn't able to read all of this, what's a TL;DR if it's even possible to give one?

NCommander (2022-11-13)

It's mostly an account of sysadmins, 90s/early 2000s internet culture, being a mail admin in an era when that was very much a needed skill to play in FOSS development and more

mineman [mineMAN] (2022-11-13)

ah

NCommander (2022-11-13)

I'd probably break it into "Tales of the Last SysAdmins", "The @aol.com email, or how having your own domain name was a major status symbol on the net", and "Why knowing SMTP was essential FOSS skill"

mineman [mineMAN] (2022-11-13)

all stuff to add to the wiki I assume?

NCommander (2022-11-13)

I think so
Some parts to the SN page
Because you know? I don't remember reading a lot of accounts of what it was like being a poor sysop in those days
You're basically talking the hobbyist netadmin in the twilight years of UNIX, and the rise of Linux
That was part of why I didn't start using Fedi seriously until I set up my own instance
A lot of folks are basically going to one of the larger instances, but there's a lot of us who basically are running our own homeserver just as an evolution of what we were doing
Like, that was the dream for me, I always wanted to write a RFC. I used to read them, just for fun

mineman [mineMAN] (2022-11-13)

RFC?

NCommander (2022-11-13)

Request for Comments
They are literally the documents that define core Internet infrastructure
Going back to the original ARPANET
Every major web protocol standard is documented as a RFC
https://www.rfc-editor.org/rfc/rfc821 - what would become modern email started here, as a RFC in 1982
written by Jonathan B. Postel, pretty much a demigod of the early ARPANET
You'll find his name everywhere
https://en.wikipedia.org/wiki/Jon_Postel - dude basically was involved with everything that made the Internet what it was
Jonathan Bruce Postel (August 6, 1943 – October 16, 1998) was an American computer scientist who made many significant contributions to the development of the Internet, particularly with respect to standards. He is known principally for being the Editor of the Request for Comment (RFC) document series, for Simple Mail Transfer Protocol (SMTP),...
Died at 55 :/

NCommander (2022-11-13)

I can't even imagine what EFNet is like in this day and age
Tcl was the key to being a IRC operator
It was fucking everywhere
That was the Tcl and Perl era
If it wasn't a perl script, it was tcl
I tended to be op in a lot of them
Discord is different
Since you can be in a lot of servers at once, but
I was usually op because I was there at the beginning
I mean, here at DisNCord, my name is at the top of the list, but on IRC, the channel operators always had @; it used to be though that some channels decided that people were using operator as a status symbol
So it is pretty common to have a bot that handles things like oping stuff
That's also usually more flexible than having ChanServ do AOPs
You basically have few different types of servers
First are those focused around a single person, DisNCord would probably be an example of that, I exist here.
That was true on the IRC era as well, #geekissues was basically one for the creators of bash.org
Some are more social clubs. I'm not going to name a specific server, but places that basically have a large mod team and no specific leader tend to be this. It's usually when the server owner is very hands off and just lets things evolve naturally
That style of server is basically one where you op everyone,
IRC evolved to gain what are known as half-ops
which could usually kick and even ban, but couldn't change channel modes
Although it was specific per server
Some networks have a specific owner status for a given room

BlackCoffeeDrinker (2022-11-13)

I do not miss the days of nickserv

NCommander (2022-11-13)

That tended to be &, then @, and some sort of symbol for half ops (usually %). Regulars usually got voice

BlackCoffeeDrinker (2022-11-13)

having mIRC scripts execute at each logon
to make sure your nick was your own

NCommander (2022-11-13)

Voice basically was a moderation safeguard
In times of spam, you could +m a channel, and then voice people as ops were online
Regulars would get auto voice

BlackCoffeeDrinker (2022-11-13)

a lot of channels were +m by default

NCommander (2022-11-13)

that's because channel bans are easy to fuck up
so you had a bot that +v everyone
And then you could just devoice them
eggdrop made that real easy
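The +m/+v safeguard described above boils down to a tiny bot policy. A hypothetical sketch of the logic in Python (real eggdrop scripts were Tcl, and the channel name and nick list here are invented):

```python
def on_join(nick, regulars, moderated=True):
    """Decide what a voice bot would send when someone joins a +m
    channel: regulars get +v so they can speak, strangers stay
    silent until an op voices them by hand."""
    if moderated and nick in regulars:
        return f"MODE #chan +v {nick}"
    return None

regulars = {"NCommander", "BlackCoffeeDrinker"}  # invented allowlist
print(on_join("NCommander", regulars))  # MODE #chan +v NCommander
print(on_join("spammer42", regulars))   # None
```

During a spam wave an op flips the channel +m, the bot keeps voicing the regulars, and anyone misbehaving can simply be devoiced instead of fiddling with easy-to-botch ban masks.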

BlackCoffeeDrinker (2022-11-13)

by the end mIRC had right-click -> ban

NCommander (2022-11-13)

Finally, you have professional servers, which are things like Debian and Ubuntu
These have no specific ops
generally they're an elected or appointed position depending on the org
they keep the channels on topic
But they're not social for the most part
Although I never was in a dev group that didn't have a lounge type channel to the side
FOSS ones tend to be a lot more informal
Like, you'd have #ubuntu-devel on freenode or #debian-devel on OFTC
which a lot of the discussion took place
then you have #ubuntu-release, and #debian-release for the release team stuff
But you also had social club stuff, like I used to just chat socially in #debian-dak, which is where the blackmail comment came from
I should note these are all classifications I just made up
but its broadly what you saw on AOL as well, and on Discord today
Like there were always Group Chat invites going around
IRC was the upgrade from AIM, it had actual rooms that were persistent

BlackCoffeeDrinker (2022-11-13)

AIM rooms were ... something

NCommander (2022-11-13)

oh yeah
public AOL ones were worse
LGR commented on that one even
You still had the era of paid forums back then
Honestly? I don't think its unfair to compare DisNCord to GEnie or CompuServe forums

BlackCoffeeDrinker (2022-11-13)

ahem wanna make a vBulletin forum ? 😄

NCommander (2022-11-13)

Probably
>.>;
oh god, my life is a circle

BlackCoffeeDrinker (2022-11-13)

SOB, I'm in
^_^

NCommander (2022-11-13)

I have in fact found forum NCommander
thought long lost in the death of web forums
and vB is so much nicer than simple machines
I actually used to hack on forum software for fun
I wrote mods for invisionboard, like a RP store thing
and I worked on a mostly abandoned project called Ajaboard

BlackCoffeeDrinker (2022-11-13)

I wrote vB mods 😄 😄

NCommander (2022-11-13)

... I don't even remember the last time I saw a modded forum ...
they were everywhere
like I was big in the sonic community, there were huge RP forums, which had inventory systems for specific characters

BlackCoffeeDrinker (2022-11-13)

I wrote a "store mod" for vB3 back when it was new

NCommander (2022-11-13)

holy shit
The other way the Eternal September was stemmed

BlackCoffeeDrinker (2022-11-13)

oh extracting binaries for that

NCommander (2022-11-13)

Um
Topic break
this probably is worth talking about
Cause man, I don't think moderated groups are brought up anywhere
You can see it on the great rename
but they really got popular for serious discussion

BlackCoffeeDrinker (2022-11-13)

I never liked news:

NCommander (2022-11-13)

Well, moderated groups required you to understand how USENET worked

BlackCoffeeDrinker (2022-11-13)

we could also run a usenet server

NCommander (2022-11-13)

you know, I was thinking that

BlackCoffeeDrinker (2022-11-13)

god, imagine dockerizing that

NCommander (2022-11-13)

shit
apparently, we're sharing a braincell
Basically, the network cabal has a file that denotes if a list is moderated or not
Back in the 90s/00s, posting to a moderated group required a header called "Approved:", which isn't trivial to inject
Instead, posts for a moderated list would get sent to an email address, and then a human would repost them to USENET

BlackCoffeeDrinker (2022-11-13)

Urgh
pain

NCommander (2022-11-13)

This basically worked like subreddits, except the posts had to be approved by hand before going live
Regulars on a given group would often get allowlisted
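Mechanically, the gate on a moderated group is just a header check: an article without "Approved:" gets redirected to the moderator's submission address instead of being posted. A hypothetical sketch (the group-to-moderator mapping and all addresses here are invented):

```python
from email import message_from_string

# Invented mapping of moderated groups to submission addresses,
# standing in for the cabal's moderator list
MODERATED = {"comp.risks": "risks-request@example.org"}

def route_article(raw):
    """Post an article directly if its group is unmoderated or it
    already carries an Approved: header; otherwise redirect it to
    the moderator's mailbox."""
    msg = message_from_string(raw)
    group = msg["Newsgroups"]
    if group in MODERATED and "Approved" not in msg:
        return ("mail-to-moderator", MODERATED[group])
    return ("post", group)

article = "Newsgroups: comp.risks\nFrom: user@example.org\n\nbody\n"
print(route_article(article))  # ('mail-to-moderator', 'risks-request@example.org')
```

Allowlisting a regular then just means the moderator's side adds the Approved: header automatically for known senders before reposting.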
@BlackCoffeeDrinker I did actually write a SoylentNews->NNTP prototype
We had INN up on SN's infrastructure briefly
I was thinking we could actually federate with USENET

BlackCoffeeDrinker (2022-11-13)

re-writing a NNTP server shouldn't be too hard
it's a stupid protocol

NCommander (2022-11-13)

The protocol is the devil
oh look, braincell

BlackCoffeeDrinker (2022-11-13)

the thing looks like SMTP

NCommander (2022-11-13)

The way I did it with SoylentNews was actually have the slashd daemon write out posts to a text file, and then shove them in a UUCP queue
Then had INN pull
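That pipeline amounts to serializing each story as a minimal netnews article and dropping it where INN can pull it. A hypothetical sketch of the serialization step (the newsgroup name is invented; the Message-ID shape follows the <story-sid@soylentnews.org> convention visible in the rehash NetNews plugin quoted later in this log):

```python
def story_to_article(sid, title, author, body):
    """Render a story as a minimal netnews article, ready to be
    queued for INN over UUCP. The Message-ID is derived from the
    story's sid so comments can reference it when threading."""
    headers = [
        "Newsgroups: soylentnews.main",  # invented group name
        f"From: {author}",
        f"Subject: {title}",
        f"Message-ID: <story-{sid}@soylentnews.org>",
    ]
    return "\n".join(headers) + "\n\n" + body + "\n"

print(story_to_article("14/07/21/1627239", "NNTP Gateway RFC",
                       "NCommander", "Body text"))
```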

BlackCoffeeDrinker (2022-11-13)

that's not bad

NCommander (2022-11-13)

https://soylentnews.org/article.pl?sid=14/07/21/1627239
Request For Comments: SoylentNews-Netnews (NNTP) Gateway - SoylentNews
God, this almost became the fedi of the CMSes

BlackCoffeeDrinker (2022-11-13)

https://datatracker.ietf.org/doc/html/rfc3977
NNTP operates over any reliable bi-directional 8-bit-wide data stream channel.
hahahaha

NCommander (2022-11-13)

Rehash has the ability to create sub-sites, called nexuses
... wow
they said it in a standard
That's fucking brave

BlackCoffeeDrinker (2022-11-13)

I think that got ignored

NCommander (2022-11-13)

SHOULD NOT
😉

BlackCoffeeDrinker (2022-11-13)

😛
Implementations SHOULD apply these same principles throughout.

NCommander (2022-11-13)

I mean, USENET still moves along UUCP though
NNTP can be used directly, but it's rare

BlackCoffeeDrinker (2022-11-13)

8bit clean
bidi
stable
wow

NCommander (2022-11-13)

They were fucking optimistic

BlackCoffeeDrinker (2022-11-13)

I don't think I ever saw that written

NCommander (2022-11-13)

But NNTP was basically the less shit version of LISTSERV
which could be pumped over uucpnet
or well newgroups

BlackCoffeeDrinker (2022-11-13)

but UUCP didn't need 8bit bidi

NCommander (2022-11-13)

NNTP was an attempt to make them less shit than using trn
on a console
god, we should actually do a vintage computer UUCP thing

BlackCoffeeDrinker (2022-11-13)

7bit uucp ? 😛

NCommander (2022-11-13)

Kaypro in the cloud!

BlackCoffeeDrinker (2022-11-13)

over x25

NCommander (2022-11-13)

ITS!
36-bit computing
it's amazing how octal still shows up in places

BlackCoffeeDrinker (2022-11-13)

very

NCommander (2022-11-13)

its really weird thinking about this
But man, this was the ham radio for my generation
the code requirement was very much a big barrier to entry
and if you didn't know an elmer
good luck
notes this is a solid way to unwind after a fucking stressful week

BlackCoffeeDrinker (2022-11-13)

so.. putting nntpd in docker ... maybe we should make another thread

NCommander (2022-11-13)

god are we doing this?
Like are we really going to make the shell acc-
holy fuck I forgot about that
That was the original plan for SoylentNews

BlackCoffeeDrinker (2022-11-13)

postfix's master(8) can invoke nntpd

NCommander (2022-11-13)

https://soylentnews.org/article.pl?sid=14/06/17/0059210
#define subscriptions (or, "How we want to work for you!") -- article related to Soylent and The Main Page.

BlackCoffeeDrinker (2022-11-13)

imaging, bringing up a win3.1 machine online in 2022/2023 and posting from it

NCommander (2022-11-13)

This is what I planned the finance model for Soylent to be
no, look at that
That's basically what I wanted
That was why SN had such a complicated backend infrastructure

BlackCoffeeDrinker (2022-11-13)

Shell account for $5
wow

NCommander (2022-11-13)

Like
I was modelling around SDF
Like, that was the dream
I used to imagine what it would have been like to be an editor for Slashdot
SoylentNews was basically me answering that

BlackCoffeeDrinker (2022-11-13)

the comments are gold
2014 -- that was the flash ads days right ?

NCommander (2022-11-13)

It was

BlackCoffeeDrinker (2022-11-13)

oh god https://soylentnews.org/comments.pl?noupdate=1&sid=2411&cid=56378#commentwrap

NCommander (2022-11-13)

god I even talked about taking bitcoin https://soylentnews.org/article.pl?sid=14/06/17/2034252
Back To The Drawing Board: Rethinking Subscription + Explaing Expenses -- article related to The Main Page and Soylent.
... actually we did
I remember we had trouble figuring out how to file our taxes
JFC
although that was an amazing typo (I wrote RFC)
Like, after I took control of the project
This was basically my love letter to a dying era of the Internet
Personal blogs are basically gone

BlackCoffeeDrinker (2022-11-13)

They came and went away in a flash

NCommander (2022-11-13)

Yeah, but you also still had the old school forums, and the BBSes
This was basically my version of that, running on the same code as Slashdot, and it got 3k signups in a week
Like
Hotdamn

BlackCoffeeDrinker (2022-11-13)

https://soylentnews.org/comments.pl?noupdate=1&sid=2411&cid=56406#commentwrap
I don't want/need any of that stuff
has sub badge

NCommander (2022-11-13)

you can see why I don't want it to die a slow painful death
Like, this was your vB forum
Like, I can very much see libertarian N in some of these comments. I was probably center left at the time
But you basically had nothing but your will and your direction holding together a bunch of strangers on the Internet
No one got paid
I was in the hole for the servers
and by and large, I made it count
We never ran ads, it was entirely selling subscriptions which equaled a little star

BlackCoffeeDrinker (2022-11-13)

Understandable, lots of sweat, swearing and blood went into my vB community & the custom mods

NCommander (2022-11-13)

Like, SN was basically version 1 of what would become DisNCord
We had an amazing chat community on a private IRCd
my first real start with YouTube came from SN, since I did write ups, and got my first video to 3.4k
which convinced me to stay with it
like I wasn't involved in the day to day
But I occasionally would post random stories
I basically treated it as my personal blog
The site also had a journal feature
Slashdot journals were a thing
There were entire little mini communities built into it
Honestly, its very much like cohost.org
That's what cohost reminds me of
I thought it was LiveJournal, but I really never got into livejournaling
Like, posts on Slashdot and on rehash/SN have a "Last Journal: "
And you could follow other people through the Zoo

BlackCoffeeDrinker (2022-11-13)

I made a mod that gave my users their own personal space
with journal/blog capabilities

NCommander (2022-11-13)

Damn, what a thought, SN getting revitalized by the vintage computer crowd

BlackCoffeeDrinker (2022-11-13)

that ran on its own subdomain
and if you know vB... that was an achievement

NCommander (2022-11-13)

The code is stupidly flexible, I also have a SN -> text generator
it even generates news article emails
Like the oldschool ones
@BlackCoffeeDrinker https://github.com/SoylentNews/rehash/tree/master/plugins/NetNews
Forked from Slashcode, rehash is the codebase that powers SoylentNews.org, powered by mod_perl 2
thank you past me for committing that

BlackCoffeeDrinker (2022-11-13)

use utf8;
visionary

NCommander (2022-11-13)

wow, I actually even had comments spooled out
I fully intended to make this bidirectional
# Need to determine what the references field should be, if pid == 0
# then it needs to point at the story, else at the comment listed
if ($comment{pid} == 0) {
$comment{reference} = "<story-$story{sid}\@soylentnews.org>";
} else {
$comment{reference} = "<comment-$comment{pid}!story-$story{sid}\@soylentnews.org>";
}
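That References logic can be sketched in Python (a minimal sketch; the dict field names `pid` and `sid` mirror the Perl hashes above, and the function name is hypothetical): a top-level comment points at a synthetic Message-ID for the story, while a reply points at its parent comment, which is what lets newsreaders thread the discussion.

```python
def reference_for(comment: dict, story: dict) -> str:
    """Build the NNTP References header value for a comment.

    Mirrors the Perl logic above: a top-level comment (pid == 0)
    references the story's synthetic Message-ID, while a reply
    references its parent comment, so newsreaders can reconstruct
    the thread tree from the headers alone.
    """
    if comment["pid"] == 0:
        return f"<story-{story['sid']}@soylentnews.org>"
    return f"<comment-{comment['pid']}!story-{story['sid']}@soylentnews.org>"

# A top-level comment threads directly under the story...
print(reference_for({"pid": 0}, {"sid": 2411}))
# ...while a reply threads under its parent comment.
print(reference_for({"pid": 56378}, {"sid": 2411}))
```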

BlackCoffeeDrinker (2022-11-13)

oh wow
proper mappings

NCommander (2022-11-13)

https://github.com/SoylentNews/rehash/blob/master/plugins/NetNews/templates/netnews_story%3Bnetnews%3Bdefault - and I handled it as a template
Forked from Slashcode, rehash is the codebase that powers SoylentNews.org, powered by mod_perl 2

NCommander (2022-11-13)

if you're going to do something :)
Like, this shit threaded properly in thunderbird
I was immensely pleased with it
The idea was we'd have a modified instance of inn that used the SN database to get login names
and then inject an internal header, which could be then mapped to rehash users
And complete the link

BlackCoffeeDrinker (2022-11-13)

so SN was basically, community in a box

NCommander (2022-11-13)

that's why I had the site appear as its own news server
The idea was to create subnexuses, which were like apache.slashdot.org, or linux.slashdot.org
Very much our take on subreddits
I was both getting drawn into FTL, and my health was getting increasingly worse
Canonical was an immensely unhappy time, and it was becoming clear that SN wasn't going to reach the point that any of this would be viable
and my health really went to shit while I was at Mixer/Beam
The Apache 2 porting effort was the last serious thing I did on the codebase
I did work to make nexuses work, but no one but me ever wanted to write original content
I had offered multiple times
So I just kept writing on the homepage
https://soylentnews.org/article.pl?sid=20/05/10/1753203
Examining Windows 1.0 HELLO.C - 35 Years of Backwards Compatibility -- article related to The Main Page and Software.
I think I wanted to make one more go at reviving SN in 2020
I hadn't written a post in over a year at that point
I was basically done w/ staff
I wasn't idling or anything, and I basically went "yeah, fuck this" when multiple comments complaining about "omg images" reached +5
It was that and the COVID misinfo that basically made me wash my hands of it
Like I had stopped regularly reading it well before that point
But I still was there at least in spirit
And lurking on the IRC channels
But that was a very listless period of my life
@Blazak is about the only one I can remember at all very clearly through those years
Well, him and one other
as far as places I was in the net
Exoria just came out
(modpack)
So that was probably early 18, lateish 19
I'd stopped going to NYC Resistor, and started hanging with 2600nyc, and my former hacker group DCG201
No, I was active on some forums, I remember doing a Let's Play in the KSP forums
god that was a really listless period
I had done some ICANN policy stuff in this time, and I tried very hard to not lose myself in what was probably a very dark chapter
I still travelled some, I did my first bike tour ... it was right after Beam/Mixer
first going Rochester to Buffalo, and then all the way to NYC
I just shared photos on Facebook
That was the extent of documenting it
it didn't even seem notable at the time
Those weren't good years
I think SN was probably the only thing that kept me tethered to the world at times. I did a collaboration with OS/2 Museum and that was when I met @neozeed
but ... I remember going to Texas, and then just went ...
I was in Texas in '18
I was at RTX Austin that year