Shopping for a Usenet Provider

This page discusses what to look for, what to expect, and what not to expect when shopping for a premium Usenet provider. The focus is on individual accounts, though if you are an ISP looking to outsource Usenet service, some of this may still apply to you.

  1. Cost
  2. Speed
  3. Completeness
  4. Group List
  5. Newsfeeds
  6. Retention
  7. Privacy
  8. Header Info
  9. Spam Filtering
  10. Cancels
  11. Abuse Policy
  12. Limits
  13. Support
  14. Reliability
  15. Technical Details
  16. Providers


Cost

Right now the going rate for basic Usenet service seems to be around $12US per month, but it varies from there. Don't just go looking for the lowest price. The old saying "you get what you pay for" very much applies here.

News servers are expensive. Bandwidth is expensive. A good admin staff and tech support staff are expensive (though not as expensive as the servers and bandwidth, unfortunately). Someone has to pay for it all, and that's the customers. A service that doesn't charge enough ends up having to make up for it somewhere, usually in the quality of the service.

Some services offer a lower-cost account without access to the binaries groups. If you don't care about binaries, this is a very good thing; if you pay full price you are paying for the expensive binary groups you aren't using.

Others may offer more general pricing tiers based on how fast you're allowed to download articles, or how much total volume you can download per month. A system like this ensures that the light users aren't subsidizing the cost of the heavy users. Heavy downloaders are more expensive for a provider, and most providers now charge more for heavy use. Unlimited, unthrottled, flat-rate Usenet access is quickly becoming a thing of the past.

If you access the Internet via modem, speed limits aren't going to be a factor for you. Most download limits won't be very significant either, as it will be difficult to exceed them at modem speeds. The limits and higher-priced accounts affect users with faster Internet connections.
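To see how a speed cap translates into volume, here is a quick back-of-the-envelope sketch; the cap values are hypothetical examples, not any particular provider's plans:

```python
def max_monthly_volume_gb(cap_kbps, days=30):
    """Upper bound on data downloadable in a month at a given speed cap.

    cap_kbps is the throttle in kilobits per second; the result is in
    gigabytes (10^9 bytes), assuming the connection is saturated 24/7.
    """
    bits = cap_kbps * 1000 * 86400 * days   # total bits in the period
    return bits / 8 / 1e9                   # bits -> bytes -> gigabytes

# A hypothetical 128 kbps throttle caps you at roughly 41 GB per month
# even running around the clock; a 56k modem can't reach half that.
print(round(max_monthly_volume_gb(128), 1))
print(round(max_monthly_volume_gb(56.0), 1))
```

This is why speed caps and volume caps amount to much the same thing: a speed cap is just a volume cap in disguise, spread out over the whole month.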


Speed

As high-speed Internet connections become more common (cable modems, DSL, etc.), download speed is becoming a concern for Usenet users, particularly those who spend time in the binary groups.

Performance of a Usenet server isn't a simple thing to evaluate. It can vary for the same server from one person to another, or for the same person from one day to the next. The most important factor is the Internet connectivity between you and the server. This includes a lot of elements, such as your ISP, your ISP's backbone, the connections between your ISP's backbone and other backbones, and the Usenet provider's Internet connection. All but that last one are completely out of the control of the Usenet provider, but can be a major factor in whether a service is blazingly fast or unusably slow. Other factors, such as your connection quality and your system's TCP configuration, can make a difference as well.

The connection between you and the Usenet server has two components: the route data takes going from you to the server, and the route it takes coming back to you. Believe it or not, the two routes can be completely different. The return route is more important for newsreading because most of the data will be travelling from the server to your computer.

You can view the route from you to the news server with the traceroute program. This is present on most Unix systems, and on Windows 95, 98 and NT (where it is called tracert). For detailed information on how to use traceroute and read its output, see my Traceroute page.

Some providers may have their servers behind a firewall, blocking traceroute attempts, in order to prevent denial of service attacks. If the trace always times out at what should be the last hop, or you see !X (an ICMP error meaning that the trace has been purposely blocked), then this is probably the case.
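If you want to automate that check over saved traceroute output, the logic is simple enough to sketch. The sample lines below are invented, and real traceroute output formats vary by platform, so treat this as an illustration of the two symptoms rather than a robust parser:

```python
def looks_firewalled(trace_lines):
    """Heuristic: does a traceroute look blocked at the destination?

    Returns True if the output ends in !X (administratively prohibited)
    or if the final hop is nothing but timeouts (* * *).
    """
    if not trace_lines:
        return False
    last = trace_lines[-1]
    if "!X" in last:
        return True
    # An all-asterisk final hop: the trace died at what should be the server.
    fields = last.split()[1:]          # drop the hop number
    return len(fields) > 0 and all(f == "*" for f in fields)

sample = [
    " 1  10.0.0.1  2 ms  2 ms  3 ms",
    " 2  2  10 ms  11 ms  12 ms",
    " 3  * * *",
]
print(looks_firewalled(sample))   # final hop timed out: probably blocked
```

Remember that a single timed-out hop in the middle of a trace is normal; it's the pattern at the end, at what should be the news server itself, that suggests a firewall.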

The problem here is that you're only seeing the route from you to the server. Remember how I said that the return route is even more important? Okay, so how do you run a reverse traceroute? You don't. But don't despair. The Usenet provider may have a facility on their website where you can run a traceroute from their server back to your computer, and see how that looks. Find it. If there isn't one, send them your IP address and ask them to run a traceroute to you, but also ask them to put up a web page so you can do it yourself. I have compiled a list of Usenet provider traceroute pages.

If the route between your system and a news server is consistently bad, you won't be happy with that service, no matter how good the service is in general. It could work great for everyone else, and suck for you, and all you could really do about it is find another ISP with a better route to that system.

If the route is bad at the moment, but is usually okay, a traceroute can help you find the source of the problem. You can also try running traceroutes from elsewhere on the net; there are good lists of public traceroute servers on the web.

It's possible that you have a good, clean route to the server, but you still aren't seeing the download speed you expect. If you have a high-speed connection, you probably expect to download at the speed of your connection. This won't always be possible simply due to various forms of overhead. However, one factor that may be under your control is your operating system's TCP receive window setting. If you have a very fast Internet connection, a small window size can artificially limit your download speed. Take a look at the Cable Modem/DSL Tuning Guide for more information on this.
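The reason the receive window matters is the bandwidth-delay product: TCP can never have more unacknowledged data in flight than the window allows, so your throughput is capped at roughly window divided by round-trip time. A sketch of the arithmetic, with illustrative numbers:

```python
def max_throughput_kbps(window_bytes, rtt_ms):
    """Throughput ceiling imposed by a TCP receive window.

    TCP can keep at most window_bytes in flight per round trip, so the
    best case is window / RTT, converted here to kilobits per second.
    """
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1000.0

# A small default window of 8 KB is crippling on a fast, distant link:
# with an 80 ms round trip it caps the transfer around 800 kbps, no
# matter how fast your cable modem is. A 64 KB window raises the
# ceiling to several megabits.
print(round(max_throughput_kbps(8192, 80)))
print(round(max_throughput_kbps(65535, 80)))
```

So if your traceroute shows an 80 ms round trip to the server and you're stuck well under a megabit on a multi-megabit connection, the receive window is one of the first things to check.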

Finally, it's possible for download speed to be limited by the performance of the news server itself. An overloaded server with too many simultaneous users is going to slow down. The server may be poorly set up, or simply underpowered for the task of serving up Usenet to a large number of users. It could even be running Windows NT. It can be difficult to tell when any of this is happening, because a slowdown can be caused by so many other factors.


Completeness

If you're interested in large binaries, you are probably going to find yourself downloading files that are posted in multiple parts. If a file has ten parts, and only nine appear on your news server, then it's useless to you; you need all ten parts to recreate the original file. If you are this type of user, the completeness of the binary groups is important to you. If a server consistently drops a lot of articles and ends up with most of the binaries you want being incomplete, you are not going to be happy.

So what causes incompleteness? And why does it afflict the binaries groups more than any others?

It's much more difficult (and expensive) to have a full, complete feed of the large binaries than of the smaller text groups. Binaries account for the overwhelming majority of traffic on Usenet, measured by volume; a full feed including all the binaries now exceeds 100 gigabytes per day. It's not easy to move that kind of traffic around.

Many sites, including most of the top backbone sites on Usenet, impose article size limits on what they will carry. The traditional article size limit is one megabyte. But, since there are so many binaries, most sites don't want to pay for the bandwidth and disk space it takes to carry them, and so smaller size limits are becoming more common. Many of the top transit sites use a very small limit, like 32k, which will effectively eliminate all of the binaries from that server. Since many of those top transit sites are either educational systems or privately-owned systems, they have no incentive (like paying customers) to carry the large binaries.

Thus, a system with feeds from these sites can have very fast and complete feeds of the text groups, but if only one or two of their feeds actually carry the large binaries, then their binary groups are going to suffer. It's important for a premium Usenet provider with customers seeking large binaries to arrange news peering with other sites that do carry the large binaries.

If a news server creates its messages out of order, and you are using Agent, there is an option in the configuration to tell Agent to deal with this. Make SURE you have the "server creates messages out of order" option checked before you try to judge the completeness of any such server using Agent. The provider's web page or tech support should be able to tell you whether this is necessary for their server; since enabling this option unnecessarily can reduce performance, you should only use it if it's needed.

It's also important to remember that the very oldest articles in a group will often be incomplete due to the articles starting to expire, and the very newest ones will sometimes be incomplete due to all the parts not having arrived yet. This is normal. Completeness is best judged looking at articles from the past few days, not counting the current day or the most recent posts, and certainly not counting the oldest posts.
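If you'd rather measure completeness than eyeball it, you can group multipart subjects by their "(part/total)" markers and count the full sets. This is a rough sketch under the assumption that posts use the common "name (n/m)" subject convention; real posts vary, and the sample subjects are made up:

```python
import re

# Matches subjects like "cool.file.rar (3/10)".
PART_RE = re.compile(r"^(?P<base>.+?)\s*\((?P<part>\d+)/(?P<total>\d+)\)\s*$")

def completeness(subjects):
    """Fraction of multipart posts for which every part is present."""
    posts = {}
    for s in subjects:
        m = PART_RE.match(s)
        if not m:
            continue                      # not a multipart subject
        key = (m.group("base"), int(m.group("total")))
        posts.setdefault(key, set()).add(int(m.group("part")))
    if not posts:
        return 1.0
    done = sum(1 for (base, total), parts in posts.items()
               if parts >= set(range(1, total + 1)))
    return done / len(posts)

subjects = ["file.rar (1/3)", "file.rar (2/3)", "file.rar (3/3)",
            "other.rar (1/2)"]            # other.rar is missing part 2
print(completeness(subjects))
```

Per the advice above, feed this the subjects from a few days ago, not today's posts and not the oldest ones, or the numbers will be unfairly low.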

Group List

Some providers try to impress you by quoting the sheer size of their newsgroup list (in Usenet-speak, it's called an active file). They will tell you they are a better provider because they carry “all” 60,000 newsgroups, or some such thing.


For starters, there are not 60,000 newsgroups. Anyone who says they have that many has an active file full of garbage. A lot of joke groups get created, especially in the alt.* hierarchy, and such a system is going to have all of them. They are also going to have old newsgroups that are long dead, misspelled newsgroups, and an assortment of other nonsense. A large active file gives a provider a simple number which is easily inflated to be larger than their competition, and they can then claim that “bigger is better” despite the fact that it isn't.

So why should you care? After all, if they have 60,000 groups in their active file, they must have all the good ones, right? No. There are several reasons you should care. Among the results of a bloated active file are a longer wait for the group list to download, and more difficulty in finding the group you're looking for. But even worse, you may end up wasting your time in a group that looks like what you're looking for, but in reality is either long dead, or never really existed in the first place outside someone's idea of a joke control message. For example, if you have a question about the Perl programming language, you might look at the group list and think comp.lang.perl would be a good place to ask, right? Wrong - that group hasn't existed for years, and most everyone, including the people you really want to see your question, have long since moved on to comp.lang.perl.misc or comp.lang.perl.moderated. Any server that still has the old group on it is doing you a disservice.

Something not so immediately obvious is that a larger active file can cause a decrease in server performance, especially if some of the garbage groups have names that attract spammers. Then you have groups full of nothing but huge amounts of spam, taking up space and processing time on the server.

Most of all, a bloated active file is a good indication that the provider is not maintaining the group list very well, and probably just doesn't care to. Such a group list is likely to be missing a good number of real, active newsgroups, either because they were created recently and were never added, or because there haven't been control messages for them lately and no one bothered to track down a good list. For a more detailed discussion of this, see a Usenet article I wrote on Why there aren't 40,000 newsgroups.

What you really want to look for is a provider with a complete, up-to-date newsgroup list. Is it regularly updated with newly added groups? Are the group lists current for the regional hierarchies? Will they add new alt.* groups upon request? Do they weed out garbage groups and groups that have been replaced with newer ones?

One way to tell the difference is to look for groups that shouldn't be there. The above-mentioned comp.lang.perl is a good choice, as it was supposed to have been deleted years ago. If a server has groups like news.admin.meow or news.admin.pedophile.barry-bouwsma, that's a good sign that they don't take care of their active file. Check the alt.* groups for misspellings like alt.bainaries or alt.binaries.puctures. Any* groups are good indicators of an out-of-date active file, as they were all removed some time ago.

Then, of course, check for the groups that should be there. If you find alt.* groups missing, don't jump to conclusions; it's hard to keep up with alt.*, so the real test is whether they will add the group if you request it.


Newsfeeds

The quality of a provider's newsfeeds is of paramount importance.

How fast articles get to their server, and how complete the feeds, makes the difference between a good Usenet experience and a bad one. Unfortunately, it's difficult to judge except by actually using the system for a while and seeing how it looks.

To have fast and complete feeds, a server should have multiple newsfeeds. The more, the better. Most news administrators have their feeder statistics on a web page somewhere, and if you understand them, they can be educational. However, these are often not linked from the provider's main page; they are mostly of interest only to other news administrators, so there is usually little need to make them obvious. If you are interested in seeing these statistics, ask your news administrator where they can be found. They should be happy to let you see them.

Looking at the stats can tell you who the provider's news peers are. What does that tell you? Unfortunately, very little, unless you are quite familiar with the “who's who” of Usenet.

One source of information is the Freenix Top 1000, which lists the top newsfeed sites on Usenet. This isn't the Final Word on who has the best servers, and in fact most news administrators will tell you it's basically just a big “dicksize war,” but it can give you an idea of who has good feeds and who is good to have as a peer. But you should keep in mind that not being high on the list doesn't mean you don't have good newsfeeds, and peering with the top systems doesn't mean you do. Peering with five of the top ten sites doesn't mean much if it's the binaries you're interested in, since most of the top sites don't carry binaries at all. But if your provider is high on the list, you can take that as a good indication that they have decent feeds, and that their news administrators are good at their jobs. You should not conclude, however, that one server is better than another just because it has a higher Freenix rating.

When using a server, you can view full headers on a bunch of articles and look at the Path headers. The Path header tells you what systems an article has passed through on its journey to you. Look at a bunch of them and see how long they tend to be, on average. If most of the paths are very long, that's an indication that the server could benefit from more or better newsfeeds (it means that articles are having to travel a long way before arriving at the system). If most of the paths are short, that's a good sign. Keep in mind that some systems will end up adding multiple entries to a Path as an article passes through multiple servers on their network; when evaluating path lengths, you should count multiple entries from one system as one for purposes of comparison.

The Path headers can also show you who the provider peers with. The first entry on the path after your provider will be a direct peer. (Path lines are written in reverse, with each system prepending its name onto the line, not adding it at the end.)


Retention

Retention means how long the news server keeps articles around before expiring them. Longer retention means more disk space. Even keeping retention constant requires more disk space over time, because Usenet keeps growing in size. More disk space means more money. So a provider can't keep things around forever.

Short retention results in missing articles in the binaries if you don't get to the group for a few days, and incomplete threads in the discussion groups when you want to go back and see what someone said a week ago. If the retention is too short, you aren't going to be happy.

More surprisingly, longer retention can cause problems as well. If there are too many articles in a newsgroup, your newsreader may have problems entering that group, and it may even crash when you try. At the very least, longer retention means it will take longer to enter the group, because you need to download the overviews (what some newsreaders wrongly call headers) when you enter the group. And, on the server end, more articles in a group can mean slower performance, depending on what server software is being used.

Some providers give rough estimates of their retention times on their web pages. A few give real, accurate numbers. If you don't find any information on their site, you can ask them, or ask around in the Usenet administration newsgroups.

Expiration is handled differently by different Usenet server software. The “traditional” method is to specify the expire times, in days, for different groups or classes of groups. For example, a news administrator might set the non-binary groups to expire in 14 days, and the binaries to expire in 5 days. That's the easiest method to relate to for the non-technical user.

Unfortunately, the traditional method is also the biggest pain for the news admin. If a group suddenly gets a lot of traffic one day, the result can be the server's spool disks filling up, which is Bad. So, many Usenet providers use another method for determining retention: space-based. With this method, rather than specifying how long to keep a newsgroup or a list of newsgroups, the news admin specifies how much disk space those groups are allowed to take up. When new articles come in, the oldest ones are removed to keep the disk usage at the desired level. This has obvious advantages for the news administrator - the disks won't fill up, and large binary groups won't hog space intended for discussion groups. Unfortunately, another result is that a flood of posts in a group can result in the older articles being “pushed off” the server prematurely.
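The space-based method can be pictured as a simple eviction queue: new articles push the oldest ones out whenever the spool would exceed its byte budget. A toy model of the behavior described above, with arbitrary sizes and budget:

```python
from collections import deque

class Spool:
    """Toy space-based spool: evict oldest articles to stay under budget."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.articles = deque()          # (article_id, size), oldest first
        self.used = 0

    def store(self, art_id, size):
        self.articles.append((art_id, size))
        self.used += size
        while self.used > self.budget:   # push the oldest articles off
            old_id, old_size = self.articles.popleft()
            self.used -= old_size

    def ids(self):
        return [a for a, _ in self.articles]

spool = Spool(budget_bytes=100)
for i, size in enumerate([40, 40, 40]):  # the third article evicts the first
    spool.store(i, size)
print(spool.ids())
```

Notice that retention in days never appears anywhere: how long article 1 survives depends entirely on how fast new data arrives, which is exactly why a flood of posts pushes older articles off prematurely.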


Privacy

In general, you should expect that your provider will have a policy not to release information about you, such as your identity or any logged information about your usage, without a court order. If a provider can be intimidated into releasing information without a court order, for example by someone who "really sounded like a police officer on the phone," you'll want to stay away. You may feel you have nothing to hide, but that doesn't mean you shouldn't expect privacy.

If you make sure a decent policy is in place, and followed, then the issue of what the provider logs about usage becomes unimportant. If no one but the news admins will ever see the logs (without a court order) then you have nothing to worry about; you will find that most news administrators don't care what you read and in fact are strong supporters of free speech.

If you're concerned about logging anyway for whatever reason, there are a few things to keep in mind. Any good news administrator will tell you that he needs at least some logs, to determine usage patterns so service can be improved, or to track down problems when they happen. Asking that a news service not log anything is asking too much. But if you are paranoid, asking that they not log which articles are being read by which users is not asking too much; there is no technical reason for a news admin to need that information. They do have reasons for needing to know what newsgroups are being accessed, and how often, but they don't need to know exactly which people are reading what.

Header Information

Related to privacy is the question of what information will be revealed about you in the headers of articles that you post. You may be concerned because of the material you are posting - if you are posting pornography, you might not want your employer or parents or wife or sister to find out about it. Or, more likely, you just don't necessarily want random people on the net to be able to get too much information about you. IRC users will be aware of what kinds of nasty things can happen to you if someone who isn't very nice gets hold of your IP address (more of a concern if you have a static IP rather than a dynamic one).

Traditionally, Usenet articles contain a header called NNTP-Posting-Host which will contain the IP address or hostname of the system from which it was posted. This information is useful when a provider needs to trace spam or other abuse; they need to have a way to determine which user posted the article. Most ISPs add this header to articles posted through their news servers, and most server software adds it by default.

There may also be an X-Trace header, containing IP address information and other things. More rarely, there could be other headers which can reveal information about you that you don't necessarily want revealed, like your username on the news server.

Many premium Usenet providers are responding to their customers' desire for privacy by removing such headers from their articles. If they do, they will have some other means of internally tracing an article back to the responsible user, without exposing that information to the world. Some providers will include the tracing headers, but encrypt their contents to protect the information from the outside world. Sometimes that encryption is weak enough to be broken, however.
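An easy way to check what a server reveals is to post a test article and scan its headers for the fields mentioned above. A sketch of that check; the sample article is fabricated:

```python
# Headers that commonly expose the poster's identity or address.
REVEALING = ("NNTP-Posting-Host", "X-Trace")

def revealing_headers(article_text):
    """Return the identifying headers present in a raw article."""
    headers, _, _ = article_text.partition("\n\n")   # headers end at blank line
    found = {}
    for line in headers.splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip() in REVEALING:
            found[name.strip()] = value.strip()
    return found

article = ("From: test <>\n"
           "Newsgroups: misc.test\n"
           "NNTP-Posting-Host:\n"
           "\n"
           "test body\n")
print(revealing_headers(article))
```

If the result is non-empty but the values look like gibberish rather than your IP address, the provider is probably using the encrypted-trace approach described above.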

Spam Filtering

Spam filtering is going to be a matter of opinion. Most people appreciate having a newsfeed filtered for spam, as long as all the legitimate articles are present. Others don't trust other people to filter for them and would rather have an unfiltered feed. If you're not sure where you stand on this, keep in mind that using a full, unfiltered feed can be difficult, especially if you are interested in sex or binaries groups; it takes a good killfile or other measures to keep the experience even remotely pleasant.

Many providers run some kind of filtering to keep spam off their servers. The two most popular spam filtering programs available are Cleanfeed (written by me) and Spam Hippo (developed by Newsguy, a Usenet service provider). In addition, there are other filters around, and a provider might also use a custom filter, or a combination of more than one filter.

A Usenet provider should be willing to tell you whether they run a spam filter, and they should be willing to tell you which filter they run, or whether it's a custom one. They may not be willing to reveal the gory details of how the filter works. That's okay. They just don't want the spammers to find out.

If they tell you they're using Cleanfeed, that doesn't mean it will be the same as all the other systems running that program. Cleanfeed is highly configurable, and thus might run very differently on different systems. It is also easy to customize, so many news admins have their own little modifications in their versions of the filter.

In general, a filtered feed is better than an unfiltered one, unless you really know that unfiltered is what you want.


Cancels

Related to spam filtering, cancels are also going to be a matter of opinion.

A cancel is when someone sends out a control message requesting that news servers delete a certain article from their spools. For a long time, cancels have been used for spam control. Certain trusted despammers run cancelbots that detect spam and cancel it. This is generally regarded as a good thing, except by some of the same people who object to spam filtering. But the real purpose of cancels, the purpose for which they were designed, is so that the author of a message can delete his own messages - to take back his words, correct a mistake, etc.

Unfortunately, the security provided to make sure that a message can only be cancelled by its author, or by a “trusted” despammer, is not very strong. It is quite trivial for someone with a moderate amount of skill to cancel anything on Usenet. People being what they are, this happens. It's bad, and it's against the terms of service of most ISPs, but it still happens. And there is, alas, no reliable way to determine whether a cancel is legitimate or not.

So, in order to prevent rogues from cancelling whatever they don't like, some news administrators disable the cancel function completely. This is a trade-off, of course; you also lose the spam cancels and you lose the ability for a person to cancel his own articles. However, until better authentication methods are devised (and they are being worked on) this is the only reliable way to prevent rogue cancels from affecting a server.

Some systems accept cancels and some don't. If it's important to you one way or the other, ask them, and they should tell you.

Abuse Policy

You don't want to support, with your subscription fees, a service that supports spammers. You should find out what a provider's policy is in dealing with spam and other abuses of Usenet, and perhaps even look into their history in these matters. If they constantly allow spammers to spew crap from their servers, stay far away.

You can look at the Spam Hippo reports to get an idea of whether a system generates a large amount of spam (though what Hippo is considering to be spam lately is questionable; you should look very closely before deciding that something Hippo claims is spam is actually spam). You can also go to the Google Usenet Search and search the* hierarchy for mentions of the provider. When doing either of these things, keep in mind that all systems are going to generate some spam, and the larger the system, the more spam they're going to generate, just by virtue of having a lot of users. Also keep in mind that a spammer can take advantage of anyone for a little while, causing them to suddenly appear high in the stats. What to look for is an ongoing pattern of spam from a system, and nothing done to stop it.

The Hippo report has what could be a very useful number for the sites it lists - the percentage of spam coming from a site. This tells you what percent of the posts originating from that system are spam. This is a far better metric than the absolute number of spam posts. A provider who makes the list, but has a very low percentage of spam, is doing well. Unfortunately, the Hippo report has recently included a lot of things that most people wouldn't consider spam, like multipart binaries, so these figures should be taken with a grain of salt.
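The arithmetic behind that judgement is trivial but worth spelling out; a sketch comparing two hypothetical providers with invented numbers:

```python
def spam_rate(spam_posts, total_posts):
    """Percentage of a site's outgoing posts that are spam."""
    return 100.0 * spam_posts / total_posts

# A big provider can emit more spam in absolute terms yet still be the
# cleaner system once you account for its posting volume.
big   = spam_rate(500, 1_000_000)   # 0.05% of a huge user base
small = spam_rate(200, 10_000)      # 2% of a tiny one
print(big < small)
```

By raw counts the big provider looks worse (500 spams versus 200), but per post it is forty times cleaner, which is the comparison that actually matters.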


Limits

Some providers place certain limits on your usage. Most commonly, such limits will involve a cap on the amount of bandwidth you can use to download articles, the total volume you can download, or the number of simultaneous connections you are allowed to open to the server.

Unfortunately, the proliferation of high-speed Internet access via cable modems, ADSL, and other methods has created an impression among consumers that bandwidth is cheap. Thus, it may seem unreasonable for a provider to limit usage. Well, bandwidth is not cheap; in fact, it is extremely expensive. A DS3 (45 Mbps, sometimes called a T3) connection from UUNET (the largest backbone provider) costs about $55,000 US per month, not including telephone company line charges.

So you may think bandwidth is cheap because you can get a cable modem that claims to have better than T1 speeds for $40US per month, but you are being misled. And the recent rise of cheap bandwidth at the local loop level creates a problem for server operators.

Since the bandwidth is only cheap at the user end, it's not reasonable to expect unlimited use of all the bandwidth in the world. A few cable modem users can hog up all the available bandwidth from a Usenet provider, which is not fair when they are only paying $12 per month for the Usenet service. Beyond bandwidth, a high-speed downloader also uses more server resources. One high-speed user can use as much as 40 times the server resources of a user on a slower connection. So, many providers now limit the bandwidth you can use.

This may be done by throttling your connection speed. For example, you may find that a provider limits your download speed to 128k, to prevent you from consuming too much bandwidth. Obviously, if you use a 56k modem, a limit of 128k or even 64k means nothing to you. If you have a cable modem, though, you're going to notice.

If a provider lets you use huge amounts of bandwidth all day and night, they will probably be in trouble when they realize that their high-speed customers are costing them more than their subscription fees, which isn't a very good business model. Such a provider also risks having their network connection maxed out by their high-speed users, resulting in poor performance for everyone.

An increasing number of Usenet providers are imposing these limits in a different way, by limiting the total amount of data you can download. Rather than arbitrarily slowing down your connection, they will let you grab data as fast as you can, but impose a monthly limit on how much. The limit amounts to pretty much the same thing, but without making you wait as long for your downloads. This system is attractive to a Usenet provider because it truly allows them to charge you for what you use, whereas simple bandwidth throttling still leaves open the possibility that a user can stay connected 24 hours a day and cost them too much in bandwidth.
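A volume quota is easy to model: the provider simply meters bytes against a monthly allowance instead of shaping the connection. A toy sketch with invented numbers:

```python
class VolumeQuota:
    """Toy monthly download quota: full speed until the allowance is spent."""

    def __init__(self, monthly_gb):
        self.limit = monthly_gb * 1e9    # gigabytes -> bytes
        self.used = 0.0

    def download(self, nbytes):
        """Record a download; returns False once the quota would be exceeded."""
        if self.used + nbytes > self.limit:
            return False                 # over quota: refuse (or surcharge)
        self.used += nbytes
        return True

    def remaining_gb(self):
        return (self.limit - self.used) / 1e9

q = VolumeQuota(monthly_gb=10)
print(q.download(8e9))        # 8 GB at full speed: fine
print(q.download(3e9))        # would exceed 10 GB: refused
print(q.remaining_gb())
```

Note that nothing in this model cares how fast the bytes arrive, which is exactly the appeal: the user downloads at full speed, and the provider's exposure is bounded by the quota rather than by connect time.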

Randori has an interesting option, where you can pay a single price for a certain amount of data, regardless of how long it takes you to actually reach the limit. This is attractive from a user perspective, because you know that you will get every single byte of download that you pay for, without “losing” any of your quota that you don't use in a particular month.

There will also likely be a limit on the number of simultaneous connections you are allowed to open. You want more than one. Some newsreaders will open a second connection during normal operation, so if you're only allowed one, then you've got a problem.

In addition, there are other times when you'll end up with multiple connections through no real fault of your own. Netscape Communicator is famous (or should I say infamous?) for this; it provides a Stop button so you can abort the download of an article, and also allows you to proceed to the next article before the current one is finished downloading. Unfortunately, NNTP doesn't have any such facility, and the way Communicator implements it is to simply tear down the connection to the server and open up a new one. Aside from all the other reasons this sucks, it sometimes ends up with both the old and new connections being open, as far as the server is concerned, for a small period of time. If you're limited to one connection, then you'll get an error and you'll have to wait.

And, sometimes, despite the best efforts of the sysadmin, you will close a connection, and the server will fail to register it for some reason. It might sit there for a minute or three. If you are limited to one connection, you won't be able to connect again until that connection goes away.

It can also simply be useful to open a second connection to a news server. You can read some articles in a discussion group while waiting for some large binaries to download. You can run two download sessions at the same time.

You probably want to be allowed at least three simultaneous connections to the server. Two is acceptable.


Support

If you think you're going to need technical support, the quality of support offered by the provider will be important to you. Unfortunately, all providers are likely to tell you they have top-notch support, whether it's true or not.

To find out the real story, you can ask around in the Usenet administration newsgroups. You can also send a request to their support address and see how they respond (keeping in mind that they may give priority to current customers), perhaps asking them some of the questions this document raises about their service.

There are generally three kinds of tech support: email, local newsgroups, and telephone. All providers will at least have an email support address. (Whether they actually answer their mail is another matter.) Local newsgroups are better, because everyone can see the questions and benefit from the answers, and because users can exchange information among themselves. It's always nice to see what other people think and what problems they're having, and if you're hitting an obscure problem, it can be extremely helpful to see whether other users are seeing the same thing and what they did to correct it. Telephone support won't be offered by all providers, and is the least-used avenue of support, but it can come in handy in extreme situations.

What you should keep in mind when evaluating technical support is that, quite frankly, the job sucks. It sucks really, really bad. I did it, I know. The average burn-out time for a tech support person is between six and nine months. A smart employer knows this and tries to train the support people to move into other jobs after a while. If they don't, they end up with bitter, angry support reps who are no help to anyone and eventually quit. If they do, they end up having new support reps all the time.

Of course, you don't care about the management headaches, you just want good support. That's fine. Despite all the things I just said, it is possible to find good tech support in this business. But you'll probably have to look around and ask around to find it.

Reading the support replies in the provider's local newsgroups is a good way to judge the quality of support you can expect. Most providers don't propagate their local newsgroups out to Usenet, though, so you may have to have the account before you can do this. Some providers also maintain a support presence on public Usenet groups.


Reliability

A great news server is no good at all if you can't use it. If you are going to be using Usenet frequently, the amount of downtime you'll see becomes important.

No system can provide 100% uptime. There are going to be problems everywhere, and things are going to break. What you want is for that not to happen very often, and when it happens, for it to be fixed quickly. You should also expect to be told, perhaps through a status report on the provider's web page, when something is broken, so you don't sit there tearing your hair out thinking you're doing something wrong. Obviously, if the provider's entire network goes down, you won't be able to access the website to see the report, but most problems aren't that serious.

Unfortunately, a provider is not going to be the best source of information about reliability. They're probably not going to tell you if their system crashes every day and goes down for three hours. You'll have to see what other people have to say about the system by asking around.

Technical Details

This stuff may not matter to you. But if you're technically knowledgeable about the Internet and/or server administration, you may want to know what kind of equipment and software a provider is using, so you can, as Jeff Liebermann put it in a Usenet post, “decide for yourself if they're a hole in the wall or a reputable operation.”

Some providers are more than willing to provide this kind of information; others are tight-lipped, in paranoid fear that their competition might gain an advantage by finding out how they do business. If you are interested in this information, and you don't find it on the provider's website or other documentation, it can't hurt to ask. After all, most people don't care, so they may not waste their time putting it on a web page.

If you're going to ask, the people to ask are the actual news administrators. You may or may not get answers from tech support, but more than likely, any answers you do get will be incorrect or outdated, not deliberately, but just because the support people may not know the right answers (and may not fully understand them either).


Providers

To start your shopping, see my list of Usenet providers.

A comparison table of some providers, including price, download limits, and connection limits, can be found here.

Another list of providers can be found at the newsfeeds page.