Discussion:
Best File System for partitions over 600GB
Siju George
2007-02-15 18:51:38 UTC
Hi,

Could someone recommend which file system is best for partitions above 600GB?
I am considering XFS. The system is Debian Sarge for amd64.
Hope there are no issues with this setup. Please let me know if I
should be careful in any area.
Also let me know if a better file system suits such large partitions :-)

Thankyou so much

kind regards

Siju
Sergio Cuéllar Valdés
2007-02-15 19:13:43 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
Hi,

maybe you should read about LVM [1]. It is not a file system itself, but
it can help you :)

[1] http://www.tldp.org/HOWTO/LVM-HOWTO/
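For context, a minimal LVM setup along the lines of the HOWTO looks roughly like this. The device names (/dev/sdb1, /dev/sdc1), volume group name, and sizes are all placeholders, not anything from this thread:

```shell
# Mark two partitions as LVM physical volumes (hypothetical devices)
pvcreate /dev/sdb1 /dev/sdc1

# Pool them into a volume group named "vg_data"
vgcreate vg_data /dev/sdb1 /dev/sdc1

# Carve out a 600 GB logical volume named "lv_big"
lvcreate -L 600G -n lv_big vg_data

# Put a filesystem on it and mount it
mkfs -t xfs /dev/vg_data/lv_big
mount /dev/vg_data/lv_big /srv/data
```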


Best regards,
Sergio Cuellar
--
"Meine Hoffnung soll mich leiten
Durch die Tage ohne Dich
Und die Liebe soll mich tragen
Wenn der Schmerz die Hoffnung bricht"
Mike McCarty
2007-02-16 01:27:22 UTC
Post by Sergio Cuéllar Valdés
Hi,
maybe you should read about LVM [1]. It is not about file systems, but
it can help you :)
I'd rather deal with a case of the Clap.

LVM is worse than useless for most installations. It makes
the entire file system dependent on every drive in the Logical
Volume working. If any drive fails, then the entire FS becomes
corrupt. As you may know, as the number of devices goes up,
the MTBF goes down drastically, and the probability of failure
goes up dramatically. If one has a largish RAID, then LVM makes
sense, but without RAID or some other error correcting ability,
LVM makes the likelihood of a file system failure increase, and
makes the likelihood of recovery from it decrease, since the
normal recovery tools won't work.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
--
To UNSUBSCRIBE, email to debian-user-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Greg Folkert
2007-02-16 02:02:57 UTC
Post by Mike McCarty
Post by Sergio Cuéllar Valdés
Hi,
maybe you should read about LVM [1]. It is not about file systems, but
it can help you :)
I'd rather deal with a case of the Clap.
LVM is worse than useless for most installations. It makes
the entire file system dependent on every drive in the Logical
Volume working. If any drive fails, then the entire FS becomes
corrupt. As you may know, as the number of devices goes up,
the MTBF goes down drastically, and the probability of failure
goes up dramatically. If one has a largish RAID, then LVM makes
sense, but without RAID or some other error correcting ability,
LVM makes the likelihood of a file system failure increase, and
makes the likelihood of recovery from it decrease, since the
normal recovery tools won't work.
It depends on how you use LVM.

If you use LVM with logical extents backed by physical extent mirroring
(or redundant PEs), it handles failure modes just fine. Also, most people
ignore the noises portending a failure and continue on as if nothing
is wrong.

LVM comes from the "Enterprise" world, where you typically don't want
to be tied down by partitions (NetWare 5 and below were bad about
that). Basically, you want to be able to incrementally add extents to
a logical volume as needed. It lets you dole out space to various
file systems without worrying about a massive disk sub-system
upgrade.

Also, if you are able to add another disk to your volume group that is
as big as or larger than the failing one (note I said failing, not failed),
you can migrate everything off the failing disk to the newly added disk,
thereby saving yourself much headache.
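That migration is what LVM's pvmove is for. A sketch, assuming /dev/sdb1 is the failing physical volume, /dev/sdd1 the replacement, and vg_data the volume group (all names illustrative):

```shell
# Bring the new disk into the volume group
pvcreate /dev/sdd1
vgextend vg_data /dev/sdd1

# Move all allocated extents off the failing physical volume
# (can run while the logical volumes stay mounted)
pvmove /dev/sdb1 /dev/sdd1

# Drop the failing disk from the volume group once it is empty
vgreduce vg_data /dev/sdb1
```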

There are many benefits to LVM, not just your view of the failure modes.
Those failure modes are common for *ANY* disk sub-system, BTW.

And RAID does not mean GOOD EVEN DURING FAILURE. Just a month ago I got
a call to recover a failed 4-way mirrored system: two RAID1 arrays in
a RAID1 array (i.e. 4 disks all holding the same data). The company kept
silencing the alarm. Well, 3 months after the last one... they had been
running on a single disk that whole time. OOPS. No good backup of it
either. Just because you have RAID doesn't mean JACK. If you don't pay
attention, it's useless.
--
***@gregfolkert.net

Novell's Directory Services is a competitive product to Microsoft's
Active Directory in much the same way that the Saturn V is a competitive
product to those dinky little model rockets that kids light off down at
the playfield. -- Thane Walkup
Roberto C. Sanchez
2007-02-16 14:15:04 UTC
Post by Mike McCarty
I'd rather deal with a case of the Clap.
LVM is worse than useless for most installations. It makes
Because it is not designed for reliability, but for flexibility. This
is why it is best to have it ride on top of a reliability layer, like RAID.
Post by Mike McCarty
the entire file system dependent on every drive in the Logical
Volume working. If any drive fails, then the entire FS becomes
corrupt. As you may know, as the number of devices goes up,
the MTBF goes down drastically, and the probability of failure
goes up dramatically. If one has a largish RAID, then LVM makes
sense, but without RAID or some other error correcting ability,
LVM makes the likelihood of a file system failure increase, and
In fact it does not make the likelihood increase. The things that make
the likelihood of a failure increase are independent of LVM. Now, what
LVM often does is contribute to the severity or impact of a failure
because of the false sense of security it gives some admins.
Post by Mike McCarty
makes the likelihood of recovery from it decrease, since the
normal recovery tools won't work.
Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Andy Smith
2007-02-16 20:38:28 UTC
Post by Mike McCarty
Post by Sergio Cuéllar Valdés
maybe you should read about LVM [1]. It is not about file systems, but
it can help you :)
I'd rather deal with a case of the Clap.
LVM is worse than useless for most installations. It makes
the entire file system dependent on every drive in the Logical
Volume working.
You are entirely correct that when set up in a moronic fashion, LVM
is worse than useless. Also the breaking news is that when a blind
chimp tries to fly a plane, the chances of crashing and burning are
greatly increased.

Fortunately setting up LVM properly and flying planes without
needing to use blind chimps are both well within humanity's grasp.
--
http://bitfolk.com/ -- No-nonsense VPS hosting
Encrypted mail welcome - keyid 0x604DE5DB
m***@web.de
2007-02-19 19:08:31 UTC
Post by Mike McCarty
[...]
Post by Sergio Cuéllar Valdés
maybe you should read about LVM [1]. It is not about file systems, but
it can help you :)
I'd rather deal with a case of the Clap.
LVM is worse than useless for most installations. It makes
the entire file system dependent on every drive in the Logical
Volume working. If any drive fails, then the entire FS becomes
corrupt. As you may know, as the number of devices goes up,
the MTBF goes down drastically, and the probability of failure
goes up dramatically. If one has a largish RAID, then LVM makes
sense, but without RAID or some other error correcting ability,
LVM makes the likelihood of a file system failure increase, and
makes the likelihood of recovery from it decrease, since the
normal recovery tools won't work.
[...]
Actually this is correct only if one chooses to use LVM to stripe the
physical volumes. But nobody said it must be done that way. I use LVM
too and have two volume groups, one for each HD, each completely
managed by LVM. So each VG contains just one physical volume; if
one HD fails, well, shit happens, but that does not destroy the data on
my other VG.
And LVM provides more flexibility for partitioning. If I need some
more space on one partition and have another, unnecessarily big one, I
can shrink the filesystem on the bigger one, then shrink the logical
volume as well, and afterwards hand the freed extents over to the
partition where they are needed and grow its filesystem appropriately.
Plus I don't need to shift partitions around to get the wanted space
before or after the partition I want to grow. So that is IMHO a big
gain in flexibility. Of course this adds a small amount of fragmentation,
but after shifting extents here and there I have not noticed any
significant performance decrease.
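The reshuffle described above could be sketched like this for ext3 of that era (offline resize; the LV names, mount points, and sizes are made up for illustration):

```shell
# Shrink the oversized ext3 filesystem first (unmounted; fsck is required)
umount /srv/big
e2fsck -f /dev/vg0/lv_big
resize2fs /dev/vg0/lv_big 40G

# Only then shrink the logical volume to the same size
# (shrinking the LV below the filesystem size destroys data)
lvreduce -L 40G /dev/vg0/lv_big
mount /dev/vg0/lv_big /srv/big

# Hand the freed extents to the volume that needs them, then grow its
# filesystem to fill the LV (on older kernels this too may need a umount)
lvextend -L +10G /dev/vg0/lv_small
resize2fs /dev/vg0/lv_small
```

Note the ordering: when shrinking, the filesystem goes first and the LV second; when growing, it is the reverse.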

So this is where my suggestion for a filesystem comes into play. I
used XFS in the beginning of my experiments with LVM but am migrating
to ext3 now, since XFS can only be grown but not shrunk. But growing
*and* shrinking are both natively supported features of ext3. Also it
seems to be slightly faster than XFS on my setup. I recently migrated
my /home (about 16GB) partition from XFS to ext3, using some spare
diskspace with an intermediate ext3 partition. Copying from the XFS to
the ext3 partition took about 14 minutes, whilst copying from the
intermediate ext3 to the final ext3 /home took 12 minutes.


Regards
--
Marcus Blumhagen

"Any intelligent fool can make things bigger, more complex, and more
violent. It takes a touch of genius -- and a lot of courage -- to move
in the opposite direction."
-- Albert Einstein
Siju George
2007-04-02 11:05:03 UTC
Post by m***@web.de
So this is where my suggestion for a filesystem comes into play. I
used XFS in the beginning of my experiments with LVM but am migrating
to ext3 now, since XFS can only be grown but not shrunk. But growing
*and* shrinking are both natively supported features of ext3. Also it
seems to be slightly faster than XFS on my setup. I recently migrated
my /home (about 16GB) partition from XFS to ext3, using some spare
diskspace with an intermediate ext3 partition. Copying from the XFS to
the ext3 partition took about 14 minutes, whilst copying from the
intermediate ext3 to the final ext3 /home took 12 minutes.
But it seems ext3 has to be unmounted to increase or decrease in size, right?
That would mean downtime for the server.

ReiserFS seems to be the only file system from

http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html

that can be extended and shrunk while the file system is mounted and
online, without disrupting the services that the server offers.

I wonder how the shrinking takes place.
Are there possibilities for loss of data, and will it give any
warning before that happens?

Thankyou so much

Kind Regards

Siju
Kushal Kumaran
2007-04-02 12:13:30 UTC
Post by Siju George
Post by m***@web.de
So this is where my suggestion for a filesystem comes into play. I
used XFS in the beginning of my experiments with LVM but am migrating
to ext3 now, since XFS can only be grown but not shrunk. But growing
*and* shrinking are both natively supported features of ext3. Also it
seems to be slightly faster than XFS on my setup. I recently migrated
my /home (about 16GB) partition from XFS to ext3, using some spare
diskspace with an intermediate ext3 partition. Copying from the XFS to
the ext3 partition took about 14 minutes, whilst copying from the
intermediate ext3 to the final ext3 /home took 12 minutes.
But it seems ext3 has to be unmounted to increase and decrease in size right?
That would mean downtime for server.
ReiserFS seems to be only file system from
http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html
that can be extended and shrunk while the file systems are mounted and
are online without disrupting the Services that the Server offers.
The resize_reiserfs manpage says that the filesystem has to be
unmounted before shrinking. IIRC, it will refuse to shrink if it is
mounted, although I don't know what happens if you try shrinking with
too much data on the filesystem.

Extending is no problem.
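Given the manpage behaviour described above, a ReiserFS resize would look roughly like this (device names and sizes are illustrative, not from this thread):

```shell
# Growing works while mounted:
lvextend -L +20G /dev/vg0/lv_mail
resize_reiserfs /dev/vg0/lv_mail   # grows the fs to fill the LV

# Shrinking requires the filesystem to be unmounted,
# and the fs must shrink *before* the LV does:
umount /var/mail
resize_reiserfs -s -10G /dev/vg0/lv_mail
lvreduce -L -10G /dev/vg0/lv_mail
mount /dev/vg0/lv_mail /var/mail
```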
--
Kushal
Roberto C. Sánchez
2007-04-02 12:47:13 UTC
Post by Siju George
But it seems ext3 has to be unmounted to increase and decrease in size right?
That would mean downtime for server.
There is currently an experimental online resizing patch for ext3. I am
not sure if it is already in the kernel, but it is out there.
Post by Siju George
ReiserFS seems to be only file system from
http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html
that can be extended and shrunk while the file systems are mounted and
are online without disrupting the Services that the Server offers.
I wonder how the shrinking takes place.
If there are possiblilites for loss of data and if it will give any
warning before it happens and so on?
Not sure about reiser, but XFS does support online growing of the
filesystem. It does not support shrinking the filesystem (online or
offline).
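The online XFS grow mentioned above runs against the *mounted* filesystem; xfs_growfs takes the mount point, not the device (names below are illustrative):

```shell
# Enlarge the underlying logical volume first
lvextend -L +50G /dev/vg0/lv_scratch

# Then grow the mounted XFS filesystem to fill the new space,
# with no unmount and no service interruption
xfs_growfs /scratch
```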

Regards,

-Roberto
--
Roberto C. Sánchez
http://people.connexer.com/~roberto
http://www.connexer.com
Matus UHLAR - fantomas
2007-02-18 18:09:50 UTC
Post by Sergio Cuéllar Valdés
Post by Siju George
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
maybe you should read about LVM [1]. It is not about file systems, but
it can help you :)
He asked about filesystems. I don't understand this: why do you recommend
using LVM when a user asks about a filesystem?
--
Matus UHLAR - fantomas, ***@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
A day without sunshine is like, night.
Ron Johnson
2007-02-15 20:04:41 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Why? Sarge on that is definitely not supported. You should
definitely go with Etch.
Roberto C. Sanchez
2007-02-15 20:19:27 UTC
Post by Ron Johnson
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Why? Sarge on that is definitely not supported. You should
definitely go with Etch.
Why would Sarge not be supported on that?

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Johannes Wiedersich
2007-02-15 20:28:06 UTC
Post by Roberto C. Sanchez
Post by Ron Johnson
Post by Siju George
I am considering XFS. The System is Debian Sarge for amd64.
Why? Sarge on that is definitely not supported. You should
definitely go with Etch.
Why would Sarge not be supported on that?
Because sarge was not released for amd64 [1]. There is only an
unofficial, unsupported version.

Johannes

[1] http://www.de.debian.org/releases/stable/
Ron Johnson
2007-02-15 20:29:20 UTC
Post by Roberto C. Sanchez
Post by Ron Johnson
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Why? Sarge on that is definitely not supported. You should
definitely go with Etch.
Why would Sarge not be supported on that?
Because the AMD64 tree did not get merged into the official system
until after Sarge was released.

http://www.debian.org/releases/stable/

The following computer architectures are supported in this release:

* Alpha
* ARM
* HP PA-RISC
* Intel x86
* Intel IA-64
* Motorola 680x0
* MIPS
* MIPS (DEC)
* PowerPC
* IBM S/390
* SPARC
Bob McGowan
2007-02-15 20:22:21 UTC
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Why? Sarge on that is definitely not supported. You should
definitely go with Etch.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)
iD8DBQFF1LzZS9HxQb37XmcRAtR+AKCKUGQG2nwcG/2m3inFtZcIt7lkeQCg3Z8W
sMVPLyDuLD1x96pSOkEpv9k=
=WsCC
-----END PGP SIGNATURE-----
This is true, but the OP's question goes unanswered.

I'd like to know, also: Are there any device size limits or issues for
the various filesystem types available?

And a corollary: are there reliability/speed/seek/read/write
differences that would have an impact on performance as the size increases?

I think these are valid questions, regardless of the Linux version being
discussed.

Bob
Roberto C. Sanchez
2007-02-15 20:18:54 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
The issue is not so much the size of the partition, but rather what you
intend to do with it. For mostly large files, XFS is the best. For
lots of small files (think a filesystem holding Maildirs for thousands
of users), I am told ReiserFS is best. For general purpose, ext3 is
still the best.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
h***@topoi.pooq.com
2007-02-16 12:41:51 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
Thankyou so much
kind regards
Siju
http://en.wikipedia.org/wiki/Comparison_of_file_systems

Also lots of links in
http://en.wikipedia.org/wiki/File_systems

-- hendrik
Henrique de Moraes Holschuh
2007-02-16 17:13:51 UTC
Post by Siju George
Could some one recommend which File System is best for partitions above 600GB?
Depends on the use profile.
Post by Siju George
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
XFS does not take well to non-clean unmounts of any sort, and does not
journal data so the file data WILL be corrupt after a crash with in-flight
writes.

Any currently available filesystem will take ages to repair at that size,
AFAIK, at least when you have the very large number of files one usually
stores in big partitions. If you are going to use it to store 600 1GB
files, then it may not matter nearly as much.

If you can have a large number of small filesystems, it is much better for
disaster recovery. And for that you will *have* to use LVM once you hit a
non-trivial number of partitions.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Michelle Konzack
2007-02-21 16:44:33 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
I run a PostgreSQL database (currently around 560 GByte)
on a 1 TByte partition using ext3 without any problems.

I cannot recommend ReiserFS, and with XFS I have no experience.
I am looking forward to the new "ext4", which could give a
performance plus for databases.

Thanks, Greetings and nice Day
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
50, rue de Soultz MSM LinuxMichi
0033/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
Paul Johnson
2007-02-22 00:38:06 UTC
Post by Michelle Konzack
I can not recommend ReiserFS
I second that advisory. I've found problems with data corruption on my
system using reiser.
Post by Michelle Konzack
and with XFS I have no experience.
I am looking forward to the new "ext4" which could give a
performance plus for databases
I tried XFS, but I discovered I got better performance out of JFS.
Douglas Allan Tutty
2007-03-12 01:58:29 UTC
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
There are a few comparisons out there, but you need to look at the
design philosophies in relation to your application. There have been
some problems with ReiserFS (no references, but there were messages on
debian-user a while ago). When I looked at this it came down to a
choice between XFS and JFS. There have also been a lot of threads on
this topic on debian-user in the past few months.

Try Wikipedia and google site:ibm.com

I looked at it this way: JFS was designed by IBM for server (database)
type filesystems, so all their AIX boxes run JFS. Cray uses XFS for its
compute stuff. IIRC, neither journals the data (only metadata), so after
a crash the filesystem itself will be intact and a fast reboot is
possible, but there could be some data corruption. ext3 journals data as
well as metadata, but takes forever to check after a crash, and there
can still be errors.

I went from ext3 to reiser, had errors on power failure with both,
then went to JFS and have had no problems since. YMMV.

Doug.

Doug.
Roberto C. Sanchez
2007-03-12 02:21:01 UTC
Post by Douglas Allan Tutty
Post by Siju George
Hi,
Could some one recommend which File System is best for partitions above 600GB?
I am considering XFS. The System is Debian Sarge for amd64.
Hope there are no issues with this setup. please let me know if i
should be careful in any area.
Also if a better file system suits for such large partitions :-)
There are a few comparisions out there but you need to look at the
design philosophies in relation to your application. There have been
some problems with ReiserFS (no references, but there were messages on
debian-user a while ago). When I looked at this it came down to a
choice between XFS and JFS. There have also been a lot of threads on
this topic on debian-user in the past few months.
Try wikipedia and google site:ibm.com
I looked at it this way: JFS was designed by IBM for server (database)
type filesystems so all their AIX boxes run JFS. Cray uses XFS for its
compute stuff. IFRC neither journals the data (only metadata) so after
a crash, the filesystem itself will be intact and a fast reboot is
possible but there could be some data corruption. ext3 journals data as
well as metadata but takes forever to regenerate after a crash and there
can still be errors.
I went from ext3 to reiser and having errors on power failure with both
went to JFS and have had no problems since. YMMV.
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
filesystem. I took to reading up on ext3 and judiciously set things
like the block size and some of the other filesystem parameters so that
crash recovery would not take ages and so that performance would be a
bit better. Of course, since Debian supports both XFS and JFS quite
nicely, I would opt for one of those.
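The kind of tuning alluded to above happens mostly at mkfs time. A hypothetical invocation for a large ext3 filesystem might look like this; every parameter value here is illustrative, not the actual settings used on that server:

```shell
# 4 KiB blocks; the "largefile4" usage type allocates roughly one inode
# per 4 MiB, which speeds up fsck on filesystems holding big files;
# a larger journal helps sustained writes (values are examples only)
mke2fs -j -b 4096 -T largefile4 -J size=128 /dev/vg0/lv_export

# Reduce the root-reserved space from 5% to 1% and set a label
tune2fs -m 1 -L export /dev/vg0/lv_export
```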

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Mike McCarty
2007-03-12 17:43:16 UTC
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
Post by Roberto C. Sanchez
filesystem. I took to reading up on ext3 and judiciously set things
like the block size and some of the other filesystem parameters so that
crash recovery would not take ages and so that performance would be a
bit better. Of course, since Debian supports both XFS and JFS quite
Care to share your insights? Or at least pointers where one may
obtain similar insights? Those of us who use ext3 would appreciate
any distillation of the information.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Ron Johnson
2007-03-12 18:20:57 UTC
Post by Mike McCarty
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
"unfortunately" != hatred.
Post by Mike McCarty
Post by Roberto C. Sanchez
filesystem. I took to reading up on ext3 and judiciously set things
like the block size and some of the other filesystem parameters so that
crash recovery would not take ages and so that performance would be a
bit better. Of course, since Debian supports both XFS and JFS quite
Care to share your insights? Or at least pointers where one may
obtain similar insights? Those of us who use ext3 would appreciate
any distillation of the information.
Mike
John Hasler
2007-03-12 18:41:31 UTC
Roberto C. Sanchez wrote:
At work, I have a production
server (running RHEL, unfortunately)...
Why unfortunately?
Perhaps because he feels unfortunate in having to maintain multiple
distributions.
--
John Hasler
Roberto C. Sanchez
2007-03-12 20:07:30 UTC
Post by Mike McCarty
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
Ever worked with RHEL or Fedora (or Red Hat before that)? They have
their very own little RedHat-specific way of organizing /etc. Many of
the things that they do go against pretty much every other distro
(except for those which specifically try to emulate RedHat).

I don't hate RHEL. I just hate some of the broken defaults.
Additionally, they have *Enterprise* right in the name, but don't have
out of the box support for any filesystem other than ext2/3 (except for
maybe ReiserFS, but I would hardly call that enterprise quality). I
currently have a server in production which (because of where it is
located and the security policies of the organization/facility where it
is located), must run RHEL3 or RHEL4. Before I rebuilt it using RHEL4,
it was using RHEL3 to serve up three volumes from two external RAID
trays via NFS. It had to be three different volumes because, under RHEL3,
the biggest filesystem that could be supported out of the box was 2 TB
(because of the 2.4 kernel and some other userland utility limitations).

I rebuilt that machine using RHEL4 so that the users would only need to
access one volume. Since I knew right off the bat that it would be a
single volume of about 6 TB and that we would later want to add more
storage, I went looking for the XFS or JFS packages on the install CDs.
When I couldn't find them, I went into the #rhel channel and asked
around in there if anyone knew why RHEL did not support XFS or JFS. The
responses I got were along the line of, "ext3 is fine for everything."
To which I replied, "what about for filesystems over 8 TB?" Of course,
the answer to that was "build a cluster with GFS."

Ordinarily, I would just get the sources to the kernel and the
associated userland tools and build them myself. But, security at this
place would simply not go for it. So, in short, they make the life of
the admin exceptionally difficult if you want to do something which they
(the RHEL designers/developers) did not think you would want to do.
Post by Mike McCarty
Post by Roberto C. Sanchez
filesystem. I took to reading up on ext3 and judiciously set things
like the block size and some of the other filesystem parameters so that
crash recovery would not take ages and so that performance would be a
bit better. Of course, since Debian supports both XFS and JFS quite
Care to share your insights? Or at least pointers where one may
obtain similar insights? Those of us who use ext3 would appreciate
any distillation of the information.
There is a ton of information about JFS and XFS on the net. All you
need to do is check the Wikipedia filesystem comparison page or Google
search for filesystem comparisons. The short of it is:

ext3 - good general purpose FS (not the best performance, but stable)
xfs - excellent performance with huge files and huge filesystems
jfs - similar to XFS, but I think it has better performance when under
heavy I/O load
reiserfs - good with lots of small files, and when you don't really value
your data (not that well understood)

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Mike McCarty
2007-03-12 21:01:00 UTC
Post by Roberto C. Sanchez
Post by Mike McCarty
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
Ever worked with RHEL or Fedora (or Red Hat before that)? They have
I don't run Debian.

$ uname -a
Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005 i686
i686 i386 GNU/Linux

The first Linux I installed was Red Hat 6.something.

I help my girlfriend administer her machine, which has Debian on it.
That's why I subscribe here.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Roberto C. Sanchez
2007-03-12 21:05:09 UTC
Post by Mike McCarty
Post by Roberto C. Sanchez
Post by Mike McCarty
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
Ever worked with RHEL or Fedora (or Red Hat before that)? They have
I don't run Debian.
$ uname -a
Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005 i686
i686 i386 GNU/Linux
The first Linux I installed was Red Hat 6.something.
I see. My first Linux install was RedHat 8. When a friend showed me
Debian I knew that it was possible for things to make sense.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Mike McCarty
2007-03-13 06:56:14 UTC
Permalink
Post by Roberto C. Sanchez
Post by Mike McCarty
Post by Roberto C. Sanchez
Ever worked with RHEL or Fedora (or Red Hat before that)? They have
I don't run Debian.
$ uname -a
Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005 i686
i686 i386 GNU/Linux
The first Linux I installed was Red Hat 6.something.
I see. My first Linux install was RedHat 8. When a friend showed me
Debian I knew that it was possible for things to make sense.
Interesting. My girlfriend is getting very close to tossing
Debian out. Just last weekend, I convinced her to hang on
for just a bit longer, before reinstalling Windows. She's
pretty tired of no sound, inability to use a USB mouse,
inability to use her printer fully, inability to use her
camera's memory stick, and lack of response from Debian
maintainers.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Ron Johnson
2007-03-13 07:50:40 UTC
Permalink
Post by Mike McCarty
Post by Roberto C. Sanchez
Post by Mike McCarty
Post by Roberto C. Sanchez
Ever worked with RHEL or Fedora (or Red Hat before that)? They have
I don't run Debian.
$ uname -a
Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005
i686 i686 i386 GNU/Linux
The first Linux I installed was Red Hat 6.something.
I see. My first Linux install was RedHat 8. When a friend showed me
Debian I knew that it was possible for things to make sense.
Interesting. My girlfriend is getting very close to tossing
Debian out. Just last weekend, I convinced her to hang on
for just a bit longer, before reinstalling Windows. She's
pretty tired of no sound, inability to use a USB mouse,
inability to use her printer fully, inability to use her
camera's memory stick, and lack of response from Debian
maintainers.
All of those should work, with (depending on the card/chip) the
possible exception of sound.

If you can't make Debian work, install Ubuntu. That's what it's for.

And don't feel yourself a failure. I couldn't get RH5.2 installed,
and, when it was time to buy a new computer, bought one
pre-installed with Mandrake 6.0. It took me quite a while to get
used to The Unix Way.
Post by Mike McCarty
Mike
Mike McCarty
2007-03-13 17:53:45 UTC
Permalink
Ron Johnson wrote:

[snip]
Post by Ron Johnson
All of those should work, with (depending on the card/chip) the
possible exception of sound.
If you can't make Debian work, install Ubuntu. That's what it's for.
And don't feel yourself a failure. I couldn't get RH5.2 installed,
and, when it was time to buy a new computer, bought one
pre-installed with Mandrake 6.0. It took me quite a while to get
used to The Unix Way.
I've been using *NIX-like OSes since, umm, 1984 or so. I guess I'm
accustomed to "The *NIX Way". I just don't like it.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Ron Johnson
2007-03-13 18:59:55 UTC
Permalink
Post by Celejar
[snip]
Post by Ron Johnson
All of those should work, with (depending on the card/chip) the
possible exception of sound.
If you can't make Debian work, install Ubuntu. That's what it's for.
And don't feel yourself a failure. I couldn't get RH5.2 installed,
and, when it was time to buy a new computer, bought one
pre-installed with Mandrake 6.0. It took me quite a while to get
used to The Unix Way.
I've been using *NIX like OS since, umm, 1984 or so. I guess I'm
accustomed to "The *NIX Way". I just don't like it.
Which "it"? Unix "it" or Debian "it"?
Post by Celejar
Mike
Paul Johnson
2007-03-14 01:47:20 UTC
Permalink
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Post by Celejar
[snip]
Post by Ron Johnson
All of those should work, with (depending on the card/chip) the
possible exception of sound.
If you can't make Debian work, install Ubuntu. That's what it's for.
And don't feel yourself a failure. I couldn't get RH5.2 installed,
and, when it was time to buy a new computer, bought one
pre-installed with Mandrake 6.0. It took me quite a while to get
used to The Unix Way.
I've been using *NIX like OS since, umm, 1984 or so. I guess I'm
accustomed to "The *NIX Way". I just don't like it.
Which "it"? Unix "it" or Debian "it"?
There's a difference? :o) Seriously, RPM-based distros are the Diet Coke
of Unix. Just one calorie, not unix enough.
Mathias Brodala
2007-03-12 21:42:45 UTC
Permalink
Hi Roberto.
Post by Roberto C. Sanchez
There is a ton of information about JFS and XFS on the net. All you
need to do is check the Wikipedia filesystem comparison page or Google
ext3 - good general purpose FS (not the best performance, but stable)
xfs - excellent performance with huge files and huge filesystems
jfs - similar to XFS but I think it has better performance when under
heavy I/O load
Could you define 'huge files' and 'huge filesystems'? Can you give me some numbers?


Regards, Mathias
--
debian/rules
Roberto C. Sanchez
2007-03-12 22:15:14 UTC
Permalink
Post by Mathias Brodala
Hi Roberto.
Post by Roberto C. Sanchez
There is a ton of information about JFS and XFS on the net. All you
need to do is check the Wikipedia filesystem comparison page or Google
ext3 - good general purpose FS (not the best performance, but stable)
xfs - excellent performance with huge files and huge filesystems
jfs - similar to XFS but I think it has better performance when under
heavy I/O load
Could you define 'huge files' and 'huge filesystems'? Can you give me some numbers?
At work we deal with files of size 1 GB to 100 GB on a regular basis. I
would classify those as large. XFS supports files up to a size of 8
exabytes and filesystems also of size 8 exabytes. I am not sure of the
limitations on JFS.
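For a sense of scale, a quick back-of-the-envelope comparison (my own arithmetic in binary units, not figures from the XFS documentation):

```python
# How many "large" 100 GiB files fit under XFS's 8 EiB single-file limit?
limit_bytes = 8 * 2**60          # 8 EiB
large_file_bytes = 100 * 2**30   # 100 GiB

ratio = limit_bytes // large_file_bytes
print(ratio)  # -> 85899345
```

In other words, even the largest files we handle sit many orders of magnitude below the limit.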

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Mathias Brodala
2007-03-12 22:34:48 UTC
Permalink
Hi Roberto.
Post by Roberto C. Sanchez
Post by Mathias Brodala
Post by Roberto C. Sanchez
There is a ton of information about JFS and XFS on the net. All you
need to do is check the Wikipedia filesystem comparison page or Google
ext3 - good general purpose FS (not the best performance, but stable)
xfs - excellent performance with huge files and huge filesystems
jfs - similar to XFS but I think it has better performance when under
heavy I/O load
Could you define 'huge files' and 'huge filesystems'? Can you give me some numbers?
At work we deal with files of size 1 GB to 100 GB on a regular basis. I
would classify those as large.
I see. I was asking since I have a whole drive full of videos and such which are
usually between 100MB and 300MB per file. So I guess XFS would not really be the
best choice for them. I got ext3 everywhere at the moment and wondered if I
could get a bit more performance by using another filesystem. And since I only
used ext3 up until now, I don’t really know which other filesystem to trust.
Post by Roberto C. Sanchez
XFS supports files up to a size of 8
exabytes and filesystems also of size 8 exabytes. I am not sure of the
limitations on JFS.
OK, that seems only important for enterprise levels. I don’t think that I will
reach these sizes at the moment.


Regards, Mathias
--
debian/rules
Roberto C. Sanchez
2007-03-12 23:06:43 UTC
Permalink
Post by Mathias Brodala
Hi Roberto.
I see. I was asking since I have a whole drive full of videos and such which are
usually between 100MB and 300MB per file. So I guess XFS would not really be the
best choice for them. I got ext3 everywhere at the moment and wondered if I
could get a bit more performance by using another filesystem. And since I only
used ext3 up until now, I don't really know which other filesystem to trust.
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage. How are
your video files being used? Played locally? Streamed to one or two
devices? Streamed to hundreds of devices?

Unless you are streaming to many devices, it is likely that you are not
yet hitting a bottleneck. As they say, "if it ain't broke." That said,
do you notice a particular performance problem?
Post by Mathias Brodala
Post by Roberto C. Sanchez
XFS supports files up to a size of 8
exabytes and filesystems also of size 8 exabytes. I am not sure of the
limitations on JFS.
OK, that seems only important for enterprise levels. I don't think that I will
reach these sizes at the moment.
I read on Slashdot a while back that Seagate announced 37.5 TB drives
will be available in a few years. Petabyte-sized home RAIDs won't be
far off :-)
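Taking those numbers at face value (decimal terabytes, no redundancy overhead, and the drive size is of course still hypothetical), a petabyte array out of such drives:

```python
import math

drive_tb = 37.5     # hypothetical future drive size from the announcement
target_tb = 1000.0  # 1 PB in decimal terabytes

drives = math.ceil(target_tb / drive_tb)
print(drives)  # -> 27
```

So a "home petabyte" would need fewer than thirty spindles, before any RAID redundancy.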

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Mathias Brodala
2007-03-12 23:12:06 UTC
Permalink
Hello Roberto.
Post by Roberto C. Sanchez
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage. How are
your video files being used? Played locally? Streamed to one or two
devices? Streamed to hundreds of devices?
Only played locally, sometimes distributed, and seldom streamed over a local
network.
Post by Roberto C. Sanchez
Unless you are streaming to many devices, it is likely that you are not
yet hitting a bottleneck. As they say, "if it ain't broke." That said,
do you notice a particular performance problem?
Not really, but I might be missing a performance boost simply because I have
never tried other filesystems.
Post by Roberto C. Sanchez
Post by Mathias Brodala
Post by Roberto C. Sanchez
XFS supports files up to a size of 8
exabytes and filesystems also of size 8 exabytes. I am not sure of the
limitations on JFS.
OK, that seems only important for enterprise levels. I don't think that I will
reach these sizes at the moment.
I read on Slashdot a while back that Seagate announced 37.5 TB drives
will be available in a few years.
Ouch. I’m thinking about getting a 750GB Seagate at the moment, if only the price
gets a bit lower.
Post by Roberto C. Sanchez
Petabyte-sized home RAIDs won't be
far off :-)
What I cannot really imagine at the moment might be true in a few years. Let’s
see what the glorious future brings.


Regards, Mathias
--
debian/rules
Ron Johnson
2007-03-13 00:20:43 UTC
Permalink
Post by Mathias Brodala
Hello Roberto.
[snip]
Post by Mathias Brodala
Post by Roberto C. Sanchez
I read on Slashdot a while back that Seagate announced 37.5 TB
drives will be available in a few years.
Ouch. I'm thinking about getting a 750GB Seagate at the moment, if
only the price gets a bit lower.
Post by Roberto C. Sanchez
Petabyte-sized home RAIDs won't be far off :-)
What I cannot really imagine at the moment might be true in a few
years. Let's see what the glorious future brings.
37.5TB (or even 1TB) passing thru a 100MBps pipe (which can only max
out at 133MBps, unless mobo design moves towards "internal" PCIe)
sounds really painful.
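Rough numbers behind that pain (my own arithmetic, assuming "MBps" means megabytes per second and a sustained 133 MB/s classic PCI bus):

```python
# Time to move 37.5 TB (decimal) through a 133 MB/s bus.
size_mb = 37.5e6    # 37.5 TB = 37,500,000 MB
rate_mb_s = 133.0   # 32-bit/33 MHz PCI bus ceiling

days = size_mb / rate_mb_s / 86400
print(round(days, 1))  # -> 3.3
```

Even a single full pass over such a drive would take days at that rate.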

The side-effect of needing to throw lots of spindles and SCSI cards
and PCI busses at large data stores is that it gives you lots of
throughput.
Post by Mathias Brodala
Regards, Mathias
Eduard Bloch
2007-03-13 11:58:31 UTC
Permalink
#include <hallo.h>
Post by Roberto C. Sanchez
Post by Mathias Brodala
Hi Roberto.
I see. I was asking since I have a whole drive full of videos and such which are
usually between 100MB and 300MB per file. So I guess XFS would not really be the
best choice for them. I got ext3 everywhere at the moment and wondered if I
could get a bit more performance by using another filesystem. And since I only
used ext3 up until now, I don't really know which other filesystem to trust.
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage. How are
Great, that is the usual propaganda from XFS users, with the same lame
excuse written in the small print. It has this bad tendency to shred
file contents after power outages or sudden kernel crashes... silently
inserting lots of 0x0s; IIRC sometimes only a 512-byte block, sometimes
filling the rest of a file past a certain position. I cannot prove it
either; it is just the experience I have had every time I tried
XFS in recent years. And every time I came back to ext3, where I
cannot remember such trouble.

Eduard.
--
<HE> meebey: Mail kannst du eigentlich nicht verschachteln ... und dann
kam MIME.
Tarek Soliman
2007-03-13 12:31:10 UTC
Permalink
Post by Eduard Bloch
Post by Roberto C. Sanchez
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage.
Great, that is the usual propaganda from XFS users with the same lame
excuse written with small letters. It has this bad tendency to shred the
file contents after powerouts or sudden kernel crashes... silently
inserting lots of 0x0s, IIRC sometimes only a 512 byte block, sometimes
filling the rest of a file after a certain position. I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years. And every time I came back to ext3 where I can
not remember such trouble.
What about hard locks? Will the magical keystrokes prevent these
disasters with XFS?

Most of my JFS/XFS usage has been for data which I cannot back up (MythTV
video recordings that are going to be erased a few days after watching them),
and therefore I don't care about it being lost (compared to, say, /etc or
/home, which do get backups and are on ext3)
--
Tarek
Daniel Palmer
2007-03-13 12:36:38 UTC
Permalink
Post by Eduard Bloch
#include <hallo.h>
Post by Roberto C. Sanchez
Post by Mathias Brodala
Hi Roberto.
I see. I was asking since I have a whole drive full of videos and such which are
usually between 100MB and 300MB per file. So I guess XFS would not really be the
best choice for them. I got ext3 everywhere at the moment and wondered if I
could get a bit more performance by using another filesystem. And since I only
used ext3 up until now, I don't really know which other filesystem to trust.
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage. How are
Great, that is the usual propaganda from XFS users with the same lame
excuse written with small letters. It has this bad tendency to shred the
file contents after powerouts or sudden kernel crashes... silently
inserting lots of 0x0s, IIRC sometimes only a 512 byte block, sometimes
filling the rest of a file after a certain position. I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years. And every time I came back to ext3 where I can
not remember such trouble.
Eduard.
This happened in the past, but I haven't experienced it recently. The
general propaganda for XFS is justified in my opinion. I've never seen
an XFS filesystem explode the way a Reiser or ext3 one can, i.e. loss
of the entire filesystem. And the fs support tools are very nice.

Then again, everyone has their story of "I had this happen with X
filesystem so I switched to Y and it never happened again (in the mean
time I replaced my crappy hardware with newer stuff which actually fixed
the issues I was having, but I won't mention that for the sake of
tooting my chosen fs's horn), so Y filesystem is the greatest filesystem
on Earth and you must use it or be labelled a noob. *roaring BOFH laugh*".

XFS is good for big files, I have big files, I like XFS ...
Douglas Allan Tutty
2007-03-13 13:02:10 UTC
Permalink
Post by Eduard Bloch
#include <hallo.h>
Post by Roberto C. Sanchez
Post by Mathias Brodala
I see. I was asking since I have a whole drive full of videos and such which are
usually between 100MB and 300MB per file. So I guess XFS would not really be the
best choice for them. I got ext3 everywhere at the moment and wondered if I
could get a bit more performance by using another filesystem. And since I only
used ext3 up until now, I don't really know which other filesystem to trust.
I would certainly trust XFS. Of course, if you don't have your machine
on an UPS, it can cause problems on a crash or power outage. How are
Great, that is the usual propaganda from XFS users with the same lame
excuse written with small letters. It has this bad tendency to shred the
file contents after powerouts or sudden kernel crashes... silently
inserting lots of 0x0s, IIRC sometimes only a 512 byte block, sometimes
filling the rest of a file after a certain position. I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years. And every time I came back to ext3 where I can
not remember such trouble.
I avoided XFS for this reason. I went with JFS. If you read IBM's
design philosophy on it, it is designed to get a server back up and
running ASAP with data intact after a crash or power failure. When I
made the switch, I didn't have a UPS and I did have unreliable power (I
eventually put the whole house on a big UPS). JFS has been perfect.

Doug.
Michelle Konzack
2007-03-27 14:53:21 UTC
Permalink
Post by Douglas Allan Tutty
running ASAP with data intact after a crash or power failure. When I
made the switch, I didn't have a UPS and I did have unreliable power (I
eventually put the whole house on a big UPS). JFS has been perfect.
ROTFL!

This is why I have installed 20 gel batteries from Sonnenschein (G120),
which means: 24V/1200Ah = 28.8kWh

A solar charger with 16 x 75W panels and a 230/400V standard charger of
3.6kVA, plus a Yamaha ship generator of some kVA.

The computers use DC/DC (24V) converters plugged directly into the
ATX power connector on the mainboard...

The rest goes over the DC/AC converter of 7kVA/5kW (not really used
often)
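The quoted capacity is simple arithmetic (nominal values, ignoring depth-of-discharge limits and conversion losses):

```python
voltage = 24        # V, nominal bank voltage
capacity_ah = 1200  # Ah, quoted battery bank capacity
energy_kwh = voltage * capacity_ah / 1000
print(energy_kwh)   # -> 28.8
```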


Thanks, Greetings and nice Day
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
50, rue de Soultz MSN LinuxMichi
0033/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
Roberto C. Sanchez
2007-03-13 14:07:30 UTC
Permalink
Post by Eduard Bloch
Great, that is the usual propaganda from XFS users with the same lame
excuse written with small letters.
How is it propaganda? It was a statement of fact.
Post by Eduard Bloch
It has this bad tendency to shred the
file contents after powerouts or sudden kernel crashes... silently
inserting lots of 0x0s, IIRC sometimes only a 512 byte block, sometimes
filling the rest of a file after a certain position.
FYI, *any* filesystem has the potential to lose data on a sudden power
outage.
Post by Eduard Bloch
I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years.
So, in other words, you are giving anecdotal "evidence" as the backing
for sweeping generalizations?
Post by Eduard Bloch
And every time I came back to ext3 where I can
not remember such trouble.
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Tarek Soliman
2007-03-13 14:34:45 UTC
Permalink
Post by Roberto C. Sanchez
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
I assume quality hardware is mutually exclusive with a home PC
Is that correct?
--
Tarek
Greg Folkert
2007-03-13 15:01:38 UTC
Permalink
Post by Tarek Soliman
Post by Roberto C. Sanchez
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
I assume quality hardware is mutually exclusive with a home PC
Is that correct?
No. There are quite a few recent studies and statistical reports from
companies like Google disputing the conventional hard-drive wisdom. It seems
"regular" home-system drives and "enterprise" drives
have nearly the same failure and problem rates. Also, some "consumer"
drives actually outperform many "enterprise" drives.

The real differences between the hard drives are being pinned down to
the "electronics" on the printed circuit board included on the drive.

Also, the GigE NICs included on many of the
"servers" I have recently purchased use the same chipsets I find on
"consumer" motherboards. They are driven by the same drivers and
get the same throughput.

Now as far as video, who cares about that... servers don't need GUI
stuff.

And now we get to motherboards. In all reality, now that PCIe has come
along, server motherboards are showing up with only one PCI-X slot and
many PCIe slots. Once again, similar chipsets, if not the same ones, with
features unsupported on "consumer" versions and enabled via
additional support chips on the "server" versions.

Realistically speaking, it comes down to the quality of the power supplies and
the engineering of the case and the airflow through it that typically
distinguishes servers from workstation/consumer machines. Even then, some
high-end workstations have better airflow design than many servers.

Ever crack open a well-designed 1U server capable of 2-4 CPUs? I have;
it is all about the airflow. My personal HP ProLiant DL145 G2 has 15
double fans (back to back for redundancy) for airflow, all thermally
controlled for variable speed.
--
greg, ***@gregfolkert.net

Novell's Directory Services is a competitive product to Microsoft's
Active Directory in much the same way that the Saturn V is a competitive
product to those dinky little model rockets that kids light off down at
the playfield. -- Thane Walkup
Tarek Soliman
2007-03-13 15:33:38 UTC
Permalink
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)

Yes they really have X on ALL of the servers.
--
Tarek
Roberto C. Sanchez
2007-03-13 15:39:54 UTC
Permalink
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
I unfortunately deal with similar situations often. It doesn't help
that many "enterprise" software packages assume that the admin will
install using a local GUI (*cough* Oracle *cough*).

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Ron Johnson
2007-03-13 16:18:14 UTC
Permalink
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
That's the root cause of why I left Mandrake. Packages are so
globulous and coarse-grained that some upgrades are (or, were) just
impossible without totally reinstalling the system.
Greg Folkert
2007-03-13 19:43:03 UTC
Permalink
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
WTF, I see Windows mentality has become the norm.

and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by OpenBSD devs) is what should be the de facto standard.

You have IDIOT admins.
--
greg, ***@gregfolkert.net

Novell's Directory Services is a competitive product to Microsoft's
Active Directory in much the same way that the Saturn V is a competitive
product to those dinky little model rockets that kids light off down at
the playfield. -- Thane Walkup
Roberto C. Sanchez
2007-03-13 23:41:24 UTC
Permalink
Post by Greg Folkert
WTF, I see Windows mentality has become the norm.
and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by OpenBSD devs) is what should be defacto standard.
I wish it could really be that way everywhere. I have been places where
they run telnetd on all the Solaris and Linux servers because (get this)
Windows only comes with a telnet client and not an ssh client.

Absolutely. Exasperating.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Paul Johnson
2007-03-14 01:54:22 UTC
Permalink
Roberto C. Sanchez wrote in Article
Post by Roberto C. Sanchez
Post by Greg Folkert
WTF, I see Windows mentality has become the norm.
and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by OpenBSD devs) is what should be defacto standard.
I wish it could really be that way everywhere. I have been places where
they run telnetd on all the Solaris and Linux servers because (get this)
windows only comes with a telnet client and not an ssh client.
They do know about putty, right? It's only a few kB...
Roberto C. Sanchez
2007-03-14 12:35:53 UTC
Permalink
Post by Paul Johnson
Roberto C. Sanchez wrote in Article
Post by Roberto C. Sanchez
I wish it could really be that way everywhere. I have been places where
they run telnetd on all the Solaris and Linux servers because (get this)
windows only comes with a telnet client and not an ssh client.
They do know about putty, right? It's only a few kB...
I know about it. But (and you might want to sit down for this) I was
once at a place where I suggested PuTTY and they said no, citing that it
was developed by a foreigner. I didn't have the heart to tell them that
all their Linux (and even Windows) machines were running oodles of
software developed by foreigners :-)

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Tarek Soliman
2007-03-14 12:45:09 UTC
Permalink
Post by Roberto C. Sanchez
Post by Paul Johnson
Post by Roberto C. Sanchez
I wish it could really be that way everywhere. I have been places where
they run telnetd on all the Solaris and Linux servers because (get this)
windows only comes with a telnet client and not an ssh client.
They do know about putty, right? It's only a few kB...
I know about it. But (and you might want to sit down for this) I was
once at a place where I suggested PuTTY and they said no, citing that it
was developed by a foreigner. I didn't have the heart to tell them that
all their Linux (and even Windows) machines were running oodles of
software developed by foreigners :-)
Them: PuTTY is UNACCEPTABLE. It was made by ... FOREIGNERS!
Me: I ... am a foreigner!
Them: *GASP*
--
Tarek
Ron Johnson
2007-03-15 13:53:24 UTC
Permalink
[snip]
No kidding. Microsoft hires how many H1Bs while Washington's unemployment
rate is how astronomical again?
Tell me about it. I mean heck, with 4.6% unemployment [0] (being at
0.1% below the national average), I can see how Washington's
unemployment rates can be considered "astronomical" in every way.
Stop confusing Paul with the facts!!!
Regards,
-Roberto
[0] http://en.wikipedia.org/wiki/List_of_U.S._states_by_unemployment_rate
Paul Johnson
2007-03-19 12:51:15 UTC
Permalink
Roberto C. Sanchez wrote in Article
Tell me about it. I mean heck, with 4.6% unemployment [0] (being at
0.1% below the national average), I can see how Washington's
unemployment rates can be considered "astronomical" in every way.
I meant the real rate, the unemployed-and-discouraged rate. The unemployment rate
fails to count those unemployed for so long that they no longer receive benefits.
--
Paul Johnson
Email and IM (XMPP & Google Talk): ***@ursine.ca
Michelle Konzack
2007-03-27 14:53:29 UTC
Permalink
No kidding. Microsoft hires how many H1Bs while Washington's unemployment
rate is how astronomical again?
Same in France, since Orange/France Telecom is going
to Beijing and has created a development FooBar there.

In France there are 6.8 million unemployed, many of them hidden by
"short-term continued studies".

Thanks, Greetings and nice Day
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
50, rue de Soultz MSN LinuxMichi
0033/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
Tarek Soliman
2007-03-14 12:17:40 UTC
Permalink
Post by Roberto C. Sanchez
Post by Greg Folkert
WTF, I see Windows mentality has become the norm.
and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by OpenBSD devs) is what should be defacto standard.
I wish it could really be that way everywhere. I have been places where
they run telnetd on all the Solaris and Linux servers because (get this)
windows only comes with a telnet client and not an ssh client.
Absolutely. Exasperating.
The place I'm talking about has legacy stuff (long-forgotten cron jobs on
random servers) that telnets and FTPs stuff around.

I was trying to tell the admins to switch and they said that they were
told not to, because the legacy stuff shouldn't be "disturbed"
That's the price of high turnover over 10+ years

The other reason is that their VB programmers don't know how to SCP.
--
Tarek
Roberto C. Sanchez
2007-03-14 12:38:28 UTC
Permalink
Post by Tarek Soliman
The place I talk about has legacy stuff (long forgotten cron jobs on
random servers) that used to telnet and FTP stuff around.
Eeek!
Post by Tarek Soliman
I was trying to tell the admins to switch and they said that they were
told not to, because the legacy stuff shouldn't be "disturbed"
That's the price of high turnover over 10+ years
Ah yes, the old "it works but we don't know how, so we must not disturb
it."
Post by Tarek Soliman
The other reason is that their VB programmers don't know how to SCP.
Which is precisely what WebDAV over HTTPS is for. Unfortunately, it
seems as though there are lots of "savvy" people who really don't
understand the basics. I'm not saying everyone needs to be a network
engineer or a computer scientist. But for crying out loud, even your
below average Joe knows enough to lock his car when he walks away from
it.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Paul Johnson
2007-03-14 01:53:21 UTC
Permalink
Greg Folkert wrote in Article
Post by Greg Folkert
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
WTF, I see Windows mentality has become the norm.
In the RPM world. But this has been true since the last time I was
inflicted with such an atrocity against unix when Red Hat 5.2 was current
and Fedora was as yet completely unheard of.
Celejar
2007-03-13 23:37:27 UTC
Permalink
On Tue, 13 Mar 2007 15:43:03 -0400
Greg Folkert <***@gregfolkert.net> wrote:

[snip]
Post by Greg Folkert
and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by OpenBSD devs) is what should be defacto standard.
I'm curious about telnet(d)-ssl. I don't know any reason to use it over
ssh, but I wonder how secure it actually is?

Celejar
Celejar
2007-03-15 14:41:41 UTC
Permalink
On Thu, 15 Mar 2007 09:14:22 +0100
Post by Celejar
On Tue, 13 Mar 2007 15:43:03 -0400
Post by Greg Folkert
and RIP TelnetD (IOW the telnet Daemon) right out of the machine.
OpenSSH (as done by the OpenBSD devs) is what should be the de facto standard.
I'm curious about telnet(d)-ssl. I don't know any reason to use it over
ssh, but I wonder how secure it actually is?
ssl'ed telnet can't forward tcp/x11 connections (which is an advantage for
some networks), but it does not have native checking for host keys. I hope
this answers both questions.
Interesting; thanks.

Celejar
Matus UHLAR - fantomas
2007-03-15 15:08:19 UTC
Permalink
ssl'ed telnet can't forward tcp/x11 connections (which is an advantage for
some networks), but it does not have native checking for host keys. I hope
this answers both questions.
If telnet-ssl is just telnet wrapped in SSL, then it sure can forward X11
connections (as can telnet). It just doesn't make you explicitly pass a -Y
or -X to make it so like SSH tends to.
do you mean "telnet can forward X11 connections" or "X11 connections can be
forwarded over a telnet connection"?
--
Matus UHLAR - fantomas, ***@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.
Paul Johnson
2007-03-19 12:49:22 UTC
Permalink
Post by Matus UHLAR - fantomas
Matus UHLAR - fantomas wrote in Article
ssl'ed telnet can't forward tcp/x11 connections (which is an advantage
for some networks), but it does not have native checking for host keys. I
hope this answers both questions.
If telnet-ssl is just telnet wrapped in SSL, then it sure can forward X11
connections (as can telnet). It just doesn't make you explicitly pass a
-Y or -X to make it so like SSH tends to.
do you mean "telnet can forward X11 connections" or "X11 connections can
be forwarded over a telnet connection"?
The latter.
--
Paul Johnson
Email and IM (XMPP & Google Talk): ***@ursine.ca
Matus UHLAR - fantomas
2007-03-19 13:19:04 UTC
Permalink
Post by Paul Johnson
Post by Matus UHLAR - fantomas
If telnet-ssl is just telnet wrapped in SSL, then it sure can forward X11
connections (as can telnet). It just doesn't make you explicitly pass a
-Y or -X to make it so like SSH tends to.
do you mean "telnet can forward X11 connections" or "X11 connections can
be forwarded over a telnet connection"?
The latter.
so, it's either a different protocol than telnet, or a hack to forward X11
over it, right?
--
Matus UHLAR - fantomas, ***@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Posli tento mail 100 svojim znamim - nech vidia aky si idiot
Send this email to 100 your friends - let them see what an idiot you are
Paul Johnson
2007-03-19 16:15:25 UTC
Permalink
Post by Matus UHLAR - fantomas
Post by Matus UHLAR - fantomas
If telnet-ssl is just telnet wrapped in SSL, then it sure can forward X11
connections (as can telnet). It just doesn't make you explicitly pass
a -Y or -X to make it so like SSH tends to.
Matus UHLAR - fantomas wrote in Article
Post by Matus UHLAR - fantomas
do you mean "telnet can forward X11 connections" or "X11 connections
can be forwarded over a telnet connection"?
The latter.
so, it's either a different protocol than telnet, or a hack to forward X11
over it, right?
I'm not sure which, I just know it works. :o)
--
Paul Johnson
Email and IM (XMPP & Google Talk): ***@ursine.ca
Matus UHLAR - fantomas
2007-03-19 16:19:37 UTC
Permalink
Post by Paul Johnson
Post by Matus UHLAR - fantomas
Post by Matus UHLAR - fantomas
If telnet-ssl is just telnet wrapped in SSL, then it sure can forward X11
connections (as can telnet). It just doesn't make you explicitly pass
a -Y or -X to make it so like SSH tends to.
Matus UHLAR - fantomas wrote in Article
Post by Matus UHLAR - fantomas
do you mean "telnet can forward X11 connections" ot "X11 connections
can be forwarded over telnet connection"?
The latter.
so, it's either a different protocol than telnet, or a hack to forward X11
over it, right?
I'm not sure which, I just know it works. :o)
This is important to know, because you can e.g. run ssh on port 23 (reserved
for telnet), which can usually be detected by a firewall due to its connect
string.
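The "connect string" here is the SSH identification banner: RFC 4253 requires an SSH server to send a line like "SSH-2.0-OpenSSH_..." as soon as the connection opens, whereas a classic telnet daemon typically begins with IAC option negotiation. A rough sketch of how a firewall-style check might classify a service from its first bytes (the function name is mine, purely for illustration):

```python
def classify_first_bytes(first_bytes: bytes) -> str:
    """Guess the protocol from the first bytes a server sends.

    RFC 4253 says an SSH server opens with an identification string
    such as b"SSH-2.0-OpenSSH_9.6".  A telnet daemon usually starts
    option negotiation with the IAC byte (0xFF) instead.
    """
    if first_bytes.startswith(b"SSH-"):
        return "ssh"
    if first_bytes[:1] == b"\xff":  # telnet IAC command byte
        return "telnet"
    return "unknown"
```

This is why merely moving sshd to port 23 doesn't hide it: the banner gives it away on the very first packet the server sends.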
--
Matus UHLAR - fantomas, ***@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
We are but packets in the Internet of life (userfriendly.org)
Paul Johnson
2007-03-14 01:52:04 UTC
Permalink
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
What moron at that company did they talk to to get their jobs, are they on
good terms with said moron, and do they cover medical/dental and transit
fare?
Ron Johnson
2007-03-14 04:48:05 UTC
Permalink
Post by Paul Johnson
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
What moron at that company did they talk to to get their jobs, are they on
good terms with said moron, and do they cover medical/dental and transit
fare?
Larry Ellison.

If you want to install Oracle on Linux (and *lots* of companies do,
so don't bleat about not infecting your system with closed-source),
you need X.
Greg Folkert
2007-03-14 05:07:23 UTC
Permalink
Post by Ron Johnson
Post by Paul Johnson
Post by Tarek Soliman
Post by Greg Folkert
Now as far as video, who cares about that... servers don't need GUI
stuff.
Tell that to our admins who run redhat and suse. Want to disable these
guys? Remove some X libraries. (The one guy who uses CLI uses telnet)
Yes they really have X on ALL of the servers.
What moron at that company did they talk to to get their jobs, are they on
good terms with said moron, and do they cover medical/dental and transit
fare?
Larry Ellison.
If you want to install Oracle on Linux (and *lots* of companies do,
so don't bleat about not infecting your system with closed-source),
you need X.
No, you only need a few libraries. The Display can be a local
workstation.

I know this, I've done it, as far back as 1998 when the universal
installer finally became somewhat *un-buggy* enough to be used. Of
course, this was on AIX, Tru64 and HP-UX mostly, but also Linux on the
Dev and QA systems. The only REAL problems I ran into ~2001 was when the
Pentium 4 was not recognized by the Java included on the installation
CDs. The only X stuff was some runtime libraries needed. Very little
compared to a full setup.
--
greg, ***@gregfolkert.net

Novell's Directory Services is a competitive product to Microsoft's
Active Directory in much the same way that the Saturn V is a competitive
product to those dinky little model rockets that kids light off down at
the playfield. -- Thane Walkup
Tarek Soliman
2007-03-14 12:11:06 UTC
Permalink
Post by Greg Folkert
Post by Ron Johnson
If you want to install Oracle on Linux (and *lots* of companies do,
so don't bleat about not infecting your system with closed-source),
you need X.
No, you only need a few libraries. The Display can be a local
workstation.
I know this, I've done it, as far back as 1998 when the universal
installer finally became somewhat *un-buggy* enough to be used. Of
course, this was on AIX, Tru64 and HP-UX mostly, but also Linux on the
Dev and QA systems. The only REAL problems I ran into ~2001 was when the
Pentium 4 was not recognized by the Java included on the installation
CDs. The only X stuff was some runtime libraries needed. Very little
compared to a full setup.
Are there any compatibility issues as far as versions of X, the server
being non-Linux (or even not the same distro as the workstation), etc.?

The only time I tried this was to run mythtv-setup on a MythTV backend
(its config utility has to be run at least once, and once more every
time you want to change something in the "infrastructure").

Both PCs were the same exact debian sid though.
--
Tarek
Roberto C. Sanchez
2007-03-14 12:33:19 UTC
Permalink
Post by Tarek Soliman
Are there any compatibility issues as far as versions of X, the server
being non-Linux (or even not the same distro as the workstation), etc.?
Nope. X is a protocol, much the same as FTP or HTTP. If your client
(or server in the case of X) speaks it, the server (or client in the
case of X) can speak to you.

Regards,

-Roberto
Tarek Soliman
2007-03-14 12:39:17 UTC
Permalink
Post by Roberto C. Sanchez
Post by Tarek Soliman
Are there any compatibility issues as far as versions of X, the server
being non-Linux (or even not the same distro as the workstation), etc.?
Nope. X is a protocol, much the same as FTP or HTTP. If your client
(or server in the case of X) speaks it, the server (or client in the
case of X) can speak to you.
What I meant was the situations where the protocol changes in a certain
version of X.

I guess you answered this already. I was thinking more like MySQL in sid
cannot accept auth from MySQL in sarge because the new MySQL uses a
different algorithm in the password handling. (you can of course make it
accept the old ways but that's another issue entirely)
--
Tarek
Greg Folkert
2007-03-14 13:03:08 UTC
Permalink
Post by Tarek Soliman
Post by Roberto C. Sanchez
Post by Tarek Soliman
Are there any compatibility issues as far as versions of X, the server
being non-Linux (or even not the same distro as the workstation), etc.?
Nope. X is a protocol, much the same as FTP or HTTP. If your client
(or server in the case of X) speaks it, the server (or client in the
case of X) can speak to you.
What I meant was the situations where the protocol changes in a certain
version of X.
I guess you answered this already. I was thinking more like MySQL in sid
cannot accept auth from MySQL in sarge because the new MySQL uses a
different algorithm in the password handling. (you can of course make it
accept the old ways but that's another issue entirely)
As long as it adheres to the X protocol, which hasn't changed in eons,
the answer is: No Problems.

The ancient CDE using Motif that I've used (being the version included
with OSF/1 v3.01g, more than 10 years old) complies just fine with current
versions of XORG and actually displays just fine, though the ugly widgets
suck.
Greg Folkert
2007-03-14 12:59:41 UTC
Permalink
Post by Tarek Soliman
Post by Greg Folkert
Post by Ron Johnson
If you want to install Oracle on Linux (and *lots* of companies do,
so don't bleat about not infecting your system with closed-source),
you need X.
No, you only need a few libraries. The Display can be a local
workstation.
I know this, I've done it, as far back as 1998 when the universal
installer finally became somewhat *un-buggy* enough to be used. Of
course, this was on AIX, Tru64 and HP-UX mostly, but also Linux on the
Dev and QA systems. The only REAL problems I ran into ~2001 was when the
Pentium 4 was not recognized by the Java included on the installation
CDs. The only X stuff was some runtime libraries needed. Very little
compared to a full setup.
Are there any compatibility issues as far as versions of X, the server
being non-Linux (or even not the same distro as the workstation), etc.?
The X-Server is the local display on Linux. The X-Client is actually the
"program running on the server". Remember, X is the opposite of what most
people think: the server runs the display; the client runs and sends
display info to the server. Yeah, it seems whacked, but...

Not as far as I know. I've been using Linux as a Display since 1996
(maybe earlier, but 1996 for sure). I've only ever had an issue with
"sound" or network audio... like I care about that.
Post by Tarek Soliman
The only time I tried this was to run mythtv-setup on a MythTV backend
(its config utility has to be run at least once, and once more every
time you want to change something in the "infrastructure").
Both PCs were the same exact debian sid though.
As long as the X libraries comply with the X protocol, things should
just "work". Either ssh -X remote_host_name_or_ip works, or you can do
"xhost +SERVER_NAME_OR_IP" (xhost +192.168.1.10) and then, at the
command prompt on the server, set the $DISPLAY variable properly.

There is very little to making this work.

All I can do is tell you to try this:

ssh -X servername_or_ip

Then once logged in:

set | grep DISPLAY

note what it says, if you get nothing, then the admin has disabled
localhost display offset.

Here is what I get:

***@princess:~$ ssh -X duke
***@duke:~$ set | grep DISPLAY
DISPLAY=localhost:10.0
***@duke:~$ xterm &
[1] 23201

And an xterm pops up on my desktop locally. I used xterm mainly to make
sure you get the idea.

"duke" in my network is an XDMCP server. It manages X Displays. It
doesn't run the X server. My daughter's and wife's machines are crap
machines used only for the video part, i.e. the X server. The XDMCP Chooser
runs locally on their machines, as GDM's default mode, but logs into
"duke" and gives them the desktop and all programs running on the
"powerhouse" machine. But the Display still runs on the minimal
Pentium II 300 machines.

Now if "ssh -X" doesn't work, then you need to work with xhost and
manually setting the display variable, but once you authorize a host
with "xhost +hostname_or_ip" on the local machine you are using and then
setup the proper $DISPLAY string, it should just work.

Here is what I mean:
***@princess:~$ xhost +duke
duke being added to access control list
***@princess:~$ xhost
access control enabled, only authorized clients can connect
INET:duke.gregfolkert.net
***@princess:~$ ssh duke
***@duke:~$ set | grep DISPLAY
***@duke:~$
***@duke:~$ export DISPLAY=192.168.1.8:0.0
***@duke:~$ set | grep DISPLAY
DISPLAY=192.168.1.8:0.0
***@duke:~$ xterm &
[1] 23646

Up pops an xterm just like before, but the main difference is that the
transport is not sent over SSH. Less secure than the other way, but if
it is your network and your network *IS* secure against sniffers, there
is not a big amount of difference. Though the amount of traffic on the
first "ssh" method is less.
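As an aside, the DISPLAY strings in these transcripts ("localhost:10.0", "192.168.1.8:0.0") follow the hostname:displaynumber.screennumber convention from the X(7) man page. A tiny parser (mine, purely for illustration) makes the pieces explicit:

```python
def parse_display(display: str):
    """Split an X11 $DISPLAY string like "192.168.1.8:0.0" or
    "localhost:10.0" into (host, display number, screen number).

    An empty host (as in ":0") means a local connection; ssh -X sets
    displays like "localhost:10.0" for its forwarded channel, while
    the xhost method uses the real host address and display 0.
    """
    host, _, rest = display.rpartition(":")
    if "." in rest:
        disp, _, screen = rest.partition(".")
    else:
        disp, screen = rest, "0"
    return host, int(disp), int(screen)
```

For example, "localhost:10.0" parses to ("localhost", 10, 0), which is how you can tell an ssh-forwarded display from a direct one at a glance.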

I have yet to have any real problems... except network audio, which is
difficult unless you know how. Though it seems most things are worked out
now.
Roberto C. Sanchez
2007-03-13 15:15:49 UTC
Permalink
Post by Tarek Soliman
Post by Roberto C. Sanchez
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
I assume quality hardware is mutually exclusive with a home PC
Is that correct?
Not necessarily. However, it is mutually exclusive with bottom of the
barrel hardware.

Regards,

-Roberto
Mike McCarty
2007-03-13 17:59:18 UTC
Permalink
Roberto C. Sanchez wrote:

[snip]
Post by Roberto C. Sanchez
FYI, *any* filesystem has the potential to lose data on a sudden power
outage.
Umm, no. I suppose you haven't worked in telecomm. I've supported
file systems which never, ever, lost anything. If the system call
came back, and said it was on disc, then it was. If power failed,
then any writes in progress might not get committed, but no data
scrambling could take place, even if the hardware scribbled on
the disc.
Post by Roberto C. Sanchez
Post by Eduard Bloch
I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years.
So, in other words, you are giving anecdotal "evidence" as the backing
for sweeping generalizations?
What are you doing, making sweeping claims about every file system
in the world, when you cannot possibly know everything about
every file system?
Post by Roberto C. Sanchez
Post by Eduard Bloch
And every time I came back to ext3 where I can
not remember such trouble.
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
A good FS should not suffer corruption regardless of what the
hardware does, if we're talking *quality*, that is.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Ron Johnson
2007-03-13 19:07:23 UTC
Permalink
Post by Celejar
[snip]
Post by Roberto C. Sanchez
FYI, *any* filesystem has the potential to lose data on a sudden power
outage.
Umm, no. I suppose you haven't worked in telecomm. I've supported
file systems which never, ever, lost anything. If the system call
came back, and said it was on disc, then it was. If power failed,
then any writes in progress might not get committed, but no data
scrambling could take place, even if the hardware scribbled on
the disc.
Post by Roberto C. Sanchez
Post by Eduard Bloch
I cannot prove it
either, it is just the experience which I had every time after I tried
XFS in the last years.
So, in other words, you are giving anecdotal "evidence" as the backing
for sweeping generalizations?
What are you doing, making sweeping claims about every file system
in the world, when you cannot possibly know everything about
every file system?
Post by Roberto C. Sanchez
Post by Eduard Bloch
And every time I came back to ext3 where I can
not remember such trouble.
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
A good FS should not suffer corruption regardless of what the
hardware does, if we're talking *quality*, that is.
ODS-2 (the OpenVMS file system) is like that. But you pay $15000
per CPU per year for support, and it's a hell of a lot slower than ext3,
XFS, JFS, ReiserFS.

OpenVMS used to be more popular with geeks than Unix was. But
businesses and Universities decided that it was worth it to trade 2
slow-but-reliable VAXen for 10 fast-but-flaky Suns.
Post by Celejar
Mike
Roberto C. Sanchez
2007-03-13 23:40:03 UTC
Permalink
Post by Ron Johnson
OpenVMS used to be more popular with geeks than Unix was. But
businesses and Universities decided that it was worth it to trade 2
slow-but-reliable VAXen for 10 fast-but-flaky Suns.
Hmmm. Then they went from 10 fast-but-flaky Suns to 100
slow-and-disease-ridden generic PCs with Windows. I'd hate to think
what is coming next :-)

Regards,

-Roberto
Douglas Allan Tutty
2007-03-14 01:48:30 UTC
Permalink
Post by Roberto C. Sanchez
Hmmm. Then they went from 10 fast-but-flaky Suns to 100
slow-and-disease-ridden generic PCs with Windows. I'd hate to think
what is coming next :-)
Vista.

Word Processing online via Google.

Disposable printers in a hospital patient-care area.
Paul Johnson
2007-03-14 01:55:46 UTC
Permalink
Roberto C. Sanchez wrote in Article
Post by Roberto C. Sanchez
Post by Ron Johnson
OpenVMS used to be more popular with geeks than Unix was. But
businesses and Universities decided that it was worth it to trade 2
slow-but-reliable VAXen for 10 fast-but-flaky Suns.
Hmmm. Then they went from 10 fast-but-flaky Suns to 100
slow-and-disease-ridden generic PCs with Windows. I'd hate to think
what is coming next :-)
A thousand fast and reliable Debian boxes? (I can dream, right?)
Roberto C. Sanchez
2007-03-13 23:38:33 UTC
Permalink
Post by Celejar
[snip]
Post by Roberto C. Sanchez
FYI, *any* filesystem has the potential to lose data on a sudden power
outage.
Umm, no. I suppose you haven't worked in telecomm. I've supported
file systems which never, ever, lost anything. If the system call
came back, and said it was on disc, then it was. If power failed,
then any writes in progress might not get committed, but no data
scrambling could take place, even if the hardware scribbled on
the disc.
You can achieve the same thing with any decent filesystem. You just
have to put the hardware into writethrough instead of writeback, and you
also give up a lot of performance. It depends on what you need.
Post by Celejar
What are you doing, making sweeping claims about every file system
in the world, when you cannot possibly know everything about
every file system?
Except that there are conditions under which just about every filesystem
will lose data. The amounts vary. The conditions vary. The results
vary. However, no filesystem is so good that it will handle every
single possible case.
Post by Celejar
Post by Roberto C. Sanchez
Post by Eduard Bloch
And every time I came back to ext3 where I can
not remember such trouble.
Well, as an anecdote of my own, I have used both XFS and ext3 quite
extensively and found that they are equally as good, given *quality*
hardware.
A good FS should not suffer corruption regardless of what the
hardware does, if we're talking *quality*, that is.
I wouldn't say regardless. If the whole disk melts down, I would wager
that there is going to be some corruption.

Regards,

-Roberto
Ron Johnson
2007-03-14 02:40:53 UTC
Permalink
Post by Roberto C. Sanchez
Post by Celejar
[snip]
[snip]
Post by Roberto C. Sanchez
Post by Celejar
A good FS should not suffer corruption regardless of what the
hardware does, if we're talking *quality*, that is.
I wouldn't say regardless. If the whole disk melts down, I would wager
that there is going to be some corruption.
As in, a fire at the CO.

Unless you are mirroring writes to a DR site. But then you're
talking *real* money.
Post by Roberto C. Sanchez
Regards,
-Roberto
Mike McCarty
2007-03-27 02:04:03 UTC
Permalink
Post by Roberto C. Sanchez
Post by Celejar
[snip]
Post by Roberto C. Sanchez
FYI, *any* filesystem has the potential to lose data on a sudden power
outage.
Umm, no. I suppose you haven't worked in telecomm. I've supported
file systems which never, ever, lost anything. If the system call
came back, and said it was on disc, then it was. If power failed,
then any writes in progress might not get committed, but no data
scrambling could take place, even if the hardware scribbled on
the disc.
You can achieve the same thing with any decent filesystem. You just
have to put the hardware into writethrough instead of writeback, and you
also give up a lot of performance. It depends on what you need.
This is untrue. If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
Post by Roberto C. Sanchez
Post by Celejar
What are you doing, making sweeping claims about every file system
in the world, when you cannot possibly know everything about
every file system?
Except that there are conditions under which just about every filesystem
will lose data. The amounts vary. The conditions vary. The results
vary. However, no filesystem is so good that it will handle every
single possible case.
This is untrue. I have myself supported file systems which
would not ever under any circumstance corrupt a disc. If the call
came back, and said that the data were on disc and not corrupt,
then that was so.

[snip]
Post by Roberto C. Sanchez
Post by Celejar
A good FS should not suffer corruption regardless of what the
hardware does, if we're talking *quality*, that is.
I wouldn't say regardless. If the whole disk melts down, I would wager
that there is going to be some corruption.
Not true. Read what I wrote above. Even in the face of a complete
meltdown of a disc, the systems I'm talking about would not
lose data.

Mike
Henrique de Moraes Holschuh
2007-03-27 04:15:47 UTC
Permalink
Post by Mike McCarty
This is untrue. If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
This is a device issue; no filesystem can fix it. Not that I expect even
the crap we buy today for desktops and servers to be THIS dumb.
Post by Mike McCarty
Not true. Read what I wrote above. Even in the face of a complete
meltdown of a disc, the systems I'm talking about would not
lose data.
Easy to do with a RAID with enough redundancy, but then you may get a lot of
problems if something other than a disc melts down, and that is NOT
all that uncommon.

The bottom line is: you need a filesystem that fully journals everything
that ever needs a rollback (data doesn't, when you only write to unused areas
of the disk), always orders everything that needs ordering, AND you need the
entire chain from that filesystem to the disc platter to behave. Otherwise,
you can indeed lose data. It is not easy even if you don't factor in
defective software, firmware or hardware.
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
Mike McCarty
2007-03-27 05:41:08 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Mike McCarty
This is untrue. If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
This is a device issue; no filesystem can fix it. Not that I expect even
the crap we buy today for desktops and servers to be THIS dumb.
Yes, a file system can fix that. But it has to be a file system
which understands redundant hardware.
Post by Henrique de Moraes Holschuh
Post by Mike McCarty
Not true. Read what I wrote above. Even in the face of a complete
meltdown of a disc, the systems I'm talking about would not
lose data.
Easy to do with a RAID with enough redundancy, but then you may get a lot of
problems if something other than a disc melts down, and that is NOT
all that uncommon.
No, not true. The system I'm talking about can recover from any
single component failure without any data loss. Depending on what
fails, there may be some reduction in processing capacity.
Post by Henrique de Moraes Holschuh
The bottom line is: you need a filesystem that fully journals everything
that always need a rollback (data doesn't when you only write unused areas
of the disk), always orders everything that needs ordering, AND you need the
entire chain from that filesystem to the disc platter to behave. Otherwise,
you can lose data indeed. It is not easy even if you don't factor in
defective software, firmware or hardware.
What makes you think that the FS I am talking about doesn't
have those features (except journalling, which is not necessary)?
The system I'm referring to has:

redundant separate power supplies
redundant separate processors
redundant separate backplane connections
redundant separate disc controllers,
each of which is accessible from both processors
via both backplanes
redundant discs
each of which is accessible from both controllers
a file system which is aware of all the above, and
which negotiates control of said hardware
via a separate, redundant, communication path
especially made for that purpose

No journalling or rollback is supported[*]. All writes take
place first to one disc, verified, then to the other disc.
No corruption is possible unless a two-point failure occurs.
No one component failing can cause corruption that the file
system cannot recover from, period. The system requires no
down time to replace any one failed component. It just
continues to run, and gracefully recovers from the failure.
Eventually, the system is fully functional and fully
redundant again. System failures are automatically noted,
and failed components are not used. When components are
replaced, this is automatically noted, and the system
automatically begins recovery procedures.

[*] The database system on there does support journalling
and "commit", but not the file system per se. That's at a
higher level.

Mike
Henrique de Moraes Holschuh
2007-03-27 18:56:43 UTC
Permalink
Post by Mike McCarty
Post by Henrique de Moraes Holschuh
This is a device issue; no filesystem can fix it. Not that I expect even
the crap we buy today for desktops and servers to be THIS dumb.
Yes, a file system can fix that. But it has to be a file system
which understands redundant hardware.
I think I understand what is happening in this thread, finally.
Post by Mike McCarty
No, not true. The system I'm talking about can recover from any
single component failure without any data loss. Depending on what
fails, there may be some reduction in processing capacity.
This is not a filesystem. If you got anywhere beyond software, you are not
talking about a filesystem.
Post by Mike McCarty
What makes you think that the FS I am talking about doesn't
have those features (except journalling, which is not necessary)?
I didn't. It would *have* to implement rollback, or it would not be
failure-proof, and rollback *requires* either journals or that you only
write over unused areas.
Post by Mike McCarty
redundant separate power supplies
redundant separate processors
...

This is hardware, not a file system. Your "system" is a file system and a
storage system. Which is fine, you can't guarantee data safety without
*both* of them playing well together.

But it certainly explains why I could not make sense of what the heck you
wanted from filesystems, and what magic filesystem of yours it was that
would be absolutely safe *regardless of the storage system it ran on top
of*.
Ron Johnson
2007-03-27 19:15:26 UTC
Permalink
Post by Henrique de Moraes Holschuh
Post by Mike McCarty
Post by Henrique de Moraes Holschuh
This is a device issue, no filesystem may fix it. Not that I expect even
the crap we buy today for desktops and servers to be THIS dumb.
Yes, a file system can fix that. But it has to be a file system
which understands redundant hardware.
I think I understand what is happening in this thread, finally.
Post by Mike McCarty
No, not true. The system I'm talking about can recover from any
single component failure without any data loss. Depending on what
fails, there may be some reduction in processing capacity.
This is not a filesystem. If you got anywhere beyond software, you are not
talking about a filesystem.
True and not true. Your thinking seems to be too clouded by
open systems.

What is to stop very closed systems (think mainframes, minicomputers
and specialized systems) from deeply integrating the operating
system with the hardware?
Post by Henrique de Moraes Holschuh
Post by Mike McCarty
What makes you think that the FS I am talking about doesn't
have those features (except journalling, which is not necessary)?
I didn't. It would *have* to implement rollback, or it would not be
failure-proof, and rollback *requires* either journals or that you only
write over unused areas.
If all writes are sector-sized and there is no write cache, then
they can be atomic without needing rollback.

Throughput, of course, suffers.
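A minimal shell sketch of that idea (file names are made up; a real setup would write to a raw device and add oflag=direct to bypass the page cache as well):

```shell
# Sector-sized synchronous write: build a 512-byte payload, then
# write it as exactly one sector with synchronous I/O, so the
# sector lands whole or not at all.  (File names are invented for
# illustration; on a real disk add oflag=direct too.)
head -c 512 /dev/zero | tr '\0' 'A' > sector.bin
dd if=sector.bin of=store.img bs=512 count=1 \
   oflag=dsync conv=notrunc,fsync status=none
stat -c %s store.img    # prints 512
```

Anything larger than one sector then needs a journal or copy-on-write to stay atomic, which is exactly the trade-off under discussion.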
Post by Henrique de Moraes Holschuh
Post by Mike McCarty
redundant separate power supplies
redundant separate processors
...
This is hardware, not a file system. Your "system" is a file system and a
storage system. Which is fine, you can't guarantee data safety without
*both* of them playing well together.
But it certainly explains why I could not make sense of what the heck you
wanted from filesystems, and what magic filesystem of yours was that which
would be absolutely safe *regardless of the storage system it ran on top
of*.
- --
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!
Ron Johnson
2007-03-27 05:36:10 UTC
Permalink
On 03/26/07 21:04, Mike McCarty wrote:
[snip]
Post by Mike McCarty
This is untrue. If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
Does that happen anymore? Drive manufacturers engineered that
problem away long ago, I think.

- --
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!
Mike McCarty
2007-03-27 06:38:28 UTC
Permalink
Post by Celejar
[snip]
Post by Mike McCarty
This is untrue. If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
Does that happen anymore? Drive manufacturers engineered that
problem away long ago, I think.
I dunno. It's been perhaps 6 or 8 years since I worked on
file systems and hardware interfaces with discs. You may
be right.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!
Daniel B.
2007-03-28 20:00:11 UTC
Permalink
... If power fails during a write, and the drive
scribbles on the disc in a spiral pattern as the head moves
toward the parking area, that particular disc is hosed.
But the disks almost surely don't scribble on the disk in a spiral
pattern. (They'd detect that power is failing (voltage is dropping)
and turn off the write current before that happened.)

Daniel
Michelle Konzack
2007-03-27 14:53:16 UTC
Permalink
Hello Eduard and *,
Post by Eduard Bloch
Post by Roberto C. Sanchez
I would certainly trust XFS. Of course, if you don't have your machine
on a UPS, it can cause problems on a crash or power outage. How are
Great, that is the usual propaganda from XFS users, with the same lame
excuse written in small print. It has a bad tendency to shred file
contents after power outages or sudden kernel crashes... silently
inserting lots of 0x0s; IIRC sometimes only a 512-byte block, sometimes
filling the rest of a file after a certain position. I cannot prove it
either; it is just the experience I have had every time I tried
XFS in recent years. And every time I came back to ext3, where I
cannot remember such trouble.
I have had the same experience...

Even though ext3 takes ages on an ICP/Vortex with fifteen 300 GByte
SCSI drives (15,000 RPM), I have never had grave losses of data.

While using ReiserFS I got over 1.3 TByte of ZEROed files.
The kernel crash had killed my whole filesystem.

I think ext3, and now ext4, should be the only reliable file systems
for home users who want to avoid buying a UPS, even though they are
not really expensive. A Smart-UPS 650 from APC should do for ALL
workstations and small file servers at home.

Thanks, Greetings and nice Day
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant
--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack Apt. 917 ICQ #328449886
50, rue de Soultz MSN LinuxMichi
0033/6/61925193 67100 Strasbourg/France IRC #Debian (irc.icq.com)
Ron Johnson
2007-03-12 22:49:55 UTC
Permalink
On 03/12/07 17:15, Roberto C. Sanchez wrote:
[snip]
Post by Roberto C. Sanchez
At work we deal with files of size 1 GB to 100 GB on a regular
basis. I would classify those as large. XFS supports files up
to a size of 8 exabytes and filesystems also of size 8 exabytes.
I am not sure of the limitations on JFS.
I've read that XFS is very fragile during system crashes and easily
loses the contents of files.
Post by Roberto C. Sanchez
Regards,
-Roberto
Roberto C. Sanchez
2007-03-12 23:08:21 UTC
Permalink
Post by Celejar
[snip]
Post by Roberto C. Sanchez
At work we deal with files of size 1 GB to 100 GB on a regular
basis. I would classify those as large. XFS supports files up
to a size of 8 exabytes and filesystems also of size 8 exabytes.
I am not sure of the limitations on JFS.
I've read that XFS is very fragile during system crashes and easily
loses the contents of files.
It can. In flushing the buffers, it can start writing crap out to disk.
This is because in the event of a power loss/fluctuation, the SDRAM is
usually the first thing to go. There was a very interesting post about
it on the SGI XFS list from a few years back, but I can't seem to locate
it at the moment.

Regards,

-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Paul Johnson
2007-03-13 08:39:15 UTC
Permalink
Post by Mike McCarty
Post by Roberto C. Sanchez
I personally am a fan of XFS. However, it is also possible to use ext3
on large partitions, as you point out. At work, I have a production
server (running RHEL, unfortunately) which is serving up a 6 TB
Why unfortunately? Do Linux fans have to hate other distros as well
as MS?
As far as I can tell, Linux fans aren't annoyed so much that it's Red Hat
as that using it means dealing with RPM, and that alone is
a Microsoftian pain.
Bob
2007-03-12 08:59:21 UTC
Permalink
Post by Douglas Allan Tutty
Post by Siju George
Hi,
Could someone recommend which file system is best for partitions above 600GB?
I am considering XFS. The system is Debian Sarge for amd64.
Hope there are no issues with this setup. Please let me know if I
should be careful in any area.
Also, whether a better file system suits such large partitions :-)
There are a few comparisons out there, but you need to look at the
design philosophies in relation to your application. There have been
some problems with ReiserFS (no references, but there were messages on
debian-user a while ago). When I looked at this it came down to a
choice between XFS and JFS. There have also been a lot of threads on
this topic on debian-user in the past few months.
When I was building my first MythTV box 2 years ago, I did some research
and found some benchmarks (the link for which I failed to keep) that put
XFS and JFS pretty much neck and neck in terms of performance, except
when it came to deleting big files, where JFS was significantly faster.
Since this is something you do a lot of with a MythTV box, I went for
JFS and have had no problems.
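A crude way to reproduce that comparison yourself (file name invented; 64 MiB here just for illustration, use files of a few GiB for meaningful numbers) is to time the unlink on each filesystem:

```shell
# Create a large file, flush it to disk, then time its removal.
# Run the same three commands on an XFS mount and on a JFS mount
# to compare deletion speed.
dd if=/dev/zero of=bigfile bs=1M count=64 status=none
sync
time rm bigfile
```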

The only feature I'd like is file system shrinking, which I've heard
rumor you can now do with the fs offline.
Celejar
2007-03-12 13:30:30 UTC
Permalink
On Sun, 11 Mar 2007 21:58:29 -0400
Douglas Allan Tutty <***@porchlight.ca> wrote:

[snip]
Post by Douglas Allan Tutty
possible but there could be some data corruption. ext3 journals data as
well as metadata but takes forever to regenerate after a crash and there
can still be errors.
Mount options for ext3
The ‘ext3’ file system is a version of the ext2 file system which has
been enhanced with journalling. It supports the same options as ext2
[snip]
Post by Douglas Allan Tutty
data=journal / data=ordered / data=writeback
Specifies the journalling mode for file data. Metadata is
always journaled. To use modes other than ordered on the root
file system, pass the mode to the kernel as boot parameter, e.g.
rootflags=data=journal.
journal
All data is committed into the journal prior to being
written into the main file system.
ordered
This is the default mode. All data is forced directly
out to the main file system prior to its metadata being
committed to the journal.
writeback
Data ordering is not preserved - data may be written into
the main file system after its metadata has been committed
to the journal. This is rumoured to be the highest-throughput
option. It guarantees internal file system integrity, however
it can allow old data to appear in files after a crash and
journal recovery.
So IIUC, ext3 only journals metadata by default, not data, although it
journals data also if 'data=journal' is specified.
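For instance, a hypothetical /etc/fstab entry selecting full data journalling (device and mount point invented for illustration) would look like:

```
/dev/sdb1   /srv/data   ext3   data=journal,errors=remount-ro   0   2
```

The default ordered mode needs no option at all; only journal and writeback have to be requested explicitly.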

Celejar
--
To UNSUBSCRIBE, email to debian-user-***@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact ***@lists.debian.org
Jiann-Ming Su
2007-03-14 06:35:39 UTC
Permalink
Post by Siju George
Hi,
Could someone recommend which file system is best for partitions above 600GB?
I am considering XFS. The system is Debian Sarge for amd64.
Hope there are no issues with this setup. Please let me know if I
should be careful in any area.
Also, whether a better file system suits such large partitions :-)
http://linuxgazette.net/122/piszcz.html
--
Jiann-Ming Su
"I have to decide between two equally frightening options.
If I wanted to do that, I'd vote." --Duckman
"The system's broke, Hank. The election baby has peed in
the bath water. You got to throw 'em both out." --Dale Gribble