[131005400010] |en0.
[131005410030] |You may have to restart any servers listening on that interface, and any established TCP connections using it will drop when you do this.
[131005410040] |It's brief, though, so I don't really view such a test as "downtime".
[131005410050] |Be sure not to do this while ssh'd in to the box on the interface you're bouncing.
[131005410060] |It's best to log in on the console when you do this, if you can.
[131005410070] |If the server is remote, a modem connection is best, since bouncing the network interfaces won't affect serial gettys.
[131005410080] |If you must do this while logged in over the network, be sure your connection is coming in over a different interface.
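With the en0 interface named above, bouncing it from the console looks like this (exact commands vary by platform; ip link set ... down/up is the Linux equivalent):
    ifconfig en0 down
    ifconfig en0 up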
[131005420010] |kill the process (including the forked bash) to stop it. You might try skill to kill processes by user name, but I suspect that if that user is root, it may try to kill things it can't.
[131005450060] |This answer has been updated a few times.
[131005460010] |As mentioned, screen is probably your best bet.
[131005460020] |If/when your session is dropped, it will keep running.
[131005460030] |Type 'screen -list' to see available sessions, and 'screen -r <session>' to reattach to the one you want.
[131005460040] |A little tip, you can tab-complete the session string rather than copy/paste the whole thing.
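For example (the session name here is made up):
    screen -list                    # lists sessions, e.g. 12345.pts-0.myhost (Detached)
    screen -r 12345.pts-0.myhost    # reattach; the name tab-completes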
[131005460050] |However, screen can be annoying.
[131005460060] |In my client, you can't easily scroll up to see history in the console.
[131005460070] |It acts a little differently than you'd expect.
[131005460080] |If all you want to do is make sure your long running procs don't get killed, use 'nohup'.
[131005460090] |It will prevent your proc from being shut down when you lose your connection.
[131005460100] |Make sure you send your proc output to a file if you want to monitor it, though.
[131005460110] |Also, if you have a bash script, I think you need to explicitly put 'bash' between nohup and the script.
[131005460120] |An example...
[131005460130] |nohup bash ./myscript.sh >output.log 2>&1 &
[131005460140] |That means: nohup to prevent the proc from being killed; bash to run the script explicitly with bash, in case it uses bash-specific features; './myscript.sh', your script in the current dir; '>output.log 2>&1' to send both std out and error out to output.log; and the '&' at the end to run the proc in the background.
[131005470010] |By default you cannot re-connect to an abandoned ssh session.
[131005470020] |However, you can set up processes inside an ssh session which you can reconnect to after you re-establish a new ssh session.
[131005470030] |What you want to use is screen or even better a user-friendly wrapper around screen called byobu.
[131005470040] |Screen allows you to run multiple virtual terminal sessions in the same ssh session.
[131005470050] |A tutorial and help pages are available.
[131005470060] |byobu is a wrapper that lets you easily open new screens with a simple function key instead of a ctrl-a key combination.
[131005470070] |It also shows a status line with all the open virtual terminals which can be named.
[131005470080] |Another nice feature is the fact that all your screens can stay up while your ssh connection is disconnected.
[131005470090] |You just connect again via ssh and call byobu and everything is like before.
[131005470100] |Finally, some screenshots of byobu.
[131005480010] |sort command, though.
[131005480080] |Then there would be long, long blocks of identical lines.
[131005480090] |So I'm trying to figure a way to store only the diffs.
[131005480100] |I could start with a master dump, and diff against that each night.
[131005480110] |But the diffs would be larger each night.
[131005480120] |Or, I could make rolling diffs, which individually would be very small, but it seems like it would take longer and longer to compute if I have to put together a master diff of the whole series each night.
[131005480130] |Is this feasible?
[131005480140] |With what tools?
[131005480150] |Edit: I'm not asking how to do mysql backups.
[131005480160] |Forget mysql for the moment.
[131005480170] |It's a red herring.
[131005480180] |What I'm wanting to know is how to make a series of rolling diffs from a series of files.
[131005480190] |Each night we get a file (which happens to be a mysqldump file) that is 99% similar to the one before it.
[131005480200] |Yes, we gzip them all.
[131005480210] |But it's redundant to have all that redundancy in the first place.
[131005480220] |All I really need is the differences from the night before... which is only 1% different from the night before... and so on.
[131005480230] |So what I'm after is how to make a series of diffs so I need only store that 1% each night.
[131005490010] |(I have not done this in production.)
[131005490020] |Do a full backup once per day or week.
[131005490030] |Backup relay logs once per hour or day.
[131005500010] |You could do something like this (with a.sql as your weekly backup).
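The example itself was lost from this dump; the idea is presumably nightly diffs against the same weekly full, something like (file names are placeholders):
    diff a.sql monday.sql  > monday.diff
    diff a.sql tuesday.sql > tuesday.diff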
[131005500020] |Your diff files will become larger by the end of the week.
[131005500030] |My suggestion though is just to gzip it (use gzip -9 for maximum compression).
[131005500040] |We do this at the moment and that gives us a 59MB gz-file while the original is 639MB.
[131005510010] |Lately I've been trying out storing database dumps in git.
[131005510020] |This may get impractical if your database dumps are really large, but it's worked for me for smallish databases (Wordpress sites and the like).
[131005510030] |My backup script is roughly:
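The script itself didn't survive here; a minimal sketch of the approach, with the repo path and database name as assumptions (--skip-extended-insert keeps one row per INSERT line, which diffs well):
    #!/bin/sh
    cd /var/backups/mydb
    mysqldump --skip-extended-insert mydb > mydb.sql
    git add mydb.sql
    git commit -m "nightly dump"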
[131005520010] |Two backup tools that can store binary diffs are rdiff-backup and duplicity.
[131005520020] |Both are based on librsync
, but above that they behave quite differently: rdiff-backup stores the latest copy and reverse diffs, while duplicity stores traditional incremental diffs, and they offer a different set of peripheral features.
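Basic invocations look like this (paths are placeholders):
    rdiff-backup /var/backups/dumps /srv/backup/dumps-history
    duplicity /var/backups/dumps file:///srv/backup/dumps-history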
[131005530010] |grep assumes all input is in your default system encoding.
[131005540070] |To grep a file in a different encoding, use iconv to convert it:
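For example, assuming the file is in Windows-1252 (the source encoding is a guess you would adjust):
    iconv -f WINDOWS-1252 -t UTF-8 file.txt | grep 'pattern'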
[131005540080] |I realize this is highly inconvenient for your recursive example, but the broader lesson is that if that fixes the problem, you should convert all the text files in that directory tree so they're compatible with your system character encoding.
[131005540090] |If you need Windows text editor compatibility, don't worry, most Windows text editors that focus on code editing cope with UTF-8, even though Windows uses UTF-16 natively these days.
[131005540100] |Another possibility is that your file uses curly quotes.
[131005540110] |The quotes you type on your keyboard are straight quotes -- ASCII 39 -- but some word processors and text editors replace them with curly quotes, or U+2019 in this example.
[131005540120] |I like to use this command for poking through a file to investigate character coding issues:
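The command itself is missing from this dump; given the od discussion below, it was presumably something along the lines of:
    od -c /path/to/file | less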
[131005540130] |There are various "hexdump" programs available, but they often do unhelpful things like display the data as 16-bit words in little-endian format.
[131005540140] |Because od doesn't have the printable text display column that any decent hexdump program offers, though, it works best for short files.
[131005540150] |I often cut down the example to something easy to test first.
[131005550010] |dd if=/dev/zero of=/dev/sdc bs=4k -- this will erase all data on /dev/sdc.
find / -xdev -type f -print0 | xargs -0 ... to read every file on the system.
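Filling in the elided reader with md5sum, which the next paragraph uses as its example:
    find / -xdev -type f -print0 | xargs -0 md5sum > /dev/null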
[131005580170] |Make a note of the pending count before running this.
[131005580180] |If the sector is inside a file, you will get an error message from the tool you used to read the files (eg md5sum) showing you the path to it.
[131005580190] |You can then focus your attentions on re-reading just this file until it reads successfully.
[131005580200] |Often this will solve the problem, if it's an infrequently-used file which just needed to be reread a few times.
[131005580210] |If the error goes away, or you don't encounter any errors in reading all the files, check the pending count to see if it's decreased.
[131005580220] |If it has, the problem was solved by reading.
[131005580230] |If the file cannot be read successfully after multiple tries (eg 20) then you need to overwrite the file, or the block within the file, to allow the drive to reallocate the sector.
[131005580240] |You can use ddrescue on the file (rather than the partition) to overwrite just the one sector, by copying to a temporary file and then copying back again.
[131005580250] |Note that just removing the file at this point is a bad idea, because the bad sector will go into the free list where it will be harder to find.
[131005580260] |Completely overwriting it is bad too, because again the sectors will go into the free list.
[131005580270] |You need to rewrite the existing blocks.
[131005580280] |The notrunc option of dd is one way to do this.
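A sketch of that copy-out/copy-back dance (block size and paths are assumptions):
    dd if=/path/to/file of=/tmp/copy bs=4k               # read it out, retrying until it succeeds
    dd if=/tmp/copy of=/path/to/file conv=notrunc bs=4k  # write the same blocks back in place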
[131005580290] |If you encounter no errors, and the pending count did not decrease, then the sector must be in the freelist or in part of the filesystem infrastructure (eg an inode table).
[131005580300] |You can try filling up all the free space with cat /dev/zero >tempfile, and then check the pending count.
[131005580310] |If it goes down, the problem was in the free list and has now gone away.
[131005580320] |If the sector is in the infrastructure, you have a more serious problem, and you will probably encounter errors just walking the directory tree.
[131005580330] |In this situation, I think the only sensible solution is to reformat the drive, optionally using ddrescue to recover data if necessary.
[131005580340] |Keep a very close eye on the drive.
[131005580350] |Sector reallocation is a very good canary in the coal mine, potentially giving you early warning of a drive that is failing.
[131005580360] |By taking early action you can prevent a later catastrophic and very painful landslide.
[131005580370] |I'm not suggesting that a few sector reallocations are an indication that you should discard the drive.
[131005580380] |All modern drives need to do some reallocation.
[131005580390] |However, if the drive isn't very old (< 1 yr) or you are getting frequent new reallocations (> 1/month) then I recommend you replace it asap.
[131005580400] |I don't have empirical evidence to prove it, but my experience suggests that disk problems can be reduced by reading the whole disk once in a while, either by a dd of the raw disk or by reading every file using find.
[131005580410] |Almost all the disk problems I've experienced in the past several years have cropped up first in rarely-used files, or on machines that are not used much.
[131005580420] |This makes sense heuristically, too, in that if a sector is being reread frequently the drive has a chance to reallocate it when it first detects a minor problem with that sector rather than waiting until the sector is completely unreadable.
[131005580430] |The drive is powerless to do anything with a sector unless the host accesses it somehow, either by reading or writing it or by conducting one of the SMART tests.
[131005580440] |I'd like to experiment with the idea of a nightly or weekly cron job that reads the whole disk.
[131005580450] |Currently I'm using a "poor man's RAID" in which I have a second hard drive in the machine and I back up the main disk to it every night.
[131005580460] |In some ways, this is actually better than RAID mirroring, because if I goof and delete a file by mistake I can get yesterday's version immediately from the backup disk.
[131005580470] |On the other hand, I believe a hardware RAID controller does a lot of good work in the background to monitor, report and fix disk problems as they emerge.
[131005580480] |My current backup script uses rsync to avoid copying data that hasn't changed, but in view of the need to reread all sectors maybe it would be better to copy everything, or to have a separate script that reads the entire raw disk every week.
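A weekly whole-disk read could be as simple as this crontab entry (device name assumed):
    0 3 * * 0  dd if=/dev/sda of=/dev/null bs=1M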
[131005590010] |When I update from my local mirror (apt-get update) after a few days, I get this:
[131005590030] |How do I work around that?
[131005600010] |The release files have a valid-until entry, e.g. Valid-Until: Thu, 07 Oct 2010 08:17:56 UTC
[131005600020] |If the release file isn't valid anymore, you should run debmirror again to check if there are any changes in the repository.
[131005600030] |One change will be the release file and you will get a new validity for it.
[131005600040] |You could easily automate this with a crontab entry.
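For example (schedule and mirror path are placeholders; add whatever debmirror options you normally use):
    0 4 * * *  debmirror /srv/mirror/debian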
[131005610010] |Some of the mirrors out there might have stale files.
[131005610020] |This happened to me recently, and it was also tied to the caching server I'm using (apt-cacher-ng), which tries to save bandwidth by redirecting the repositories for the same archive to a single entity (in my case it was a Hungarian mirror).
[131005610030] |Direct updates through a German mirror worked OK, for example.
[131005610040] |Try changing the mirror you're using.
[131005610050] |In case you're using apt-cacher-ng, you'll need to do something along the lines of changing the contents of the following files:
[131005610060] |/etc/apt-cacher-ng/backends_debian /etc/apt-cacher-ng/backends_debvol
[131005610070] |After that you should also restart apt-cacher-ng for changes to take effect.
[131005620010] |cat or pv at the sending side and using tee on the middle server to both send the data to a file there and send a copy over another ssh link, the other side of which just writes the data to a file.
[131005640020] |The exact voodoo required I'll leave as an exercise for the reader, as I've not got time to play right now (sorry).
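Still, a sketch of the shape such a pipeline would take, with hypothetical hosts and paths (and assuming the server can ssh to client2 non-interactively):
    pv bigfile | ssh server 'tee /srv/bigfile | ssh client2 "cat > /home/user/bigfile"'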
[131005640030] |This method would only work if the second destination is publicly accessible via SSH which may not be the case as you describe it as a client machine.
[131005640040] |Another approach, which is less "run and wait" but may otherwise be easier, it to use rsync
between the server and client B. The first time you run this it may get a partial copy of the data, but you can just re-run it to get more data afterwards (with one final run once the Client1->Server transfer is complete).
[131005640050] |This will only work if the server puts the data direct into the right file-name during the SFTP transfer (sometimes you will see the data going into a temporary file which is then renamed once the file is completely transferred - this is done to make the file update more atomic but will render the rsync idea unusable).
[131005640060] |You could also use rsync for the C1->S transfer instead of scp (if you use the --inplace option to avoid the problem mentioned above) - using rsync would also give you protection against needing to resend everything if the C1->Server connection experiences problems during a large transfer (I tend to use rsync --inplace -a --progress instead of scp/sftp when rsync is available, for this "transfer resume" behaviour).
[131005640070] |To summarise the above, running:
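    rsync --inplace -a --progress /path/to/bigfile server:/path/to/bigfile   # reconstructed example; real paths unknown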
[131005640080] |on client1 then running
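    rsync --inplace -a --progress server:/path/to/bigfile /local/path/bigfile   # reconstructed example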
[131005640090] |on client2 repeatedly until the first transfer is complete (then running once more to make sure you've got everything). rsync is very good at only transferring the absolute minimum it needs to update a location instead of transferring the whole lot each time.
[131005640100] |For paranoia you might want to add the --checksum option to the rsync commands (which will take much more CPU time for large files but won't result in significantly more data being transferred unless it is needed), and for speed the --compress option will help if the data you are transferring is not already in a compressed format.
[131005650010] |I think this should work:
[131005650020] |and then
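Both commands were dropped from this dump; given that a later answer calls this "the tail -f method", they were presumably in this spirit (paths hypothetical):
    # while the client1 -> server upload runs, on client2:
    ssh server 'tail -c +1 -f /path/to/incoming/file' > file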
[131005650030] |Add the pv command if you want to see your throughput.
[131005660010] |You could use a fifo for it.
[131005660020] |For simplicity, first without ssh, involving only two xterms:
[131005660030] |At xterm A:
[131005660040] |At xterm B:
[131005660050] |With ssh it should be something along these lines - perhaps you have to disable the escape-character in ssh (-e none):
[131005660060] |client A:
[131005660070] |client B:
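The four command lines were lost from this dump; a reconstruction of the general idea, with invented names:
    # xterm A: create the fifo and read from it into a file
    mkfifo /tmp/fifo
    cat /tmp/fifo > copy.dat
    # xterm B: write into the fifo
    cat original.dat > /tmp/fifo
    # over ssh: client A feeds a server-side fifo, client B drains it
    cat original.dat | ssh -e none server 'cat > /tmp/fifo'    # client A
    ssh -e none server 'cat /tmp/fifo' > copy.dat              # client B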
[131005670010] |I'm not sure that the tail -f method works ( though it probably does if the file is text ).
[131005670020] |The reason is that I don't know how tail -f and the sftp transfer handle and rely on metadata.
[131005670030] |If sftp transfers the metadata first and tail -f relies on the metadata to tell it that there is no more file, then tail may pad the end with EOFs or nulls.
[131005670040] |If you do not care about the path of the upload (i.e. computer 1 uploads to computer 2, which uploads to computer 3), then you might try to use BitTorrent instead of sftp.
[131005670050] |It seems that is what it was designed for.
[131005680010] |dd to back up an 80GB drive.
[131005680030] |Now I need to access some files on that drive, but I don't want to copy the ".img" back over the drive.
[131005680040] |mount ~/sdb.img /mnt/sdb doesn't work either.
[131005680050] |It returns :
[131005680060] |I tried to find the filesystem type with file -s
[131005680070] |Is it possible to mount sdb.img
, or must I use dd
to restore the drive?
[131005690010] |When you use dd on /dev/sdb instead of /dev/sdb1 or /dev/sdb2, you copy all the partitions from the said drive into one file.
[131005690020] |You must mount each partition separately.
[131005690030] |To mount a partition from a file, you must first find out where in the file that partition resides.
[131005690040] |Using your output from file -s sdb.img we find the startsectors for each partition:
[131005690050] |sdb.img: x86 boot sector; partition 1: ID=0x12, starthead 1, startsector 63, 10233342 sectors; partition 2: ID=0xc, active, starthead 0, startsector 10233405, 72517410 sectors; partition 3: ID=0xc, starthead 0, startsector 82750815, 73545570 sectors, code offset 0xc0
[131005690060] |Partition  Startsector
1          63
2          10233405
3          82750815
[131005690070] |To mount a single partition, where X is the startsector of that partition, run:
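Assuming 512-byte sectors (see the update below), the command has this shape:
    mount -o loop,offset=$((X * 512)) sdb.img /mnt/mountpoint   # replace X with the startsector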
[131005690080] |So to mount the second partition, you will have to run:
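    mount -o loop,offset=$((10233405 * 512)) sdb.img /mnt/sdb2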
[131005690090] |sidenote: make sure that /mnt/sdb2 exists before you run this.
[131005690100] |Have fun!
[131005690110] |update: In the answer, I assumed that the sector size for your image was 512; please see this question on how to calculate that.
[131005700010] |SystemEvents table within the Syslog database (if you use the default schema provided).
[131005740030] |I would like to use a regular expression to filter inbound messages into separate database tables.
[131005740040] |I've played with this, but I'm having a hard time figuring out the best way to accomplish this (or even a way that functions).
[131005740050] |In my rsyslog.conf:
[131005740060] |This was my latest attempt, but I'm stuck.
[131005740070] |(the RogueAPs table is just a clone of the default SystemEvents table that ships with rsyslog)
[131005740080] |Version Info:
[131005750010] |Hi,
[131005750020] |from taking a look at this tutorial, I see no difference.
[131005750030] |But taking a look at the template documentation from rsyslog, there seems to be a difference with mysql depending on the setting of the parameter NO_BACKSLASH_ESCAPES.
[131005750040] |From the docs:
[131005750050] |Hope this helps.
[131005760010] |ls shows colors if the CLICOLOR environment variable is set or if -G is passed on the command line.
[131005770030] |The actual colors are configured through the LSCOLORS environment variable (built-in defaults are used if this variable is not set).
[131005770040] |To show directories in light blue, use
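The setting itself was lost from this dump; it would be along these lines (the leading capital E in LSCOLORS means bold/bright blue for directories; the remaining pairs are the defaults):
    export CLICOLOR=1
    export LSCOLORS=Exfxcxdxbxegedabagacad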
[131005770050] |ls shows colors if --color is passed on the command line.
[131005770060] |The actual colors are configured through the LS_COLORS environment variable, which can be set with the dircolors command (built-in defaults are used if this variable is not set).
CLICOLOR will turn colors on or off. LS_COLORS is not necessary, and will let you customize the colors.
[131005780040] |Note that on some systems (like Mac OS X and FreeBSD), the variable is LSCOLORS, not LS_COLORS.
[131005780050] |I avoid setting the color using LS_COLORS, because then I need to apply that same color scheme on a hundred different computers and 10 different Un*xes.
[131005780060] |Instead, I modify the color in my terminal program, which works the same on most Unix hosts.
[131005790010] |base64 linux command, but I don't understand how I can apply it to integers.
[131005820040] |I have tried with base64 10, but then I get the error message base64: 10: No such file or directory.
[131005820050] |I think that the problem may be that Base64 is used for binary-to-text conversion, so there is no point in passing a textual argument to it.
[131005820060] |So my argument 10 is read as a textual string and not as a binary number.
[131005820070] |Is there any way I can turn a textual argument into a binary one?
[131005820080] |How can I use the base64 command to get the Base64 value for an integer?
[131005830010] |Convert the number into hex, then use echo to print the corresponding byte sequence and pipe that into base64.
[131005830020] |So to encode the integer 10 with base64, you can use:
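One command matching this description (bash's echo producing the single byte 0x0a):
    echo -en '\x0a' | base64    # prints Cg==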
[131005830030] |To explain the result.
[131005830040] |The byte 10 has the following binary representation:
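For the byte value 10, that is:
    00001010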
[131005830050] |What base64 does is, it chunks those into groups of 6 bits.
[131005830060] |So with padding we get the following two 6lets:
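That is:
    000010 100000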
[131005830070] |Which in decimal are 2 and 32, which correspond to the letters C and g.
[131005840010] |/etc under version control, under various unices?
[131005840030] |Turnkey doesn't necessarily mean part of the base install, but the following features would be nice:
[131005840040] |git, especially since by using different branches I can keep my /etc as similar as possible across different distributions while keeping as much stuff in one place as possible (for some areas that obviously fails; apache configuration, for example, is really different across distributions).
[131005850030] |It works like this:
[131005850040] |I have my master repo with my default configuration files.
[131005850050] |Now I come in touch with a new distro, so I create a new branch off my master branch, named after the distribution (in this example, debian).
[131005850060] |Debian keeps some config file in a location different from my master, so I do a git mv file new_loc.
[131005850070] |And everything is fine.
[131005850080] |I switch back to master and change that file because I added some specific config directive; when I merge master into my debian branch, the moved file is changed, so I can basically change most things within my master branch and just have to merge the changes into my "distribution" branches (usually they tend to be more of a mix of distribution and purpose branches; a debian server obviously has some differences from a debian workstation, but the feature still works).
[131005850090] |So basically I have a "generic configuration" in master and (to say it in object-oriented programming terms) inherit it into my branches (which can also inherit from each other).
[131005850100] |Apart from that, git's mechanism to "cherry-pick" commits (in this case, changes to /etc) has been quite helpful to me at times when I only needed parts of a certain configuration.
[131005850110] |Now to some of your ideas:
[131005850120] |git: it's just another branch that you sometimes merge (partially) into master
(rm -rf with a relative path from shell history in a different directory than previously, or something like this)
sudo and try to hurt your system (but I don't think that this kind of malicious software is very popular)
NOPASSWD option only for selected commands that won't hurt your system (for editors or for restarting services), and keep the password for other commands.
[131005930010] |eth1 for packets that have the mark 1 (except packets to localhost).
[131005940020] |The ip command is from the iproute2 suite (Ubuntu: iproute, iproute-doc).
[131005940030] |The other half of the job is recognizing packets that must get the mark 1; then use iptables -t mangle -A OUTPUT … -j MARK --set-mark 1 on these packets to have them routed through routing table 1. I think the following should do it (replace 1.2.3.4 by the address of the non-default-route interface):
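The rule itself was lost here; matching on the source address as described, it was presumably along these lines:
    iptables -t mangle -A OUTPUT -s 1.2.3.4 -j MARK --set-mark 1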
[131005940040] |I'm not sure if that's enough, maybe another rule is needed on the incoming packets to tell the conntrack module to track them.
[131005950010] |winetricks handle.
[131005970010] |mdm package in Ubuntu is a homonym).
[131006020010] |sendmail in RHEL 5. I have three mail accounts for the users jack, bob and alice.
[131006020030] |I want to make sure that user bob can send mail to alice but jack can't send mail to alice.
[131006020040] |But user jack can send mail to bob.
[131006020050] |How can I do this?
[131006030010] |Hi there,
[131006030020] |Try to look at this link http://www.linuxquestions.org/questions/slackware-14/sendmail-block-email-to-particular-user-on-same-domain-676596/
[131006030030] |That might work for you too.
[131006030040] |Ismael Casimpan :)
[131006040010] |A simple approach would be to set up a .procmailrc file in Alice's home directory to throw away mail from Jack (see "man procmail").
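A minimal recipe of that sort (the address pattern is an invented example):
    :0
    * ^From:.*jack@example\.com
    /dev/null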
[131006050010] |xinput can be put in two separate groups.
[131006050030] |Can I lock each group to one screen?
[131006050040] |And how can this be done permanently?
[131006050050] |Alternatively, instructions for turning this "lite multiseat" configuration into multiseat are appreciated as well.
[131006050060] |This is the xorg.conf as set up by the Catalyst Center:
[131006050070] |(the latter entry is from VirtualGL, see this question, it should be irrelevant here)
[131006060010] |Have you tried something like MDM?
[131006060020] |It looks like it can handle the keyboard/video/mouse mappings in its config file.
[131006070010] |Look for multiseat and you will find the info you need.
[131006070020] |The linked Wikipedia article even describes where MDM fits in.
[131006070030] |I'm a Debian fan, so check out the Debian Wiki or the Ubuntu Community Docs.
[131006070040] |XORG has a good collection of multiseat info, including this detailed how-to.
[131006070050] |Good Luck!
[131006080010] |If I understood your needs, you have to bind one screen, one keyboard and one mouse to one ServerLayout, and the others to the second one.
[131006080020] |http://cambuca.ldhs.cetuc.puc-rio.br/multiuser/
[131006080030] |This is, as far as I know, the only way to proceed.
[131006080040] |Arch also has a good tutorial:
[131006080050] |https://wiki.archlinux.org/index.php/Xorg_multiseat
[131006080060] |And Linux Toys even shows you how to put in place a six-seat setup:
[131006080070] |http://www.linuxtoys.org/multiseat/multiseat.html
[131006090010] |The other answers were certainly on the right path, but the MDM/multiseat documentation is quite lacking and dispersed.
[131006090020] |Some of the links provided here were outdated, referencing XFree86, Xorg's predecessor.
[131006090030] |Some digging shows that most MDM configurations use Xephyr.
[131006090040] |Here is a HOWTO on building Multiseat Xephyr configuration:
[131006090050] |http://en.wikibooks.org/wiki/Multiterminal_with_Xephyr
[131006100010] |One interesting possibility I forgot is what Tyler Szabo's answer to my question Multiseat gaming? @gaming.SE suggests:
[131006100020] |I would use VMWare.
[131006100030] |This might be possible with just VMWare player (you will need to be able to allocate a mouse to a single VM), or you might need to try VMWare workstation (for which I'm quite sure it works).
[131006100040] |The hardware/software you will need is as follows:
[131006100050] |/dev/sdb.
[131006120010] |Most USB keys use the FAT format (more precisely FAT32), which is a simple format native to older versions of Windows and almost universally supported.
[131006120020] |If you formatted the key using HFS(+) or UFS, and you now want to format it as ext3, first find out if there is a partition on the key.
[131006120030] |Run ls /dev/sdb*.
[131006120040] |If this shows only /dev/sdb, there is no partition, so create the filesystem directly onto /dev/sdb.
[131006120050] |If this shows one partition (probably /dev/sdb1 but it could be a different number), create the filesystem there.
[131006120060] |If there are several partitions, you can put different filesystems on them, or repartition the disk.
[131006120070] |Run file -s /dev/sdb1 to check what filesystem is currently on that partition (maybe use a different number, or no number, as determined above).
[131006120080] |If you're sure you want to make a new filesystem, run mkfs.ext3 /dev/sdb1.
[131006120090] |For removable media, you probably don't want any reserved block, so run
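The command was dropped from this dump; setting the reserved-block percentage to zero is done with tune2fs, so it was presumably:
    tune2fs -m 0 /dev/sdb1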
[131006120100] |If your Linux is recent enough, you may want ext4 or btrfs, as they are supposed to be better for flash devices (though I don't know if this applies to low-end flash media as found on USB keys).
[131006120110] |But again, there's rarely a reason not to use FAT on a USB key.
[131006130010] |Depending on what level of compatibility you want you can either go for ext3 or the more universal FAT32.
[131006130020] |To format as FAT32:
[131006130030] |Or if you want to format for ext3:
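The commands were lost here; they were presumably along these lines (use the device name you determined for your key):
    mkdosfs -F 32 /dev/sdb1    # FAT32
    mkfs.ext3 /dev/sdb1        # ext3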
[131006130040] |As an aside, if you want to rename the USB pendrive, to rename FAT32:
[131006130050] |Or for ext3:
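Again reconstructing the lost commands (labels are placeholders):
    dosfslabel /dev/sdb1 MYUSB    # FAT32
    e2label /dev/sdb1 myusb       # ext3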
[131006130060] |For a general look at formatting, check out this howto.
[131006140010] |Light-weight alternative to Apache
[131006140020] |In my set-up, I run (as root) ln -s ~/share /var/www.
[131006140030] |By doing this, anyone on the local network can access the ~/share directory by pointing their browsers at http://hostname/share.
[131006140040] |I use apache, but I suspect that it's overkill (memory usage) for this simple use.
[131006140050] |What alternative light-weight web servers can I use for this?
[131006140060] |Will it also be tinker-free (in Debian, there's no extra set-up for apache, beyond installation)?
[131006150010] |You will have a long list for this.
[131006150020] |At my workplace we have Cherokee instead of Apache.
[131006150030] |Cherokee has a nice web admin interface that makes configuration really easy.
[131006150040] |I also heard about Nginx and lighttpd.
[131006160010] |For a temporary share, I have this shell script in my path:
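The script itself is missing from this dump; the port-8000 default strongly suggests Python's built-in server, so it was presumably something like:
    #!/bin/sh
    cd "${2:-.}" && exec python -m SimpleHTTPServer "${1:-8000}"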
[131006160020] |By default, it shares the current directory on port 8000; the syntax is scriptname [port [directory]].
[131006170010] |You could try thttpd.
[131006170020] |It's very lightweight, uses very few resources.
[131006170030] |It does have some slightly odd policies about file access as I recall.
[131006180010] |nginx (pronounced Engine-X) is a very popular lightweight alternative for serving static (and other) content.
[131006180020] |It works out of the box on most distributions.
[131006190010] |Debian packages that provide a web server provide the httpd package.
[131006190020] |Just about anything in there that isn't a version of Apache is a lightweight web server.
[131006200010] |Lighttpd is lightweight and easy to set up; nginx seems to be the more popular choice nowadays, though.
[131006200020] |It really isn't going to make much of a difference either way.
[131006200030] |I'd just stick with Apache, if you want to tweak it to use less memory try apache2-mpm-worker.
[131006200040] |Also, since you want all this stuff working 'out of the box', what version Debian are you running?
[131006210010] |What does etc stand for?
[131006210020] |What does the "etc" folder in the root directory stand for?
[131006210030] |I think knowing this will help me remember where certain files are located.
[131006210040] |Update: Might be useful for others, the folder is used for "Host specific configuration files" - reference.
[131006220010] |I thought it just meant "et cetera" as in "etc..." on a list.
[131006220020] |This product helps household pets like dogs, cats, etc...
[131006230010] |I always assumed it was to suggest a miscellaneous directory, like "etc = all the rest we do not bother to categorize further"; I personally use such a folder in my home directory as well (not called etc but "oars" = "(something) else").
[131006240010] |Define - /etc? has some good history.
[131006240020] |You can find references to "et cetera" in old Bell Labs UNIX manuals and so on – nowadays it's used only for system configuration, but it used to be where all the stuff that didn't fit into other directories went.
[131006250010] |It means "et cetera".
[131006250020] |In Latin literally "and the rest".
[131006250030] |And I have evidence.
[131006260010] |Adding a network card to the modprobe file
[131006260020] |I'm trying to setup an ADSL connection on an Arch Linux machine.
[131006260030] |I read the Arch wiki article on Internet Access, but I'm still confused -- how do I add my network card to the modprobe file?
[131006270010] |Looking at the link you are pointing to, I find:
[131006270020] |"Most users of external modems or those behind routers should consult the Configuring Network article instead."
[131006270030] |Have you tried that?
[131006280010] |Indeed, do you plug the phone cord directly into your PC?
[131006280020] |Otherwise, if you have a router, you don't need to do anything except configure your network correctly (that is, with the correct gateway -- for example 192.168.1.1).
[131006280030] |To answer your question, they probably mean adding the module (driver) to /etc/modules (might be named differently on Arch), so that it is forced to load on boot (most modules are loaded automatically as their corresponding hardware is detected).
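For illustration (the module name is just an example; Arch at the time used the MODULES array in /etc/rc.conf instead):
    echo 8139too >> /etc/modules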
[131006290010] |What happens when your password expires and you're using key authentication?
[131006290020] |On one of the boxes I don't control at work, I use ssh keys to log in.
[131006290030] |Our passwords are set to expire after a few months.
[131006290040] |What happens if I don't reset my password and it expires? Will I still be able to log in?
[131006290050] |Does what happens depend on some sshd setting? Or some pam setting?
[131006300010] |Under default behavior, you will still be able to log in using your ssh key, but the system administrator is free to change this behavior using pam or other methods.
[131006300020] |OpenSSH doesn't care about the expiration date on your password if it's not using password authentication, but pam can be set up to check password expiration even after sshd has authenticated your key.
[131006300030] |It could probably even be set up to force you to enter and change your expired password before handing you the shell prompt.
[131006300040] |For the best answer, ask your sysadmin.
[131006310010] |dns queries not using nscd for caching
[131006310020] |I'm trying to use nscd (Nameservices Cache Daemon) to cache dns locally so I can stop using bind to do it.
[131006310030] |I've gotten it started and ntpd seems to attempt to use it.
[131006310040] |But everything else for hosts seems to ignore it, e.g. if I run dig apache.org 3 times, none of them will hit the cache.
[131006310050] |I'm viewing the cache stats using nscd -g to determine whether it's been used.
[131006310060] |I've also turned the debug log level up to see if I can see it hitting and the queries don't even hit nscd.
[131006310070] |nsswitch.conf
[131006310080] |nscd.conf
[131006310090] |resolv.conf
[131006310100] |as kind of a side note I'm using archlinux.
[131006310110] |note: this has been moved twice; I've never figured out why apps other than dig were not hitting the nscd cache. Browsers, IM, irc all should have been, but they weren't.
[131006320010] |I don't know that much about nscd except that it so often caused trouble with DNS lookups that I always disabled it (or at least the host lookups part of it).
[131006320020] |Nscd lets you set the time-to-live values and I know DNS expects to "own" those values and have all resolvers honor them.
[131006320030] |You can end up with weird results if the TTLs in DNS aren't honored.
[131006320040] |My recommendation is not to use nscd for caching DNS.
[131006320050] |It looks like you already have a caching name server running on your local box, so no need to cache DNS lookups twice.
[131006330010] |You're missing the hosts configuration in nscd.conf.
[131006330020] |I'm posting mine as an example:
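The example didn't survive in this dump; the hosts stanza of nscd.conf looks like this (TTL values are illustrative):
    enable-cache            hosts   yes
    positive-time-to-live   hosts   3600
    negative-time-to-live   hosts   20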
[131006330030] |This will break some things.
[131006330040] |The following information is from the Debian package:
[131006340010] |nscd is really unreliable for everything, not just DNS.
[131006340020] |It's well worth avoiding unless you desperately need it for some reason.
[131006340030] |You should use a purpose-made DNS caching daemon if you want to cache DNS locally (which is a good idea!).
[131006340040] |Two of my favourites are dnsmasq and dnscache from djbdns.
[131006350010] |If there is DNS caching in Hell, it is provided by nscd.
[131006350020] |Don't. Use.
[131006350030] |It.
[131006350040] |Just to be different: pdnsd is actually a very nice replacement.
[131006350050] |Or unscd (used by default at least in openSUSE).
[131006360010] |The reason why you are missing the cache hits is that dig queries the DNS directly.
[131006360020] |You can try and see whether the cache works with the getent command:
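For example:
    getent hosts apache.org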
[131006360030] |Running a separate caching DNS is a good idea, but you should consider running it on the network level if possible.
[131006360040] |If each host caches the data separately, they will still run multiple queries for the same hosts.
[131006360050] |A single cache works around this problem.
[131006360060] |Nscd itself is a caching daemon for NSS functions.
[131006360070] |So the focus is a bit different than native caching nameservers.
[131006360080] |So if you just want a caching nameserver, use something other than nscd.
[131006360090] |If instead you wish to cache things like shared usernames and hostdata outside of the normal DNS system, go for nscd.
[131006360100] |And for the record, I've grown quite fond of the PowerDNS resolver (pdns-recursor).
[131006370010] |Stuck on "grub>" prompt when dual-booting Ubuntu
[131006370020] |On the dual boot option screen I choose Ubuntu and only get:
[131006370030] |I want to get into Ubuntu and start learning it a bit but am stuck.
[131006370040] |What would be the best way to fix my problem?
[131006380010] |Next time I need to pay attention to where I install Ubuntu.
[131006380020] |If it is on external media, like in this case, I either need to leave the drive plugged in (although I don't believe you can readily boot an OS via USB) or install Ubuntu locally.
[131006390010] |I can't comment yet, so I'm answering here.
[131006390020] |You can most definitely single-, dual- or multiple-boot OSs from a USB drive*.
[131006390030] |What do you mean by "install Ubuntu locally"?
[131006390040] |*As long as the BIOS supports booting from USB of course.
[131006390050] |All modern BIOSes support this.
[131006390060] |If you have a BIOS which does not, boot off the Plop Boot CD which then provides USB boot support.
[131006400010] |What is the most advanced video editing FLOSS?
[131006400020] |There are so many video editors out there (at least half a dozen during my search), and I wonder which is the most advanced among them?
[131006400030] |How does it compare with commercial offerings?
[131006410010] |There are different ways of rating video editing software, and depending on which attribute (features? user-friendliness?) you want to focus on, the answer to your question will be different.
[131006410020] |Assuming you mean "Which open source video editing software is the most complete (in terms of features)?", then the answer is probably Cinelerra.
[131006410030] |To get an idea of how it compares to other video editing software, I suggest you have a look at the appropriate Wikipedia page.
[131006420010] |http://www.linuxalt.com/ shows Linux equivalents for Windows software; that might help
[131006430010] |Kdenlive
[131006430020] |I've only had a little experience with this, but I rarely need to edit video.
[131006430030] |It crashed a few times when I used it about a year ago, but maybe it was just me or it's improved.
[131006430040] |It's fairly easy to use as well.
[131006430050] |The screenshots on the site show it in KDE, but I don't think it depends on KDE.
[131006430060] |I could be wrong though.
[131006440010] |How do I resize a partition in Ubuntu linux without losing data?
[131006440020] |I ran out of space on the drive, only to find that there was another unformatted partition in the system that is available.
[131006440030] |I now want to resize the current partition to take in the empty partition without losing data.
[131006440040] |Any ideas?
[131006440050] |Thanks.
[131006450010] |Boot from a live Linux distro (you can use the Ubuntu install disk) and use gparted.
[131006450020] |But something can always go wrong, so it is advisable to make a backup.
[131006450030] |The other option is to format the unused partition and mount it and use it (depending on the size) as /home or /usr
[131006460010] |Try this live CD: PartedMagic
[131006470010] |LVM is the way to go.
[131006470020] |Turn your whole spindles into PVs and migrate from the legacy partition-based model to the LVM model.
[131006470030] |RedHat has some good documentation on LVM, check it out.
[131006480010] |to find that there was another unformatted partition
[131006480020] |Are you sure that this isn't your swap-partition?
[131006480030] |Besides that, I would recommend LVM, as mentioned before by slashdot.
[131006490010] |NIS: How to allow access to both local and remote users' home directories?
[131006490020] |We have a NIS server with shared users' home directories in '/home'.
[131006490030] |We're used to mounting the server's '/home' using '/home' as the mount point on each NIS client.
[131006490040] |However, if we do this with a machine that has existing local users, their home directories will not be accessible because '/home' is now a mount point to the server.
[131006490050] |How can we make both local and remote users' home directories accessible on the client?
[131006500010] |What you can do is to setup an autofs mountpoint in, say, /home2.
[131006500020] |Then set each user's home directory as /home2/machine/user.
[131006500030] |If you set autofs to mount machine:/home in /home2/machine, then you have what you want, because the local /home can be mounted as /home2/localmachinename.
[131006500040] |And you can of course configure autofs via NIS.
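A sketch of the autofs side (map file name invented; '&' expands to the looked-up key, i.e. the machine name):
    # /etc/auto.master
    /home2  /etc/auto.home2
    # /etc/auto.home2
    *   &:/home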
[131006510010] |If it's at all an option, I recommend making the "physical" location of each home directory something like /net/$hostname/$username, i.e., include the name of the server as part of the path.
[131006510020] |Then arrange for /home/$username to point to /net/$(server-of $username)/$username.
[131006510030] |One possibility is to make /home a union mount of all the /net/*.
[131006510040] |Alternatively, you can make /home an automount point and set up the automounter to mount /net/file-server/$username for a NIS user or /home.local/$username for a local user.
[131006520010] |What we do is mount the server's home on /mnt/server/export/home, then on other machines symlink each user's home directory into the local /home.
[131006520020] |This can be maintained semi-automatically across machines with rsync.
[131006530010] |Bash script IDE
[131006530020] |Is there a bash/ksh/any shell script IDE?
[131006530030] |Don't you get annoyed when you forget the space inside an if, or make some other minor syntax mistake that you make from time to time but that takes you a long time to figure out (especially when you're tired)?
[131006530040] |I knew about some of the suggestions listed below, but I'm looking for something like Eclipse (i.e. what it is for Java).
[131006540010] |Just about every editor supports syntax highlighting for shell; this can help you spot problems.
[131006540020] |In addition, you can put set -x and set -e at the top of your scripts.
[131006540030] |The -x tells the shell to print out every command before it executes it.
[131006540040] |The -e tells the shell to terminate the script if any errors occur.
[131006540050] |These should really help cut down on time spent looking for bugs.
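A minimal illustration:
    #!/bin/bash
    set -x    # echo each command before running it
    set -e    # abort at the first command that fails
    cp /etc/hosts /tmp/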
[131006550010] |How to do a binary install of Python 2.7 on SUSE Linux Enterprise Server 11?
[131006550020] |My SLES11 box came with Python 2.6 installed.
[131006550030] |I would like to upgrade to 2.7.
[131006550040] |What is the easiest way to do this?
[131006560010] |Most likely, you're not going to want to replace the existing python, since that would probably break the existing OS software.
[131006560020] |You could either build a package for python 2.7, and have it install as /usr/bin/python2.7, or install in another location like /usr/local/bin/python.
[131006560030] |Or, you could just compile manually and install in /usr/local.
[131006560040] |If you're installing to an alternate location, use make altinstall.
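A sketch of the manual route, assuming an unpacked Python 2.7 source tree:
    ./configure --prefix=/usr/local
    make
    sudo make altinstall    # installs python2.7 without touching /usr/bin/python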
[131006580010] |What is a valid hostname label?
[131006580020] |I set up my hostname to a number, where running hostname gives:
[131006580030] |But when I run ping 6592, I get:
[131006580040] |I checked the related Wikipedia page, and it does say that such a hostname is allowed (IIUC).
[131006580050] |What am I missing?
[131006590010] |Well, not exactly...
[131006590020] |What Wikipedia, and in turn the RFCs, say is that since the original RFC 952, which didn't allow leading numerics, you can now have them (per RFC 1123). You still can't have an all-numeric name though, which is your problem.
[131006590030] |Your '6592' isn't a valid hostname, while '6592x' should be fine.
[131006590040] |But, RFCs aside, I've had problems within the last year or so with leading numerics.
[131006590050] |I'd avoid them, unless there's a compelling reason not to.
[131006600010] |What the RFC says is actually immaterial here.
[131006600020] |The RFC specifies what goes on at the DNS level, but that's moot if ping doesn't make a DNS query in the first place.
[131006600030] |When ping receives an all-numeric argument, it interprets it as an IP address.
[131006600040] |IPv4 addresses are technically 32-bit numbers.
[131006600050] |They are almost always written in dot-decimal notation, so-called "dotted quads" like 127.0.0.1.
[131006600060] |But they can also be written as a single number in decimal like 2130706433 or in hexadecimal like 0x7f000001.
[131006600070] |Addresses in the range 0.0.0.0/8 are reserved for use as source addresses in broadcasts.
[131006600080] |You can't send a packet to them, which is why connect(2) returns EINVAL.
[131006600090] |Many programs, including most ping implementations, have no option to force a DNS lookup.
[131006600100] |You would run into similar trouble if you made a local network with all-numeric subdomains, and ended up with a valid hostname that looked like a dotted quad.
[131006600110] |If your local network has a name, ping 6592.mynetwork will work.
[131006600120] |But you're likely to run into similar trouble down the line, as sooner or later you'll want to omit the domain name.
[131006600130] |Just go with the flow and include a letter, preferably at the start.
[131006610010] |A computer file is a block of arbitrary information, or a resource for storing information, which is available to a computer program and is usually based on some kind of durable storage.
[131006630010] |How to stop Vim from changing my directory when executing a makefile?
[131006630020] |OK, I posted this question before I knew the extent of what was happening.
[131006630030] |Suppose I have a mix of Python and C++ code.
[131006630040] |I use a simple makefile which copies the files from my editing directory to where they are compiled (with a separate makefile).
[131006630050] |Originally I was editing source.c, running :make, and then a new buffer opened in my current window (replacing source.c) with /some/other/dir/source.h.
[131006630060] |I then tried this again while editing some_python.py and running :make and yet again it opened /some/other/dir/source.h. Can anyone explain this?
[131006630070] |Am I going crazy?
[131006630080] |Hello all,
[131006630090] |I am having this annoying issue with Vim.
[131006630100] |I like the fact that I can be editing a file and then type :make to automatically execute a Makefile in the same directory.
[131006630110] |However, Vim is changing my directory after it is executed.
[131006630120] |For example, if my source file is /some/dir/source.c and it needs to be compiled in /some/dir/library/, my makefile first copies the file to the library folder and then executes another makefile.
[131006630130] |The problem happens after compilation finishes.
[131006630140] |If I launch vim as 'vim /some/dir/source.c' and then use :make, when the compile finishes I am looking at '/some/dir/libary/source.c'.
[131006630150] |I would like to be looking at the file in the original location.
[131006630160] |Does this make sense?
[131006630170] |What can I do to disable this behavior?
[131006630180] |Thanks!
[131006630190] |UPDATE I was mistaken before -- when the make completes, a new buffer is opened in my window which has the copied header file (even if I was editing the .c before the compile).
[131006630200] |So I open /some/dir/source.c, then do :make, then in my current window a new buffer is opened with /some/dir/library/source.h. Weird?
[131006630210] |The original buffer is still open, but I need to switch back to it since its now in the background.
[131006640010] |It seems like your makefile (stdout/stderr) output triggers the default quickfix mode of your vim.
[131006640020] |Perhaps /some/other/dir/source.h is compiled by your recursive make call and a warning is produced, and the quickfix mode jumps to its location.
[131006640030] |Or the filename is part of other makefile output and the quickfix mode mistakes it for a warning/error message of the compiler.
[131006640040] |You can try to disable the quickfix mode for your session (if you don't need it), change the error format or change your makefile to generate less output.
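For example, within a session (the errorformat trick is a blunt variant):
    :make!               " run make without jumping to the first error
    :set errorformat=    " or keep quickfix from parsing anything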
[131006650010] |How can I avoid "Run fsck manually" messages while allowing experimenting with system time changes?
[131006650020] |I'm working with a system where we want to allow users to play around with the date and time if they want, and where reboots may happen arbitrarily.
[131006650030] |This is fine, except for one thing: if there's a large time jump backwards, the following error appears on reboot:
[131006650040] |…and then the boot hangs waiting for user console input, and even once console access is gained, requires a root password to continue.
[131006650050] |This is decidedly less than ideal.
[131006650060] |Is there any way to either skip the check or force the check to happen automatically on reboot?
[131006650070] |Google has only provided help that requires running fsck manually if/when this is hit, which is not what I'm after.
[131006650080] |Running fsck manually after setting the time doesn't work as the filesystem is still mounted at that point, and just disabling fsck entirely is less than ideal.
[131006650090] |I'm using RedHat 6.
[131006650100] |Update: The solution I'm currently going with is to hack fstab to disable fsck checking on reboot.
[131006650110] |I'd tried editing the last mount time on the disks using debugfs, which works fine for ext3 drives, but appears to fail inconsistently on ext4.
[131006660010] |I doubt there's a way to remove this check specifically, short of modifying the source code.
[131006660020] |Ignoring all errors from fsck sounds dangerous, what if there was some other problem?
[131006660030] |Therefore I'll suggest the following workaround: change the boot scripts to set the system date to some time in the future (say 2038-01-18 on a 32-bit machine) just before running fsck, and read it back from the hardware clock afterwards (hwclock --hctosys, with more options as needed depending on your hardware and use of GMT in the hardware clock).
[131006670010] |This sounds like it should be run in a virtual machine, where you can have more control (or just revert to a snapshot).
[131006680010] |I was going to suggest hacking e2fsck to disable the specific checks for a last mount time or last write time in the future.
[131006680020] |These are defined in problem.c / problem.h, and used in super.c.
[131006680030] |But in looking, I discovered that E2fsprogs 1.41.10 adds a new option to /etc/e2fsck.conf called broken_system_clock.
[131006680040] |This seems to be exactly what you need, and since you're using Red Hat Enterprise Linux 6, you should have 1.41.12, which includes this option.
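In /etc/e2fsck.conf that looks like:
    [options]
    broken_system_clock = 1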
[131006680050] |From the man page:
[131006680060] |Yes, the man page can't spell "heuristics".
[131006680070] |Oops.
[131006680080] |But presumably the code works anyway. :)
[131006690010] |Assistance wanted in removing the "No mail." notification in MOTD for Ubuntu 10.10
[131006690020] |How do I remove the No mail. notification on Ubuntu 10.10 Server?
[131006690030] |This occurs when I log in, and is located underneath the message of the day
[131006690040] |I have a separate script which uses a different messaging system than the default one
[131006700010] |That message is coming from the pam_mail module.
[131006700020] |Look in /etc/pam.d/* for calls to pam_mail.
[131006700030] |Replace standard or empty by quiet if you want to suppress No mail but keep the notification when there is mail.
[131006700040] |Replace by nopen to suppress the notification altogether.
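A typical line in, say, /etc/pam.d/login looks like this (change standard to quiet or nopen as described):
    session    optional    pam_mail.so standard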
[131006710010] |Is it secure to leave a root shell running in detached screen session?
[131006710020] |I'm curious about the security of leaving a root shell running inside a detached screen session.
[131006710030] |I typically never do this.
[131006710040] |Aside from the potential of my non-root user account being compromised (password exposed, ssh key compromised, etc), are there other vectors of entry into a detached, password-protected screen session I should be worried about, or can a detached screen session be considered inert?
[131006720010] |The pipes created by screen are only accessible by the owner, therefore this shouldn't be a security issue.
[131006730010] |I think it is a security issue, because that "Aside from the potential of my non-root user account being compromised" can be rather large.
[131006730020] |But there are other increased risks beyond that.
[131005730030] |For example, you've now opened yourself up to a theoretical exploit which allows one to change permissions in the screen socket dir (/var/run/screen on my system, but sometimes /tmp is used).
[131005730040] |That exploit now has a path to getting root, which it might not otherwise.
[131005730050] |sudo has other advantages, if you can train yourself to use it for each command rather than doing sudo su -.
[131006730060] |It logs actions (which, unless you're logging remotely, doesn't meaningfully increase security, but does give you a trail of what you've done).
[131006730070] |And it helps prevent accidents by requiring intentional escalation for each command, rather than switching to an entirely-privileged session.
[131005740010] |If you have a root shell in a screen session (detached or not, password-protected or not), and your screen executable is not setxid, then an attacker who gains your privileges can run commands in that shell.
[131006740020] |If nothing else, they can do it by ptracing the screen process.
[131006740030] |If screen is setuid or setgid, and the session is detached and password-protected, then in principle it takes the screen password to run commands in that shell.
[131006740040] |If this principle holds, someone who'd only compromised your account would have to put a trojan in place and wait for you to type the password.
[131006740050] |However the attack surface (i.e. the number of places where things can go wrong due to a bug or misconfiguration) is uncomfortably large.
[131006740060] |In addition to the basic system security features, you're trusting:
[131006740070] |screen to get the password check right.
[131006740080] |screen to prevent access to the session by other means.
[131006740090] |screen to use the OS access control mechanisms properly (e.g. permissions on the pipes).
[131006740100] |the kernel to perform the ptrace security checks correctly (this is a frequent source of vulnerabilities).
[131006740110] |the running shell not to do anything stupid.
[131006740120] |some other feature not to bite you.
[131006740130] |“Some other feature not to bite you”: yeah, that's vague.
[131006740140] |But it's always a concern in security.
[131006740150] |You might be tempted to dismiss this as just plain wishful thinking, but did you really think of everything?
[131006740160] |For example…
[131006740170] |As long as you can write to the terminal device, you can inject data into that shell's input.
[131006740180] |Under screen's default configuration on my machine:
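The command itself was lost from this dump; based on the explanation that follows, it was presumably in this spirit (pts number invented; the \e escapes are as understood by bash's printf):
    printf '\ekfoobar\e\\\e[21t' > /dev/pts/33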
[131006740190] |This inserts ␛]lfoobar␛l
in the shell's input stream. \ek
is the control sequence that lets an application (or anything that can write to the terminal device) set the window title (see the “Naming windows” section in the screen manual), and \e[21t
makes the terminal report its title on the application's standard input (screen doesn't document this sequence, but does implement it; you can find it under CSI Ps ; Ps ; Ps ; t
in the xterm control sequences list).
[131006740200] |In fact, at least under screen 4.0.3, all control characters are stripped from the reported title, so the shell reads lfoobar
(assuming ␛]
is not bound to an editing command) and no newline.
[131006740210] |So the attacker can't actually execute a command that way, but can stuff a command like chmod u+s /bin/sh
followed by a lot of spaces and a likely-looking prompt.
[131006740220] |Screen implements several other similar risky control sequences; I don't know what their potential for vulnerabilities is.
[131006740230] |But hopefully by now you can see that the protection offered by screen session passwords is not that great.
[131006740240] |A dedicated security tool such as sudo is a lot less likely to have vulnerabilities.
[131006750010] |How can I play a sound when script execution finishes?
[131006750020] |Every now and then I execute some Python scripts which take quite a long time to run.
[131006750030] |I execute them like this: $ time python MyScript.py
[131006750040] |How can I play a sound as soon as the execution of the script is done?
[131006750050] |I use Ubuntu 10.10 (Gnome desktop).
[131006760010] |play
is a very basic (no UI) sound player from the sox package.
[131006760020] |You can replace it by any other command-line-driven sound player.
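For example (the sound file path is a placeholder):
time python MyScript.py; play ~/sounds/done.wav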
[131006770010] |Just pick a sound on your hard drive, and put a command to play it right after the command you're waiting on; they'll happen sequentially:
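# hypothetical sound file; any audio file and command-line player will do
time python MyScript.py; play /usr/share/sounds/alsa/Front_Center.wav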
[131006770020] |(You can use any player, naturally).
[131006770030] |I have a script called alertdone
that plays a tone and shows a libnotify alert when run; I use it for exactly this occasion:
[131006770040] |It's really simple, so if you want to make your own you can base it on this (mine requires notify-send
, mplayer
, and ~/tones/alert_1.wav
though):
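A minimal sketch of such a script (a guess at the original; it uses notify-send from libnotify):
#!/bin/sh
# alertdone -- hypothetical reconstruction
mplayer -really-quiet ~/tones/alert_1.wav &
notify-send 'alertdone' 'Your long-running command has finished'
Usage would then be: time python MyScript.py; alertdone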
[131006780010] |Append any command that plays a sound; this could be as simple as
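paplay /usr/share/sounds/freedesktop/stereo/complete.oga  # any sound file works here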
[131006780020] |or as complex as
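# hypothetical: play a sound and pop up a desktop notification as well
paplay ~/sounds/done.wav && notify-send 'Script finished' 'Your python script is done'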
[131006780030] |(Commands assume pulseaudio is installed; substitute your sound player, which will depend on your desktop environment.)
[131006790010] |You can also make this happen automatically.
[131006790020] |I will show you how in zsh
, then add info about bash
.
[131006790030] |The essence looks like this:
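# minimal sketch (zsh): precmd runs just before each prompt,
# i.e. right after the previous command finishes
precmd() { play -q ~/sounds/done.wav }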
[131006790040] |You can also make it only do it if the program was Python, e.g.
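# hypothetical: remember the command line in preexec, test it in precmd
preexec() { typeset -g last_cmd=$1 }
precmd()  { [[ $last_cmd == python* ]] && play -q ~/sounds/done.wav; last_cmd= }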
[131006790050] |In bash
, the best way is to download preexec.bash.txt and source it (e.g. . ~/preexec.bash.txt
at the top of your ~/.bashrc
), then the above (or something close to it) should work.
[131006790060] |(Not sure about the $3
bit to check if the command is Python.)
[131006790070] |If you're using GNOME Terminal, I would also point you to Flashing GNOME Terminal.
[131006790080] |It's a patch I wrote that makes the terminal blink when a command is done, so you can Alt-Tab to something else, then it lets you know when it's done.
[131006800010] |How can I use sudo within a function?
[131006800020] |I have written a function which acts in a similar way to tee
but also prepends a datestamp. Everything works fine except when I want to output to a file which is only root-writable (in my case a logfile within /var/log
). I've simplified the following code snippet to just include the bits which are not working:
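A hypothetical reconstruction of the snippet (the function name and log path are guesses):
logger() {
    while read -r line; do
        echo "$(date '+%F %T') $line" | tee -a /var/log/test.log
    done
}
echo "something happened" | logger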
[131006800030] |It works fine if I run the whole script like so: sudo ~/test_logger.sh
, but I can't always do this, since I want to use the logger function in files like ~/.bash_logout
, which are run automatically. I've tried putting sudo
in front of the echo in the while loop, but this does not work. Any ideas?
[131006810010] |sudo
does not work in the way you might think when you use redirection or pipe operators.
[131006810020] |The redirection is not performed with sudo
permissions.
[131006810030] |This is why
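sudo foo >bar   # 'foo' stands for any command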
[131006810040] |will not work if bar is only root-writable.
[131006810050] |When you run the script under sudo
, everything in the script gets superuser permissions so it works correctly in that circumstance.
[131006810060] |A workaround is to do this to make sure the writing command is run under sudo
:
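foo | sudo tee bar   # tee, the writing process, runs as root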
[131006810070] |Bear in mind, however, that this does not append to the file.
[131006810080] |It overwrites the file.
[131006820010] |It's generally bad practice to put sudo
in a script.
[131006820020] |A better choice would be to call the script with sudo
from ~/.bash_logout
or wherever else you want to use it, if you must, or better still just make /var/log/test.log
world-writable.
[131006830010] |As you've found, sudo command >out
doesn't work because 'command' is run by sudo, but '>out' is a function of the shell, not 'command'.
[131006830020] |So, you need to escalate the shell itself:
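sudo sh -c 'command >out'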
[131006830030] |Note that you want to be really, really sure what's in $data when doing this:
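sudo sh -c "echo $data >out"   # $data is expanded by your shell, then run by a root shell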
[131006830040] |Hence Simon's warning.
[131006840010] |Well, if you take a look at the man page of sudo, you can see examples of how to use it in scripts... the -c option lets you execute a command.
[131006850010] |EXT3 file system pre-digest material
[131006850020] |I am looking to understand the EXT3 filesystem source code.
[131006850030] |I think I need a little pre-digestion to fully understand the code.
[131006850040] |Can anyone please suggest some material (a blog, etc.) where I can get some basic understanding of the source code?
[131006860010] |I don't know of any online resources that are going to be as helpful as this book: Understanding the Linux Kernel.
[131006860020] |Chapter 12 covers the Linux VFS layer, and Chapter 18 covers ext2/ext3 specifically.
[131006860030] |The book is probably about due for a fourth edition, since it's circa 2.6.10, but the basics are still the same.
[131006860040] |There's a lot going on in filesystems these days, though, so it'd be nice if the book covered ext4 and btrfs as well.
[131006870010] |Can I get an 8-bit clean connection with 'socat'?
[131006870020] |This question is mostly about "socat", but here's some background to go with it:
[131006870030] |I am trying -- for reasons having mostly to do with nostalgia -- to write a virtual modem driver for use with VirtualBox.
[131006870040] |It should listen to the socket that VirtualBox connects to the virtual guest, and emulate (a) a standard Hayes command set and (b) let one connect to remote systems using "atd some.host.name".
[131006870050] |Mostly it works, but I've run into problems with data transfers.
[131006870060] |I assume the problem is mine, because I seldom have the chance to muck about with multi-channel communication, select loops, and the like...
[131006870070] |...so I thought I would prototype my solution using the "socat" command, like this:
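Something along these lines (a guess at the command; the socket path, host and options are placeholders):
socat UNIX-LISTEN:/tmp/vbox-serial,fork EXEC:'telnet -E -8 some.host.name',pty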
[131006870080] |This works, sort of, just like my solution -- basic interactive typing seems fine, but try a file transfer and it just falls over.
[131006870090] |I've also tried this, just in case there was some sort of tty line discipline in the way:
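# as above, but with the pty forced into raw mode (again, a guess at the original)
socat UNIX-LISTEN:/tmp/vbox-serial,fork EXEC:'telnet -E -8 some.host.name',pty,raw,echo=0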
[131006870100] |That didn't work any better.
[131006870110] |I'm wondering if anyone here has thoughts on solving this.
[131006870120] |The problem is not with telnet; using -E8
provides an 8-bit clean path that works fine by itself (e.g., when not involved in this sort of pty-proxying).
[131006870130] |This is obviously not a critical problem, but I'm hoping that someone else out there finds it at least mildly interesting.
[131006880010] |Although you say telnet is not the culprit, I would test taking it away.
[131006880020] |Have you tried the following?
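Something like this, relaying straight to the remote TCP port with no telnet in between (paths and ports are placeholders):
socat UNIX-LISTEN:/tmp/vbox-serial,fork TCP:some.host.name:23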
[131006880030] |This should be 8-bit clean by itself.
[131006890010] |Why is Vim eating up Ctrl when used with Ctrl+v and how to fix it?
[131006890020] |I'm using Vim to edit /etc/zsh/zshrc
to add key bindings for zsh
because it doesn't work with inputrc
.
[131006890030] |In my terminal with tmux when I type Ctrl+v then Ctrl+LeftArrow the shell will show ^[OD
.
[131006890040] |However, when I'm in Vim insert mode, pressing the same sequence will result in ^[[D
.
[131006890050] |I found out that ^[[D
is what the shell produces when I type Ctrl+v then LeftArrow.
[131006890060] |I have also changed ^[[D
to ^[OD
in the file /etc/zsh/zshrc
and it works as expected (pressing Ctrl+LeftArrow causes the cursor to move back a word).
[131006890070] |Here is the line I'm talking about:
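bindkey '^[OD' backward-word   # hypothetical reconstruction based on the description above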
[131006890080] |I guess something is wrong with Vim because it's consuming the Ctrl.
[131006890090] |How do I fix this?
[131006900010] |This is actually your terminal doing something weird, not Vim.
[131006900020] |Terminals have two sets of control sequences associated with cursor keys, for historical reasons: one for full-screen applications, often called “application cursor keys mode”, and one for read-eval-print applications (e.g. shells).
[131006900030] |In the old days, read-eval-print applications didn't have any line-editing features, and it was intended that the terminal, or the OS terminal driver, would eventually become more sophisticated.
[131006900040] |So the terminal sent control sequences intended for the terminal driver.
[131006900050] |Somehow the unix terminal drivers never gained decent line-editing features; these were added to applications instead (e.g. through the readline library).
[131006900060] |Your terminal is sending ␛OD
for Ctrl+Left in line-editing cursor keys mode, and ␛[D
in application cursor keys mode.
[131006900070] |You have two options:
[131006900080] |Configure your terminal not to make a difference between the two modes.
[131006900090] |How to do this is entirely dependent on your terminal emulator.
[131006900100] |Live with it.
[131006900110] |Since any given application always sets the terminal in the same mode, just configure its key bindings according to the mode it uses.
[131006910010] |Kernel Hacking Environment
[131006910020] |I have been working with embedded OSes like uCOS and ThreadX.
[131006910030] |While I have coded apps on Linux, I'm now planning to start learning Linux kernel development.
[131006910040] |I have a few questions regarding the environment.
[131006910050] |Which is the best distro with easy-to-use tools for kernel development? (So far I have used RHEL and Fedora.
[131006910060] |While I am comfortable with these, it also looks like Ubuntu has built-in scripts for easy kernel compilation, like 'make-kpkg', etc.)
[131006910070] |Can you describe the best setup for kernel debugging?
[131006910080] |While debugging other embedded OSes, I have used a serial port to dump progress, JTAG, etc.
[131006910090] |What kind of setup do Linux kernel devs use? (Will my testbed PC with a serial port be enough for my needs? If yes, how do I configure the kernel to dump to the serial port?)
[131006910100] |I'm planning to redirect kernel messages to a serial console, which I will read on my laptop.
[131006910110] |What tool is best for debugging and tracing kernel code? As mentioned earlier, is a serial console the only way, or does any IDE/JTAG kind of interface exist for PCs?
[131006920010] |JTAG probes do exist, but these are fairly expensive (and the companies building them have exclusive contracts).
[131006920020] |The best way to debug kernel code is to start it in kvm or qemu with gdbserver inside the emulation.
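A typical setup (a sketch; the kernel image path is a placeholder) uses QEMU's built-in gdb stub:
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -append 'console=ttyS0' -s -S
# in another terminal, attach gdb to the stub listening on port 1234:
gdb vmlinux -ex 'target remote :1234'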
[131006930010] |My personal flavor for Linux Kernel development is Debian.
[131006930020] |Now for your points:
[131006930030] |As you probably guessed, Ubuntu doesn't bring anything new to ease kernel development, afaik, apart from what's already available in Debian.
[131006930040] |E.g. make-kpkg is a Debian feature, not an Ubuntu one.
[131006930050] |Here are some links to get you started on common Linux Kernel development tasks in Debian:
[131006930060] | Chapter 4 - Common kernel-related tasks of Debian Linux Kernel Handbook
[131006930070] |Chapter 10 - Debian and the kernel of The Debian GNU/Linux FAQ
[131006930080] |The easiest way to do kernel debugging is using QEMU and gdb.
[131006930090] |Some links to get you started:
[131006930100] | http://files.meetup.com/1590495/debugging-with-qemu.pdf
[131006930110] |http://www.cs.rochester.edu/~sandhya/csc256/assignments/qemu_linux.html
[131006930120] |Though, you should be aware that this method is not viable for certain scenarios, such as debugging specific hardware issues, for which you would be better off using physical serial debugging and real hardware.
[131006930130] |For this you can use KGDB (it works over Ethernet too).
[131006930140] |KDB is also a good choice.
[131006930150] |Oh, and by the way, both KGDB and KDB have been merged into the Linux Kernel.
[131006930160] |More on those two here.
[131006930170] |Another cool method, which works marvelously for non-hardware related issues, is using the User-mode Linux Kernel.
[131006930180] |Running the kernel in user mode as any other process allows you to debug it just like any other program (examples).
[131006930190] |More on User-mode Linux here.
[131006930200] |UML has been part of the Linux kernel since 2.6.0, so you can build any official kernel version above that in UML mode by following these steps.
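For example (a sketch; the root filesystem image is a placeholder):
make defconfig ARCH=um
make ARCH=um
./linux ubda=rootfs.img mem=256M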
[131006930210] |See point 2 above.
[131006930220] |Unfortunately there is no best here, since each tool/method has its pros and cons.
[131006930230] |Hope this helps you start your crazy journey in Linux Kernel Development.
[131006940010] |If you're developing for an embedded platform that's not based on i386 hardware, you'll need to cross-compile.
[131006940020] |The Emdebian project provides toolchains to develop for many architectures (ARM, m68k, MIPS and more) on PCs (i386 or amd64).
[131006940030] |That means under Debian, you can simply add the repositories and apt-get install the toolchain for the target(s) of your choice.
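For example (hypothetical repository line and package name; adjust for your Debian release and target architecture):
echo 'deb http://www.emdebian.org/debian squeeze main' >> /etc/apt/sources.list
apt-get update && apt-get install gcc-4.4-arm-linux-gnueabi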
[131006950010] |Linux equivalent to ReadyBoost?
[131006950020] |Is there a kernel module or some other patch or something similar to Windows' ReadyBoost?
[131006950030] |Basically I'm looking for something that allows disk reads to be cached on a Flash drive.
[131006960010] |Linux has cachefs, which allows you to add a backing cache filesystem to any filesystem.
[131006960020] |It was originally designed and released in 1993 by Sun Microsystems for use with NFS, and was quickly copied by other Unix-like systems.
[131006960030] |So yes, it's already there and has been for years. :)
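A sketch using the modern Linux incarnation, FS-Cache with cachefilesd (paths are placeholders): point the cache at a directory on the flash drive, then mount with caching enabled.
# in /etc/cachefilesd.conf, set the cache directory:
dir /mnt/flash/fscache
# then mount an NFS share with FS-Cache enabled:
mount -t nfs -o fsc server:/export /mnt/data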
[131006970010] |Best practices for writing Blu-Ray discs on Linux
[131006970020] |I recently bought a Blu-Ray writer and am wondering how to best write the discs.
[131006970030] |The scenario is: I have a directory full of files and want to put them on the disc, read them back once to verify, and then put the disc on a shelf (i.e., the main purpose is for backup).
[131006970040] |Some of the files are bigger than 4.4GB or whatever the limit is for ISO filesystems.
[131006970050] |For writing to DVDs, I currently use growisofs
, with split
to break the files into bite-size chunks. growisofs doesn't seem to have good UDF support and splitting the files is lame, which is the motivation for my question.
[131006970060] |What is the current best practice for writing files onto a BD-R disc?
[131006970070] |I am on Debian Wheezy (Testing).
[131006980010] |I successfully used udftools to write DVDs with larger than 4GB files.
[131006980020] |In theory it supports Blu-ray writing but I lack the necessary hardware to test it.
[131006980030] |I recommend using a graphical application like k3b if you can.
[131006980040] |It's not an option for servers without monitors or for automated backup scripts, but for casual use it is more convenient.
[131006990010] |I ended up creating a zero file with dd, making a UDF filesystem on that with mkudffs, loop-mounting it, populating it, and then writing the UDF image with growisofs -Z /dev/dvd=foo.udf.
[131006990020] |Whether that's a best practice, I can't say, but it's a bit roundabout.
[131006990030] |On the other hand it does work.
[131006990040] |Packet writing led to much sadness and doesn't seem to work on DVD+R, which I also want to write using the same process.
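The steps, as a sketch (sizes and paths are placeholders; ~23866 MiB roughly fills a single-layer BD-R):
dd if=/dev/zero of=foo.udf bs=1M count=23866
mkudffs --media-type=hd foo.udf
mount -o loop foo.udf /mnt/udf
cp -a /path/to/files /mnt/udf/
umount /mnt/udf
growisofs -Z /dev/dvd=foo.udf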
[131007000010] |Filesystem and journal layout
[131007000020] |Are there any tools, or some other way in Linux, that can be used to view the internals of filesystems?
[131007000030] |How can I view the inode-related structures and the journal? And the cached pages of files (the page cache)?
[131007010010] |This will of course depend on what filesystem you are using.
[131007010020] |e2fsprogs contains debugfs, which works with ext2, ext3 and ext4, and is used to manually view or modify the internal structures of the file system.
[131007010030] |The man page for debugfs is here.
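For example (the device is a placeholder):
debugfs -R 'stat <2>' /dev/sda1    # show the inode structure of inode 2 (the root directory)
debugfs -R 'logdump' /dev/sda1     # dump the ext3 journal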
[131007020010] |Orphaned connections in CLOSE_WAIT state.
[131007020020] |I've got a SLES machine that accumulates TCP connections in a CLOSE_WAIT state for what appears to be forever.
[131007020030] |These descriptors eventually suck up all available memory.
[131007020040] |At the moment, I've got 3037 of them, but it was much higher before a hurry-up reboot recently.
[131007020050] |What's interesting is that they're not from connections to local ports that I expect to have listening processes.
[131007020060] |They have no associated PIDs, and their timers seem to have expired.
[131007020070] |I'm not a black-belt when it comes to the TCP stack, or kernel networking, but the TCP config seems sane, since these values are default, per the man page:
[131007020080] |So what gives?
[131007020090] |If the timers have expired, shouldn't the stack automatically clear this stuff out?
[131007020100] |I'm effectively giving myself a long-term DoS as these things build up.
[131007030010] |CLOSE_WAIT indicates that the remote end has closed the connection but the local application hasn't closed its socket yet. You should identify which program(s) have this problem.
[131007030020] |Try using netstat -tonp 2>&1 | grep CLOSE
to determine which programs are holding the connections.
[131007030030] |EDIT: If there is no program, then the service is being provided by the kernel.
[131007030040] |These are likely RPC services such as nfs
or rpc.lockd
.
[131007030050] |Listening kernel services can be listed with netstat -lntp 2>&1 | grep -- -
.
[131007030060] |Unless the RPC services have been bound to fixed ports, they will bind to ephemeral ports as your connections appear to show.
[131007030070] |You may also want to check the processes and mounts on the other server.
[131007030080] |You may be able to bind your NFS services to fixed ports by doing the following: select four unused ports for NFS (32763-32766 used here), and add fixed ports for NFS to /etc/services
.
[131007030090] |Configure statd to use the options --port 32763 --outgoing-port 32764
, configure rpc.mountd to use the option --port 32765
, then shut down and restart the NFS and RPC services.
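The /etc/services entries might look something like this (names and ports are illustrative):
rpc.statd    32763/tcp
rpc.statd    32763/udp
rpc.mountd   32765/tcp
rpc.mountd   32765/udp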
[131007040010] |No, there is no timeout for CLOSE_WAIT
.
[131007040020] |I think that's what the off
means in your output.
[131007040030] |To get out of CLOSE_WAIT
, the application has to close the socket explicitly (or exit).
[131007040040] |See How to break CLOSE_WAIT.
[131007040050] |If netstat
is showing -
in the process column:
[131007040060] |are you running with the appropriate privileges and capabilities (e.g. as root)?
[131007040070] |they could be kernel processes (e.g. nfsd)
[131007050010] |How to concatenate strings in bash?
[131007050020] |I need to concatenate 2 strings in bash, so that:
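# given, for example:
string1=hello
string2=world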
[131007050030] |echo $mystring
should produce
[131007050040] |helloworld
[131007060010] |Simply concatenate the variables:
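mystring=$string1$string2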
[131007070010] |In case you need to concatenate variables with literal strings:
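mystring="some $string1 arbitrary $string2 text"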
[131007070020] |echo $mystring
would produce:
[131007070030] |some hello arbitrary world text
[131007080010] |You don't need to use {} unless you're going to use bash parameter expansion or immediately append a character that would be valid as part of the identifier.
[131007080020] |You also don't need to use double quotes unless your parameters will include special characters.
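For example (illustrative variable names):
prefix=hello
echo "${prefix}world"   # prints 'helloworld'
echo "$prefixworld"     # prints nothing: the shell looks for a variable named 'prefixworld'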