[131084460010] |$PS1 is that one host has a read-only $PROMPT_COMMAND set before .bashrc is read.
[131084490010] |Try:
[131084500010] |When you run PS1='${RED}\h $(get_path) ${exitStatus}${NONE} ', the PS1 variable is set to ${RED}\h $(get_path) ${exitStatus}${NONE}, where only \h is a prompt escape sequence.
[131084500020] |After the prompt sequences are expanded (yielding ${RED}darkstar $(get_path) ${exitStatus}${NONE}), the shell performs the usual expansions such as variable expansions.
[131084500030] |You get a displayed prompt that looks like \e[1;31mdarkstar PATH 0\e[m.
[131084500040] |Nothing along the way expands the \e sequences to actual escape characters.
[131084500050] |When you run PS1="${RED}\h $(get_path) ${exitStatus}${NONE} ", the PS1 variable is set to \e[1;31m\h PATH 0\e[m.
[131084500060] |The variables RED, exitStatus and NONE are expanded at the time of the assignment.
[131084500070] |Then the prompt contains three prompt escape sequences (\e, \h, and \e again).
[131084500080] |There are no shell variables to expand at this stage.
[131084500090] |In order to see colors, you need the color variables to contain actual escape characters.
[131084500100] |You can do it this way:
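A minimal sketch of what that could look like, using octal escapes (the exact color codes are an assumption based on the \e[1;31m and \e[m sequences discussed above):

    RED=$'\033[1;31m'     # an actual escape character followed by the "bright red" code
    NONE=$'\033[m'        # an actual escape character followed by the "reset attributes" code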
[131084500110] |$'…'
expands backslash-octal sequences and some backslash-letter sequences such as \n
, but not including \e
.
[131084500120] |I made three other changes to your prompt:
[131084500130] |Put \[…\] around non-printing sequences such as color-changing commands.
[131084500140] |Otherwise your display will end up garbled because bash can't figure out the width of the prompt.
\w is a built-in escape sequence to print the current directory.
You can use $? directly in the prompt if you don't have a PROMPT_COMMAND in the first place.
Bash is effectively doing a plain echo on your PS1, not echo -e.
[131084510030] |So it's like you're doing:
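Roughly this, using the red sequence from the prompt as an illustration (bash's plain echo prints the backslash-e literally; only echo -e would interpret it):

    echo '\e[1;31mdarkstar \e[m'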
[131084510040] |If you try running that, you will see it doesn't work.
[131084510050] |But bash gives you a way to write special characters that doesn't require using echo -e.
[131084510060] |It looks like $'\octal number'.
[131084510070] |The special character in all the escape sequences is \e, which just means Escape.
[131084510080] |Escape's octal value is \033.
[131084510090] |So we want it to expand to this instead:
[131084510100] |To do this, you can change your definition of GREEN, RED, and NONE, so their value is the actual escape sequence.
[131084510110] |If you do that, your first PS1 with the single quotes should work:
[131084510120] |However, then you will have a second problem.
[131084510130] |Try running that, then press Up Arrow, then Home, and your cursor will not go back to the start of the line.
[131084510140] |To fix that, change PS1 to include \[ and \] around the color escape sequences, e.g.
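For example, a sketch assuming RED and NONE already contain real escape characters as described above:

    PS1='\[${RED}\]\h $(get_path) ${exitStatus}\[${NONE}\] '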
[131084510150] |And it should all be working.
[131084510160] |(I'm not sure why putting \[ around ${exitStatus} works, because the exit status number shouldn't have those around it, but it seems to work for me.)
[131084520010] |/dev/sda1:
[131084550030] |I use the shell to go to the /dev directory, and I find that it has no sda* or hd* files.
[131084550040] |However, when I boot into my Debian kernel (vmlinuz-2.6.32-5-686), it can find and mount the /dev/sda1 partition.
[131084550050] |What's the problem?
[131084550060] |How can I fix it?
[131084550070] |I didn't add any module names to the /etc/modules.autoload.d/kernel-2.6 file to let them auto-load during boot -- is that the cause of the problem?
[131084560010] |You need to make sure that you select the correct SATA drivers while configuring your kernel.
[131084560020] |I don't know what your hardware is, so you'll need to find out for yourself which drivers will work for you.
[131084560030] |I always build the SATA drivers into the kernel rather than as modules; it's just easier to deal with.
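One way to find a suitable driver, assuming you can still boot the working Debian kernel: ask lspci which kernel driver is bound to the SATA controller there, and enable the same one in your Gentoo config.

    lspci -k | grep -i -A 3 sata     # look for the "Kernel driver in use:" line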
[131084570010] |For first-time Gentoo users, I strongly suggest using genkernel instead of plain make menuconfig.
[131084570020] |By default, genkernel will include all the drivers it thinks are required.
[131084570030] |If you have the time, re-run genkernel and gradually remove the drivers you don't need, edit grub.conf, and reboot.
[131084570040] |Keep reducing.
[131084570050] |Note the settings (get the .config file).
[131084570060] |In my case, I chose to do a re-installation, this time using the .config file and going straight to make.
[131084570070] |Not necessary, but I just dislike having many half-baked kernels lying around, plus some (possibly irrational) dislike of having to boot an initrd.
[131084570080] |(Well, at least that's how *I* did it.
[131084570090] |Takes time, but ultimately satisfying :-) )
[131084630010] |If the line contains SEVERE, show the line in red; if it's INFO, in green.
[131084630040] |What kind of alias can I set up for a tail command that would help me do this?
[131084640010] |Have you had a look at ccze?
[131084640020] |You can customize the default colors of some keywords using the option -c or directly in your configuration file.
[131084640030] |Edit:
[131084640040] |If you really would like to have the complete line colored in red, you could also try the following:
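A rough sketch of that idea with tail and sed (the log path is a placeholder; the escape character is embedded via printf so it works with any sed):

    esc=$(printf '\033')
    tail -f /path/to/log | sed -e "s/.*SEVERE.*/${esc}[1;31m&${esc}[0m/" -e "s/.*INFO.*/${esc}[1;32m&${esc}[0m/"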
[131084640050] |\e[1;31m will give you the red color.
[131084640060] |If you would like some yellow, use \e[1;33m.
[131084650010] |Try out multitail.
[131084650020] |This is an übergeneralization of tail -f.
[131084650030] |You can watch multiple files in separate windows, highlight lines based on their content, and more.
[131084650040] |The colors are configurable.
[131084650050] |If the default color scheme doesn't work for you, write your own in the config file.
[131084650060] |For example, call multitail -cS amir_log /path/to/log with the following ~/.multitailrc:
[131084650070] |Another solution, if you're on a server where it's inconvenient to install non-standard tools, is to combine tail -f with sed or awk to add color selection control sequences.
[131084650080] |This requires tail -f to flush its standard output without delay even when its standard output is a pipe; I don't know if all implementations do this.
[131084650090] |Yet another possibility is to run tail -f in an Emacs shell buffer and use Emacs's syntax coloring abilities.
[131084660010] |Try a little piece of software called 'ccze'.
[131084660020] |tail -f somelogfile | ccze -A
[131084670010] |You can use colorex, which allows you to colorize lines based on regular expressions:
[131084670020] |It also comes bundled with predefined configs, for example for Tomcat logs:
[131084690010] |Also note that if you just want to look for one matching regex, GNU grep with --color will work — just pipe your tail output through that.
[131084710010] |fsck - check and optionally repair a file system
[131084720010] |This is controlled by the xft use flag when building Emacs.
[131084740030] |You'll need to turn that flag on (either globally or just for app-editors/emacs) and reinstall Emacs.
[131084750010] |Use -p on your rmdir; that'll work in one pass.
[131084780020] |It won't be pretty or optimal, but it should get everything.
[131084780030] |That tells rmdir to remove any now-empty parent directories of the one you're removing.
[131084780040] |You can save a little bit by adding the -empty test to find, so it doesn't bother with non-empty directories.
[131084790010] |List the directories deeply-nested-first.
[131084790020] |This won't help with the speed.
[131084790030] |Try one of
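For illustration, two commands of that kind might be (a sketch, assuming GNU find and a top-level directory named dirs-to-remove):

    find dirs-to-remove -depth -type d -empty -delete
    find dirs-to-remove -depth -type d -exec rmdir --ignore-fail-on-non-empty {} +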
[131084790040] |Both are likely to be noticeably faster than the original, especially under Cygwin.
[131084790050] |I don't expect much difference between these two.
[131084800010] |I run ~/virtualenv/bin/python by just typing python.
[131084840040] |Now in my project there is a shell script that goes, for example:
[131084840050] |Can I make the script use my aliased python instead of the python it finds in $PATH, without making changes to the script?
[131084850010] |Yes.
[131084850020] |If you put your aliases in ~/.aliases, then you can do
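A sketch of how that can look. Non-interactive bash only expands aliases if told to, so the file pointed to by BASH_ENV has to enable that as well (the script name here is made up):

    # ~/.aliases
    shopt -s expand_aliases
    alias python=~/virtualenv/bin/python

    # run the unmodified script with that file sourced first
    BASH_ENV=~/.aliases ./the-script.sh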
[131084850030] |This assumes your script starts with #!/bin/bash, because #!/bin/sh is a little less predictable.
[131084850040] |Here's what I'd suggest:
[131084850050] |Put [ -f ~/.bashenv ] && source ~/.bashenv in your ~/.bashrc.
Set BASH_ENV=~/.bashenv in /etc/environment.
Give your scripts a #!/bin/bash line if they don't already have one.
Alternatively, put ~/virtualenv/bin near the front of your PATH, then change your Python scripts to have #!/usr/bin/env python as the first line.
[131084860010] |Supposing that your alias file is "~/.bash_aliases", put this in your script:
[131084860020] |(via)
[131084870010] |inodes (which I know very little about), to start reading from each split point, until I find a suitable offset from where to start my text search for a line containing the date?
[131084870040] |tail's read of the last line doesn't use a normal sequential read, so I wonder if this facility is somehow available in bash, or would I need to use Python or C/C++... but I am specifically interested in a bash option.
[131084880010] |I'd try something like this to split the log into 512MiB chunks for quicker parsing.
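For example, with GNU split (the log path and output prefix are placeholders):

    split --bytes=512M /var/log/huge.log /tmp/logchunk_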
[131084880020] |If you are looking for the file the following would work:
[131084880030] |Use that output to determine which file to grep for your date.
[131084890010] |which creates no temp-split files, skips a multiple of 512MB of data at each run, reads 64 bytes from that position and limits the output to the first line of those 64 bytes.
[131084890020] |You might want to adjust 64 to whatever you think you need.
[131084900010] |It sounds like you want:
[131084900020] |or whatever number of bytes you want to skip.
[131084900030] |The plus sign tells tail to measure from the start of the file instead of the end.
[131084900040] |If you're using the GNU version of tail you can write that as:
[131084900050] |To get a fixed number of bytes after the cut, instead of all the rest of the file, just pipe it through head:
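Putting it together, a sketch with arbitrary numbers (with GNU tail the offset can also be given with a suffix such as +15G):

    tail -c +15000000001 /var/log/huge.log | head -c 1048576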
[131084910010] |mail assumes that there is a functioning MTA on localhost that is 1) capable of accepting mail and 2) knows how to pass it on.
[131084980020] |To find out what mail server you're running, try telnet localhost 25 and look at the identifier string.
[131084980030] |The command mailq, if it exists for you, will show you what messages are currently in the local mail server's queue, possibly with an explanation as to why they haven't been passed on to their destination yet.
[131084980040] |In addition, most distributions by default configure MTAs and syslog to write mail log messages to /var/log/mail.log or similar.
[131084980050] |Look in /var/log/ for any file that looks viable, and grep it for 'bar.com'.
[131084980060] |Without more information as to what's going on it's hard to offer better advice than this, sorry.
[131084990010] |Use a "mail" command that has an option to show you the SMTP dialog.
[131084990020] |The "heirloom" project has a good version of such a command: http://heirloom.sourceforge.net/mailx.html
[131084990030] |Here's an example "mailx" (apparently a 4-year-old v12.1) command invocation, showing the SMTP dialog:
[131084990040] |That sort of information can be invaluable in figuring out what goes wrong with email delivery.
[131085000010] |Spreadsheet::ParseExcel::Simple is probably your best bet for a quick solution.
[131085010030] |It's in Debian 5.0 (Lenny) as libspreadsheet-parseexcel-simple-perl; other distributions may have their own naming schemes.
[131085010040] |Depending on what you want to do with it, a quick perl script should do the trick.
[131085020010] |It is hard to work with closed formats like the old Office formats; convert it into an XML-based format using Office/OpenOffice/LibreOffice.
[131085020020] |Then use xsltproc (or some other xml parser) to get the data in a way that you can work with.
[131085030010] |I run php -v and it says
[131085030090] |Why is this and how can I get php running?
[131085040010] |What version of Debian is that?
[131085040020] |You might run updatedb and then locate php | grep bin; this should check if there's anything PHP-y installed.
[131085040030] |Also, check if your executable isn't php-cgi or php5-cgi (you need an extra package for the CLI: php5-cli).
[131085040040] |In any event, tell us your Debian version.
[131085050010] |Moving that comment to its own answer: it looks like your /etc/apt/sources.list is faulty.
[131085050020] |Edit it to remove the line that contains debian-security, and replace it with
[131085050030] |deb http://ftp.nl.debian.org/debian/ lenny main contrib non-free
[131085050040] |for the main distribution,
[131085050050] |deb http://security.debian.org/ lenny/updates main contrib non-free
[131085050060] |for security updates, and
[131085050070] |deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
[131085050080] |For so-called 'volatile' updates, then run apt-get update; apt-get -uf upgrade to bring your entire system up to date, and then try installing php5-cgi again.
[131085050090] |(ETA: You can replace 'nl' with your own country code to get servers a little closer to your physical location and hopefully better download speeds)
[131085060010] |How can I remove an alias from the current session without closing that session?
[131085120010] |Supposing that your alias to python is py, do:
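Presumably:

    unalias py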
[131085120020] |(via)
[131085130010] |unicode useflag?
[131085160030] |Without it zsh won't be compiled with Unicode support.
[131085160040] |If you're using bash, it should have Unicode support through libreadline.
[131085160050] |Also, ksh and tcsh don't support Unicode at all.
Enable the locale you need in /etc/locale.gen and generate it with locale-gen on the command line.
How can I detach bash from gnome-terminal (or detach some-boring-process from bash and redirect its output somewhere)?
[131085170070] |If I just kill gnome-terminal, bash will be killed too, as will all its subprocesses.
[131085180010] |This is exactly what screen and tmux were created for.
[131085180020] |You run the shell inside the screen/tmux session, and you can disconnect/reconnect at will.
[131085180030] |You can also have multiple shell sessions running inside one gnome-terminal.
[131085190010] |screen, tmux, or dtach (possibly with dvtm) are all great for this, but if it's something where you didn't think to use one of those, you may be able to leverage nohup.
[131085200010] |If I fire something up which I want to finish no matter what (short of system reboot), I use nohup and run it in the background.
[131085200020] |Unlike screen and the like, you can't reattach to the process.
[131085200030] |However, barring redirection elsewhere, any output can be found in nohup.out.
[131085200040] |I do use screen when I want to be able to switch terminals for a process.
[131085200050] |Such as starting a process from home/work and switching to the other.
[131085200060] |Like any other terminal session, output will eventually scroll off the top of the buffer.
[131085210010] |If some-boring-process is running in your current bash session:
[131085210020] |Press ctrl-z to give you the bash prompt, run bg to put the process in the background, find its job number with the jobs command, then run disown -h %1 (substitute the actual job number there).
I ran mv a-folder /home/me on a machine and half way through the move the destination device filled up.
[131085230030] |a-folder was made up of folders and files in various subdirectories.
[131085230040] |Does mv leave the source folder intact until the move has successfully completed?
[131085230050] |The source and destination folders were on different filesystems.
[131085230060] |The reason I ask is that I ran this command on the wrong machine, so if the source folder is intact then that makes my life a lot easier :)
[131085240010] |No, your source folder is not intact...
[131085240020] |On the same file system all mv does is add and remove directory entries.
[131085240030] |But on a different filesystem...
[131085240040] |I'm not sure at what point it unlinks the file, and whether it removes data as it goes... but once a file is moved, it is unlinked.
[131085240050] |This is unless of course you used a special option to mv.
[131085240060] |Some of which are mentioned in this question which might interest you in the future.
[131085250010] |xenoterracide's answer is 100% correct; I'll just add to it.
[131085250020] |I often monitor the copy or move process of large amounts of files with:
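For example (paths are placeholders; -d highlights what changed between updates):

    watch -n 10 -d du -sk /destination/a-folder /source/a-folder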
[131085250030] |This will show you how the cumulative file size changes over time, proving to you that files are moved and removed (with mv) progressively along the way.
[131085260010] |Run yum install yum-plugin-versionlock, and then check the configuration, usually under /etc/yum/pluginconf.d/.
[131085280140] |(There are also decent man pages for yum-versionlock and for yum-versionlock.conf.)
[131085290010] |hgfoo or hgfoo : exhibits the bug, but { hgfoo :; } and (hgfoo) work fine.
[131085310040] |Using a function instead of an alias also works.
[131085310050] |I investigated a bit, trying to extract a smaller example.
[131085310060] |The weirdness seems to stem from having a newline in the alias.
[131085310070] |Here is a simple example where bash (3.2.29 or 4.1.9) misbehaves.
[131085310080] |This alias works as expected (from my reading of Single Unix v3) under pdksh, ksh93, ash and zsh under Debian lenny, namely oops prints foo bar.
[131085310090] |The bug may be related to a reported bug about “strange delayed aliases”, but that discussion focuses on line continuations (backslash-newline) which is not used here.
[131085310100] |In any case, use a function!
[131085310110] |Aliases are for one-liners where you're just specifying some default options for a command.
[131085310120] |A function would have the additional benefit that you could pass arguments to hg:
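A sketch of such a function; the body is a stand-in, since it depends on what the original alias did:

    hgfoo () {
        hg "$@"     # arguments are passed straight through to hg
    }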
[131085310130] |¹ I initially thought this was a Cygwin bug, but as ak2 pointed out, this bug occurs under Linux as well.
[131085320010] |xclock but this seems like a hack.
[131085320070] |Ideally I could send the notification anywhere, like my phone, screen, etc.
[131085330010] |A du (disk usage) summary takes over two minutes.
[131085330030] |I'd like to find a way to speed up a disk usage summary for arbitrary directories on that file system.
[131085330040] |For small branches I've noticed that du results seem to be cached somehow, as repeat requests are much faster, but on large branches the speed-up becomes negligible.
[131085330050] |Is there a simple way of speeding up du, or more aggressively caching results for branches that haven't been modified since the previous search?
[131085330060] |Or is there an alternative command that can deliver disk usage summaries faster?
[131085340010] |What you are seeing when you rerun a du command is the effect of disk buffering.
[131085340020] |Once you read a block its disk buffer is kept in the buffer cache until that block is needed.
[131085340030] |For du you need to read the directory and the inode for each file in the directory.
[131085340040] |The du results are not cached in this case, but can be derived with far less disk IO.
[131085340050] |While it would be possible to force the system to cache this information, overall performance would suffer as the required buffer space would not be available for actively accessed files.
[131085340060] |The directory itself has no idea how large a file is, so each file needs to be accessed.
[131085340070] |To keep the cached value up to date every time a file changed size the cached value would need to be updated.
[131085340080] |As a file can be listed in 0 or more directories this would require each file's inode to know which directories it is listed in.
[131085340090] |This would greatly complicate the inode structure and reduce IO performance.
[131085340100] |Also, as du allows you to get results assuming different block sizes, every update would need to increment or decrement a cached value for each possible block size, further slowing performance.
[131085350010] |If you can arrange for the different hierarchies of files to belong to different groups, you can set up disk quotas.
[131085350020] |Don't give an upper limit (or make it the size of the disk) unless you want one.
[131085350030] |You'll still be able to tell instantly how much of its (effectively infinite) quota the group is using.
[131085350040] |This does require that your filesystem supports per-group quotas.
[131085350050] |Linux's Ext[234] and Solaris/*BSD/Linux's zfs do.
[131085350060] |It would be nice for your use case if group quotas took ACLs into account, but I don't think they do.
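A rough sketch of the Linux/ext flavour of this (the group name, mount point and the grpquota mount option are assumptions):

    mount -o remount,grpquota /srv
    quotacheck -cug /srv                   # build the quota files
    quotaon /srv
    chgrp -R projectfiles /srv/big-branch  # put the branch under one group
    repquota -g /srv                       # instantly reports blocks used per group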
[131085360010] |I get Permission denied on a lot of files when trying to remove the new directory and files.
[131085370060] |As a note, I found this weird behavior because a friend sent me a .tgz of a snapshot of his /proc dir.
[131085370070] |I extracted the directory and when I was finished looking through it I had the same problem.
[131085370080] |rm -rf as root does work.
[131085370090] |lsattr shows the e attribute (which is what all of my files/directories show).
[131085370100] |EDIT: Oh wow, I can't believe I didn't notice.
[131085370110] |The directories had no write permission on them o.o
[131085380010] |If there is a non-empty directory where you don't have write permission, you can't remove its contents.
[131085380020] |The reason is that rm is bound by permissions like any other command, and permission to remove foo/bar requires write permission on foo.
[131085380030] |This doesn't apply when you run rm as root because root always has the permission to remove a file.
[131085380040] |To make the directory tree deletable, make all the directories in it writable (the permissions of regular files don't matter when it comes to deletion with rm -f).
[131085380050] |You can use either of these commands:
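For example (the directory name is a placeholder):

    chmod -R u+w extracted-proc
    find extracted-proc -type d -exec chmod u+w {} +    # or: only touch the directories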
[131085390010] |I use control = freeze to hold messages for inspection when I am testing rules.
[131085400040] |If I find it accurate enough I change it to a deny rule.
[131085400050] |EDIT: I tested this rule in my database of emails.
[131085400060] |Using zen.spamhaus.org as a DNS blacklist catches almost all these cases (467 of 483).
[131085400070] |Greylisting catches most of the rest (11 of 16).
[131085400080] |I found five messages made it past those two tests.
[131085400090] |Of those, three (60%) were legitimate email.
[131085400100] |The others had helo names which were either hostnames or second-level domains.
[131085400110] |Adding a condition to check to ensure the helo name is at least a third level domain makes the rule reasonably safe.
[131085400120] |I am testing with:
[131085410010] |The "sender", as Exim sees it is the envelope-from address, and that was in domain returns.groups.yahoo.com.
[131085410020] |Once I put that domain (completely; groups.yahoo.com doesn't work, neither does yahoo.com) into my local_sender_whitelist, the ACL worked.
[131085410030] |It had worked during testing because I had used the envelope-from address of yahoogroups.com, the same as the From: address.
[131085410040] |Never bothered to check if that was the case in the emails from yahoo groups.
[131085420010] |/proc/scsi/scsi and parted --list shows the RAID controller (3ware 9650SE-4LP):
[131085430010] |od prints two-byte words¹ by default.
[131085450020] |The number 020061 (octal) corresponds to the two-byte sequence 1␣ (␣ is a space character).
[131085450030] |Why?
[131085450040] |It's clearer if you use hexadecimal: 0o20061 = 0x2031, and ␣ is 0x20 (32) in ASCII and 1 is 0x31 (49). Notice that the lower-order bits (0x31) correspond to the first character and the higher-order bits correspond to the second character: od is assembling the words in little-endian order, because that happens to be your system's endianness.²
[131085450050] |Little-endian order is not very natural here because one of the output formats (-c) prints characters, the other one (-o) prints words.
[131085450060] |Each word is printed as a number in the usual big-endian notation (the most significant digit comes first in our left-to-right reading order).
[131085450070] |This is even more apparent in hexadecimal where the byte boundaries are clearly apparent in the numerical output:
[131085450080] |If you prefer to view the file as a sequence of bytes, use od -t x1 (or hd if you have it).
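A quick illustration (the exact spacing of od's output may differ between implementations):

    $ printf '1 2 3\n' | od -t x2
    0000000 2031 2032 0a33
    0000006
    $ printf '1 2 3\n' | od -t x1
    0000000 31 20 32 20 33 0a
    0000006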
[131085450090] |¹ Once upon a time, men were real men, computers were real computers, numbers were often written in octal, and words were two bytes long.
[131085450100] |² All PCs (x86, x86-64) are little-endian, as was the PDP-11 where Unix started.
[131085450110] |ARM CPUs can cope with either endianness, but Linux and iOS run them in little-endian mode.
[131085450120] |So most of the platforms you're likely to encounter nowadays are little-endian.
[131085460010] |I have an xmms2d process running, but two possible executable files (in different directories, both in the executable path) that could have spawned it.
[131085460030] |I suspect that one of those is corrupted, because sometimes this program works and sometimes it doesn't.
[131085460040] |The process running now works, so I want to delete (or rename) the other one.
[131085460050] |ps ax|grep "xmms" returns 8505 ? SLl 2:38 xmms2d -v without path information.
[131085460060] |Given the PID, could I find whether it was run from /usr/bin/xmms2d or /usr/local/bin/xmms2d?
[131085460070] |Thanks!
[131085470010] |Try this:
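Presumably along these lines, using the PID from the question (on Linux, /proc/PID/exe is a symlink to the executable):

    ls -l /proc/8505/exe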
[131085470020] |Or if you don't want to parse the output of ls, just do:
[131085470030] |or
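For instance:

    readlink /proc/8505/exe
    readlink -f /proc/8505/exe    # -f also resolves any further symlinks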
[131085480010] |I start mousepad from the shell:
[131085480020] |check, where it comes from:
[131085480030] |start it with path:
[131085480040] |look via ps:
[131085480050] |As we see, the one invoked without a path is displayed without a path; it was started via the PATH, and is therefore to be found by
[131085480070] |Wait!
[131085480080] |What if I start mousepad from the menu?
[131085480090] |Well, it might be specified with or without using the PATH settings.
[131085480100] |I tried.
[131085480110] |A normal start leads to a simple 'mousepad' in ps v -C.
[131085480120] |Since ~/bin is the first part of my PATH I create a dummy there, and, voila, the dummy is started from the menu.
[131085480130] |But what if you start a program which deletes itself?
[131085480140] |Then which will not find the deleted program and will report a wrong one, if there is a second one in the path.
[131085480150] |So that's a race condition.
[131085480160] |If you know that your programs don't delete themselves or aren't moved while you're investigating their location, ps v -C NAME and which NAME should work pretty well.
[131085490010] |If you are running Solaris, the way is slightly different from the Linux one suggested:
[131085490020] |Should you want to know the current working directory of running processes, you can use:
[131085490030] |eg:
[131085500010] |mdadm -D gives
[131085500080] |and /proc/mdstat reads
[131085500090] |Any ideas?
[131085500100] |Thanks in advance
[131085510010] |Open the /etc/mdadm/mdadm.conf file, find the line that begins with ARRAY /dev/md1 and remove the line immediately following which states 'spares=1'.
[131085510020] |Then restart mdadm service.
[131085510030] |If you did a mdadm --examine --scan to retrieve the array definitions while the md1 array was still rebuilding, one partition was seen as a spare at that moment.
[131085520010] |Select a shared printer by name. (\servername) brings up a drop-down list.
[131085540010] |é => e.
[131085550020] |And forget my previous answer mentioning konwert, which is used for format conversion.
[131085560010] |By default, iconv refuses to convert the file if it contains characters that do not exist in the target character set.
[131085560020] |Use //TRANSLIT to “downgrade” such characters.
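For example, to downgrade to plain ASCII (file names are placeholders):

    iconv -f UTF-8 -t ASCII//TRANSLIT accented.txt > plain.txt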
[131085570010] |Grep for eth0, get that line and the next one (-A 1), get only the last line, get the second part of that line when splitting with :, then get the first part of that when splitting with space.
[131085590010] |I believe the "modern tools" way to get your ipv4 address is to parse 'ip' rather than 'ifconfig', so it'd be something like:
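Something in this spirit, assuming the interface is eth0:

    ip -4 addr show dev eth0 | awk '/inet /{print $2}' | cut -d/ -f1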
[131085590020] |or something like that.
[131085600010] |I use this one-liner:
[131085600020] |It uses ifconfig (widely available), does not take the localhost address, does not bind you to a given interface name, does not take IPv6 into account and tries to get the IP of the first network interface available.
[131085610010] |gnome-panel seems to act up now and again.
[131085610030] |I've not found a way to force it to happen, but it seems related to launching processes.
[131085610040] |Basically, I launch some process, and I see 1 core max itself out, and the taskbar freezes.
[131085610050] |I pkill gnome-panel, the taskbar reappears, and everything is ok.
[131085610060] |If I don't notice it, eventually my entire computer freezes and I have to hard boot.
[131085610070] |If you're familiar with this, great; but if not, how would I go about getting some kind of information on why this is happening that could help me or developers?
[131085610080] |Is there a debug build or something I could run?
[131085610090] |Thank you.
[131085620010] |You could try taking a look at the ~/.xsession-errors file.
[131085620020] |If you're lucky you might find some failed assertion or error in there.
[131085620030] |You could also install the gnome-panel-dbg package and attach gdb to the running panel to get a backtrace in case of crash (more information here).
[131085630010] |Using ip again:
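A plausible reading of that, since ip -o link puts the interface state in the ninth whitespace-separated field:

    ip -o link show eth0 | awk '{print $9}'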
[131085650020] |(The ninth column is the state of the interface)
[131085660010] |You say you simply want the online/offline status of an interface, and aren't concerned with speed or link-type.
[131085660020] |Try ethtool, as root:
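For instance (interface name assumed):

    ethtool eth0 | grep 'Link detected'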
[131085660030] |ifconfig can also show you the online/offline status, and this command is usually available to any user on the system.
[131085680010] |The shell expands the *.png pattern instead of passing it as it is to the script.
[131085700050] |How can I achieve this (script, alias or any other equivalent solution is fine)?
[131085710010] |Have you tried
[131085710020] |But I can only see zsh (not bash) expanding it like you say.
[131085720010] |Since the shell performs glob expansion before the arguments are handed over to the command, there's no way I can think of to do it transparently: it's either controlled by the user (quote the parameter) or brute-force (disable globbing completely for your shell with set -o noglob).
[131085720020] |You're looking at the problem from the wrong end.
[131085720030] |Change your script to accept multiple filename arguments:
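A minimal sketch of that shape (the per-file work is a stand-in):

    #!/bin/bash
    # the shell has already expanded *.png into individual arguments
    for file in "$@"; do
        printf 'processing %s\n' "$file"    # replace with the real work
    done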
[131085730010] |sudo -k will kill the timeout timestamp.
[131085740020] |You can even put the command afterwards, like sudo -k test_my_privileges.sh
[131085740030] |From man sudo:
[131085740040] |-K The -K (sure kill) option is like -k except that it removes the user's time stamp entirely and may not be used in conjunction with a command or other option.
[131085740050] |This option does not require a password.
[131085740060] |-k When used by itself, the -k (kill) option to sudo invalidates the user's time stamp by setting the time on it to the Epoch.
[131085740070] |The next time sudo is run a password will be required.
[131085740080] |This option does not require a password and was added to allow a user to revoke sudo permissions from a .logout file.
[131085740090] |You can also change it permanently.
[131085740100] |From man sudoers:
[131085740110] |timestamp_timeout
[131085740120] |Number of minutes that can elapse before sudo will ask for a passwd again.
[131085740130] |The timeout may include a fractional component if minute granularity is insufficient, for example 2.5.
[131085740140] |The default is 5.
[131085740150] |Set this to 0 to always prompt for a password.
[131085740160] |If set to a value less than 0 the user's timestamp will never expire.
[131085740170] |This can be used to allow users to create or delete their own timestamps via sudo -v and sudo -k respectively.
[131085750010] |Shawn's answer is great but there is an additional configuration option that might be useful in this situation.
[131085750020] |From man sudoers:
[131085750030] |tty_tickets
[131085750040] |If set, users must authenticate on a per-tty basis.
[131085750050] |With this flag enabled, sudo will use a file named for the tty the user is logged in on in the user's time stamp directory.
[131085750060] |If disabled, the time stamp of the directory is used instead.
[131085750070] |This flag is on by default.
[131085750080] |From man sudo:
[131085750090] |When the tty_tickets option is enabled in sudoers, the time stamp has per-tty granularity but still may outlive the user's session.
[131085750100] |On Linux systems where the devpts filesystem is used, Solaris systems with the devices filesystem, as well as other systems that utilize a devfs filesystem that monotonically increase the inode number of devices as they are created (such as Mac OS X), sudo is able to determine when a tty-based time stamp file is stale and will ignore it.
[131085750110] |Administrators should not rely on this feature as it is not universally available.
[131085750120] |I think it's relatively new.
[131085750130] |If your system supports it, if you logout then login, sudo will request your password again.
[131085750140] |(I have sudo -K in my shell's logout script too.)
[131085760010] |fdisk
Try sudo file -s /dev/sda5, or sudo tail -c +513 /dev/sda2 | file -, to see if there's something recognizable at the very beginning of the extended partition.
[131085770070] |(I'm not sure the offset is always 512, it might be 4096 or 32256 or some other number; note that you need to add 1 to the offset for the tail command.)
[131085770080] |If the problem is indeed that your partition table flipped a bit, use fdisk or your favorite partition editor to change /dev/sda5 back to starting at cylinder 8903.
[131085780010] |I get an unable to handle kernel paging request error, and eventually a kernel panic.
[131085790130] |I couldn't find anything about this error and how it specifically relates to Xbox modding, but what information I could find suggested that I might have a bad stick of RAM.
[131085790140] |I've not been able to test this yet, but I'm going to run MEMTEST as soon as I get home.
[131085790150] |I don't have the setup with me - I'm at work, and it's at home - but if anybody's interested in lending a hand, I'll take pictures tonight and post them up.
[131085790160] |The only reason that I'm asking here is because I'm still a fairly new *nix convert, and I'm not quite sure how it all works.
[131085790170] |I'm assuming that unable to handle kernel paging request is a fairly standard error message, too... correct me if I'm wrong.
[131085790180] |Anyhow, thanks in advance for any help.
[131085800010] |Well.
[131085800020] |How's that for fried RAM?
[131085800030] |Guess that was the culprit, after all.
[131085800040] |Thanks to everyone for their help and advice!
[131085810010] |/etc/apache2/ and the file you have to change is httpd.conf.
[131085820040] |If in your document root you have the symlink wiki/media -> /real/wiki/media then you will need to create a Directory section like this:
[131085820050] |Please note that I am writing these from memory without any testing, so don't use these directions as is, consult the comments in the file, configuration guide for your distro and the Apache reference when in doubt.
[131085830010] |/sbin is not part of your PATH and that's why it's complaining.
[131085840020] |So try /sbin/modprobe.
[131085850010] |rm -rf will fail if something tries to delete the same file tree (I think because rm enumerates the files first, then deletes).
[131085850030] |A simple test:
[131085850040] |There will be some output into stderr, e.g.:
[131085850050] |I can ignore all the stderr output by redirecting it to /dev/null, but the removal of /tmp/dirtest actually fails!
[131085850060] |After both commands are finished, /tmp/dirtest is still there.
[131085850070] |How can I make rm delete the directory tree properly and really ignore all the errors?
[131085860010] |I'm curious how the build system ended up like this.
[131085860020] |Are you able to change it?
[131085860030] |At a minimum, you can create a flag that lets the scripts know the other one is already doing the job...
[131085860040] |It would be better to re-architect the thing so that this isn't necessary.
[131085870010] |Nasty.
[131085870020] |But in a sense, you're looking for trouble when two concurrent processes are manipulating a directory tree.
[131085870030] |Unix provides primitives for atomic manipulation of a single file, but not for whole directory trees.
[131085870040] |A simple workaround would be for your script to rename the directory before removing it.
[131085870050] |Since your use case has cooperating scripts, it's ok for the new name to be predictable.
[131085870060] |Maybe you can even do the rm in the background later, while your build performs some CPU-bound tasks.
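A sketch of the rename-then-remove idea (the directory name is a placeholder; the rename is atomic as long as it stays on one filesystem):

    doomed="build-output.delete.$$"
    mv build-output "$doomed"    # instant, and out of the other script's way
    rm -rf "$doomed" &           # reclaim the space in the background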
[131085880010] |xsltproc and similar tools (saxon).
[131085900040] |For JSON: I also think it's better to just use Python, Ruby or Perl and transform it.
[131085910010] |I can run su root and switch to root with my root password.
[131085910050] |However, when I try to login to Gnome as root, the same password does not work.
[131085910060] |I'm using Fedora 13 on a Dell Inspiron 6400
[131085920010] |You're not allowed to log in to the desktop as root by default.
[131085920020] |See Enabling Root User for GNOME Display Manager, which says:
[131085920030] |Run su -c 'gedit /etc/pam.d/gdm-password' and change the line auth required pam_succeed_if.so user != root quiet to # auth required pam_succeed_if.so user != root quiet.
Also remove the line auth required pam_succeed_if.so user != root quiet from "/etc/pam.d/gdm".
When running /sbin/start_udev, how can I remove the [ OK ] so it's not printed?
[131085980030] |I'm trying to change /etc/rc.sysinit to display the information I like, and I have managed to remove all the info output apart from the annoying [ OK ].
[131085980040] |Any ideas?
[131085980050] |I'm using Fedora 13
[131085990010] |This is all controlled by /etc/sysconfig/init.
[131085990020] |I'm pretty sure all you have to do is change it from
[131085990030] |to
[131085990040] |After doing that, it should change from something like:
[131085990050] |to
[131085990060] |Have a look at /etc/init.d/functions to see how that works.
[131085990070] |start_udev calls success to print the [ OK ] message, and /etc/init.d/functions is where success is defined.
[131086000010] |tcpdump -w httpdebug.pcap -i eth0 port 80 will sniff all packets heading to or from port 80 on the eth0 interface and output them to httpdebug.pcap, which you can then read at your leisure, either with tcpdump again (with multiple -x options, refer to the tcpdump manpage) in the console if you're feeling masochistic, or with wireshark.
[131086010030] |I really can't recommend the latter highly enough, as it will let you sort out packets and follow the exact stream you want to see.
[131086020010] |If you really want to use command line for this there is tcpflow.
[131086020020] |It saves TCP streams to different files.
[131086020030] |The HTTP request and responses will be stored separately.
[131086020040] |If you can use GUI try Wireshark.
[131086020050] |You can right click any packet and pick "Follow TCP stream".
[131086030010] |Location: ... redirect which CURL isn't following.
[131086040030] |Open up the .jar file in a text editor and see what you've got.
[131086040040] |A real .jar should start with 'PK' (since it's a .zip file).
[131086050010] |curl -L works.
[131086050030] |It even follows redirects.
[131086050040] |I found this out in this answer.
[131086050050] |Refer to working script.
[131086060010] |sudo always preserves environment variables, but this is not always the case.
[131086070020] |Here is an excerpt from the sudo manpage:
[131086070030] |There are two distinct ways to deal with environment variables.
[131086070040] |By default, the env_reset sudoers option is enabled.
[131086070050] |This causes commands to be executed with a minimal environment containing TERM, PATH, HOME, SHELL, LOGNAME, USER and USERNAME in addition to variables from the invoking process permitted by the env_check and env_keep sudoers options.
[131086070060] |There is effectively a whitelist for environment variables.
[131086070070] |If, however, the env_reset option is disabled in sudoers, any variables not explicitly denied by the env_check and env_delete options are inherited from the invoking process.
[131086070080] |In this case, env_check and env_delete behave like a blacklist.
[131086070090] |Since it is not possible to blacklist all potentially dangerous environment variables, use of the default env_reset behavior is encouraged.
[131086070100] |In all cases, environment variables with a value beginning with () are removed as they could be interpreted as bash functions.
[131086070110] |The list of environment variables that sudo allows or denies is contained in the output of sudo -V when run as root.
[131086070120] |So if env_reset is enabled (the default), an attacker can't override your PATH or other environment variables (unless you specifically add them to a whitelist of variables which should be preserved).
[131086080010] |The safest approach is ssh login using (at least) 2048 long key (with password login disabled) using a physical device to store the key.
[131086090010] |Security is always about trade-off.
[131086090020] |Root would be most secure if there were no way to access it at all.
[131086090030] |I notice that your LD_PRELOAD and PATH attacks assume an attacker with access to your account already, or at least to your dotfiles.
[131086090040] |Sudo doesn't protect against that very well at all — if they have your password, after all, no need to try tricking you for later... they can just use sudo now.
[131086090050] |Another thing to think about is what Sudo was designed for originally: delegation of specific commands (like those to manage printers) to "sub-administrators" (perhaps grad students in a lab) without giving away root completely.
[131086090060] |Using sudo to do everything is the most common use I see now, but it's not necessarily the problem the program was meant to solve (hence the ridiculously complicated config file syntax).
[131086090070] |But, sudo-for-unrestricted-root does attempt to address another security problem: manageability of root passwords.
[131086090080] |At many organizations, these tend to be passed around like candy, written on whiteboards, and left the same forever.
[131086090090] |That leaves a big vulnerability, since revoking or changing access becomes a big production number.
[131086090100] |Even keeping track of what machine has what password becomes a challenge — let alone who knows which one.
[131086090110] |And, remember that most "cyber-crime" comes from within.
[131086090120] |With the root password situation described, it's hard to track down who did what — something sudo with remote logging deals with pretty well.
[131086090130] |On your home system, I think it's really more a matter of the convenience of not having to remember two passwords.
[131086090140] |It's probable that many people were simply setting them to be the same — or worse, setting them to be the same initially and then letting them get out of sync, leaving the root password to rot.
[131086090150] |Using passwords at all for SSH is dangerous, since password-sniffing trojaned ssh daemons are put into place in something like 90% of the real-world system compromises I've seen.
[131086090160] |It's much better to use SSH keys, and this can be a workable system for remote root access as well.
[131086090170] |But the problem there is now you've moved from password management to key management, and ssh keys aren't really very manageable.
[131086090180] |There's no way of restricting copies, and if someone does make a copy, they have all the attempts they want to brute-force the passphrase.
[131086090190] |You can make policy saying that keys must be stored on removable devices and only mounted when needed, but there's no way of enforcing that — and now you've introduced the possibility of a removable device getting lost or stolen.
[131086090200] |The highest security is going to come through one-time keys or time/counter-based cryptographic tokens.
[131086090210] |These can be done in software, but tamper-resistant hardware is even better.
[131086090220] |In the open source world, there's WiKiD and YubiKey, and of course there's also the proprietary heavyweight RSA SecurID.
[131086090230] |If you're in a medium-to-large organization, or even a security-conscious small one, I highly recommend looking into one of these approaches for administrative access.
[131086090240] |It's probably overkill for home, though, where you don't really have the management hassles — as long as you follow sensible security practices.
[131086100010] |This is a very complex question. mattdm has already covered many points.
[131086100020] |Between su and sudo, when you consider a single user, su is a little more secure in that an attacker who has found your password can't gain root privileges immediately.
[131086100030] |But all it takes is for the attacker to find a local root hole (relatively uncommon) or install a trojan and wait for you to run su.
[131086100040] |Sudo has advantages even over a console login when there are multiple users.
[131086100050] |For example, if a system is configured with remote tamper-proof logs, you can always find out who last ran sudo (or whose account was compromised), but you don't know who typed the root password on the console.
[131086100060] |I suspect Ubuntu's decision was partly in the interest of simplicity (only one password to remember) and partly in the interest of security and ease of credential distribution on shared machines (business or family).
[131086100070] |Linux doesn't have a secure attention key or other secure user interface for authentication.
[131086100080] |As far as I know even OpenBSD doesn't have any.
[131086100090] |If you're that concerned about root access, you could disable root access from a running system altogether: if you want to be root, you would need to type something at the bootloader prompt.
[131086100100] |This is obviously not suitable for all use cases. (*BSD's securelevel works like this: at a high securelevel, there are things you can't do without rebooting, such as lowering the securelevel or accessing mounted raw devices directly.)
[131086100110] |Restricting the ways one can become root is not always a gain for security.
[131086100120] |Remember the third member of the security triad: confidentiality, integrity, availability.
[131086100130] |Locking yourself out of your system can prevent you from responding to an incident.
[131086110010] |Agree with Let_Me_Be.
[131086110020] |Also agree with you about sudo not being anymore secure than using the root account itself.
[131086110030] |It pains me hearing people talk out of their ass on how you should never use the root account directly ...
[131086110040] |Sudo was meant to give access to only specific commands, but even using it that way it is very easy to configure incorrectly and leave a big gaping hole.
[131086110050] |What I do is disable passwords via SSH and make everyone use keys.
[131086110060] |Depending on the box, I'll either put people's keys in root's auth keys or to their own user and add them to wheel.
[131086110070] |sudo is annoying and gives people a false sense of security.
[131086120010] |The designers of the secured OpenWall GNU/*/Linux distro have also expressed critical opinions on su (for becoming root) and sudo.
[131086120020] |You might be interested in reading this thread:
[131086120030] |...unfortunately both su and sudo are subtly but fundamentally flawed.
[131086120040] |Apart from discussing the flaws of su and other things, Solar Designer also targets one specific reason to use su:
[131086120050] |Yes, it used to be common sysadmin wisdom to "su root" rather than login as root.
[131086120060] |Those few who, when asked, could actually come up with a valid reason for this preference would refer to the better accountability achieved with this approach.
[131086120070] |Yes, this really is a good reason in favor of this approach.
[131086120080] |But it's also the only one. ...(read more)
[131086120090] |In their distro, they have "completely got rid of SUID root programs in the default install" (i.e., including su; and they do not use capabilities for this):
[131086120100] |For servers, I think people need to reconsider and, in most cases, disallow invocation of su and sudo by the users.
[131086120110] |There's no added security from the old "login as non-root, then su or sudo to root" sysadmin "wisdom", as compared to logging in as non-root and as root directly (two separate sessions).
[131086120120] |On the contrary, the latter approach is the only correct one, from a security standpoint:
[131086120130] |http://www.openwall.com/lists/owl-users/2004/10/20/6
[131086120140] |(For accountability of multiple sysadmins, the system needs to support having multiple root-privileged accounts, like Owl does.)
[131086120150] |(For desktops with X, this gets trickier.)
[131086120160] |You also absolutely have to deal with...
[131086120170] |BTW, they planned to replace sulogin with msulogin to allow the setup with multiple root accounts: msulogin allows one to type in the user name also when going into single-user mode (and preserve the "accountability") (this info comes from this discussion in Russian).
[131086130010] |If the concern is that a compromised user account can be used to sniff the password used for sudo or su, then use a one-time passcode for sudo and su.
[131086130020] |You can force the use of keys for remote users, but that might not pass muster for compliance purposes.
[131086130030] |It might be more effective to setup an SSH gateway box that requires two-factor auth, then permit key use from there. here's a doc on such a setup: http://www.howtoforge.com/secure_ssh_with_wikid_two_factor_authentication
[131086140010] |I just want to add something a bit off topic (for the on-topic part, check '/bin/su -' below).
[131086140020] |I think that the above "security" should also be linked to the actual data we want to secure.
[131086140030] |It will and it should be different if we want to secure: my_data, my_company_data, my_company_network.
[131086140040] |Usually, if I speak about security I also speak about "data security" and backup.
[131086140050] |We can also add fault-tolerant systems and the like.
[131086140060] |Given this, I think that security as a whole is an equilibrium between usability, "data security" and the effort required to implement a secure system.
[131086140070] |Ubuntu's target was, and mostly still is, the final user: Sudo is the default.
[131086140080] |Fedora is the free version of RedHat, which in turn is more server-oriented: sudo used to be disabled.
[131086140090] |For the other distributions I have no direct information.
[131086140100] |I am using right now, mostly, fedora.
[131086140110] |And as an old style user I never typed 'su'.
[131086140120] |But I can type "/bin/su -" in a very short time even if I am not exactly a typist.
[131086140130] |The PATH variable should not be a problem (I type the full path). Also, the "-" (minus) in principle should remove my user environment variables and load only the root ones, i.e., avoiding some extra possible troubles.
[131086140140] |Probably the same goes for LD_PRELOAD.
[131086140150] |For the rest I guess that @mattdm was pretty precise.
[131086140160] |But lets put it in the correct box.
[131086140170] |Assume that a script kiddie gets access to my data.
[131086140180] |What the hell do you think he is going to do with it? Publish my pictures? Try to find out my girlfriend's name and tell her that I visit porno sites?
[131086140190] |In the single-user picture the two worst situations are: the kid deletes all my data, for fun or by mistake; or the kid uses my machine to stage a further attack on some other entity.
[131086140200] |Or similar targets.
[131086140210] |For the first case, as I mentioned above, it is better to put effort into backups than into network security.
[131086140220] |Yep, you are safe.
[131086140230] |I mean, a hardware crash is not that different.
[131086140240] |The second case is more subtle.
[131086140250] |But there are signals about these activities.
[131086140260] |In any case, you can do what is possible, but I would not configure my home PC to be protected from terrorist attacks!
[131086140270] |I will skip the other scenarios.
[131086140280] |cheers F