[131058800010] |
Σ(%CPU) ≤ logicalcores*(1+ε)
(where ε is measurement and rounding error), but how can I have, on a 2-core system, 2 different processes each taking 200% of CPU (as measured by htop)?
[131058800030] |EDIT Cores in above equation means logical cores i.e. taking into account all hyperthreading, number of CPUs etc.
[131058800040] |EDIT 2 Although htop displays the number of processors, I attach cpuinfo
[131058810010] |Do you have 2 processors or 4?
[131058810020] |If you have two cores with two CPUs each through hyperthreading, that's a total of four processors (as far as the OS is concerned).
[131058820010] |Two dual-core processors contain 4 CPUs.
[131058820020] |Multi-threaded processors also exist.
[131058820030] |Each thread appears as a CPU to the OS, although they are not true CPUs the way cores are.
[131058820040] |Most commonly, multi-threaded CPUs have two threads per core, so you can have a single dual-threaded, dual-core processor that appears to your OS as 4 CPUs.
[131058820050] |It should be noted that Sun Microsystems had a T2 chip which had 8 cores and 8 threads per core for 64 virtual CPUs on a single processor.
[131058830010] |A single physical hyperthreaded processor will appear as two logical processors.
[131058830020] |Thus your dual core processor in which each core is hyperthreaded will appear as four logical processors to things like top, cpuinfo, prstat, activity monitor, etc.
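On Linux you can check the logical-processor count directly; a quick sketch:

```shell
# count logical CPUs ("processors" as the OS sees them)
nproc
# equivalently, count the processor entries in /proc/cpuinfo
grep -c '^processor' /proc/cpuinfo
```

Both numbers should agree, and include all hyperthreads.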
[131058840010] |foo
should be mapped to foo@mydomain
, with possible modifications for uid
<1024, and sent off to an external smtp server without involving /var/mail
.
[131058890050] |Some years ago, I used sSMTP for a similar task, and I was wondering if this is still the way ahead?
[131058890060] |Also, how much of the default debian mail system should/could I remove?
[131058890070] |Update Ended up Googling a bit, and the obvious candidates seem to be
[131058890080] |foo
via a Mail Delivery Agent (MDA) instead of pushing it out over smtp to foo@some.domain
.
[131058890140] |Or maybe it does do the mapping if I add qualify_domain
to the config.
[131058890150] |Might have to try it out... nullmailer appears to be running a queue in /var
, which is not something I want.
[131058890160] |Does anybody have experience with any of these?
[131058900010] |Nullmailer might fit the bill:
[131058900020] |Nullmailer is a mail transport agent designed to only relay all its messages through a fixed set of "upstream" hosts.
[131058900030] |If you just install the Debian package, it will replace whatever MTA was there before.
[131058910010] |My recommendation would be nullmailer too.
[131058910020] |We were in a similar situation before (need to use a second MTA in Zimbra boxes for system-generated mail) and it fits the bill nicely.
[131058910030] |However, your requirement of not using /var is a hurdle.
[131058910040] |Could you elaborate on the reasons why?
[131058910050] |If the problem is using the disk at all, you could mount nullmailer's queue on tmpfs.
[131058910060] |It wouldn't be quite safe, but at least it wouldn't touch the disk at all.
[131058920010] |IMHO, the easiest way to do this is to install postfix (if debconf asks you questions, pick "internet site"), then run these commands:
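The commands themselves are missing from this answer; a sketch of the kind of configuration meant, using Debian's postconf tool and a hypothetical relay host (adjust names to your setup):

```
# hypothetical relay host; adjust to your provider
sudo postconf -e 'relayhost = [smtp.example.com]'
# hypothetical default domain, used as described below
echo 'my.domainname.com' | sudo tee /etc/mailname
sudo service postfix restart
```

This is only a sketch of the relayhost/mailname settings the answer goes on to describe, not a complete Postfix setup.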
[131058920020] |As long as the SMTP server specified in relayhost
doesn't require authentication and allows you to relay, or is a valid destination for the recipient address, this will work.
[131058920030] |The mailname
parameter will be the default domain name.
[131058920040] |So if you send mail to foo it will go to foo@my.domainname.com.
[131058920050] |Mail sent by user bar will be from bar@my.domainname.com.
[131058920060] |If you do need to authenticate and want to support TLS, also run the following commands:
[131058920070] |Then in /etc/postfix/sasl_password_maps
have this content:
[131058920080] |All mail will be sent to your relayhost with the specified username and password.
[131058930010] |--exclude
flag.
[131058940020] |For example, a local rsync:
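The example command was lost here; a minimal, self-contained sketch (directory names are hypothetical):

```shell
# toy tree: src has a "cache" directory that should be skipped
mkdir -p /tmp/ex_src/cache /tmp/ex_src/data /tmp/ex_dst
touch /tmp/ex_src/cache/skipme /tmp/ex_src/data/keepme
# copy everything except directories named "cache", anywhere in the tree
rsync -av --exclude 'cache' /tmp/ex_src/ /tmp/ex_dst/
```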
[131058940030] |It really is that simple -- that exclude rule will match a directory named "cache" anywhere in your tree.
[131058940040] |For more information, look for "--exclude" and the "FILTER RULES" section on the rsync man page:
[131058940050] |http://www.samba.org/ftp/rsync/rsync.html
[131058950010] |pidof program
gives:
[131058950030] |top -p
accepts comma-delimited arguments: 1, 2, 3
.
[131058950040] |This means that top -p `pidof program`
won't work:
[131058950050] |Can you show me how to do this?
[131058950060] |I'm not familiar with awk, sed, etc...
[131058960010] |sed replaces the spaces with commas:
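For example (assuming pidof prints space-separated PIDs):

```shell
# pidof prints space-separated PIDs; sed turns them into a comma list
pids="1234 5678"              # stand-in for: pids=$(pidof program)
echo "$pids" | sed 's/ /,/g'
# real use: top -p "$(pidof program | sed 's/ /,/g')"
```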
[131058970010] |An alternative to sed
for simple things like this is tr
:
[131058970020] |tr
can also easily handle a variable number of spaces:
[131058970030] |Additionally, if you have it available, pgrep
can work well here:
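Sketches of both variants (PIDs are stand-ins):

```shell
# tr translates spaces to commas; -s squeezes repeats, so runs of
# spaces collapse to a single comma
echo "1234  5678" | tr -s ' ' ','
# pgrep can produce the comma-delimited list itself:
# top -p "$(pgrep -d, program)"
```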
[131058980010] |top -p $(pidof init)
and top -p `pidof init`
give the same output.
[131058980030] |Are these two ways of doing one thing, or are there differences?
[131058990010] |The obvious difference I observe is that you cannot nest backticks, while you can nest $() .
[131058990020] |Maybe both exist for legacy reasons.
[131058990030] |Similarly, the . and source commands are synonyms.
[131059000010] |The old-style backquotes ` `
do treat backslashes and nesting a bit differently.
[131059000020] |The new-style $()
interprets everything in between ( )
as a command.
[131059000030] |Nesting works if the nested backquotes are escaped:
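A sketch of the difference:

```shell
# old style: the inner backquotes must be escaped
echo `echo \`echo nested\``
# new style: nests with no escaping
echo $(echo $(echo nested))
```

Both print "nested", but only the backquote form needed backslashes.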
[131059000040] |backslash fun:
[131059000050] |The new-style $()
applies to all POSIX-conformant shells.
[131059000060] |As mouviciel pointed out, old-style ` `
might be necessary for older shells.
[131059000070] |Apart from the technical point of view, the old-style ` `
also has a visual disadvantage:
[131059000080] |I like $(program) better than `program` ; escaping ` ` everywhere made writing this answer a pain :)
[131059010010] |$()
does not work with old Bourne shell.
[131059010020] |But it has been years since I worked with old Bourne shell.
[131059020010] |ssh hostname command
.
[131059080020] |If you have an entire script you need to execute, first use scp to transfer it to the remote host, then ssh to execute it.
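A sketch of that two-step pattern (host name and paths are hypothetical):

```
# copy the script to the remote host, then run it
scp myscript.sh user@remotehost:/tmp/
ssh user@remotehost 'bash /tmp/myscript.sh'

# or avoid the copy and feed the script to a remote shell over stdin
ssh user@remotehost 'bash -s' < myscript.sh
```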
[131059090010] |What about using configuration management like puppet or chef?
[131059090020] |This is maybe a little over the top for only one script, but if you need several such scripts it might be worth considering.
[131059100010] |I have been pretty happy with a shell script called dssh.sh that utilizes ssh to communicate with many machines simultaneously.
[131059100020] |It can execute the same command across lots of machines simultaneously and wait for them all to exit before returning.
[131059100030] |To download and learn more about it, the best reference I have found is the BASH Cures Cancer blog.
[131059110010] |A quickie bash 'for' loop might be easiest, perhaps something like:
[131059110020] |Of course, cfengine/puppet/chef/capistrano are better configuration management options.
[131059110030] |If you wanted to interactively send commands to the various shells, clusterm (http://sourceforge.net/projects/clusterm/) is a solid choice too.
[131059120010] |Puppet and Chef are "pull" systems and I've found that a complementary "push" system implemented using Capistrano, Fabric, or ssh(1) in a for-loop is necessary.
[131059120020] |Of course, that means public keys in place for authentication, too; fortunately, those can be managed by Puppet or Chef.
[131059130010] |somecmd | sed 's/$/\n/' | tr -s '\n'
Is there a better way to do this?
[131059140010] |Feed it through some utility which reads input in lines and outputs lines, like awk { print $0 }
.
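For example, output that lacks a final newline gets one added by awk:

```shell
# printf emits "abc" with no trailing newline; awk re-emits it as a
# complete line, so the result is 4 bytes: a b c newline
printf 'abc' | awk '{ print $0 }' | wc -c
```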
[131059150010] |Just run echo after it, it should generate a newline
[131059150020] |And If you need to feed it to something else, run it in a sub-shell:
[131059150030] |Or, as @camh points out, the subshell is actually not needed: you can execute it with a command list in the current shell environment:
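A sketch of both forms:

```shell
# subshell version
( printf 'partial output'; echo )
# command list in the current shell; note the required spaces
# around the braces and the trailing semicolon
{ printf 'partial output'; echo; }
```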
[131059160010] |ps
at specific intervals, to build up a profile of a particular process.
[131059180030] |The process can be launched by the monitoring tool itself, or it can be an independent process (specified by pid or command pattern).
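A minimal polling loop of the sort described (interval, fields, and target PID are just examples):

```shell
# sample memory (RSS, in KB) and CPU usage of a PID once per second
pid=$$                        # stand-in: monitor this shell itself
for i in 1 2 3; do
    ps -o rss= -o pcpu= -p "$pid"
    sleep 1
done
```

Redirect the loop's output to a file to build up the profile over time.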
[131059190010] |sar
(System Activity Reporter) from the sysstat package is your friend in cases like these.
[131059190020] |Another way would be monitoring combined with historical data, e.g. Munin, pnp4nagios, rrdtools, ...
[131059200010] |Besides the aforementioned sar, I'd recommend atop.
[131059200020] |It saves a binary log that you can peruse afterwards, and besides memory saves a lot of other information.
[131059210010] |Occasionally when the need arises I just do "top -d 1 -b | grep <process> >> somefile" .
[131059210020] |Not an elegant solution, but it gets the job done if you want a quick, crude value to verify your hypothesis.
[131059220010] |You could try Valgrind.
[131059220020] |Valgrind is an instrumentation framework for building dynamic analysis tools.
[131059220030] |There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail.
[131059220040] |You can also use Valgrind to build new tools.
[131059220050] |The Valgrind distribution currently includes six production-quality tools: a memory error detector, two thread error detectors, a cache and branch-prediction profiler, a call-graph generating cache and branch-prediction profiler, and a heap profiler.
[131059230010] |RFS
) in my target.
[131059230040] |I have a setup in which my RFS is in development state so it will undergo changes on the fly.
[131059230050] |My requirement is that, every time I make a change, I need to sync my current RFS to my target RFS.
[131059230060] |One of my colleagues suggested that I can use rsync
to attain my task.
[131059230070] |Please share your suggestions on this idea.
[131059230080] |All other ideas are also welcome.
[131059240010] |What kind of access do you have to the target?
[131059240020] |If you can ssh into it, it's just
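The command is missing from this answer; presumably something along these lines (target host and paths are hypothetical, and --delete is only wanted if removals should propagate):

```
# push the local RFS tree to the target's root filesystem over ssh
rsync -a --delete /path/to/local/rfs/ root@target:/
```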
[131059240030] |Note that you'll need an rsync
executable on the target as well.
[131059240040] |You'll want to set up some kind of passwordless authentication.
[131059240050] |I guess during development you would just allow root to ssh in, and put a public key for root in place.
[131059240060] |If the ssh server on the target is OpenSSH, then you need PermitRootLogin yes
in /etc/sshd_config
(or /etc/ssh/sshd_config
or something), and the root public key would be in /root/.ssh/authorized_keys
or /.ssh/authorized_keys
depending on where root's home directory is set in /etc/passwd
.
[131059240070] |If you changed the bootloader, and perhaps if you changed the kernel, you'll also need to run the bootloader update utility.
[131059250010] |An interrupt
is a hardware signal
asserted on a processor pin.
[131059250030] |But I would like to know how Linux OS handles it.
[131059250040] |What all are the things that happen when an interrupt occurs?
[131059260010] |Here's a high-level view of the low-level processing.
[131059260020] |I'm describing a simple typical architecture, real architectures can be more complex or differ in ways that don't matter at this level of detail.
[131059260030] |When an interrupt occurs, the processor checks whether interrupts are masked.
[131059260040] |If they are, nothing happens until they are unmasked.
[131059260050] |When interrupts become unmasked, if there are any pending interrupts, the processor picks one.
[131059260060] |Then the processor executes the interrupt by branching to a particular address in memory.
[131059260070] |The code at that address is called the interrupt handler.
[131059260080] |When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).
[131059260090] |The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data.
[131059260100] |If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread.
[131059260110] |When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.
[131059260120] |The interrupt handler must run quickly, because it's preventing any other interrupt from running.
[131059260130] |In the Linux kernel, interrupt processing is divided in two parts:
[131059260140] |divide_error()
).
[131059270300] |Through the IDT, the kernel knows exactly how to handle the interrupt or exception that occurred.
[131059270310] |So, what does the kernel do when an interrupt occurs?
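As an aside, on a Linux box you can watch the kernel's per-IRQ counters, which reflect exactly this handling:

```shell
# one row per IRQ line; one count column per CPU
head -n 5 /proc/interrupts
```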
[131059270320] |rm
command, e.g.:
[131059300020] |rm /tmp/pacman.lck
[131059300030] |I hate to be "that guy", but if you don't know how to delete a file from the Linux command line, Arch is not the Linux distribution for you.
[131059300040] |Try something easier, like Ubuntu or Linux Mint first.
[131059310010] |If pacman is complaining there is usually a good reason; don't ignore complaints unless you find good reason to, e.g. if it says "it's okay to ignore the error".
[131059310020] |Good practice would be to check your logs first and foremost.
[131059310030] |A lot of the logs live in /var/log/
.
[131059310040] |This will show the tail end or last couple of lines from everything.log
, which contains most errors across the system as they happen:
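For example (log path per Arch's default syslog-ng setup; guarded here since other systems log elsewhere):

```shell
# show the last 20 lines of the catch-all log
tail -n 20 /var/log/everything.log 2>/dev/null || echo "no everything.log on this system"
```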
[131059310050] |It's also worth seriously noting that running two package managers/instances of pacman
at the same time is not a good idea.
[131059310060] |Did you start one then try to start another?
[131059310070] |It may be waiting for your input:
[131059310080] |Then you try to use pacman to install something else without answering, and get:
[131059310090] |The one thing that you should learn early is that deleting anything should be your absolute last resort, in any situation with Linux.
[131059310100] |There is no 'trash bin' on the command line; delete really means it's gone.
[131059320010] |termcap
and terminfo
.
[131059330040] |(This is one of the many BSD vs. AT&T differences you still find in modern Unix systems.)
[131059330050] |These databases contain maps that tell how to control the many terminal types.
[131059330060] |The vast majority of the terminal types you'll find defined in these databases didn't survive the days of real terminals, and so are now only of historical interest.
[131059330070] |What's survived, and are used by programs like minicom
and GUI "terminal" programs like xterm
, GNOME Terminal, the OS X Terminal, etc., are a few common standards:
[131059330080] |ESC [ 1 ; 1 H .
[131059330100] |The first two characters tell the terminal to expect a control sequence, the 1s are the row and column, and H
is the command meaning "move cursor".
[131059330110] |Unix programs don't embed these escape sequences, it's all handled by the libraries mentioned above.
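You can emit such a sequence yourself to see there's no magic involved (ESC is octal 033, so the whole cursor-move sequence is 6 bytes):

```shell
# emit ESC [ 1 ; 1 H (cursor to row 1, column 1) and count its bytes
printf '\033[1;1H' | wc -c
```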
[131059330120] |Trivia: many PC BBSes used ANSI codes, too.
(Still do, actually.)
curses (or whatever) that you want programs using it to use the VT320 protocol.
xterm-color variant.
xterm protocols.
[131059330240] |Most of its extensions have to do with the differences between a PC and a glass terminal.
[131059330250] |For example, the IBM keyboard has some keys not on a DEC VT-whatever.
[131059330260] |(And vice versa.)
[131059330270] |Like Linux, some Unix systems have their own console terminal protocol, too.
[131059330280] |There's the scoansi
ANSI X3.64 variant for SCO Unixes, for example.
Programs know which protocol to tell curses to use by looking at the TERM
environment variable:
[131059330330] |When you ssh
to another system, the TERM
variable is carried along so the remote Unix box knows how to communicate with your local terminal.
[131059330340] |Because so many of these protocols are ANSI X3.64 variants, and because the ubiquitous ASCII and UTF-8 character coding standards take care of so much else, an incorrect TERM
variable isn't typically catastrophic.
[131059330350] |Things that tend to break are extended keys like Home and Page Up, Alt-whatever key combinations, and typographical display features like color, boldface, etc.
[131059340010] |~/.ssh/authorized_keys
file manually?
[131059350010] |If you have multiple ssh keys and already have one set up for authentication on the server or if you have some other non-password authentication system in place, I believe you can use ssh-copy-id
to copy a ssh identity to the server.
[131059350020] |In the ssh-copy-id
manual page it says:
[131059350030] |ssh-copy-id is a script that uses ssh to log into a remote machine (presumably using a login password, so password authentication should be enabled, unless you've done some clever use of multiple identities)
[131059350040] |ssh-copy-id
uses ssh
to log into the remote machine, so if you can currently log into the machine, you can use ssh-copy-id
.
[131059350050] |If you can not log into the machine, then you can not use ssh-copy-id
.
[131059350060] |New ids will be appended to ~/.ssh/authorized_keys
.
[131059360010] |/var/log/messages
, /var/log/syslog
, and some other log files use a timestamp which contains an absolute time, like Jan 13 14:13:10
.
[131059360030] |/var/log/Xorg.0.log
and /var/log/dmesg
, as well as the output of $ dmesg
, use a format that looks like
[131059360040] |I'm guessing/gathering that the numbers represent seconds and microseconds since startup.
[131059360050] |However, my attempt to correlate these two sets of timestamps (using the output from uptime
) gave a discrepancy of about 5000 seconds.
[131059360060] |This is roughly the amount of time my computer was suspended for.
[131059360070] |Is there a convenient way to map the numeric timestamps used by dmesg and Xorg into absolute timestamps?
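A rough sketch of such a mapping, assuming /proc/uptime reflects the current uptime (time spent suspended is exactly what this misses, producing the skew described below); the helper name is hypothetical:

```shell
# map a dmesg-style "seconds since boot" stamp to epoch seconds
dmesg_to_epoch() {
    rel=$1
    up_file=${2:-/proc/uptime}    # overridable for testing
    now=$(date +%s)
    # boot time = now - uptime; add the relative stamp to it
    awk -v now="$now" -v rel="$rel" '{ printf "%.0f\n", now - $1 + rel }' "$up_file"
}
dmesg_to_epoch 123.456
```

Feed the result to `date -d @...` to render it as a human-readable timestamp.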
[131059360080] |/var/log/syslog
and output the time skew.
[131059360100] |On my machine, running ubuntu 10.10, that file contains numerous kernel-originated lines which are stamped both with the dmesg timestamp and the syslog timestamp.
[131059360110] |The script outputs a line for each line in that file which contains a kernel timestamp.
[131059360120] |rel_offset
is 0 for all intervening lines ...
[131059360150] |... rel_offset
is -5280 for all remaining lines ...
[131059360160] |...
[131059360170] |The final lines are from a bit further down, still well above the end of the output.
[131059360180] |Some of them presumably got written to dmesg
's circular buffer before the suspend happened, and were only propagated to syslog
afterwards.
[131059360190] |This explains why all of them have the same syslog timestamp.
[131059360200] |abs
is the time logged by syslog.
[131059360220] |abs_since_boot
is that same time in seconds since system startup, based on the contents of /proc/uptime
and the value of time.time()
.
[131059360230] |rel_time
is the kernel timestamp.
[131059360240] |rel_offset
is the difference between abs_since_boot
and rel_time
.
[131059360250] |I'm rounding this to the tens of seconds so as to avoid off-by-one errors due to the absolute (i.e. syslog
-generated) timestamps only having seconds precision.
[131059360260] |That's actually not the right way to do it, since it really (I think..) just results in a smaller chance of having an off-by-10 error.
[131059360270] |If somebody has a better idea, please let me know.
[131059360280] |I also have some questions about syslog's date format; in particular, I'm wondering if a year ever shows up in it.
[131059360290] |I'm guessing no, and in any case could most likely help myself to that information in TFM, but if somebody happens to know it would be useful. ..Assuming, of course, that someone uses this script at some point in the future, instead of just busting out a couple of lines of Perl code.
[131059360300] |uname -a
doesn't quite work, since Darwin kernel versions don't always change with the rest of the system.
[131059400010] |Here is a blog article with instructions: How to Get the Mac OS X Version in a Shell Script
[131059410010] |Try this:
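The command was lost from this answer; the usual approach is sw_vers, which is macOS-only, so it's guarded with a fallback here:

```shell
# prints e.g. "10.6.6" on OS X; degrades gracefully elsewhere
sw_vers -productVersion 2>/dev/null || echo "sw_vers not found (not OS X)"
```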
[131059420010] |The answer that suggested "system_profiler | grep 'System Version'" is what I have tried to use in the past, but it has 2 problems.
[131059420020] |uvcvideo
?
[131059460010] |Perhaps this works:
[131059470010] |logfile "%t-screen.log"
(probably in a .screenrc
file) to configure the name of the log file that will be started later.
Use the title
(C-a A) screen command to set the title of a new window, or do screen -t ssh0
to start a new screen session.
In tmux, use the pipe-pane
shell command (pipe-pane
is available in tmux 1.0+):
[131059500060] |.tmux.conf
):
[131059500070] |tmux rename-window
(C-b ,) to rename an existing window, or use tmux new-window -n 'ssh '
to start a new tmux window, or use tmux new-session -n 'ssh '
to start a new tmux session.
.tmux.conf
or one you source
). tmux needs to see both the backslash and the semicolon; if you want to configure this from the a shell (e.g. tmux bind-key …
), then you will have to escape or quote both characters appropriately so that they are delivered to tmux intact.
[131059500110] |There does not seem to be a convenient way to show different messages for toggling on/off when using only a single binding (you might be able to rig something up with if-shell
, but it would probably be ugly).
[131059500120] |If two bindings are acceptable, then try this:
[131059510010] |screen -m -d -S minecraft /var/minecraft/bin/server_nogui.sh
[131059580050] |This starts the minecraft server without any trouble.
[131059580060] |However, the issue is that even simple followups like this fail:
[131059580070] |screen -r minecraft -X "stop"
[131059580080] |I get no error message or success message, and the server does not actually disconnect clients and shut down, like it should.
[131059580090] |I assume I'm doing something wrong, but I don't know what.
[131059580100] |Is there some obvious mistake I'm making?
[131059580110] |I've read the man page a bit but I'm having no luck figuring it out myself.
[131059590010] |You have to give the parameter -X
a screen
command; I think you want to "stuff" a minecraft-server command into the screen
session.
[131059590020] |The echo
sends a carriage return, so the command "stop" gets executed.
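A sketch of the stuff form (session name as in the question; the trailing carriage return is what actually executes the command in the server console):

```
# type "stop" followed by Enter into window 0 of the detached session
screen -S minecraft -p 0 -X stuff "stop$(printf '\r')"
```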
[131059590030] |For sending it over ssh
you have to enclose the command in " "
(you could also use ' '
, but that wouldn't let you do the command substitution).
[131059590040] |Beware that !
is special to the shell (history expansion); you have to escape it.
[131059590050] |It is also possible to include a user generated newline into the command to execute it:
[131059590060] |Escaping !
isn't necessary here.
[131059600010] |tr
command.
[131059630020] |For example:
[131059630030] |To delete the control character:
[131059630040] |To replace the control character with another:
[131059630050] |If you are not sure what the value of the control character is, perform an octal dump and it will print it out:
[131059630060] |So the value of control character ^[
is \033
.
[131059640010] |This will replace all non-printable characters with a #
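The command was lost from this answer; presumably tr with a complemented character class is meant:

```shell
# replace every non-printable byte with '#' (here \007 is a BEL)
printf 'a\007b' | tr -c '[:print:]' '#'
```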
[131059650010] |I'm not sure if I understand what you want, but if it is to substitute for occurrences of the successive hex bytes 0x00 0x03, this should work:
[131059660010] |ashtanga
doesn't have access to /home/custom-django-projects/SiteMonitor/sender.py
.
[131059670020] |This looks like another user's home area?
[131059670030] |Try running the script as ashtanga
.
[131059670040] |It's always a good first step, before you add anything to cron.
[131059670050] |It might be to do with your cron environment.
[131059670060] |Take a look at this Cron FAQ: It works from the command line but not in crontab
[131059680010] |The user does have permission, as the permissions are set to 755. The problem is that the user doesn't have the environment variables needed.
[131059680020] |Try using bash instead and see if it picks them up then.
[131059680030] |Otherwise, set them up manually
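A sketch of setting them up manually in the crontab itself (the script path is from the question; the variable values are hypothetical):

```
# crontab fragment: give the job an explicit environment
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * /home/custom-django-projects/SiteMonitor/sender.py
```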
[131059680040] |Start troubleshooting by running the script using the /bin/sh
shell.
[131059680050] |You should get the same error then.
[131059690010] |/home/foo/bar/baz
it will tell me what the permissions are for baz
, bar
, foo
, and home
.
[131059710040] |Does anyone know what this command is or another way of doing this?
[131059710050] |The command basically starts at the argument, and works its way up to /
letting you know what the permissions are along the way so you can see if you have a permission problem.
[131059720010] |I'm not aware of any commands, but it is quite easy to write a script:
[131059720020] |Example:
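The script body was lost from this answer; a minimal sketch that does the job (function name is arbitrary):

```shell
# print permissions for each component of a path, walking up to /
pathperm() {
    p=$1
    while [ -n "$p" ] && [ "$p" != / ]; do
        ls -ld "$p"
        p=$(dirname "$p")
    done
    ls -ld /
}
pathperm /usr/bin
```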
[131059730010] |How about a recursive bash function for a fun solution:
[131059740010] |This could easily be made a one-liner.
[131059740020] |This is not recursive and should be a relatively fast way of doing this in bash.
[131059740030] |Calling pwd in each loop isn't particularly fast, so avoid if you can.
[131059740040] |Alternatively, a one-liner for the current directory.
[131059750010] |The utility you may be thinking of is the namei
command.
[131059750020] |According to the manual page:
[131059750030] |Namei uses its arguments as pathnames to any type of Unix file (symlinks, files, directories, and so forth).
[131059750040] |Namei then follows each pathname until a terminal point is found (a file, directory, char device, etc).
[131059750050] |If it finds a symbolic link, we show the link, and start following it, indenting the output to show the context.
[131059750060] |The output you desire can be received as follows:
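For example (using /usr/bin here, since the question's path is hypothetical):

```shell
# -l adds an ls-style long listing for every path component
namei -l /usr/bin
```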
[131059750070] |The namei
command is part of the linux-util-ng software package.
[131059750080] |See the manual page for more details.
[131059760010] |find -name somefile.txt
)
vim
-
[131059780020] |vim accepts - as a file name, which means stdin.
[131059790010] |Try this:
[131059790020] |:r!find / -name 'expression'
[131059790050] |The results should appear in vim when the search is complete.
[131059790060] |Or
[131059790070] |Try:
[131059800010] |I like to use the back ticks ` (it's on the same key as the ~)
[131059800020] |The back ticks execute the command inside the ticks, and the output can then be used by the command.
[131059800030] |The above will find all files named somefile.txt, thus allowing you to use :next
to move through all the files.
[131059800040] |It's very useful if you spend a couple of tries refining the command, because you can then use history substitution to repeat the command for the editor.
[131059810010] |If you don't mind running the command again: press Up and append an xargs
command.
[131059810020] |Or use history substitution and run
[131059810030] |There's a lightweight way of saving the output of a command that works in ksh and zsh but not in bash (it requires the output side of a pipeline to be executed in the parent shell).
[131059810040] |Pipe the command into the function K
(zsh definition below), which keeps its output in the variable $K
.
[131059810050] |Automatically saving the output of each command is not really possible with the shell alone, you need to run the command in an emulated terminal.
[131059810060] |You can do it by running inside script
(a BSD utility, but available on most unices including Linux and Solaris), which saves all output of your session through a file (there's still a bit of effort needed to reliably detect the last prompt in the typescript).
[131059820010] ||xargs rm -f
to that command.
[131059850020] |Here's what it would look like
[131059850030] |Note that the xargs rm
command works here because you know there aren't any special characters in the file names.
[131059850040] |If there might be spaces in the file names, you can use xargs -d '\n' rm -f
(Linux only).
[131059860010] |/usr/matlab
.
[131059860030] |What do I do to make it appear in the application launcher on the top left of the taskbar?
[131059870010] |You can add a launcher to the panel by right clicking on a free area on the panel and selecting "Add to panel" and then "Custom Application Launcher" (or if the application is already present in the applications menu, you can select "Application Launcher" and then select the application from the menu).
[131059870020] |You can add an entry into the applications menu by right clicking on it and selecting "Edit menu".
[131059880010] |systemd
mentioned on Arch General ML today.
[131059900020] |So read up on it.
[131059900030] |The H Online, as ever, is a great source for Linux technology and is where I found my place to start researching systemd as a SysV init and Upstart alternative.
[131059900040] |However, the H Online article (in this case) isn't a very useful read in itself; its real use is that it gives links to the useful reads.
[131059900050] |The real answer is in the announcement of systemd,
[131059900060] |which gives some crucial points of what's wrong with SysV init, and what new systems need to do.
[131059900070] |/home/
, etc (not to be confused with /etc
) to mount, and/or fsck
when you could be starting daemons, as /
and /var/
etc, are already mounted.
[131059900120] |It said it was going to use autofs to this end.
[131059900130] |It also has the goal of creating .desktop
style init descriptors as a replacement for scripts.
[131059900140] |This will prevent tons of slow sh
processes and even more forks of processes from things like sed
and grep
that are often used in shell scripts.
[131059900150] |They also plan not to start some services until they are asked for, and perhaps even shut them off when they are no longer needed; the bluetooth module and daemon are only needed when you're using a bluetooth device, for example.
[131059900160] |Another example given is the ssh daemon.
[131059900170] |This is the kind of thing that inetd is capable of. Personally, I'm not sure I like this, as it might mean latency when I do need them, and in the case of ssh I think it means a possible security vulnerability: if my inetd were compromised, the whole system would be.
[131059900180] |However, I've been informed that using this to breach this system is infeasible and that if I want to I can disable this feature per service and in other ways.
[131059900190] |Another feature is apparently going to be the capability to start based on time events, either at a regularly scheduled interval or at a certain time.
[131059900200] |This is similar to what crond
and atd
do now.
[131059900210] |Though I was told it will not support user "cron".
[131059900220] |Personally this sounds like the most pointless thing.
[131059900230] |I think this was written/thought up by people who don't work in multiuser environments; there isn't much purpose to user cron if you're the only user on the system, other than not running as root.
[131059900240] |I work on multiuser systems daily, and the rule is always run user scripts as the user.
[131059900250] |But maybe I don't have the foresight they do, and it will in no way make it so that I can't run crond
or atd
, so it doesn't hurt anyone but the developers I suppose.
[131059900260] |The big disadvantage of systemd is that some daemons will have to be modified in order to take full advantage of it.
[131059900270] |They'll work now, but they'd work better if they were written specifically for its socket model.
[131059900280] |It seems that, for the most part, the systemd people's problem with Upstart is the event system, which they believe does not make sense or is unnecessary.
[131059900290] |Perhaps their words put it best.
[131059900300] |Or to put it simpler: the fact that the user just started D-Bus is in no way an indication that NetworkManager should be started too (but this is what Upstart would do).
[131059900310] |It's right the other way round: when the user asks for NetworkManager, that is definitely an indication that D-Bus should be started too (which is certainly what most users would expect, right?).
[131059900320] |A good init system should start only what is needed, and that on-demand.
[131059900330] |Either lazily or parallelized and in advance.
[131059900340] |However it should not start more than necessary, particularly not everything installed that could use that service.
[131059900350] |As I've already said this is discussed much more comprehensively in the announcement of systemd.
[131059910010] |Both upstart and systemd are attempts to solve some of the problems with the limitations of the traditional SysV init system.
[131059910020] |For example, some services need to start after other services (for example, you can't mount NFS filesystems until the network is running), but the only way in SysV to handle that is to set the links in the rc#.d directory such that one is before the other.
[131059910030] |Add to that, you might need to re-number everything later when dependencies are added or changed.
[131059910040] |Upstart and Systemd have more intelligent settings for defining requirements.
[131059910050] |Also, there's the issue that everything is a shell script of some sort, and not everyone writes the best init scripts.
[131059910060] |That also impacts the speed of the startup.
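A sketch of that ordering problem (the directory and link names below are fabricated for illustration): SysV encodes start order in the link names themselves, so lexical order is execution order, and squeezing a new dependency in between existing services means renumbering by hand.

```shell
# Fabricated rc directory: init runs the S-links in lexical order, so
# S10network starts before S20nfs. Inserting something between them
# later forces a renumbering of the links.
rm -rf /tmp/demo_rc2.d && mkdir -p /tmp/demo_rc2.d
touch /tmp/demo_rc2.d/S10network /tmp/demo_rc2.d/S20nfs /tmp/demo_rc2.d/S99local
ls /tmp/demo_rc2.d
```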
[131059910070] |Some of the advantages of systemd I can see:
[131059910080] |iwconfig
which returns normal data, and ping -c 100 -i 0.2
on the router and some stable website IP addresses, but the summary doesn't give me very good data, only the occasional packet loss.
[131059930060] |One piece of information missing from the summary is the count of packets with clearly deviating round-trip times, since that's one of the symptoms I've noticed - most packets come back in a consistent time, but some take a lot longer.
[131059930070] |So what tools can I use to get some actual, numerical data on the quality of my internet connection?
[131059930080] |(And just in case someone's wondering, yes, the problem is real and not just confirmation bias, as it sometimes appears bad enough to throw me off the WLAN connection.
[131059930090] |It's probably somehow related to this Ubuntu bug and/or this Redhat bug)
[131059940010] |Maybe setup smokeping on the Linux side, and point it at your AP?
[131059940020] |Smokeping will periodically (configurable) send ~20 pings at the same time, and then graph how many returned and the range of times in which they returned.
[131059940030] |If you have a lot of dropped packets, or a really wide range, then you should be concerned.
[131059940040] |If you don't want to run smokeping, you could use fping directly, which is what Smokeping calls to collect the data.
[131059940050] |It is a lot easier to interpret with the graph though.
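If you'd rather stay with plain ping output, a small awk filter can at least count the outlier round-trip times the summary line hides. A sketch; the 100 ms threshold is an arbitrary assumption, and the log lines here are canned so the example is self-contained:

```shell
# Count ping replies whose RTT exceeds 100 ms (arbitrary cutoff).
# Splitting on 'time=' leaves "<rtt> ms" in $2; adding 0 coerces it
# to a number.
count_slow() {
    awk -F'time=' '/time=/ { if ($2 + 0 > 100) n++ } END { print n + 0 }'
}
# Demo on two fabricated reply lines; real use: ping -c 100 host | count_slow
printf '64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=2.31 ms\n64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=412 ms\n' | count_slow
```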
[131059950010] |Use tcpdump to capture packets that are leaving your local LAN subnet.
[131059950020] |Then use tools like Wireshark or tshark to do some analysis on how much loss you're experiencing, as well as what the variance in round trip time is, and how TCP is behaving.
[131059950030] |(Windowing, retransmits, etc).
[131059950040] |The reason I suggest this rather than running some sort of ping/traceroute based monitoring software is that many network operators treat ICMP traffic (and generation of ICMP unreachables, which traceroute relies on) differently to actual UDP/TCP traffic.
[131059950050] |Using an ICMP based tool may therefore give you spurious results.
[131059960010] |Red Hat nash version 5.1.19.6 starting
I get the following lines:
[131059960080] |Is there something I can tweak to get this to possibly boot?
[131059960090] |I'd really like to not have to reload CentOS 5.5 and the specialized software on this machine.
[131059960100] |I do have a grub menu setup on this drive, could this by chance be my problem?
[131059960110] |The drives in the old machine are setup with Linux as drive 1, and Windows as Drive 2, and the Linux drive hosts the grub menu allowing me to boot to Linux or Windows.
[131059960120] |Could this some how be the problem?
[131059960130] |I do know of a way around this with Windows: install a secondary HDD controller card in the old machine, install the drivers, hook the drive up to the controller and make sure it boots, move the drive and controller to the new machine and boot off it, load the motherboard drivers (specifically the HDD controller drivers), and then you can take out the controller card, connect the HDD directly to the motherboard, and you're set.
[131059960140] |The same thing is probably accomplishable in Linux, but I'm not sure.
[131059960150] |This might be a last ditch effort to try if nothing else works.
[131059970010] |If you get this far, it means the bootloader loaded the kernel and initrd/initramfs successfully, but the kernel is not finding the root device.
[131059970020] |So you should be able to boot by passing something like root=/dev/sda42
on the kernel command line.
[131059970030] |At the Grub prompt, edit the entry for Linux, and look for the line that begins with linux
.
[131059970040] |On that line, there should be a parameter that looks like root=/dev/sda42
.
[131059970050] |Change it to root=/dev/sdb42
, i.e. a different drive.
[131059970060] |The current letter might not be a
, and the letter that works might not be b
, though if you have two drives you'll probably just need to swap sdb
for sda
or vice versa.
[131059970070] |The order of the drive letters in Linux is unrelated (or at least not directly related) to the order in the BIOS, in Grub or in Windows (it depends on the order in which the drivers are loaded).
[131059970080] |(There are ways around this, but they won't help you right now.)
[131059970090] |When you boot, you might get errors if entries in /etc/fstab
don't match the current disk device names.
[131059970100] |If you're not able to get to a repair console, reboot and (in addition to the root=
change) add init=/bin/sh
to drop directly to a shell, then run
[131059980010] |fdisk
deserves a separate question, but have you ever really tried to use it?
[131059990110] |I find fdisk
pretty straightforward.
[131059990120] |If you find it complicated you can try a live CD with GParted.
[131059990130] |The openSUSE live CD should have a GUI partitioning tool as well, but I'm not sure (I'm more familiar with Ubuntu).
[131060000010] |.bash_aliases
file
[131060000030] |alias auth="grep \"$(date|awk '{print $2,$3}')\" /var/log/auth.log|grep -E '(BREAK-IN|Invalid user|Failed|refused|su|Illegal)'"
[131060000040] |This is supposed to:
[131060000050] |auth.log
for todays messagesdate
and pipe the result to awk
[131060000110] |date
outputs Sat Jan 1 04:56:10 GMT 2011
and then awk captures $2
and $3
and feeds them into grep as follows
[131060000120] |Jan 1
[131060000130] |However, when there's a single digit day, messages in auth.log
appear as follows
[131060000140] |So there are two spaces following Jan
in the auth.log
but only one space following Jan
in my grep command
[131060000150] |How can I modify the command to allow for the additional space?
[131060010010] |Rather than using date | awk ...
, you can use a format specifier with the date command for the format you want.
[131060010020] |According to the date(1)
man page, %b
is the abbreviated month name, and %e
is the day of month, space padded, same as %_d
.
[131060010030] |The following date command should give you a string in the form you want:
[131060010040] |You can also put other characters into the format specifier, so if you use:
[131060010050] |you'll get a grep pattern that matches the date only at the beginning of the line.
[131060010060] |This would prevent any false matches where there is a date in the message part of the log.
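Putting the pieces together, a minimal sketch. The log line below is fabricated so the example is self-contained; in real use you'd grep /var/log/auth.log:

```shell
# %b = abbreviated month, %e = space-padded day of month, so single-digit
# days get the double space that syslog timestamps use ("Jan  1", not
# "Jan 1"). Anchoring with ^ avoids matching dates inside the message.
pattern="^$(date '+%b %e')"
# Demo against a fabricated log line carrying today's timestamp:
printf '%s myhost sshd[123]: Failed password for invalid user bob\n' \
    "$(date '+%b %e %H:%M:%S')" > /tmp/demo_auth.log
grep -c "$pattern" /tmp/demo_auth.log
```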
[131060010070] |As pointed out by Steven D, you can also do this with a single invocation of grep
:
[131060010080] |I've made a few changes based on issues mentioned in comments related to quoting.
[131060010090] |My rules for quoting are to use single quotes when grouping separate words into a single word and protecting against shell expansion of metacharacters, and to use double quotes only when you want expansion inside a multi-word string.
[131060010100] |The original answer had the date
format string in double quotes, which was wrong according to my above rules.
[131060010110] |I've now changed that.
[131060010120] |An edit put the grep string into double quotes.
[131060010130] |I've put it back into single quotes because there is so often an overlap between shell metacharacters and grep regular expression (RE) metacharacters that you almost always want to single-quote REs to grep.
[131060010140] |The current string may not need single quotes but if this shell function evolves over time, it may break with future changes.
[131060010150] |Because the question was asking about a command to put inside an alias, there was an additional level of quoting that was not shown in this answer.
[131060010160] |It would be simpler to use a shell function instead of an alias so you don't need to deal with this extra level of quoting.
[131060010170] |Nested quoting can get messy quickly, so anything you can do to avoid it, you should do.
[131060010180] |I have tested this as a shell function, using Gilles's suggestion for futzing with the date, and it "works for me".
[131060020010] |wget --mirror http://tshepang.net/
, but it only retrieves one page, "tshepang.net/index.html".
[131060040030] |Is this a bug in wget?
[131060040040] |Here's the output, from using the --debug
option:
[131060050010] |Assuming wget is in your path (if not, you’ll have to enter the full path) issue the following commands:
[131060060010] |The --no-cookies
option helped (thanks to wag):
[131060060020] |It seems like all the redirection caused wget to interrupt the request.
[131060060030] |Try with --no-cookies.
[131060060040] |This was determined from reading the attached log.
[131060070010] |You also need to set -r
for recursive and -l X
for link depth, where X is an integer.
[131060070020] |It's also a good idea to set -A
to set the list of acceptable file types to keep (otherwise you only get HTML files).
[131060080010] |find /opt/path -exec setacl -d user:myUser {} ';'
[131060110050] |After this executes and the acl is removed I am left with an acl that looks as follows
[131060110060] |user:101:--- /opt/path
[131060110070] |How can I properly call setacl
to remove the user without leaving behind a uid?
[131060120010] |Is user 101 the owner of the file?
[131060120020] |If so, you need to change the file to a different user ID, with chown
(in addition to, or in lieu of, the setacl
call).
[131060120030] |Every file belongs to one user and one group; ACLs come in addition to that.
[131060120040] |Note that I've never used ACLs on HP/UX, so I may be missing something.
[131060120050] |It might help if you showed the output of ls -ld /opt/path
and getacl /opt/path
before you run that find
command.
[131060130010] |If you've quoted your command accurately as:
[131060130020] |you are missing a crucial space:
[131060130030] |The former invokes undefined (or maybe implementation-defined) behaviour from find
; it might or might not expand the file name when the {}
is not in an argument on its own.
[131060130040] |But it then invokes the setacl
command with no filename; it combines the filename with the control argument user:myUser
.
[131060130050] |It is most unlikely to be correct as written - but I'm hoping that it is just a typo in your transcription from your system to SO.
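To see the difference concretely (setacl is HP-UX-only, so echo stands in for it here so the sketch runs anywhere):

```shell
# With the space, find substitutes each found path as its own argument:
rm -rf /tmp/demo_find && mkdir -p /tmp/demo_find && touch /tmp/demo_find/a
find /tmp/demo_find -type f -exec echo setacl -d user:myUser {} ';'
# Without the space ("user:myUser{}"), whether {} is expanded at all is
# implementation-defined, and setacl would receive no separate filename.
```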
[131060140010] |curl -F "sprunge=<-" http://sprunge.us | xclip
aliased to webshare
on my system, so it becomes simply command | webshare
.
[131060150040] |The added xclip at the end gets the url into the X clipboard; it's not on every system, and there are several other tools out there like it.
[131060160010] |I use ix.io with an account set up in .netrc and its command line tool installed; it's simple and cool.
[131060160020] |Then you can either pipe stuff through it like the above answer:
[131060160030] |or directly paste a file:
[131060160040] |this returns the url.
[131060160050] |Then I additionally set up a git alias for this so that I can easily paste my format-patches and get an url for it:
[131060160060] |To paste a patch I do, for example:
[131060160070] |or to paste whatever is in your current buffer in vim:
[131060160080] |For uploading files that aren't too big: http://paste.xinu.at/ with its client.
[131060170010] |/bin
folder it has far less content than the /usr/bin
folder (atleast on my running system).
[131060170070] |So can someone please explain the difference?
[131060180010] |What? No, /bin/
 is not a symlink to /usr/bin
, at least not on any FHS-compliant system.
[131060180020] |/bin
[131060180030] |contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode).
[131060180040] |It may also contain commands which are used indirectly by scripts
[131060180050] |/usr/bin/
[131060180060] |This is the primary directory of executable commands on the system.
[131060180070] |Essentially, /bin
contains executables which are required by the system for emergency repairs, booting, and single user mode. /usr/bin
contains any binaries that aren't required.
[131060180080] |I will note, that they can be on separate disks/partitions, /bin
must be on the same disk as /
. /usr/bin
can be on another disk.
[131060180090] |For full correctness, some Unixes may ignore the FHS; I believe it is only a Linux standard, and I'm not aware that it has yet been included in SUS, POSIX, or any other UNIX standard, though IMHO it should be.
[131060180100] |It is a part of the LSB standard though.
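You can inspect the split on your own system. As a hedge: many recent Linux distributions have since merged /usr, in which case /bin really is a symlink nowadays, despite the FHS text quoted above:

```shell
# Show what /bin is (a real directory, or a symlink on merged-/usr
# systems), and how populated /usr/bin is. Counts vary per system.
ls -ld /bin /usr/bin
ls /usr/bin | wc -l
```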
[131060190010] |There are many UNIX-based systems.
[131060190020] |Linux, AIX, Solaris, BSD, etc.
[131060190030] |The original quote gives historical context that applies to all flavors.
[131060190040] |If you look on any one specific system, you will see different results.
[131060190050] |The last sentence of the original quote is specific to only some versions and distributions.
[131060200010] |On Linux /bin
and /usr/bin
are still separate because it is common to have /usr
on a separate partition.
[131060200020] |In /bin
is all the commands that you will need if you only have /
mounted.
[131060200030] |On Solaris (and probably others) /bin
is a symlink to /usr/bin
.
[131060200040] |Of particular note, the statement that /bin
is for "system administrator" commands and /usr/bin
is for user commands is not true (unless you think that bash
and ls
are for admins only, in which case you have a lot to learn).
[131060200050] |Administrator commands are in /sbin
and /usr/sbin
.
[131060210010] |/sbin
- Binaries needed for booting, low-level system repair, or maintenance (run level 1 or S)
[131060210020] |/bin
- Binaries needed for normal/standard system functioning at any run level.
[131060210030] |/usr/bin
- Application/distribution binaries meant to be accessed by locally logged in users
[131060210040] |/usr/sbin
- Application/distribution binaries that support or configure stuff in /sbin.
[131060210050] |/usr/share/bin
- Application/distribution binaries or scripts meant to be accessed via the web, i.e. Apache web applications
[131060210060] |*local*
- Binaries not part of a distribution; locally compiled or manually installed.
[131060210070] |There's usually never a /local/bin
but always a /usr/local/bin
and /usr/local/share/bin
.
[131060220010] |pkg install OSOLvpanels
and then it will appear under the System->Administration menu in GNOME as "Services" or you can start it with the command vp svcs
.
[131060240010] |S("cp a b")
[131060300060] |Maybe not :)
[131060310010] |There are some packages available for node that facilitate system scripting.
[131060310020] |The node package manager is probably the easiest way to install such packages; node itself can be built from source (with the v8 engine it runs on) or installed via some system package managers.
[131060310030] |You may need to learn to use evented I/O in order to get much done.
[131060320010] |You should search for "learn python in 10 minutes".
[131060320020] |It covers the most useful Python features: lists, tuples, dictionaries, classes, and of course its awesome indentation system.
[131060320030] |Learn it; I personally consider Python the most important language after C/C++, because it does so much by default, and as a scripting language it serves a lot of purposes.
[131060320040] |Advantages:
[131060320050] |/usr/local
or opt
for an alternate perl installation, check for a PERL5LIB
setting in /etc/profile
.
[131060370040] |I wouldn't do it that way, because as you noticed it will break dependencies, but I can see why someone might be tempted.
[131060370050] |Maybe if you post the full set of exclusions someone will spot a pattern.
[131060370060] |Is there any comment in the file that might give a hint?
[131060370070] |To avoid this kind of issue in the future, you should put all configurations under version control.
[131060370080] |Then the changelog would indicate when the surprising configuration was set up, and hopefully why.
[131060370090] |On Debian/Ubuntu I use etckeeper, which I think has been packaged for CentOS too.
[131060370100] |On a multi-administrator machine, it should be set up never to commit changes automatically, forcing the administrator to make an explicit commit before they can run yum install
or yum update
.
[131060380010] |cPanel keeps its own copy of Perl.
[131060380020] |The default install adds that exclude rule.
[131060380030] |I think they do it because many people rely heavily on cPanel working and doing all the server work, and there may have been issues in the past regarding the packages and Perl.
[131060380040] |You can install git by using the --disableexcludes
option to disable the excludes on the repository:
[131060390010] |It is unlikely that installing Perl into its normal root will interfere with cPanel, depending on the configuration.
[131060390020] |What does which perl
return?
[131060390030] |Technically you can install Git, or even Git plus its dependencies, without having Perl installed.
[131060390040] |Please note that doing so may affect certain functionality within Git.
[131060390050] |yum -y install yum-downloadonly && yum install --downloadonly --downloaddir=/foo/bar/ git
[131060390060] |This will download the current RPMs for Git and its dependencies (perl-Error and perl-Git) to /foo/bar/.
[131060390070] |Now you can rpm -ivh --nodeps /foo/bar/{git,perl-{Error,Git}}*.rpm
[131060400010] |infocmp >>~/etc/terminfo.txt
.
[131060410090] |Edit the description to change the rs1
(basic reset) sequence, e.g. replace rs1=\Ec
by rs1=\Ec\E]4;4;#6495ed\E\\
.
[131060410100] |With some programs and settings, you may need to change the rs2
(full reset) as well.
[131060410110] |Then compile the terminfo description with tic ~/etc/terminfo.txt
(this writes under the directory $TERMINFO
, or ~/.terminfo
if unset)./etc/termcap
). Change the is
(basic reset) and rs
(full reset) sequences to append your settings, e.g. :is=\Ec\E]4;4;#6495ed\E\\:
.
[131060410130] |Set the TERMCAP
environment variable to the edited value (beginning and ending with :
).~/.profile
:
[131060420010] |You're ssh
-ing into just one box, right? Why not just set the PS1
variable on that box to use the colorscheme you want?
[131060420020] |If you keep it to 16 colors you shouldn't have a problem on any modern TERM
, most should support 256 colors, but most don't set TERM=xterm-256color
out of the box, and some fools (cough my employer cough) sanitize TERM
to be alpha-numeric only.
[131060420030] |Unfortunately what to put in your PS
vars, is highly dependent on the shell you are using.
[131060430010] |grub.cfg
/ menu.lst
is configured correctly?
[131060440040] |My GRUB normally outputs this line after the root (hd0,X)
command...
[131060440050] |I can't tell much more without some extra details of what software you're running, full output, and at what part of the boot process this occurs :)
[131060450010] |/etc/X11/xorg.conf
, I have:
[131060450050] |It is recognized when X11 starts:
[131060450060] |The operating system is Debian 5 (Lenny).
[131060450070] |The graphics card is:
[131060450080] |X11 is:
[131060460010] |--color
for grep
, ls
, etc.
[131060530010] |FreeBSD has CLICOLOR.
[131060530020] |On Linux and any other system with GNU tools, you need to set LS_COLORS, GREP_COLOR, and GREP_OPTIONS='--color=auto', but even then you still need to run ls --color=auto
.
[131060530030] |Run info coreutils 'ls invocation'
for more details.
[131060530040] |The easiest way I know to avoid typing --color
on Linux is to make ls
run ls --color=auto
using an alias.
[131060530050] |This is what I put in my .bashrc (well, really my .env, but it's like .bashrc) to make it happen by default:
[131060540010] |apt
[131060580040] |Damn Small Linux is a very versatile 50MB mini desktop oriented Linux distribution.
[131060580050] |DSL was originally developed as an experiment to see how many usable desktop applications can fit inside a 50MB live CD.
[131060580060] |It was at first just a personal tool/toy.
[131060580070] |But over time Damn Small Linux grew into a community project with hundreds of development hours put into refinements including a fully automated remote and local application installation system and a very versatile backup and restore system which may be used with any writable media including a hard drive, a floppy drive, or a USB device.
[131060580080] |Important note: Apparently Damn Small Linux is no longer maintained
[131060590010] |I am not aware of any apt-based Linux that is actually small.
[131060590020] |However, if you remove that one requirement (apt-based), I can recommend Slitaz.
[131060590030] |The ISO file is 30MB, it comes with a GUI and a functional Web browser.
[131060590040] |It is actually still maintained as opposed to DSL which seems to have been abandoned for a while.
[131060590050] |Slitaz uses tazpkg
for management, which in my opinion is as easy as apt:
[131060600010] |Debian can be quite small.
[131060600020] |During the install when you get to tasksel, unselect everything.
[131060600030] |You'll get a very minimal system taking up only 512M. Even then, you can still remove packages that you won't use.
[131060610010] |Crunchbang Linux
[131060610020] |It used to be based on Ubuntu, but now it's based on Debian.
[131060610030] |It comes with Openbox as the default window manager.
[131060610040] |The default Debian install uses GNOME, which is quite a bit heavier than Openbox.