[131064210010] |vimprobable and uzbl.
[131064290040] |Whenever I start one of those two browsers, I get the error message:
[131064290050] |icedteanp plugin error: Failed to run etc/alternatives/../../bin/java.
[131064290060] |For more detail rerun "firefox -g" in a terminal window.
[131064290070] |After some time (about 1 minute), the message disappears by itself (or when I click the "close" button). The browser(s) continue loading and again, this error appears on specific sites.
[131064290080] |What can I do to fix this problem?
[131064290090] |Thank you.
[131064300010] |Your java alternative is not configured properly; the web browser cannot find the binary.
[131064300020] |Update your alternatives:
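Presumably something like this (on a Debian-style system with the alternatives mechanism):

    sudo update-alternatives --config java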
[131064300030] |This should output either a single alternative (with a path) or multiple alternatives to choose from.
[131064300040] |If there is only one, check that the path exists; otherwise, select your preferred alternative.
[131064300050] |Try again opening a page containing java elements.
[131064300060] |Still the same error?
[131064300070] |First, find out what provides java:
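A guess at the lost snippet (consistent with "if this returns nothing" below):

    which java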
[131064300080] |If this returns nothing, you have to install java first.
[131064300090] |Otherwise, check where exactly the binary lies:
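For example, following the alternatives symlink chain to the real binary:

    readlink -f "$(which java)"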
[131064300100] |This should show the java binary.
[131064300110] |Now update your alternatives to refer to that binary:
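A sketch with a made-up JVM path; substitute the path you found above:

    sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-6-openjdk/jre/bin/java 100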
[131064310010] |dpkg --search /bin/ls gives:
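    $ dpkg --search /bin/ls
    coreutils: /bin/ls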
[131064310030] |That is, the file "/bin/ls" belongs to the Debian package named coreutils.
[131064310040] |But this only works if the package is installed.
[131064310050] |What if it's not?
[131064320010] |The standard tool for this is apt-file.
[131064320020] |Run apt-file update to download the index file.
[131064320030] |Here's the output:
[131064320040] |After that, run apt-file search search_term.
[131064330010] |apt-file
[131064330020] |apt-file provides the ability to search for the package providing a file (on systems like Debian or Ubuntu); it is not installed by default but is available in the repositories.
[131064330030] |For example, let's search for the not-installed binary mysqldump:
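Presumably (the exact package name and version will vary by release):

    $ apt-file search bin/mysqldump
    mysql-client-5.1: /usr/bin/mysqldump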
[131064330040] |It's also possible to list the contents of a (not-installed) package:
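For example:

    apt-file list mysql-client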
[131064330050] |yum
[131064330060] |yum accepts the command whatprovides (or provides) to search for installed or not-installed binaries:
[131064330070] |Again, the not-installed mysqldump:
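Presumably:

    yum whatprovides '*/mysqldump'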
[131064330080] |zypper
[131064330090] |openSUSE's zypper does include the command what-provides, and according to the manual, this should also search through the contents of not-installed packages.
[131064330100] |Apparently, it does not work as intended.
[131064330110] |There is a request for this feature.
[131064330120] |Webpin provides a web-based solution; there is even a script for the command line.
[131064330130] |pkgfile
[131064330140] |Available as pkgtools for pacman-based systems.
[131064330150] |Provides a search feature similar to the others above:
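Presumably:

    pkgfile mysqldump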
[131064340010] |startx -- :1.
[131064410040] |You should end up with another X session, reachable via Ctrl+Alt+F8.
[131064410050] |Any number of X servers can be started by changing the number after the colon; I don't know how to get to ones after you run out of f-keys.
[131064410060] |If you want, you can set up special .xinitrc files that start different desktop environments.
[131064410070] |So you might have a .xinitrc-kde that starts a KDE session.
[131064410080] |In that file, you'd have something like exec startkde.
[131064410090] |And you'd start X by doing startx ./.xinitrc-kde -- :1.
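A minimal sketch of the pieces just described:

    # ~/.xinitrc-kde
    exec startkde

    # then, from a text console:
    startx ./.xinitrc-kde -- :1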
[131064410100] |If you plan on running Firefox in both sessions, there may be some issues.
[131064410110] |Look into the "no-remote" and "ProfileManager" command line options for Firefox.
[131064420010] |If you've planned in advance that you want to access one application from several different X sessions, you can run it inside a virtual X server: the application displays inside the virtual X server, and the virtual X server appears as a window inside any number of real X servers.
[131064420020] |One possibility for the virtual X server is VNC.
[131064420030] |Start the vncserver program; this creates a virtual X server and runs ~/.vnc/xstartup, which typically runs ~/.xinitrc like startx.
[131064420040] |Then call xvncviewer to show a window containing the virtual X server's display.
[131064420050] |The virtual server keeps running until the session exits or you run vncserver -kill; you can connect and disconnect viewers at will.
[131064420060] |You may need to specify a display number on the command line, e.g. vncserver :3 and xvncviewer :3.
[131064420070] |VNC sessions can be accessed from different machines if no firewall gets in the way: xvncviewer somehost:3.
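Putting the whole cycle together (display number is arbitrary):

    vncserver :3              # start the virtual server on display :3
    xvncviewer localhost:3    # attach a viewer (repeatable, also from remote hosts)
    vncserver -kill :3        # shut the session down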
[131064420080] |There are multiple implementations of VNC servers and viewers.
[131064430010] |You can start a nested X server (Xnest or better Xephyr) and launch another graphic environment in it.
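A sketch with Xephyr (display number and screen size are arbitrary):

    Xephyr :2 -screen 1024x768 &
    DISPLAY=:2 startkde &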
[131064440010] |wreckcreations with no color.
[131064440060] |On the right you have PuTTY connected directly to wreckcreations with color.
[131064440070] |Note that normally byobu has color when working locally.
[131064440080] |I tried messing with $TERM, $PS1, and other variables to no avail.
[131064440090] |Any ideas what would cause this?
[131064450010] |It could be many things.
[131064450020] |Please provide the output of:
[131064450030] |Expected results:
[131064450040] |My guess: TERM is set to something unusual, and dircolors doesn't know about it, therefore ls doesn't know what colors to use.
[131064450050] |If this is the case, running the above commands inside your byobu/screen session, you would see:
[131064450060] |Confirm that this is the case by running:
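My guess at the missing check, given the "print nothing" expectation below:

    echo "$LS_COLORS"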
[131064450070] |I would expect it to print nothing.
[131064450080] |The simplest fix, depending on your configuration, would be:
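One plausible fix (a sketch; the right place depends on your shell setup):

    # in ~/.bashrc, regenerate LS_COLORS for the current terminal
    eval "$(dircolors)"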
[131064460010] |mplayer has a -softvol-max option that allows you to amplify loudness.
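For example (both options are real mplayer flags; the file name is a placeholder):

    mplayer -softvol -softvol-max 300 movie.avi   # allow up to 300% volume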
[131064480020] |VLC also by default allows you to raise the volume up to 200%, but this can be changed in the options.
[131064490010] |sigaction().
[131064520040] |So here is my question: what causes each of the signals to be sent?
[131064520050] |I realize that you can manually send signals to running applications via the -s parameter to kill, but what are the natural circumstances under which these signals are sent?
[131064520060] |For example, when does SIGINT get sent?
[131064520070] |Also, are there any restrictions on which signals can be handled?
[131064520080] |Can even SIGSEGV signals be processed and control returned to the application?
[131064530010] |In addition to processes calling kill(2), some signals are sent by the kernel (or sometimes by the process itself) in various circumstances:
[131064530020] |SIGINT (please go back to the main loop) on Ctrl+C, SIGQUIT (please quit immediately) on Ctrl+\, SIGTSTP (please suspend) on Ctrl+Z.
[131064530040] |The keys can be changed with the stty command.
SIGTTIN and SIGTTOU are sent when a background process tries to read or write to its controlling terminal.
SIGWINCH is sent to signal that the size of the terminal window has changed.
SIGHUP is sent to signal that the terminal has disappeared (historically because your modem had hung up, nowadays usually because you've closed the terminal emulator window).
SIGBUS is sent for an unaligned memory access; SIGSEGV for an access to an unmapped page; SIGILL for an illegal instruction (bad opcode); SIGFPE for a floating-point instruction with bad arguments (e.g. sqrt(-1)).
SIGALRM notifies that a timer set by the process has expired.
[131064530160] |Timers can be set with alarm, setitimer and others.
SIGCHLD notifies a process that one of its children has died.
SIGPIPE is generated when a process tries to write to a pipe when the reading end has been closed (the idea is that if you run foo | bar and bar exits, foo gets killed by a SIGPIPE).
SIGPOLL (also called SIGIO) notifies the process that a pollable event has occurred.
[131064530200] |POSIX specifies pollable events registered through the I_SETSIG ioctl.
[131064530210] |Many systems allow pollable events on any file descriptor, set via the O_ASYNC fcntl flag.
[131064530220] |A related signal is SIGURG, which notifies of urgent data on a device (registered via the I_SETSIG ioctl) or socket.
SIGPWR is sent to all processes when the UPS signals that a power failure is imminent.
signal.h.
[131064530260] |Most signals can be caught and handled (or ignored) by the application.
[131064530270] |The only two portable signals that cannot be caught are SIGKILL (just die) and SIGSTOP (stop execution).
[131064530280] |SIGSEGV (segmentation fault) and its cousin SIGBUS (bus error) can be caught, but it's a bad idea unless you really know what you're doing.
[131064530290] |A common application for catching them is printing a stack trace or other debug information.
[131064530300] |A more advanced application is to implement some kind of in-process memory management, or to trap bad instructions in virtual machine engines.
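For instance, a shell script can catch SIGINT with the trap builtin (a minimal sketch):

    #!/bin/sh
    # Report the first Ctrl+C instead of dying, then restore default handling
    trap 'echo "caught SIGINT"; trap - INT' INT
    sleep 60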
[131064540010] |To answer your second question first: SIGSTOP and SIGKILL cannot be caught by the application, but every other signal can, even SIGSEGV.
[131064540020] |This property is useful for debugging -- for instance, with the right library support, you could listen for SIGSEGV and generate a stack backtrace to show just where that segfault happened.
[131064540030] |The official word (for Linux, anyway) on what each signal does is available by typing man 7 signal from a Linux command line. http://linux.die.net/man/7/signal has the same information, but the tables are harder to read.
[131064540040] |However, without some experience with signals, it's hard to know from the short descriptions what they do in practice, so here's my interpretation:
[131064540050] |SIGINT happens when you hit CTRL+C.
SIGQUIT is triggered by CTRL+\, and dumps core.
SIGTSTP suspends your program when you hit CTRL+Z.
[131064540090] |Unlike SIGSTOP, it is catchable, which gives programs like vi a chance to reset the terminal to a safe state before suspending themselves.
SIGHUP ("hangup") is what happens when you close your xterm (or otherwise disconnect the terminal) while your program is running.
SIGTTIN and SIGTTOU pause your program if it tries to read from or write to the terminal while it's running in the background.
[131064540130] |For SIGTTOU to happen, I think the program needs to be writing to /dev/tty, not just default stdout.
SIGILL means an illegal or unknown processor instruction.
[131064540170] |This might happen if you tried to access processor I/O ports directly, for example.
SIGFPE means there was a hardware math error; most likely the program tried to divide by zero.
SIGSEGV means your program tried to access an unmapped region of memory.
SIGBUS means the program accessed memory incorrectly in some other way; I won't go into details for this summary.
SIGPIPE happens if you try to write to a pipe after the pipe's reader closed their end.
[131064540230] |See man 7 pipe.
SIGCHLD happens when a child process you created either quits or is suspended (by SIGSTOP or similar).
SIGABRT is usually caused by the program calling the abort() function, and causes a core dump by default.
[131064540270] |Sort of a "panic button".
SIGALRM is caused by the alarm() system call, which will cause the kernel to deliver a SIGALRM to the program after a specified number of seconds.
[131064540290] |See man 2 alarm and man 2 sleep.
SIGUSR1 and SIGUSR2 are used however the program likes.
[131064540310] |They could be useful for signaling between processes.
kill command, or fg or bg in the case of SIGCONT.
[131064540340] |SIGKILL and SIGSTOP are the unblockable signals.
[131064540350] |The first always terminates the process immediately; the second suspends the process.
SIGCONT resumes a suspended process.
SIGTERM is a catchable version of SIGKILL.
mdadm seems to be the better approach ATM.
[131064560110] |(You can use the md device as a physical device for some LVM setup.)
[131064560120] |First, it does not need an extra log (and does not do an expensive resync at every startup).
[131064560130] |Second, LVM mirrors do not seem to support parallel reading, i.e. md mirrors should have better read performance:
[131064560140] |http://serverfault.com/questions/97845/lvm-mirroring-vs-raid1
[131064560150] |http://serverfault.com/questions/126851/linux-lvm-mirror-vs-md-mirror
[131064570010] |.zshrc which save on typing and allow integration with screen.
[131064570030] |For example:
[131064570040] |However, when I use these shell functions, I can't take advantage of zsh's built in command completion features.
[131064570050] |(In the case of SSH, automatically completing hostnames based on known_hosts and ssh_config).
[131064570060] |What's the preferred way to hook in to the existing completion mechanisms for commands I've created wrappers for?
[131064570070] |EDIT:
[131064570080] |Thanks to Gilles for his answer below.
[131064570090] |Using compdef appears to be the way to do this, but interestingly it doesn't seem to work uniformly.
[131064570100] |In this case, I have the following setup:
[131064570110] |Completion for my wrapper function 'm' works as expected, however completion for my function 's' does not; rather than attempting to complete hostnames, it appears to fall back to default autocompletion by providing me a list of files when I hit 'tab'.
[131064570120] |Is there some oddity in the way that SSH completion is handled which means I need to do something further?
[131064580010] |I use the following function to say that a function or alias or wrapper script (e.g. s) is to be completed like an existing command (e.g. ssh):
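Presumably something along these lines, using zsh's compdef builtin:

    # complete the wrapper s exactly like ssh
    compdef s=ssh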
[131064580020] |Some completion commands apply to a family of functions and read the first word of the command line to determine which particular command to complete.
[131064580030] |For example, the commands ssh, scp, sftp and a few more are all completed by the function _ssh.
[131064580040] |In that case, you need to tell the completion function which “service” your function is like (by default, the service is the executable name, here your function's name).
[131064590010] |*.deb format's biggest strength.
[131064600020] |Since the advent of yum, that's less true, if it's true at all.
[131064610010] |One thing I like about RPMs is the (recent?) addition of delta RPMs.
[131064610020] |This allows for easier updating, reducing bandwidth required.
[131064610030] |DEBs are standard ar files (with more standard archives inside), RPMs are "proprietary" binary files.
[131064610040] |I personally think the former is more convenient.
[131064610050] |Just two things I can think of, off the top of my head.
[131064610060] |Both are very comparable.
[131064610070] |Both have excellent tools for packaging.
[131064610080] |I don't think there are so many merits for one over the other or vice versa.
[131064620010] |RPM:
[131064620020] |apt-get to rpm -i, and therefore say DEB better.
[131064660020] |This, however, has nothing to do with the DEB file format.
[131064660030] |The real comparison is dpkg vs rpm and aptitude/apt-* vs zypper/yum.
[131064660040] |From a user's point of view, there isn't much difference in these tools.
[131064660050] |The RPM and DEB formats are both just archive files, with some metadata attached to them.
[131064660060] |They are both equally arcane, have hardcoded install paths (yuk!) and differ only in subtle details.
[131064660070] |Both dpkg -i and rpm -i have no way of figuring out how to install dependencies, except if they happen to be specified at the command line.
[131064660080] |On top of these tools, there is repository management in the form of apt-... or zypper/yum.
[131064660090] |These tools download repositories, track all metadata and automate the downloading of dependencies.
[131064660100] |The final installation of each single package is handed over to the low-level tools.
[131064660110] |For a long time, apt-get has been superior in processing the enormous metadata really fast, where yum would take ages to do it.
[131064660120] |RPM also suffered from sites like rpmfind, where you found 10+ incompatible packages for different distributions.
[131064660130] |Apt completely hid this problem for DEB packages, because all packages got installed from the same source.
[131064660140] |In my opinion, zypper has really closed the gap with apt, and there is no reason to be ashamed of using an RPM-based distribution these days.
[131064670010] |There is also the "philosophical" difference where in Debian packages you can ask questions and by this, block the installation process.
[131064670020] |The bad side of this is that some packages will block your upgrades until you reply.
[131064670030] |The good side (also a philosophical difference) is that on Debian-based systems, when a package is installed, it is configured (not always as you'd like) and running.
[131064670040] |Not so on Red Hat-based systems, where you need to create or copy a default/template configuration file from /usr/share/doc/*.
[131064680010] |From a system administrator's point of view, I've found a few minor differences, mainly in the dpkg/rpm tool set rather than the package format.
[131064680020] |dpkg-divert makes it possible to have your own file displace the one coming from a package.
[131064680030] |It can be a lifesaver when you have a program that looks for a file in /usr or /lib and won't take /usr/local for an answer.
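A sketch of a diversion (the library name here is made up):

    dpkg-divert --add --rename \
        --divert /usr/lib/libexample.so.distrib /usr/lib/libexample.so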
[131064680040] |The idea has been proposed, but as far as I can tell not adopted, in rpm.
*.rpmsave (IIRC).
[131064680060] |This has made my system unbootable at least once.
[131064680070] |Dpkg asks me what to do, with keeping my customizations as the default.
(ar, tar, gzip), so you can inspect (and, in a pinch, tweak) deb packages easily.
[131064680160] |Rpm packages aren't nearly as friendly.
mount command.
[131064730080] |I want to mount /dev/sda1 to /target. /dev/sda1 is ext3.
[131064730090] |When I try mount -t ext3 /dev/sda1 /target, it states: mount -t ext3 /dev/sda1 /target/ failed: Invalid argument.
[131064730110] |To get a place (/target) I simply did mkdir /target.
[131064730120] |Perhaps this is not the proper way to do this?
[131064730130] |Thanks =)
[131064740010] |You're doing it the right way.
[131064740020] |It may be that the device /dev/sda1 doesn't exist yet.
[131064740030] |You also probably don't need to specify -t ext3, since that should be the default.
[131064740040] |I don't expect having it would cause any problem though.
[131064750010] |/home between radically different distributions.
[131064780020] |Two versions of the same program reading and writing the same config files could result in problems, e.g. if the newer version writes something that the older version does not understand.
[131064780030] |If you don't mind the paths being different, save your files in the /home for one distro and mount that /home at another location on the other distro (such as /home//fedora).
[131064780040] |Then, /home//foo/bar can be accessed via /home//fedora/foo/bar on arch, for example.
[131064780050] |If you want the paths to be the same, save most of your files to a third, distinct partition, and mount it in the same place within both distributions, e.g. /home//stuff.
[131064790010] |You can use symlinks
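Presumably something like this, with a made-up shared location:

    ln -s /shared/Documents ~/Documents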
[131064790020] |On each distro once.
[131064790030] |Now each distro has its own configuration files.
[131064800010] |You can set the default Documents folder to a different location or partition, and the same for other folders, like the Desktop folder, the Download folder and so forth.
[131064800020] |Each application has its own way of using the default paths, so the first time will be a long job...
[131064800030] |Some examples
[131064800040] |KDE http://docs.kde.org/stable/en/kdebase-workspace/kcontrol/paths/index.html
[131064800050] |GNOME http://ubuntuforums.org/showthread.php?t=631711
[131064800060] |If you don't find instructions on how to change the defaults for some software, you can ask here again.
[131064800070] |Then there is the hard but intelligent way, which is to set up different distributions on the same PC sharing the same kernel.
[131064800080] |For the sake of curiosity, I advise you (all) to take a look at this article:
[131064800090] |http://teddziuba.com/2011/01/multiple-concurrent-linux-distros.html
[131064810010] |I'd recommend using symlinks for all the common configuration files you find yourself missing from one to the other.
[131064810020] |Create a new directory in a place accessible to both distros, move the files and symlink from there.
[131064810030] |Not only does this control exactly what gets shared, but it makes it very easy to move your preferences to other machines, to put them under version control if you need and to back them up.
[131064810040] |There are even tools to help you do these things based on the assumption you are working this way (see, for example, homesick).
[131064810050] |As far as setting common directories for things such as documents, videos, music etc., there is a standard for this in the form of XDG user dirs, which configures things like desktop, music, images, videos, etc. (http://freedesktop.org/wiki/Software/xdg-user-dirs).
[131064810060] |The directories can be outside your home dir, or you can symlink as you like and set the dirs to point at the symlinks.
[131064810070] |I know Gnome works with these and assume KDE does too.
[131064810080] |I did try using the entire same home dir in the past, and different versions of applications quickly caused problems.
[131064820010] |You can share home directories between distributions, even between different unix variants.
[131064820020] |People with home directories shared via NFS on a heterogeneous network do it all the time.
[131064820030] |You may run into trouble if you run different versions of some programs on different systems sharing the same home directories.
[131064820040] |Troublesome programs are typically the ones with the fanciest GUIs, such as Gnome.
[131064820050] |For example Firefox will happily upgrade your profile to a newer version but might not let you load that profile again in an earlier version.
[131064830010] |It certainly is possible to share a home folder (or partition) over different linux distributions.
[131064830020] |But take the following notes:
[131064830030] |eclipse IDE installed on all distributions and want the same configuration and source files available everywhere.
[131064830140] |You can create symbolic links on each distributions home folder to the shared one to achieve this.
[131064830150] |This would be Ubuntu:
[131064830160] |And openSUSE:
[131064830170] |And so on..
[131064830180] |If you're not sure about interfering configuration files, try the second, safer way and find out which home components can be shared easily between the installed distributions.
[131064840010] |ubuntu or centos (I haven't tried others) as the release name; however, I have a feeling that there must be an easier, more reliable way of finding this out...
[131064840050] |No?
[131064850010] |Most recent distributions have a tool called lsb_release.
[131064850020] |Your /etc/*-release will be using /etc/lsb-release anyway, so if that file is there, running lsb_release should work too.
[131064850030] |I think uname to get ARCH is still the best way.
[131064850040] |e.g.
[131064850050] |Or you could just source /etc/lsb-release:
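For example (the DISTRIB_* variables are the ones /etc/lsb-release defines):

    . /etc/lsb-release
    echo "$DISTRIB_ID $DISTRIB_RELEASE ($DISTRIB_CODENAME)"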
[131064850060] |If you have to be compatible with older distributions, there is no single file you can rely on.
[131064850070] |Either fall back to the output from uname, e.g.
[131064850080] |or handle each distribution separately:
[131064850090] |Of course, you can combine all this:
[131064850100] |Finally, your ARCH obviously only handles Intel systems.
[131064850110] |I'd either call it BITS like this:
[131064850120] |Or change ARCH to be the more common, yet unambiguous versions: x86 and x64 or similar:
[131064850130] |but of course that's up to you.
[131064860010] |If you can't or don't want to use the LSB release file (due to the dependencies the package brings in), you can look for the distro-specific release files.
[131064860020] |Bcfg2 has a probe for the distro you might be able to use: http://trac.mcs.anl.gov/projects/bcfg2/browser/doc/server/plugins/probes/group.txt
[131064870010] |If the file /etc/debian_version exists, it is Debian, or a Debian derivative.
[131064870020] |This file may have a release number; on my machine it is currently 6.0.1.
[131064870030] |If it is testing or unstable, it may say testing/unstable, or it may have the number of the upcoming release.
[131064870040] |My impression is that on Ubuntu at least, this file is always testing/unstable, and that they don't put the release number in it, but someone can correct me if I am wrong.
[131064870050] |Fedora (recent releases at least), have a similar file, namely /etc/fedora-release.
[131064880010] |Type the command below:
[131064880020] |cat /etc/issue
[131064890010] |lsb_release -a.
[131064890020] |Works on Debian and I guess Ubuntu, but I'm not sure about the rest.
[131064890030] |Normally it should exist in all GNU/Linux distributions since it is LSB (Linux Standard Base) related.
[131064900010] |I'd go with this as a first step:
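Perhaps something like:

    cat /etc/*-release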
[131064900020] |Gentoo, RedHat, Arch & SuSE have a file called e.g. /etc/gentoo-release.
[131064900030] |Seems to be popular, check this site about release-files.
[131064900040] |Debian & Ubuntu should have a /etc/lsb-release which also contains release info, and will show up with the previous command.
[131064900050] |Another quick one is uname -rv.
[131064900060] |If the kernel installed is the stock distro kernel, you'll usually find the name in there.
[131064910010] |In order of most probable success, these:
[131064910020] |cover most cases (AFAIK): Debian, Ubuntu, Slackware, Suse, Redhat, Gentoo, *BSD and perhaps others.
[131064920010] |This is a duplicate of Bash: Get Distribution Name and Version Number
[131064920020] |In brief, most of the time lsb_release -a or lsb_release -si will work.
[131064920030] |Or you can use a script like this to handle the case where lsb_release is not available.
[131064920040] |If you're typing it interactively, and so prefer something easy to type, and don't care what the output is, you can just do.
[131064930010] |mkdir A B C D E F to create each directory.
[131064930030] |How do I create directories A-Z or 1-100 without typing in each letter or number?
[131064940010] |It's probably easiest to just use a for loop:
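Presumably:

    for x in {A..Z}; do mkdir "$x"; done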
[131064940020] |You need at least bash 3.0 though; otherwise you have to use something like seq:
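    for i in $(seq 1 100); do mkdir "$i"; done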
[131064950010] |On Linux you can generate sequences of digits with the "seq" command, but this doesn't exist on all Unix systems.
[131064950020] |For example to generate directories from 1 to 100:
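Presumably:

    mkdir $(seq 1 100)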
[131064950030] |While you can certainly make directories A to Z with shell utils:
[131064950040] |It's probably a lot less ugly to just use Perl:
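A plausible one-liner (Perl's mkdir defaults to $_):

    perl -e 'mkdir for "A".."Z"'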
[131064960010] |The {} syntax is Bash syntax not tied to the for construct.
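A bare expansion such as (presumably)

    mkdir {A..Z}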
[131064960020] |is sufficient all by itself.
[131064960030] |http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion
[131064970010] |You can also do more complex combinations (try these with echo instead of mkdir so there's no cleanup afterwards):
[131064970020] |Compare
[131064970030] |to
[131064970040] |If you have Bash 4, try
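Perhaps the zero-padded form, which Bash 4 added:

    echo {01..10}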
[131064970050] |and
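and step increments, also new in Bash 4:

    echo {a..z..3}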
su-ing to root from the user account?
[131064990040] |Having set a root password but not being able to log in to the terminal as root or su-ing to root would indicate a problem.
[131064990050] |Not being able to login to X or via ssh as root would more likely be the result of good default security restrictions.
[131064990060] |If su works but you still want sudo, then you can just run su -c visudo and add your user account to the sudoers file.
aptitude search '!~i'.
[131065020030] |The list is very long (more than 30k lines).
[131065020040] |It can be interesting to suppress virtual packages also: aptitude search '!~i !~v'
[131065030010] |Type the command as follows:
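Presumably:

    rpm -qa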
[131065030020] |With this command you can check your installed RPM list; after that you can easily find which RPM is not installed.
[131065040010] |e2image utility to take an online backup of that server, but when I run e2image -r /dev/sda1 sda1image, the command does not run; instead it shows this error:
[131065040050] |Could anyone help me with how I can take a backup of my whole server?
[131065050010] |deploy-user and have written a backup script to back up a number of websites associated with this user.
[131065050030] |However, one of the sites I'm trying to back up has a directory /home/usera/web/www.example.com/some/random_dir that is owned by apache-data-user.
[131065050040] |What permissions would I give deploy-user to be able to back up that directory?
[131065050050] |Options I am aware of are either:
[131065050060] |Run as root, which I don't really want to do.
Add apache-data-user and deploy-user to the same group.
[131065050080] |But then apache-data-user will have too many permissions.
Use ACLs to give deploy-user the right to read /home/usera/web/www.example.com/some/random_dir and its contents.
[131065090020] |To enable ACLs, you may need to add the acl option to the entry for the filesystem in /etc/fstab and install your distribution's acl package.
[131065090030] |Under Linux, the following commands give deploy-user the right to read and traverse the whole hierarchy rooted at /home/usera/web:
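A sketch of the idea with setfacl (rX grants read plus traverse on directories; the second command sets default ACLs so new files inherit the right):

    setfacl -R -m u:deploy-user:rX /home/usera/web
    find /home/usera/web -type d -exec setfacl -m d:u:deploy-user:rx {} +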
[131065100010] |rc.xml file, then add something like this in the middle of it:
[131065110060] |Unless you meant unminimize / restore rather than maximize, i.e. a binding that works even when the window isn't focussed.
[131065110070] |In that case, I'd suggest using xbindkeys and wmctrl.
[131065110080] |You'd have to write a script that runs wmctrl to find the uzbl window using wmctrl -l, then run either wmctrl -a or wmctrl -R, then add an entry in .xbindkeysrc to run that script whenever a specific keyboard combination is pressed.
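A sketch of what that could look like (script name, path and key binding are made up):

    #!/bin/sh
    # raise-uzbl.sh: activate the first window whose title matches "uzbl"
    wmctrl -a uzbl

and in ~/.xbindkeysrc:

    "sh $HOME/bin/raise-uzbl.sh"
      Mod4 + u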
[131065120010] |In that case, I'd suggest using xbindkeys and wmctrl.
[131065120020] |I've googled for "wmctrl examples" and I found this: http://spiralofhope.com/wmctrl-examples.html#s12
[131065120030] |So I added the following lines to my openbox configuration:
[131065120040] |It works exactly as I wanted, thanks!
[131065130010] |chmod g+w testfile) and running ls -l testfile gives:
[131065130030] |I then added a user to that group ("/etc/group" has a user1:x:1000:user2 line), but am failing to edit that file as user2.
[131065130040] |Why is this so?
[131065140010] |You might need to have user2 log out and back in (or just try ssh'ing in to create a new login session).
[131065140020] |Check the output of id --groups to show the numeric group ids for a user.
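For example:

    id --groups user2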
[131065150010] |element from each of these files, so I'm going to write a script to loop through each file.
[131065150040] |The element structure is like this:
[131065150050] |Can anyone suggest a method by which I can extract the div the_div_id and all the child elements and content from a file using the Linux command line?
[131065160010] |The html-xml-utils package, available in most major Linux distributions, has a number of tools that are useful when dealing with HTML and XML documents.
[131065160020] |Particularly useful for your case is hxselect, which reads from standard input and extracts elements based on CSS selectors.
[131065160030] |Your use case would look like:
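Presumably something like this (hxnormalize -x cleans the markup enough for hxselect to parse; file.html is a placeholder):

    hxnormalize -x file.html | hxselect -i 'div#the_div_id'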
[131065160040] |You might get a complaint about input not being well formed depending on what you are feeding it.
[131065160050] |This complaint is given over standard error and thus can be easily suppressed if needed.
[131065160060] |An alternative would be to use Perl's HTML::Parser package; however, I will leave that to someone with Perl skills less rusty than my own.
[131065170010] |Here's an untested Perl script that extracts elements and their contents using HTML::TreeBuilder.
[131065170020] |If you're allergic to Perl, Python has HTMLParser.
[131065170030] |P.S.
[131065170040] |Do not try using regular expressions.
[131065180010] |How do you move all files (including hidden) in a directory to another?
[131065180020] |Possible Duplicate: How do you move all files (including hidden) in a directory to another?
[131065180030] |How do I move all files in a directory (including the hidden ones) to another directory?
[131065180040] |For example, if I have a folder "Foo" with the files ".hidden" and "notHidden" inside, how do I move both files to a directory named "Bar"?
[131065180050] |The following does not work, as the ".hidden" file stays in "Foo".
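The failing command is not shown; presumably it was something like this, where * does not match dotfiles:

    mv Foo/* Bar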
[131065180060] |Note: Try it yourself.
[131065200010] |Linux: How to move all files from current directory to upper directory?
[131065210010] |From man bash
[131065210020] |dotglob If set, bash includes filenames beginning with a '.' in the results of pathname expansion.
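So, for example:

    shopt -s dotglob    # make * match .hidden as well
    mv Foo/* Bar/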
[131065220010] |One way is to use find:
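Presumably along these lines (see below for -maxdepth/-mindepth to control recursion):

    find Foo/ -type f -exec mv {} Bar/ \;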
[131065220020] |The -type f restricts the find command to finding files.
[131065220030] |You should investigate the -type, -maxdepth, and -mindepth options of find to customize your command to account for subdirectories.
[131065220040] |Find has a lengthy but very helpful manual page.
[131065230010] |Zsh
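Presumably:

    mv Foo/*(DN) Bar/    # D: match dotfiles too, N: expand to nothing if empty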
[131065230020] |or
[131065230030] |(Leave out the (N) if you know the directory is not empty.)
[131065230040] |Bash
[131065230050] |Ksh93
[131065230060] |If you know the directory is not empty:
[131065230070] |Standard (POSIX) sh
[131065230080] |If you're willing to let the mv command return an error status even though it succeeded, it's a lot simpler:
[131065230090] |GNU find and GNU mv
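A sketch using GNU extensions:

    find Foo/ -mindepth 1 -maxdepth 1 -exec mv -t Bar/ -- {} +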
[131065230100] |Standard find
[131065230110] |If you don't mind changing to the source directory:
[131065240010] |How to remove an audio track from an mp4 video file?
[131065240020] |I have an mp4 video file with multiple audio tracks.
[131065240030] |I would like to strip away the rest of the tracks and keep just one.
[131065240040] |How do I do this?
[131065250010] |FFMPEG might be a helpful solution for you.
[131065260010] |http://howto-pages.org/ffmpeg/#strip
[131065260020] |Please see the example.
[131065270010] |You could try avidemux, which handles only one audio file internally (and therefore forces you to select one specific track).
[131065270020] |I've never tried it with .mp4 but it works with other formats.
[131065280010] |First run ffmpeg -i file.mp4 to see which streams exist in your file.
[131065280020] |You should see something like this:
[131065280030] |Then run ffmpeg -i file.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy new_file.mp4 to copy the video stream and the 2nd audio stream to new_file.mp4 (note that -map, being an output option, must come after -i).
[131065290010] |How does Linux kernel compare to microkernel architectures?
[131065290020] |I read once that one advantage of a microkernel architecture is that you can stop/start essential services like networking and filesystems, without needing to restart the whole system.
[131065290030] |But considering that the Linux kernel nowadays (was it always the case?) offers the option to use modules to achieve the same effect, what are the (remaining) advantages of a microkernel?
[131065300010] |A microkernel limits the time the system is in kernel mode, as opposed to userspace, to the absolute minimum possible.
[131065300020] |If a crash happens in kernel mode, the entire kernel goes down, and that means the entire system goes down.
[131065300030] |If a crash happens in user mode, just that process goes down.
[131065300040] |Linux is robust in this regard, but it's still possible for any kernel subsystem to write over the memory of any other kernel subsystem, either purposefully or accidentally.
[131065300050] |The microkernel concept puts a lot of stuff that is traditionally kernel mode, such as networking and device drivers, in userspace.
[131065300060] |Since the microkernel isn't really responsible for a lot, that also means it can be simpler and more reliable.
[131065300070] |Think of the way the IP protocol, by being simple and stupid, really leads to robust networks by pushing complexity to the edges and leaving the core lean and mean.
[131065310010] |Microkernels require less code to be run in the innermost, most trusted mode than monolithic kernels.
[131065310020] |This has many aspects, such as:
[131065310030] |Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will.
[131065310040] |This is mostly achievable on Linux, through modules.
[131065310050] |Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it.
[131065310060] |A buggy filesystem or device driver can crash a Linux system.
[131065310070] |Linux doesn't have any way to mitigate these problems other than coding practices and testing.
[131065310080] |Microkernels have a smaller trusted computing base.
[131065310090] |So even a malicious device driver or filesystem cannot take control of the whole system (for example a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk).
[131065310100] |A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel.
[131065310110] |Unix GUIs are provided via X window, which is userland code (except for (part of) the video device driver).
[131065310120] |Many modern unices allow ordinary users to load filesystem drivers through FUSE.
[131065310130] |Some of the Linux network packet filtering can be done in userland.
[131065310140] |However, device drivers, schedulers, memory managers, and most networking protocols are still kernel-only.
[131065310150] |A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate.
[131065310160] |Twenty years later, one could say that Linux is very very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go.
[131065310170] |Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes.
[131065320010] |The fact is that the Linux kernel is a hybrid of monolithic and microkernel designs.
[131065320020] |In a pure monolithic implementation there is no loading of modules at runtime.
[131065330010] |Just take a look at x86 architecture -- monolithic kernel only uses rings 0 and 3.
[131065330020] |A waste, really.
[131065330030] |But then again it can be faster, because of less context switching.
[131065340010] |You should read the other side of the issue:
[131065340020] |Extreme High Performance Computing or Why Microkernels suck
[131065340030] |The File System Belongs In The Kernel
[131065350010] |How come I suddenly don't have any audio?
[131065350020] |Just a few minutes ago my sound was working, and then it stopped!
[131065350030] |I suspect it was caused by me checking out some video editors.
[131065350040] |How do I go about troubleshooting this?
[131065350050] |What I've so far tried:
[131065350060] |I ran lsof | grep audio and lsof | grep delete to see if there's any process locking the audio path(?), but nothing looks suspect.
[131065350070] |VLC and MPlayer are affected, while Quod Libet (GStreamer) isn't.
[131065350080] |[update] Strange one.
[131065350090] |I don't know if it has anything to do with Quod Libet, but I noticed that after closing (and reopening) it, the problem seemed to disappear.
[131065350100] |Note that I haven't logged out yet.
[131065360010] |You might try logging out of your desktop and logging back in.
[131065360020] |Sometimes this is enough to kill any locks, delete tmp files, reset any other configuration gizmos that might have gotten left by a mis-behaving application.
[131065360030] |You might also try poking through the Sound Preferences configuration for the hardware selections and make sure that your selected output hardware looks correct.
[131065360040] |Knowing which distro you're using might help in getting more suggestions.
[131065370010] |It might be because of pulseaudio.
[131065370020] |Try killing it and rerunning the application.
[131065380010] |Any way to sync directory structure when the files are already on both sides?
[131065380020] |I have two drives with the same files, but the directory structure is totally different.
[131065380030] |Is there any way to 'move' all the files on the destination side so that they match the structure of the source side?
[131065380040] |With a script perhaps?
[131065380050] |For example, drive A has:
[131065380060] |Whereas drive B has:
[131065380070] |The files in question are huge (800GB), so I don't want to re-copy them; I just want to sync the structure by creating the necessary directories and moving the files.
[131065380080] |I was thinking of a recursive script that would find each source file on the destination, then move it to a matching directory, creating it if necessary.
[131065380090] |But that's beyond my abilities ...
[131065380100] |Any help greatly appreciated!
[131065380110] |Thanks
[131065380120] |UPDATE: Another elegant solution was given here: http://superuser.com/questions/237387/any-way-to-sync-directory-structure-when-the-files-are-already-on-both-sides/238086
[131065390010] |How about something like this:
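A hypothetical reconstruction matching the description and caveats that follow (paths are made up):

    #!/bin/sh
    # Build a list of files on the destination, then, for each file on the
    # source, find the destination file with the same name and move it to
    # the source's relative path.
    src=/mnt/A; dst=/mnt/B
    cd "$dst" || exit 1
    find . -type f > /tmp/dst-files
    find "$src" -type f | while IFS= read -r f; do
        rel=${f#"$src"/}                          # path relative to the source root
        name=$(basename "$f")
        cur=$(grep "/$name\$" /tmp/dst-files | head -n 1)
        [ -n "$cur" ] || continue
        mkdir -p "$(dirname "$rel")"
        mv "$cur" "$rel"
    done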
[131065390020] |This assumes that names of the files you want to sync are unique across the whole drive: otherwise there's no way it can be fully automated (however, you can provide a prompt for user to choose which file to pick if there's more that one.)
[131065390030] |The script above will work in simple cases, but may fail if name happens to contain symbols which have special meaning for regexps.
[131065390040] |The grep on the list of files can also take a lot of time if there are lots of files.
[131065390050] |You may consider translating this code to use a hashtable which will map filenames to paths, e.g. in Ruby.
[131065400010] |I'll go with Gilles and point you to Unison as suggested by hasen j. Unison was DropBox 20 years before DropBox.
[131065400020] |Rock solid code that a lot of people (myself included) use every day -- very worthwhile to learn.
[131065400030] |Still, join needs all the publicity it can get :)
[131065400050] |Basically, I wanted to demonstrate the little-known join utility, which does just that: joins two tables on some field.
[131065400070] |(edit some directory and/or file names in new).
[131065400080] |Now, we want to build a map: hash -> filename for each directory, and then use join to match up files with the same hash.
[131065400090] |To generate the map, put the following in makemap.sh:
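The original script is lost; a hypothetical reconstruction that produces the 'hash "filename"' lines described below, sorted so that join(1) can merge on field 1:

    #!/bin/sh
    # usage: makemap.sh DIR > map.txt
    find "$1" -type f -exec md5sum {} + |
    while IFS= read -r line; do
        hash=${line%% *}
        file=${line#* }; file=${file# }   # md5sum separates hash and name with two spaces
        printf '%s "%s"\n' "$hash" "$file"
    done | sort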
[131065400100] |makemap.sh spits out a file with lines of the form 'hash "filename"', so we just join on the first column:
[131065400110] |This generates moves.txt, which looks like this:
[131065400120] |The next step would be to actually do the moves, but my attempts got stuck on quoting... mv -i and mkdir -p should come in handy.
[131065410010] |Here's my attempt at an answer.
[131065410020] |As a forewarning, all my scripting experience comes from bash, so if you are using a different shell, the command names or syntax may be different.
[131065410030] |This solution requires creating two seperate scripts.
[131065410040] |This first script is responsible for actually moving the files on the destination drive.
[131065410050] |The second script creates the md5 map file used by the first script and then calls the first script on every file in the destination drive.
[131065410060] |Basically, what is going on is that the two scripts simulate an associative array with $md5_map_file.
[131065410070] |First, all the md5s for the files on the source drive are computed and stored.
[131065410080] |Associated with the md5s are the relative paths from the drive's root.
[131065410090] |Then, for each file on the destination drive, the md5 is computed.
[131065410100] |Using this md5, the path of that file on the source drive is looked up.
[131065410110] |The file on the destination drive is then moved to match the path of the file on the source drive.
[131065410120] |There are a couple of caveats with this script:
[131065410130] |It assumes that every file in $dst is also in $src
[131065410140] |It does not remove any directories from $dst, only moves the files.
[131065410150] |I am currently unable to think of a safe way to do this automatically
[131065410160] |Good luck and I hope this helped.
[131065420010] |Use Unison as suggested by hasen j.
[131065420020] |I'm leaving this answer up as a potentially useful scripting example or for use on a server with only basic utilities installed.
[131065420030] |I'll assume that the file names are unique throughout the hierarchy.
[131065420040] |I'll also assume that no file name contains a newline, and that the directory trees only contain directories and regular files.
[131065420050] |First collect the file names on the source side.
[131065420060] | Then move the files into place on the destination side.
[131065420070] |First, create a flattened tree of files on the destination side.
[131065420080] |Use ln instead of mv if you want to keep hard links around in the old hierarchy.
[131065420090] |If some files may be missing in the destination, create a similarly flattened /A.staging and use rsync to copy the data from the source to the destination.
[131065420100] | Now rename the files into place.
[131065420110] |Equivalently:
[131065420120] | Finally, if you care about the metadata of the directories, call rsync with the files already in place.
[131065420130] |Note that I haven't tested the snippets in this post.
[131065420140] |Use at your own risk.
[131065420150] |Please report any error in a comment.
[131065430010] |There's a utility called unison:
[131065430020] |http://www.cis.upenn.edu/~bcpierce/unison/
[131065430030] |Description from site:
[131065430040] |Unison is a file-synchronization tool for Unix and Windows.
[131065430050] |It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.
[131065440010] |ls, regexp and environment variable
[131065440020] |Hello, I wanted to declare an environment variable that stores all the extensions of video files so I can use it in the shell.
[131065440030] |I tried several things but never got it to work: If in my .bash_profile I put:
[131065440040] |it only takes the last element:
[131065440050] |If in my .bash_profile I put:
[131065440060] |or
[131065440070] |Then, when I display it it looks OK, but it doesn't work when I use it in a ls for example:
[131065440080] |And when I run the exact same command without using the variable, it works:
[131065440090] |Also, when I reboot, it looks like my .bash_profile is not loading, and the $VIDEOS variable is empty.
[131065440100] |I have to do a source ~/.bash_profile in order to get it to work (and I have to redo it every time I open a new terminal).
[131065440110] |Any idea?
[131065440120] |Thanks!
[131065450010] |Your command is being expanded to this:
[131065450020] |Run this to see what's happening:
[131065450030] |(it's called brace expansion)
[131065450040] |The second problem is that bash does brace expansion before parameter expansion, not after it, so anything that looks like your solution will be messy.
[131065450050] |You would have to do something like this:
[131065450060] |which will get annoying to type every time.
[131065450070] |How about something like this:
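The suggested snippet is missing; presumably it was a helper function along these lines (extensions and the function name are illustrative):

    VIDEO_EXTS="avi mpg mpeg mkv mp4"
    videos() {
        # list video files in the given directory (default: current one)
        for ext in $VIDEO_EXTS; do
            ls "${1:-.}"/*."$ext" 2>/dev/null
        done
    }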
[131065450080] |Then instead of doing:
[131065450090] |just do this:
[131065450100] |or if you need to pass it to a command:
[131065450110] |This part working:
could be the clue to .bash_profile not working.
[131065450130] |For example, it might mean you are using zsh.
[131065450140] |Please tell us what this does:
[131065450150] |so we can figure out which file you have to put it in.
[131065460010] |You could probably use arrays for this.
[131065460020] |Your syntax will have to be a bit different:
[131065460030] |and then do
[131065460040] |If you only want to list videos with names starting with foo, you would do
[131065470010] |How do font managers work in Fedora?
[131065470020] |I am looking to do some font management on my Fedora system.
[131065470030] |I have installed both Font Manager and Fontmatrix.
[131065470040] |My goal was to be able to have more fonts installed than I wanted displayed, and to be able to enable/disable fonts (or groups thereof) whenever I wanted to.
[131065470050] |Both programs claim to enable or disable fonts.
[131065470060] |I can't seem to get Fontmatrix to do anything other than be a comprehensive font information source (glyphs, etc.); the enable/disable doesn't appear to work, and the documentation is less than helpful.
[131065470070] |I am able to disable/enable fonts in Font Manager.
[131065470080] |I had to recreate my Gnome settings, though, because I accidentally disabled all fonts, and even re-enabling them did not fix my panel fonts.
[131065470090] |There wasn't anything I could do, short of removing my local configuration and logging out/in, that would get those fonts back.
[131065470100] |So.
[131065470110] |What exactly do these programs do when they disable a font?
[131065470120] |And what trashed my panel fonts?
[131065470130] |I know Monospace was still installed/enabled, and nothing I could do would change the panel information.
[131065470140] |Thanks in advance!
[131065480010] |How to install Debian from USB? (Using full size image not netinstall)
[131065480020] |I learned of a way to install Debian from USB using the netinstall image, which is fine.
[131065480030] |However it means I have to spend hours and hours downloading packages to do the install.
[131065480040] |Is there a way I can simply download (for example) the CD1 with most of gnome and then use that?
[131065480050] |The netinstall method using this does not work because there is not enough space.
[131065480060] |(I have enough space, it is that the method has a limitation).
[131065480070] |I rarely have CDs on hand and some machines do not have CD/DVD drives anyway.
[131065480080] |I will research on this topic and answer my own question if need be, however any help in the meantime is appreciated.
[131065490010] |How about downloading the CD1 ISO, then put it on a USB and boot?
[131065490020] |(My favourite)
[131065490030] |How about using an automated tool such as UNetbootin?
[131065490040] |Here is another tool from Pendrivelinux.
[131065500010] |I had problems with the netinstall stable 64.
[131065500020] |I eventually overcame this: I found my binaries of nm and nm-applet and added the line #!/bin/busybox to the top of each file with nano. I then cat'ed them onto the ubninit that unetbootin puts onto the USB drive, like so: cat /usr/bin/nm >> /media/sdc1/ubninit and cat /usr/bin/nm-applet >> /media/sdc1/ubninit. If you try this and it doesn't work, no big loss; just remember to delete the line you added to nm and nm-applet.
[131065510010] |As of Debian 6.0 (Squeeze), the netinstall and disc 1 of the regular install CD/DVDs are 'hybrid' ISOs.
[131065510020] |They can be burned to an optical disc and booted or copied onto a USB drive and booted.
[131065510030] |To copy the ISO onto a USB drive from a Linux system, all you need to do is cat the ISO onto the drive.
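For example (image name and device letter are placeholders; the target is the whole device, not a partition, so double-check it):

    cat debian-6.0.1-amd64-CD-1.iso > /dev/sdX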
[131065510040] |http://www.debian.org/releases/squeeze/i386/ch04s03.html.en#usb-copy-isohybrid
[131065520010] |creating a completely offline installer using simple-cdd
[131065520020] |I'm trying to create a custom CD using my own minimal Debian setup.
[131065520030] |I just simply typed build-simple-cdd.
[131065520040] |The process was successful, but when I tested it in VirtualBox with the network disabled, it gets stuck looking for the kernel.
[131065520050] |Did I miss anything in my setup?
[131065530010] |My guess would be that your image doesn't support the architecture of your vm (e.g., 64-bit iso but your vm is specified as 32-bit).
[131065530020] |Mount the CD and check the contents manually to make sure that what you expect to be there is actually there.
[131065540010] |Which desktop indexer indexes xml file content?
[131065540020] |My favorite indexer doesn't have this capability, and I'm looking for an alternative that does.
[131065550010] |How to redirect stderr,out to different files and also display in terminal?
[131065550020] |To make it clear, I want to see the output of a command in the terminal as if there were no redirection.
[131065550030] |Also stderr need to be redirected to err.log and stdout need to be redirected to stdout.log.
[131065550040] |It would also be nice to have an exact copy of what is shown in the terminal, i.e. errors printed as and when they occur, in a separate file stdouterr.log.
[131065560010] |Use the tee command as follows:
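Presumably along these lines: tee stdout first, then swap the descriptors and tee stderr (so stdout ends up displayed via the stderr stream):

    (command | tee stdout.log) 3>&1 1>&2 2>&3 | tee err.log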
[131065560020] |3>&1 1>&2 2>&3 is how you swap stderr and stdout, because tee can only accept stdout.
[131065560030] |Take a look at Unix tee command for more advanced redirections using tee.
[131065570010] |@dogbane, thanks.
[131065570020] |I found another way too, which saves both streams approximately in the order in which they would get printed without redirection.
[131065570030] |command 2> >(tee errlog | tee -a bothLog >/dev/tty ) | tee outlog | tee -a bothLog
[131065570040] |But this works only with shells which support process substitution.
[131065580010] |How to "jail" a process without being root?
[131065580020] |Were I root, I could simply create a dummy user/group, set file permissions accordingly and execute the process as that user.
[131065580030] |However I am not, so is there any way to achieve this without being root?
[131065590010] |I guess you can have some luck with LD_PRELOAD to intercept access to certain files, but this might be really tricky.
[131065600010] |This is a fundamental limitation of the unix permission model: only root can delegate.
[131065600020] |You don't need to be root to run a virtual machine (not true of all VM technologies), but this is a heavyweight solution.
[131065600030] |User-mode Linux is a relatively lightweight Linux-on-Linux virtualization solution.
[131065600040] |It's not that easy to set up; you'll need to populate a root partition (in a directory) with at least the minimum needed to boot (a few files in /etc, /sbin/init and its dependencies, a login program, a shell and utilities).
[131065610010] |One known way to achieve isolation is through the seccomp sandboxing approach used in Google Chromium.
[131065610020] |But this approach supposes that you write a helper which would process some (the allowed ones) of the "intercepted" file access and other syscalls; and also, of course, make effort to "intercept" the syscalls and redirect them to the helper (perhaps, it would even mean such a thing as replacing the intercepted syscalls in the code of the controlled process; so, it doesn't sound to be quite simple; if you are interested, you'd better read the details rather than just my answer).
[131065610030] |More related info (from Wikipedia):
[131065610040] | http://en.wikipedia.org/wiki/Seccomp
[131065610050] |http://code.google.com/p/seccompsandbox/wiki/overview
[131065610060] |LWN article: Google's Chromium sandbox, Jake Edge, August 2009
[131065610070] |seccomp-nurse, a sandboxing framework based on seccomp.
[131065610080] |(The last item seems to be interesting if one is looking for a general seccomp-based solution outside of Chromium.
[131065610090] |There is also a blog post worth reading from the author of "seccomp-nurse": SECCOMP as a Sandboxing solution ?.)
[131065610100] |The illustration of this approach from the "seccomp-nurse" project:
[131065610110] |A "flexible" seccomp possible in the future of Linux?
[131065610120] |In 2009 there were also suggestions to patch the Linux kernel so that there is more flexibility in the seccomp mode -- so that "many of the acrobatics that we currently need could be avoided".
[131065610130] |("Acrobatics" refers to the complications of writing a helper that has to execute many possibly innocent syscalls on behalf of the jailed process and of substituting the possibly innocent syscalls in the jailed process.)
[131065610140] |An LWN article wrote on this point:
[131065610150] |One suggestion that came out was to add a new "mode" to seccomp.
[131065610160] |The API was designed with the idea that different applications might have different security requirements; it includes a "mode" value which specifies the restrictions that should be put in place.
[131065610170] |Only the original mode has ever been implemented, but others can certainly be added.
[131065610180] |Creating a new mode which allowed the initiating process to specify which system calls would be allowed would make the facility more useful for situations like the Chrome sandbox.
[131065610190] |Adam Langley (also of Google) has posted a patch which does just that.
[131065610200] |The new "mode 2" implementation accepts a bitmask describing which system calls are accessible.
[131065610210] |If one of those is prctl(), then the sandboxed code can further restrict its own system calls (but it cannot restore access to system calls which have been denied).
[131065610220] |All told, it looks like a reasonable solution which could make life easier for sandbox developers.
[131065610230] |That said, this code may never be merged because the discussion has since moved on to other possibilities.
[131065610240] |This "flexible seccomp" would bring the possibilities of Linux closer to providing the desired feature in the OS, without the need to write helpers that complicated.
[131065620010] |Another trustworthy isolation solution (besides a seccomp-based one) would be complete syscall interception through ptrace, as explained in the manpage for fakeroot-ng:
[131065620020] |Unlike previous implementations, fakeroot-ng uses a technology that leaves the traced process no choice regarding whether it will use fakeroot-ng's "services" or not.
[131065620030] |Compiling a program statically, directly calling the kernel and manipulating one's own address space are all techniques that can be trivially used to bypass LD_PRELOAD based control over a process, and do not apply to fakeroot-ng.
[131065620040] |It is, theoretically, possible to mold fakeroot-ng in such a way as to have total control over the traced process.
[131065620050] |While it is theoretically possible, it has not been done.
[131065620060] |Fakeroot-ng does make certain "nicely behaved" assumptions about the process being traced, and a process that breaks those assumptions may be able to, if not totally escape, then at least circumvent some of the "fake" environment imposed on it by fakeroot-ng.
[131065620070] |As such, you are strongly warned against using fakeroot-ng as a security tool.
[131065620080] |Bug reports that claim that a process can deliberately (as opposed to inadvertently) escape fakeroot-ng's control will either be closed as "not a bug" or marked as low priority.
[131065620090] |It is possible that this policy be rethought in the future.
[131065620100] |For the time being, however, you have been warned.
[131065620110] |Still, as you can read, fakeroot-ng itself is not designed for this purpose.
[131065620120] |(BTW, I wonder why they have chosen the seccomp-based approach for Chromium rather than a ptrace-based one...)
[131065630010] |But well, of course, the desired "jail" guarantees are implementable by programming in user space (without additional OS support for this feature; maybe that's why it wasn't included in the design of OSes in the first place), with more or fewer complications.
[131065630020] |The mentioned ptrace- or seccomp-based sandboxing can be seen as variants of implementing the guarantees by writing a sandbox helper that would control your other processes, which would be treated as "black boxes", arbitrary Unix programs.
[131065630030] |Another approach could be to use programming techniques that can keep track of the effects that must be disallowed.
[131065630040] |(It must be you who writes the programs then; they are not black boxes anymore.)
[131065630050] |To mention one, using a pure programming language (which would force you to program without side-effects) like Haskell will simply make all the effects of the program explicit, so the programmer can easily make sure there will be no disallowed effects.
[131065630060] |I guess there are sandboxing facilities available for those programming in some other languages, e.g., Java.
[131065640010] |More similar Qs with more answers worth attention:
[131065640020] |http://stackoverflow.com/q/3859710/94687
[131065640030] |http://stackoverflow.com/q/4410447/94687
[131065640040] |http://stackoverflow.com/q/4249063/94687
[131065640050] |http://stackoverflow.com/q/1019707/94687
[131065640060] |Some of the answers there point to specific solutions not yet mentioned here.
[131065640070] |In general, the answers there cover the full described range of possibilities and even more:
[131065640080] |(virtual machines/OS),
[131065640090] |(kernel extension (like SELinux; mentioned in comments here)),
[131065640100] |chroot-based helpers,
[131065640110] |ptrace-based (of the tools not mentioned here, I have noted Geordi for myself, because I liked that the controlling program is written in Haskell),
[131065640120] |(seccomp-based),
[131065640130] |and unshare-based solutions--not mentioned here--e.g., unsharing mount points (combined with FUSE?) could perhaps be part of a working solution for confining the filesystem accesses of your untrusted processes (see the sketch after this list),
[131065640140] |and special user-space programming/compiling (NaCl--not mentioned here--belongs to this group, doesn't it?)
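To illustrate the unshare-based idea (a hypothetical sketch; it needs a kernel and util-linux new enough to support unprivileged user namespaces, which not every system allows):

    # Become "root" inside a private user+mount namespace, without real root,
    # then give the process its own view of parts of the filesystem.
    unshare --user --map-root-user --mount sh -c '
        mount -t tmpfs tmpfs /tmp    # this mount is visible only in here
        ./untrusted-program
    '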
[131065640150] |Some pages accumulating info on this topic were also pointed at in the answers there:
[131065640160] |page on Google Chrome's sandboxing methods for Linux
[131065640170] |sandboxing.org group
[131065650010] |How to check if $PWD is a subdirectory of a given path
[131065650020] |E.g. check if $PWD is a subdirectory of /home.
[131065650030] |In other words I'm searching for a bash string operation to check if one string starts with another.
[131065660010] |I don't understand the question too well, but to find the parent of $PWD, do dirname $PWD.
[131065660020] |To find the parent of the parent, run dirname $(dirname $PWD), and so on...
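For instance, you could walk upwards with dirname until you either hit /home or run out of path (a sketch, not from the original answer):

    # Climb from $PWD towards /; succeed if /home is one of the ancestors.
    dir=$PWD
    while [ "$dir" != / ]; do
        dir=$(dirname "$dir")
        [ "$dir" = /home ] && echo "under /home"
    done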
[131065670010] |Crude version:
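Presumably something along these lines, using a fixed-length substring (the 6 is the length of "/home/", counted by hand):

    # Compare the first six characters of $PWD against "/home/".
    if [ "${PWD:0:6}" = /home/ ]; then
        echo "under /home"
    fi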
[131065670020] |Has the disadvantage that one has to count characters first and that one can't replace /home/ by something general like $1.
[131065670030] |Edit (thanks @Michael): for the generalization to compare with $VAR, one can use:
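Probably a construct like this, which takes the length from the variable itself instead of counting by hand:

    # ${#VAR} is the length of $VAR, so no manual character counting is needed.
    if [ "${PWD:0:${#VAR}}" = "$VAR" ]; then
        echo '$PWD starts with $VAR'
    fi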
[131065680010] |How about this:
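The test being described is presumably the parameter-expansion one:

    # "${PWD#/home/}" removes a leading "/home/" if present; if something
    # was removed, the result no longer equals $PWD.
    if [ "${PWD#/home/}" != "$PWD" ]; then
        echo "under /home"
    fi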
[131065680020] |If $PWD starts with "/home/", it gets stripped off on the left side, which means it won't match the right side, so "!=" returns true.
[131065690010] |Hm, it's a pity that [ doesn't have an option for testing a "STRING1 starts with STRING2" condition.
[131065690020] |You may try echo "$PWD" | grep "^$VAR", but it can fail in interesting ways when VAR contains special symbols.
[131065690030] |awk's index function should be able to do the trick.
[131065690040] |But all this seems just too heavy for such an easy thing to test.
[131065700010] |Using awk:
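Perhaps in this spirit (a sketch; index() returns 1 when the second string occurs at the very start of the first):

    # Exit status 0 (success) only if $VAR is a prefix of $PWD.
    awk -v str="$PWD" -v prefix="$VAR" 'BEGIN { exit (index(str, prefix) != 1) }' \
        && echo "under $VAR"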
[131065710010] |If the searched part of the path is found, I "empty" the variable:
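One way to read this answer is the "strip everything" expansion (an assumption about the missing snippet):

    # ${PWD##/home/*} deletes the longest match of /home/* from the front;
    # if $PWD starts with /home/, nothing is left over.
    if [ -z "${PWD##/home/*}" ]; then
        echo "under /home"
    fi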
[131065720010] |To test if a string is a prefix of another, in any Bourne-style shell:
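Presumably with a case statement, along these lines:

    # Portable prefix test: the case pattern /home/* matches any string
    # beginning with /home/.
    case $PWD in
        /home/*) echo "under /home";;
        *) echo "not under /home";;
    esac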
[131065720020] |The same principle works for a suffix or substring test.
[131065720030] |Note that in case constructs, unlike in file names, * matches any character, including a / or an initial dot.
[131065720040] |In shells that implement the [[ … ]] syntax (i.e. bash, ksh and zsh), it can be used to match a string against a pattern.
[131065720050] |(Note that the [ command can only test strings for equality.)
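Something like this, presumably:

    # [[ ]] does pattern matching when the right-hand side is unquoted.
    if [[ $PWD == /home/* ]]; then
        echo "under /home"
    fi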
[131065720060] |If you're specifically testing whether the current directory is underneath /home, a simple substring test is not enough, because of symbolic links.
[131065720070] |If /home is a filesystem of its own, test whether the current directory (.) is on that filesystem.
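A possible check (assuming GNU stat; %d prints the device number):

    # If . and /home report the same device, the current directory lives
    # on the /home filesystem.
    if [ "$(stat -c %d .)" = "$(stat -c %d /home)" ]; then
        echo "on the /home filesystem"
    fi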
[131065720080] |If you have the NetBSD, OpenBSD or GNU (i.e. Linux) readlink, you can use readlink -f to strip symbolic links from a path.
[131065720090] |Otherwise, you can use pwd to show the current directory.
[131065720100] |But you must take care not to use a shell built-in if your shell tracks cd commands and keeps the name you used to reach the directory rather than its "actual" location.
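Putting it together (a sketch; pwd -P asks for the physical directory path, with symlinks resolved, avoiding the logical tracking mentioned above):

    # Resolve symlinks first, then apply the prefix test.
    case $(pwd -P) in
        /home/*) echo "physically under /home";;
    esac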
[131065730010] |Flash running in Chromium and FF at once, why no sound in the second browser?
[131065730020] |I work as a student assistant Linux admin, and I just packaged up Adobe's "Square" plugin to get 64-bit FF running Flash pretty well (the first time I've seen it work this well), but there's one little problem I've come across so far: if you open one browser and start using Flash, the second browser will not be able to output sound.
[131065730030] |I realize this is probably because of which sound driver is being used, but is there any good way to fix this, or is this just how it is, with Flash being the bane of my existence?
[131065730040] |Thanks for any help!
[131065740010] |Check to see if you can play sound in any second application.
[131065740020] |Back when I used Linux on the desktop some audio drivers couldn't mix two audio streams.
[131065740030] |I really hope that would have been fixed by now, but you never know...
[131065740040] |If you really can't play two simultaneous audio sources then you'll want to install an audio mixing daemon (e.g., esound or similar).
[131065740050] |A mixing daemon will intercept audio signals, mix them itself, then send a single combined audio stream to the DSP.
[131065740060] |But if you can play sound from a second audio source then I'm completely wrong.
[131065750010] |As described on the FedoraProject wiki on Flash, you might need the PulseAudio ALSA module.
[131065750020] |If one browser's Flash plugin (or PulseAudio itself) has locked the sound device, other apps trying to use it might not succeed.
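On Fedora that would be something like the following (package name assumed; check your distribution's repositories):

    # Install the ALSA-to-PulseAudio plugin so ALSA-only apps (like Flash)
    # are mixed through PulseAudio instead of locking the device.
    yum install alsa-plugins-pulseaudio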
[131065760010] |What would be a good choice for an elastic file system (for adding storage at a later date)?
[131065760020] |I'm currently running a Debian 6.0 server with EXT3, but I'd like to move over to Arch.
[131065760030] |It's being used as a file server right now with a 1TB drive in it (of which 650GB is used).
[131065760040] |What I'd like to do at a later point (when I'm not completely broke) is buy another drive and add it to the same system (for backing up my main rig).
[131065760050] |What would be the easiest way of accomplishing this?
[131065760060] |I've looked into RAID, but it'd be useless because I'd have to reinitialize the array every time I added a new drive.
[131065760070] |Note: I'm not fussed about redundancy; it's only going to be hosting mirrored backups, which I can easily remake in the case of data loss.
[131065760080] |Basically: given a system with a clean 1TB drive in it, what should I do now to prepare for adding a new striped drive at a later date without having to reinitialize any arrays?
[131065770010] |Hey, I'm stupid!
[131065770020] |I can just use LVM (which I forgot can do striping).
[131065770030] |The rubber-duck debugging method comes to the rescue again.
[131065780010] |If you really don't care about reliability, you can use LVM and keep adding physical volumes to a single volume group.
[131065780020] |That is, you would have a single volume group acting as a virtual drive, made up of several physical volumes (the actual drives).
[131065780030] |Instead of PC-style partitions, you'd create logical volumes for filesystems and swap.
[131065780040] |LVM is a good idea anyway if you're planning to extend your storage or move stuff around.
[131065780050] |It's a lot easier to resize an LVM volume or move it to a different drive than to do this for PC partitions, and all the LVM stuff can be done online (i.e. while running from the mounted volume).
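A sketch of the growth path (the volume names myvg and data are made up; resize2fs matches the ext3 in use here):

    pvcreate /dev/sdb                      # prepare the newly bought drive
    vgextend myvg /dev/sdb                 # add it to the existing volume group
    lvextend -l +100%FREE /dev/myvg/data   # grow the logical volume into it
    resize2fs /dev/myvg/data               # grow the filesystem, online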
[131065780060] |Linux's RAID subsystem can grow RAID-5 and RAID-6 arrays (it's slow, but can be done online), but curiously not linear arrays, so you'd have to start with at least two disks.
[131065780070] |You could also look into ZFS, a filesystem with built-in volume management.
[131065780080] |I don't know what its capabilities for adding storage are.