[131097490010] |
Security Question => give a shell to somebody
[131097490020] |Hello.
[131097490030] |I have a question about giving a shell account to somebody.
[131097490040] |How safe is it?
[131097490050] |He could read /etc.
[131097490060] |How can I give a secured shell account that restricts the user to certain binaries and his own home directory?
[131097490070] |Is a chroot jail the only way?
[131097500010] |Keep in mind that parts of /etc
are required to be readable even by unprivileged users.
[131097500020] |If there are things in /etc
that shouldn't be readable but are, you have bigger issues and at minimum should run your distribution's permissions check/fix program.
[131097500030] |A jail is the easiest way to set up draconian restrictions.
[131097500040] |There are other ways, such as setting up SELinux and arranging for the user to have a very limited security context.
[131097510010] |If you give somebody a shell account, they can see all world-readable files.
[131097510020] |This includes many files in /etc
that are necessary for the system to work, including /etc/passwd
(which contains user names but not passwords).
[131097510030] |If you allow the user only to log in inside a chroot, they can't see the files outside the chroot.
[131097510040] |That means you must put enough programs, libraries, etc, inside the chroot — everything the user must have access to and every dependency of these as well.
[131097510050] |Note that a chroot only protects direct file access.
[131097510060] |The user can snoop on various things, including the names and arguments of running processes of all users (unless you set up additional protections such as SELinux).
[131097510070] |If the user isn't going to need to do much, you can set up tighter restrictions (including preventing the user from creating their own executables) with a restricted shell, but setting up a restricted shell right is very tricky, so I don't recommend it.
[131097510080] |Nowadays, virtual machines are very cheap.
[131097510090] |You have many free implementations to choose from (User Mode Linux, VirtualBox, VMware, KVM, OpenVZ, VServer, …), and the disk space used by an extra system installation is minimal (and you might need it for chroot anyway).
[131097510100] |A virtual machine isolates pretty much everything: files, processes, networking, … Unless you have very unusual constraints, this is the way to go.
[131097520010] |One option is to put them in a restricted shell session, such as rbash [bash -r].
[131097520020] |It is a bit unclear at this point what, exactly, you wish to accomplish; however, on the surface, POSIX ACLs for 'other' will apply to the new account, as will any group ACLs for groups to which the account belongs ('users', for example).
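To give an idea of what a restricted bash session refuses, here is a quick sketch (the user name in the comments is hypothetical):

```shell
# In a restricted bash (rbash, or bash -r), changing directory,
# changing PATH, and running commands via slash-containing paths
# are all refused:
bash -r -c 'cd /tmp'     || echo "cd was refused"
bash -r -c '/bin/ls'     || echo "slash paths were refused"
bash -r -c 'PATH=/bin'   || echo "changing PATH was refused"

# To give an account such a shell (run as root; 'alice' is a
# hypothetical user):
#   useradd -m -s /bin/rbash alice
#   mkdir ~alice/bin        # symlink only the allowed commands here
#   echo 'PATH=$HOME/bin' >> ~alice/.bash_profile
```

The account is then limited to whatever you symlink into its private bin directory, though (as noted elsewhere in this thread) getting a restricted shell watertight is tricky.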
[131097530010] |One of the easiest and most efficient ways to control what a user can do is lshell.
[131097530020] |lshell is a shell coded in Python, that lets you restrict a user's environment to limited sets of commands, choose to enable/disable any command over SSH (e.g. SCP, SFTP, rsync, etc.), log user's commands, implement timing restriction, and more.
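As an illustration, lshell is driven by a small INI-style configuration file; a minimal sketch (the command lists are arbitrary examples, and key names should be checked against the lshell documentation for your version):

```ini
# /etc/lshell.conf -- hypothetical minimal configuration
[default]
allowed    : ['ls','cat','less','scp']   # commands the user may run
forbidden  : [';','&','|','`','>','<']   # shell characters to reject
home_path  : '/home/%u'                  # confine the user to his home
timer      : 0                           # 0 = no session time limit
```

Setting a user's login shell to lshell then applies these restrictions to every session.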
[131097540010] |What features are in zsh and missing from bash, or vice versa?
[131097540020] |As a Linux user, I've always just used bash because it was the default on every distro I used.
[131097540030] |People using other Unix systems such as BSD seem to use other shells far more frequently.
[131097540040] |In the interests of learning a bit more, I've decided to try out zsh.
[131097540050] |As a bash user, what features will I miss, and what ones should I look out for?
[131097550010] |zsh lets you edit a multi-line command (see the zsh line editor); bash doesn't. If you try the same trick (Ctrl-p), bash fetches the last command.
[131097560010] |There's already been quite a bit of activity on the topic on other Stack Exchange sites.
[131097560020] |My experience of switching from bash to zsh, as far as I can remember (it was years ago²), is that I didn't miss a single thing.
[131097560030] |I gained a lot; here are what I think are the simple zsh-specific features that I use most:
[131097560040] |The zsh feature I most miss when I occasionally use bash is autocd: in zsh, executing a directory means changing to it, provided you turn on the autocd
option.⁴
[131097560050] |Another very useful feature is the fancy globbing.
[131097560060] |The hieroglyphs are a bit hard to remember but extremely convenient (as in, it's often faster to look them up than to write the equivalent find command).
[131097560070] |A few of the simpler examples:
foo*~*.bak = all matches for foo* except those matching *.bak
foo*(.) = only regular files matching foo*
foo*(/) = only directories matching foo*
foo*(-@) = only dangling symbolic links matching foo*
foo*(om[1,10]) = the 10 most recent files matching foo*
foo*(Lm+1) = only files of size >1MB
dir/**/foo* = foo* in the directory dir and all its subdirectories, recursively⁴
[131097560080] |For fancy renames, the zmv builtin can be handy.
[131097560090] |For example, to copy every file to name.bak: zmv -C '(*)(.)' '$1.bak'
[131097560100] |Both bash and zsh have a decent completion system that needs to be turned on explicitly (. /etc/bash_completion
or autoload -U compinit; compinit
).
[131097560110] |Zsh is much more configurable and generally fancier.
[131097560120] |If you run zsh without a .zshrc
, it starts an interactive menu that lets you choose configuration options.
[131097560130] |(Some distributions may disable this; in that case, run autoload zsh-newuser-install; zsh-newuser-install
.)
[131097560140] |I recommend enabling the recommended history options, turning on (“new-style”) completion, and turning on the “common shell options” except beep
.
[131097560150] |Later, configure more options as you discover them.
[131097560160] |²At the time, programmable completion was zsh's killer feature, but bash acquired it soon after. ⁴Features that bash acquired only in version 4 (so are still not available on many systems) are in smaller type.
[131097570010] |Also, the default tab completion is better than bash's... for example...
[131097570020] |~/.e.d
TAB will expand to ~/.emacs.d/
in zsh, bash will just beep.
[131097580010] |Why is 'umount' written without the 'n'?
[131097580020] |I am wondering if there is any historical or practical reason why the umount
command is not unmount
.
[131097590010] |For the same reason the creat system call is not spelled create?
[131097590020] |http://stackoverflow.com/questions/1713457/what-did-ken-thompson-mean-when-he-said-id-spell-create-with-an-e
[131097600010] |This dates all the way back to the very first edition of Unix, where all command names were short.
[131097600020] |All the standard file names were only at most 6 characters long (think passwd
), even though even this version supported a whopping 8 characters in a file name.
[131097600030] |Most commands had an associated source file ending in .c
(e.g. umount.c
), which left only 6 characters for the base name.
[131097600040] |A 6-character limitation might also have been a holdover from an earlier development version, or inherited from a then current IBM system that did have a 6-character limitation.
[131097600050] |(Early C implementations had a 6-character limit on identifiers — longer identifiers were accepted but the compiler only looked at the first 6 characters, so foobar1
and foobar2
were the same variable.)
[131097600060] |(I thought I remembered a umount
man page that listed the spelling as a bug of unknown origin, but I can't find it now.)
[131097610010] |Something to convert a Makefile.am to a Visual Studio (2005) project
[131097610020] |I'm interested in porting a library to Windows so that I can contribute to this library's project.
[131097610030] |This library has a Makefile.am, and I can sort of figure out how the library should be built (but I'm fairly new to the concept of makefiles).
[131097610040] |I was wondering if there is an automatic way to translate a Makefile.am into something that Visual Studio 2005 can use.
[131097610050] |What I know so far:
[131097610060] |I do see that there are directives in the library's Makefile.am for building to a Windows target (i.e. if BUILD_WINDOWS ...some stuff... endif
).
[131097610070] |I also know that there's AutoMake for Windows, which generates the Makefile.in from the .am, but I have a knowledge gap as to how this Makefile.in would then be used.
[131097610080] |(Still researching this avenue).
[131097610090] |Is my only course to convert the makefile manually into a VS2005 project?
[131097610100] |It seems like it is easier to convert from Windows to Linux, but that may be just my imagination.
[131097610110] |Thanks for all your help!
[131097620010] |For the makefile.in / makefile.am relation, have a look at the Wikipedia article on the GNU build system: http://en.wikipedia.org/wiki/GNU_build_system.
[131097620020] |There's a graph explaining the role of each file.
[131097620030] |There's no automated way that I know of, but I can recommend two ways to be able to compile the library:
[131097620040] |Install Cygwin (http://www.cygwin.com), which comes with the full GNU system and many useful utilities
[131097620050] |Migrate the project to use cmake (http://www.cmake.org).
[131097620060] |CMake can generate Microsoft VS files as well as Unix Makefiles.
[131097620070] |This project: http://cvs.savannah.gnu.org/viewvc/tsp/?root=tsp uses cmake and builds on Windows and various Unix flavors, for example.
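A minimal CMakeLists.txt of the kind you would write when migrating might look like this (the project and file names are placeholders, not taken from the actual library):

```cmake
# CMakeLists.txt -- hypothetical minimal replacement for a Makefile.am
cmake_minimum_required(VERSION 2.8)
project(mylib C)

# Sources that the Makefile.am would have listed:
include_directories(include)
add_library(mylib src/foo.c src/bar.c)
```

Running `cmake -G "Visual Studio 8 2005" .` then generates a .sln/.vcproj pair, while `cmake -G "Unix Makefiles" .` keeps the Unix build working from the same file.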
[131097630010] |Do you need a shell for SCP?
[131097630020] |I'm allowing a friend a local account on my machine, exclusively for SCP.
[131097630030] |Can I specify his account's shell as /bin/true
, or in any other way limit the account, while still allowing SCP?
[131097640010] |You can set that user's shell to rssh
, which is designed precisely for that purpose:
[131097640020] |rssh is a restricted shell for use with OpenSSH, allowing only scp and/or sftp.
[131097640030] |It now also includes support for rdist, rsync, and cvs.
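The setup is roughly as follows (paths differ per distribution, and 'bob' is a hypothetical user):

```shell
# Install rssh, then make it the user's login shell (run as root):
#   chsh -s /usr/bin/rssh bob

# In /etc/rssh.conf, uncomment the protocols you want to permit:
#   allowscp
#   allowsftp
# Everything else (interactive shells, arbitrary commands) stays denied.
```

With only allowscp/allowsftp enabled, an ssh login attempt by that user is rejected with a message explaining the account is restricted.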
[131097650010] |No, you don't.
[131097650020] |As Gilles pointed out, rssh works very nicely to this end, as does scponly.
[131097650030] |See also the discussion in this related question.
[131097660010] |Why can't root on one machine change nfs mounted content from another machine?
[131097660020] |On my NFS server, I have the following export defined:
[131097660030] |On my NFS client:
[131097660040] |Obviously, as root on the server, I can do whatever I want.
[131097660050] |On the client however, my regular user 'gabe' can make changes to the nfs mount (assuming I have permissions to), but root cannot.
[131097660060] |As my regular user:
[131097660070] |As root:
[131097660080] |Again, this is all on the NFS client side of things, and I suspect perhaps it has something to do with the -maproot option, but I'm too much of an NFS noob to understand exactly what.
[131097660090] |This is the first time I'm setting up NFS and I just noticed this peculiarity.
[131097660100] |I'm going to do some reading now, to see if I can figure this out, but if anyone has any insight, I would appreciate it.
[131097670010] |NFS was designed with the idea that user and group ids would be the same on all machines across the network.
[131097670020] |For ordinary users, that works ok.
[131097670030] |But root's UID is always 0, and just because you have root on one box, it doesn't mean that you should have root access to every machine on the network.
[131097670040] |Therefore, NFS treats root specially.
[131097670050] |By default, root is mapped to the nobody
user, which normally has no write access.
[131097670060] |The -maproot
option allows you to change how root is handled.
[131097670070] |BSD's -maproot=root
corresponds to Linux's no_root_squash
option.
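For concreteness, here is roughly what the two forms look like in /etc/exports (the host name and path are hypothetical):

```
# BSD server, /etc/exports: trust the client's root as root
/export -maproot=root client.example.com

# Linux server, /etc/exports: the equivalent no_root_squash option
/export client.example.com(rw,no_root_squash)
```

Without either option, root on the client is squashed to nobody, which is exactly the behavior the question describes.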
[131097680010] |That's common behavior with traditional NFS implementations.
[131097680020] |NFS user mappings are performed irrespective of context, so all accesses by the client root have to be mapped to a particular user (usually nobody
by default).
[131097680030] |Hence this weird behavior, where the client root can't access your files directly, but can su gabe
to access them.
[131097680040] |(“Recent” versions of NFS, i.e. NFSv4 and perhaps NFSv3, allow saner behavior if supported on both sides, but I don't know the details.)
[131097690010] |The Korn shell (ksh) is a shell originally written at AT&T, with more advanced programming features than the then-existing Bourne and C shells, and a syntax compatible with the Bourne shell.
[131097690020] |Many programming features in bash and zsh mimic ksh's. Several versions of ksh exist, including:
[131097690030] |Ksh88, the original, which was always proprietary software.
[131097690040] |Pdksh (public domain ksh), a free clone of ksh88 (with a few incompatibilities).
[131097690050] |Mksh (MirOS BSD Korn shell), a descendant of pdksh
[131097690060] |Ksh93, a major new version of ksh88, initially proprietary then released as free software
[131097700010] |The Korn shell (ksh) is a shell with advanced scripting features, commonly found on commercial unices and some BSD systems but rarely used on Linux.
[131097710010] |Fish (friendly interactive shell) is a relatively new unix shell.
[131097710020] |It provides advanced interactive features such as command and argument completion, and syntax coloring.
[131097710030] |Fish has a simpler syntax than Bourne-style shells.
[131097710040] |Links and documentation
[131097710050] |Fish home page
[131097720010] |Fish is a unix shell with a clean design and a limited but powerful feature set.
[131097730010] |Restricting an SCP/SFTP user to a directory
[131097730020] |Is there a simple way to restrict an SCP/SFTP user to a directory?
[131097730030] |All methods that I've come across require me to set a chroot jail up by copying binaries, but I don't think that should be necessary.
[131097740010] |A chroot is a reasonably simple method.
[131097740020] |Since the operating system already has this security feature, daemon writers tend not to attempt to reimplement it.
[131097740030] |Rssh comes with a guide on setting up a chroot jail.
[131097740040] |It's in the CHROOT
file in the source distribution.
[131097740050] |In a nutshell, you need to have:
[131097740060] |A few binaries, copied from the root: /usr/bin/scp
, /usr/libexec/openssh/sftp-server
, /usr/bin/rssh_chroot_helper
[131097740070] |The libraries ({/usr,}/lib/lib*.so.[0-9]
) that they use, likewise copied
[131097740080] |A /etc/passwd
(quite possibly not a copy but derived from the master)
[131097740090] |A few devices: /dev/null
, /dev/tty
, and also a /dev/log
socket for logging (and you need to tell your syslog daemon to listen on that socket)
[131097740100] |Extra tip that isn't in the rssh documentation: If you need some files to be accessible in a chroot jail, you can use bindfs or Linux's mount --bind
to make additional directory hierarchies from outside the jail.
[131097740110] |Both bindfs
and mount --bind
allow the remounted directory to have more restrictive permissions, for example read-only.
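A sketch of what that looks like (run as root; the paths are hypothetical):

```shell
# Make /usr/share/doc visible inside the jail with a plain bind mount:
#   mount --bind /usr/share/doc /home/jail/usr/share/doc

# With bindfs you can force read-only access directly:
#   bindfs -r /usr/share/doc /home/jail/usr/share/doc

# On Linux, a plain bind mount can also be made read-only afterwards:
#   mount -o remount,ro,bind /home/jail/usr/share/doc
```

The bindfs route works without kernel support for read-only bind mounts, at the cost of going through FUSE.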
[131097750010] |SSH supports chrooting an SFTP user natively.
[131097750020] |You just need to supply
[131097750030] |ChrootDirectory
[131097750040] |in your sshd config file, and restart sshd.
[131097750050] |If you are just doing sftp, then you don't have to do anything more.
[131097750060] |Not sure if scp will work as well.
[131097750070] |For interactive shell, you will need to copy binaries, and /dev nodes into the chroot.
[131097750080] |An example config, for just a single user, testuser:
[131097750090] |A few things to be aware of, from the sshd_config man page:
[131097750100] |Search for ChrootDirectory in man sshd_config for more information.
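A Match-block setup of the kind described, for the single user testuser, looks roughly like this (the chroot path is illustrative, and ChrootDirectory needs a reasonably recent OpenSSH):

```
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match User testuser
    ChrootDirectory /home/testuser
    ForceCommand internal-sftp
    AllowTcpForwarding no

# Note: the chroot directory and every component of its path must be
# owned by root and not writable by group or others, or sshd will
# refuse the login.
```

Using the internal-sftp server avoids having to copy the sftp-server binary and its libraries into the chroot.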
[131097760010] |You might want to look at scponly; it's essentially a login shell that can only be used to launch scp or the sftpd subsystem.
[131097760020] |In the scponlyc
variant it performs a chroot before activating the subsystem in question.
[131097770010] |Why does `xdg-mime query filetype ...` fail to find a new added file type?
[131097770020] |I installed a new file type into the shared MIME database.
[131097770030] |But xdg-mime query filetype
cannot tell the new type.
[131097770040] |This problem only happens on my own Linux OS which does not use GNOME or KDE as its desktop.
[131097770050] |On Ubuntu, the same process works well.
[131097770060] |I found that xdg-mime query filetype
uses "file -i filename" under the hood on my OS but uses gnomevfs on Ubuntu.
[131097770070] |Here are my steps:
[131097770080] |wrote a xml file for my new file type my_file.xml
[131097770090] |xdg-mime install my_file.xml
[131097770100] |xdg-mime query filetype
.... no output :-(
[131097770110] |I checked /usr/share/mime/applications
and found the xml entry generated by update-mime-database there.
[131097770120] |And the C API g_file_info_get_content_type()
can get the proper mime type.
[131097770130] |So it seems the shared-mime-info has been updated successfully.
[131097770140] |But the "file" command still fails, why?
[131097770150] |Here is my xml file:
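A shared-mime-info definition of this kind generally has the following shape (the type name, glob, and magic string here are hypothetical, not the asker's actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="application/x-myformat">
    <comment>My custom file format</comment>
    <glob pattern="*.myf"/>
    <magic priority="50">
      <match type="string" value="MYFMT" offset="0"/>
    </magic>
  </mime-type>
</mime-info>
```

After `xdg-mime install`, the definition ends up under /usr/share/mime and is picked up by tools that query the shared MIME database (such as the g_file_info_get_content_type() call mentioned above).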
[131097780010] |I think I found the answer.
[131097780020] |On my system "xdg-mime query filetype ..." uses the "file" command to get the file type, while on Ubuntu it uses "gnomevfs".
[131097780030] |It seems the "file" command does not check the xml entries of shared-mime-info, but looks into the file "/usr/share/file/magic" to get the file MIME type.
[131097780040] |If I use "file" command on Ubuntu, it can not tell me the right MIME type, either.
[131097780050] |I'll study how to edit this magic file.
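For reference, the magic(5) format that file(1) reads is line-oriented; a sketch of an entry (the magic string and MIME type are hypothetical):

```
# Hypothetical magic entry for files that begin with "MYFMT".
# Columns: offset, type, test value, message.
0	string	MYFMT	My custom file format
!:mime	application/x-myformat
```

It can be tested without touching the system database via `file -m ./magic.local somefile`; the `!:mime` line is what makes `file -i` report the MIME type.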
[131097790010] |How can I get the error code (exit code) of "xdg-mime query filetype" command?
[131097790020] |I ran xdg-mime query filetype
to check the MIME type of a file, and it failed.
[131097790030] |How can I print the error code (exit code) of the xdg-mime
command?
[131097790040] |I want to know what error happened:
[131097790050] |Error in command line syntax.
[131097790060] |One of the files passed on the command line did not exist.
[131097790070] |A required tool could not be found.
[131097790080] |The action failed.
[131097790090] |No permission to read one of the files passed on the command line.
[131097800010] |In Bourne-derived shells (sh
, ash
, bash
, dash
, zsh
...) the exit code of the last-run program is in the $?
variable:
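For instance, listing a file that doesn't exist (a sketch; GNU ls uses exit status 2 for this kind of failure):

```shell
ls /nonexistent-file        # ls: cannot access '/nonexistent-file': ...
echo "exit code: $?"        # with GNU ls this prints: exit code: 2
```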
[131097800020] |So in this case, the exit code of ls
is 2.
[131097810010] |"application/octet-stream" (unknown file type)
[131097810020] |is not an error message, it simply means that file does not know what your file contains.
[131097810030] |This could happen for encrypted files for example, they look so random that file is unable to print something more precise than "this is data".
[131097820010] |Unmet dependencies.
[131097820020] |I am trying to compile vim and install with "--enable-pythoninterp" flag, which needs the python-dev package.
[131097820030] |INFO: I obtained the vim source from ftp://ftp.vim.org/pub/vim/unix/vim-7.3.tar.bz2. Vim 7.3 is not yet available via apt.
[131097820040] |Using Ubuntu 10.10
[131097820050] |But, sudo apt-get install python-dev
results in a broken packages error message ->
[131097820060] |The following packages have unmet dependencies:
  python-dev : Depends: python (= 2.6.6-2ubuntu1) but 2.6.6-2ubuntu2 is to be installed
E: Broken packages
[131097820070] |How can I best resolve this issue?
[131097830010] |This is the usual message apt produces when you have packages which are at different apt priorities.
[131097830020] |See man apt_preferences. python 2.6.6-2ubuntu1 is not of sufficiently high priority to be installed, so apt is trying to install 2.6.6-2ubuntu2, which does not satisfy the dependency.
[131097830030] |More information is needed to resolve this.
[131097830040] |Please provide the output of
[131097830050] |Also post your /etc/apt/preferences and /etc/apt/sources.list files.
[131097830060] |Also give details of how you obtained the vim source.
[131097830070] |Is this an upstream source?
[131097830080] |Did you download the source using apt-get source or similar?
[131097830090] |If I understood this correctly, you are trying to install a customized version of the vim package.
[131097830100] |Is that correct?
[131097830110] |Based on your apt-cache policy
output, you just need to downgrade python from 2.6.6-2ubuntu2
to 2.6.6-2ubuntu1
.
[131097830120] |As you can see, it does not currently correspond to any version in the archives.
[131097830130] |Do you know where you got it from?
[131097830140] |So do
[131097830150] |and then try your
[131097830160] |again.
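The downgrade can be requested by giving the version explicitly (the version string is taken from the question; this assumes 2.6.6-2ubuntu1 is still available in an enabled archive):

```shell
# Ask apt to install the specific older version:
sudo apt-get install python=2.6.6-2ubuntu1

# then retry the original installation:
sudo apt-get install python-dev
```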
[131097840010] |It looks like your update caught the archive between package uploads: http://packages.ubuntu.com/maverick-updates/python-dev both versions should be 2.6.6-2ubuntu2.
[131097840020] |I'd try:
[131097840030] |and then retry
[131097840040] |If this does not work, I'd try switching to another ubuntu mirror to get the packages from there.
[131097850010] |vim vs. emacs... and no, this is not a flame war
[131097850020] |How would you compare these editors?
[131097850030] |What are the pros and cons of each?
[131097850040] |[note] This is not meant to be answered by those who "hate one and love another" or those who haven't used both.
[131097860010] |I think they're both awesome.
[131097860020] |I think either one can do just about anything you can imagine, and they're both so customizable, that by the time you finish customizing them, they're both just exactly what you want them to be, nothing more nor less.
[131097860030] |Emacs stands out to me as being a bit closer (although it still does not meet them) to ISO/IEC standards of usability and consistency for user interfaces, and hence doesn't play as many tricks with your “instincts” as vim does.
[131097860040] |The lifetime of instincts you've developed working with other programs won't work against you.
[131097860050] |Vim is a completely different model, and in many ways it is superior on its own, insofar as it relies far less on Ctrl/Alt sequences and instead just on its modes, allowing you to keep your hands on the home row and type faster.
[131097860060] |But vim is virtually unique, and unless you install some very unusual accompanying software (e.g., Vimperator, Jumanji/Zathura, etc.), the instincts you develop working with vim won't cross over to other programs and vice-versa.
[131097860070] |That said, I've settled on vim myself.
[131097860080] |You've got to settle on one sooner or later, for better or worse, since it's difficult to master both.
[131097870010] |I'll post what I think are the main benefits of each:
[131097870020] |Emacs has considerably more extensions to let you do tasks that are only vaguely text-editor related, like browsing the filesystem or messing with version control, and extensions that are in no way text-editor related, like reading RSS feeds.
[131097870030] |If you want an environment instead of just a text editor, Emacs is going to be better than Vim.
[131097870040] |I also think Emacs is much easier to learn, despite what some would have you believe:
[131097870050] |In particular, I think a novice Emacs user will be faster than a novice Vim user
[131097870060] |On the other hand, Vim is undeniably faster.
[131097870070] |It seems like this is a core part of the argument, but in my opinion there's no contest at all; I consider myself a fluent Emacs user, and I'm no match for the couple people I know that have an equal knowledge of Vim.
[131097870080] |The problem is, the number of people that have sufficient mastery of Vim to be that fast is incredibly small (of the ~30 people I talk to regularly who use Vim, I think only one is exceptionally good at it).
[131097870090] |There's a large gap between the possible speed gain and the actual speed gain you achieve; Emacs users are going to be almost as fast as 99% of Vim users, and (as I said in the Emacs section) beginning Emacs users will probably be faster than beginning Vim users
[131097880010] |I use both on a regular basis.
[131097880020] |I view Emacs as a "live in" editor, whereas I use Vim for quick, one off tasks.
[131097880030] |Superficially, Emacs is much more bloated than Vim, and so it really isn't quite so convenient to fire up as Vim, but I also find that the user-interface philosophies of each support this paradigm.
[131097880040] |Emacs is much more built to keep you inside, making things nice and comfortable so you don't have to leave, whereas vim is much more "Unixy" and sees itself as part of a greater tool-belt.
[131097880050] |Many people flee from emacs due to its heavy reliance on bucky bits, but this is a pretty silly reason to me.
[131097880060] |The real power that Emacs has over Vim is customizability, and with the power of Viper etc, this really isn't an issue.
[131097880070] |Certainly Vim-Script provides its own level of customization, and if, say, your favorite programming language wasn't provided with an appropriate syntax highlighter you could certainly cook one up, but Emacs is ultimately a self-hosting lisp-machine, and at bottom you can do much much more fiddling with it.
[131097880080] |There just aren't such tools as gnus or org-mode in vim, to name a few.
[131097880090] |In a nutshell, Emacs isn't just an editor, it's practically a god damned operating system.
[131097880100] |For manipulating text, I'd say they're exactly on par.
[131097890010] |There is a vi
available on every unix system (or almost), however you can't say this about any other editor.
[131097890020] |This is the #1 reason, imo, to learn and familiarize yourself with vi
(please note 'vi' not 'vim').
[131097890030] |I've never seen Emacs be available in a default install.
[131097890040] |I'm not saying don't use Emacs or this is the only reason to use Vim, but when you want to be able to use Unix systems that aren't yours... vi
is part of the universal language.
[131097900010] |I use both, although if I had to choose one, I know which one I would pick.
[131097900020] |Still, I'll try to make an objective comparison on a few issues.
[131097900030] |Available everywhere?
[131097900040] |If you're a professional system administrator who works with Unix systems, you need to know vi (not Vim), because it's available on all Unix systems and most Unix-like systems, whether desktop, server or embedded.
[131097900050] |For an ordinary user, this argument is irrelevant: Emacs is easily available for every desktop/server OS, and since it supports remote editing, it's enough to have it on your desktop machine anyway.
[131097900060] |Bloated?
[131097900070] |Emacs once stood humorously for “Eight Megabytes And Constantly Swapping”.
[131097900080] |Right now, on my machine, Google Chrome needs about as much RAM per tab as Emacs does for 100 open files, and I won't even mention Firefox.
[131097900090] |In the 21st century, Emacs bloat is just a myth.
[131097900100] |Feature bloat isn't a problem either.
[131097900110] |If you don't use it, you don't have to know it's there.
[131097900120] |Emacs features keep out of the way when you don't use them and the documentation is very well organized.
[131097900130] |Startup time: Vi(m) proponents complain about Emacs's startup time.
[131097900140] |Yes, Emacs is slow to start up, but this is not a big deal: you start Emacs once per session, then connect to the running process with emacsclient
.
[131097900150] |So Emacs's slow startup is mostly a myth.
[131097900160] |There's one exception, which is when you log in to a remote machine and want to edit a file there.
[131097900170] |Starting a remote Emacs is slower than starting a remote Vim.
[131097900180] |In some situations you can keep an Emacs running inside Screen.
[131097900190] |You can also edit remote files from within Emacs, but it does break the flow if you're in an ssh session in a terminal.
[131097900200] |(Since XEmacs 21 or GNU Emacs 23, you can open an Emacs window from a running X instance inside a terminal.)
[131097900210] |Initial learning curve: This varies from person to person.
[131097900220] |Michael Mrozek's graph made me chuckle.
[131097900230] |Seriously, I agree that Vim's learning curve starts steep, steeper than any other editor, although this can be lessened by using gvim.
[131097900240] |Since I've dispelled a couple of Emacs myths, let me dispel a vi myth: a modal editor is not hard or painful to use.
[131097900250] |It takes a little habit, but after a while it feels very natural.
[131097900260] |If I was to redesign vi(m), I'd definitely keep the modes.
[131097900270] |Asymptotic learning curve: Both Vim and Emacs have a lot of features, and you will keep discovering new ones after years of use.
[131097900280] |Productivity: This is an extremely hard topic.
[131097900290] |Proponents of vi(m) argue that you can do pretty much everything without leaving the home row, and that makes you more efficient when you need it most.
[131097900300] |Proponents of Emacs retort that Emacs has a lot of commands that are not frequently used, so don't warrant a key binding, but are damn convenient when you need them (obligatory xkcd reference).
[131097900310] |My personal opinion is that Emacs ultimately wins unless you have a typing disability (and even then you can configure Emacs to require only key sequences and not combinations like Ctrl+letter).
[131097900320] |Home row keys are nice, but they often aren't that much of a win because you have to switch modes.
[131097900330] |I don't think there's anything Vim can do significantly more efficiently than Emacs, whereas the converse is true.
[131097900340] |Customizability: Both editors are programmable, and there is an extensive body of available packages for both.
[131097900350] |However, Vim is an editor with a macro language; Emacs is an editor written in Lisp with some ad-hoc primitives.
[131097900360] |Emacs wins spectacularly when you try to do something that the authors just didn't think of.
[131097900370] |This doesn't happen every day, but it does accumulate over the years.
[131097900380] |More than an editor: Vim is an editor.
[131097900390] |Emacs is not just an editor: it's also an IDE, a file manager, a terminal emulator, a web browser, a mail client, a news client, ...
[131097900400] |Whether that's a good thing or a bad thing is a matter for debate.
[131097900410] |But you can use Emacs as a mere editor (see “feature bloat” above).
[131097900420] |As an IDE: Both Vim and Emacs have support for a lot of programming languages and other text formats.
[131097900430] |Beyond the basics such as syntactic coloring and automatic indentation, both have advanced IDE features such as code and documentation cross-reference lookups, assisted insertions and refactoring, integrated version control, and the ability to initiate a compilation and jump to the first error.
[131097900440] |One domain where Emacs is plain better than Vim is interaction with asynchronous subprocesses.
[131097900450] |That's when you start a long compilation and want to do something else inside the same editor instance while the compiler is churning.
[131097900460] |Or when you want to interact with a Read-eval-print loop — Emacs really shines at this, Vim only has clumsy hacks to offer.
[131097910010] |The main reason I don't use vi/vim is that it's modal.
[131097910020] |The main reason I do use vi is that it's available almost everywhere.
[131097920010] |I normally use vim, but they're both great editors.
[131097920020] |Learning to use vi was nasty, but I got through it and learned to like it.
[131097920030] |My most frustrating moments were when the caps lock key was on.
[131097920040] |You could try with gVim, but one of the biggest advantages with vi and emacs is the ability to do neat stuff while keeping your hands on the keyboard, and gVim is likely to keep you using the mouse.
[131097920050] |(Learning to play roguelike games at the same time gave me practice with the cursor movement keys, but caused me to try to move diagonally in documents sometimes.)
[131097920060] |Emacs is probably more approachable.
[131097920070] |It's modeless, and you aren't going to screw yourself up by hitting the caps lock key.
[131097920080] |The idea of controlling the editor through typing letters with the control key down shouldn't be too foreign to modern power users, although the actual keys to do things will seem wild and arbitrary to the typical Windows/Mac OSX user.
[131097920090] |Again, versions that allow you to use the mouse do you few favors in the long run.
[131097920100] |Both require some level of expertise to use effectively.
[131097920110] |Unlike, say, Notepad, you can't just sit down and edit.
[131097920120] |Both are configurable, although for my money writing extensions in the same Lisp the editor is written in makes a smoother experience.
[131097920130] |(Emacs, as normally distributed, isn't really an editor.
[131097920140] |It's a Lisp environment tailored for text processing, with a lot of pre-written software, including an editor.
[131097920150] |Hence the joke "Emacs makes a decent shell, but it could use a better editor.")
[131097920160] |I normally use vim because, after extensive training, it feels easier.
[131097920170] |This may be due to advantages in the mode system, where immense numbers of commands are available using one finger near the home rows, or "baby duck syndrome", which applies very much to editors: once you learn a good one, you generally stick to it.
[131097920180] |You won't go wrong using either.
[131097930010] |I use Vim/gVim.
[131097930020] |I used to use Emacs, but I found gVim to generally work faster on slower machines; plus, since vi is required by POSIX, it is available almost everywhere.
[131097930030] |When using Vim or gVim, I use the mouse a lot; its mouse support is great, I think.
[131097930040] |I started out using Emacs, because it was easier to use for a novice user.
[131097930050] |I found usage of nano to be quite error prone for some reason, and at some point I realized I'm much more comfortable with using vi.
[131097930060] |Right now, it's a mixture.
[131097930070] |I use Eclipse and gedit quite often, too.
[131097930080] |Vim, however, is still my favorite and most used editor.
[131097940010] |Since it hasn't been explicitly stated, I'll add that there is no better programming environment (lisp in a box, slime, etc.) than a slightly modified Emacs distro.
[131097940020] |All of my programming needs (99%) are taken care of from within Vim, but for all those lisp libraries and routines I write, I have to fire up Emacs to get anything productive done.
[131097950010] |I upvoted Shamster: this is the reason I switched from vim to emacs (that, and Proof General); the ease of having one buffer with, say, Python code and another with IPython running is very useful.
[131097950020] |With some quick key combinations you can send code to the interpreter; code you have used in the interpreter window can be cut and pasted with the emacs cut/paste system into the code window.
[131097950030] |Python here is just an example; I have used the same interaction with SML and Lisp.
[131097950040] |(And now I am finally getting to really learn emacs lisp to get the full flexibility out of it).
[131097950050] |Best, Bart
[131097960010] |I happen to think that the "vim is modal" comment above is incorrect.
[131097960020] |Vim has commands.
[131097960030] |You can do "11aNow is the time for all good men.." and end up with 11 identical new lines of text in your file.
[131097960040] |That's a command, not a mode.
[131097960050] |But there's actually a very basic difference in Vim commands vs Emacs commands.
[131097960060] |I'm not entirely sure I can describe it, but Eric Fischer incorporated Emacs-style line editing in a TTY driver 10+ years ago, and got a paper published about it:
[131097960070] |http://www.usenix.org/event/usenix99/full_papers/fischer/fischer.pdf
[131097960080] |He found that Emacs style line editing was fundamentally different than vi-style.
[131097960090] |So Emacs has the advantage that a lot of other things (bash, gnuplot, zsh, ksh, and some others I can't think of off the top of my head) all end up implementing Emacs-style line editing.
[131097960100] |I should note that I personally use Vim all the time.
[131097960110] |I'm only a very occasional Emacs user.
[131097970010] |How to type “smart quotes” (U+201C, U+201D)
[131097970020] |It's like this: “ (U+201C) ” (U+201D).
[131097980010] |In Gnome, you would press and hold down Ctrl+Shift, then type u201c.
[131097980020] |Of course, that won't work in Gnome Terminal if Ctrl+Shift+c is bound to Copy, in which case type it in GEdit and paste it in, or learn how to enter it in your editor of choice.
[131097990010] |If you have a Compose key (on some PC configurations, it's the key to the left of the right Ctrl key): Compose <" → “ and Compose >" → ”.
[131098000010] |I redefined my keyboard layout for good and I simply press alt-key + ; or ' to get: “ ”.
[131098000020] |Works in every desktop env.
[131098000030] |There are many ways to do it -- for example, you can use a character map app (present in Gnome and KDE for sure) to get any character you want.
[131098010010] |Only output printable chars OpenWrt
[131098010020] |perl is not a very good idea, because it's an OpenWrt router, so there's not enough space for it. "cat -v" doesn't work, because it doesn't support the "-v" option.
[131098010030] |Any ideas? :\
[131098010040] |Here's a bad text: http://pastebin.com/raw.php?i=zjMGHNq5
[131098010050] |Between the "review" and the "kde" word, there's a non-printable char.
[131098010060] |For example, I need to remove these kinds of chars from texts :\ Thank you!
[131098020010] |'tr' can be used for this.
[131098020020] |Normally, you could do the following:
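The command block was lost from this copy; based on the explanation that follows, it was presumably something along these lines (file names are illustrative, and this assumes GNU `tr` octal-range support):

```shell
printf 'review\001kde\n' > input.txt   # sample text with a control char
# Keep only tab (\11), newline (\12), carriage return (\15) and
# printable ASCII (\40-\176); -c complements the set, -d deletes.
tr -cd '\11\12\15\40-\176' < input.txt > output.txt
# output.txt now contains: reviewkde
```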
[131098020030] |This deletes any characters that aren't one of the ones listed.
[131098020040] |The \NNN notation represents the character in octal, this lets us get tab, newline, carriage return in addition to the other characters.
[131098020050] |Busybox's tr currently has a bug when it comes to using octal character representation and ranges.
[131098020060] |Instead, this might cover you:
[131098030010] |How do I work with GUI tools over a remote server?
[131098030020] |I have an ubuntu server running on EC2 (which I didn't install myself, just picked up an AMI).
[131098030030] |So far I'm using putty to work with it, but I wonder how to work on it with GUI tools (I'm not familiar with linux UI tools, but I want to learn).
[131098030040] |Silly me, I'm missing the convenience of Windows Explorer.
[131098030050] |I currently have only Windows at home.
[131098030060] |How do I set up GUI tools to work with a remote server?
[131098030070] |Should I even do this, or should I stick to command line?
[131098030080] |Do the answers change if I have a local linux machine to play with?
[131098040010] |You can use X11 forwarding over SSH; make sure the option
[131098040020] |X11Forwarding yes
[131098040030] |is enabled in /etc/ssh/sshd_config, and either enable X11 forwarding by hand with ssh -X remoteserver or add a line saying
[131098040040] |ForwardX11 yes
[131098040050] |to the relevant host entry in ~/.ssh/config
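For example, the client-side host entry might look like this (the host name is whatever you use for the remote server):

```
Host remoteserver
    ForwardX11 yes
```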
[131098040060] |Of course, that requires a working X display at the local end, so if you're using windows you're going to have to install something like XMing, then setup X11 forwarding in PuTTY as this page or this article demonstrates.
[131098040070] |ETA: Reading again and seeing your clarifications in the comments, this might suit your needs even better, as it will let you 'mount' SFTP folders as if they're regular network drives.
[131098050010] |Shadur covered how to enable X. Note that the /etc/ssh/sshd_config is at the server end, and the ~/.ssh/config is at the client end, so we are in general talking about two different machines.
[131098050020] |X forwarding will display your remote application on the local X display.
[131098050030] |So the two configs are to tell the remote and the local to allow this operation to happen, respectively.
[131098050040] |As to whether you should use X, it depends.
[131098050050] |You need to consider (at least) the following factors.
[131098050060] |What kind of bandwidth do you have?
[131098050070] |What is its speed?
[131098050080] |Is it metered?
[131098050090] |Is there a cap?
[131098050100] |If you have a very fast connection to the net and no restrictions, then X is more usable; otherwise it can be very slow.
[131098050110] |Bear in mind that in general X is a network hog; it is not bandwidth optimized (or whatever the right phrase is).
[131098050120] |What tools are you planning to use over X?
[131098050130] |Are there non-gui replacements/equivalents?
[131098050140] |If you give examples of the kinds of tools you are thinking of using, people could suggest alternatives if available.
[131098050150] |Also be aware that some well known tools come in both gui and commandline/console form.
[131098050160] |Eg. emacs, aptitude, reportbug.
[131098050170] |In general my recommendation is to use command line (apt, wget, rsync) or curses applications (like aptitude or mc) if they are available and do what you need.
[131098050180] |Such apps aren't necessarily worse than X apps; some of these are fine applications.
[131098050190] |E.g. the software of John Davis, jed and slrn, both console apps, shows his distinctive aesthetic, and they are works of art.
[131098050210] |BTW, running a X server on a Windows client to connect to a Linux server is an option, though not a particularly good one.
[131098050220] |If you have a local linux server, then the bandwidth issues go away, and X is a much more viable option.
[131098060010] |If you used Emacs, you could run a locally installed Emacs on your Windows, and do file editing, file & directory management (dired), version control, compilation, and also some other random work in the shell (M-x shell or M-x eshell), and probably some more things via TRAMP in your local Emacs.
[131098060020] |(Some easily findable demo videos that perhaps can make a person not so scared of the unknown Emacs, and TRAMP, etc.: 1, 2.)
[131098060030] |That's an illustration that remote X programs might not be the right solution for you.
[131098060040] |In contrast to the remote X clients way, the TRAMP way involves no heavyweight "graphical" traffic over the remote connection, it uses the ssh connection only to send directory listings, files, and command output back and forth.
[131098060050] |Say, if you want to work with "Windows Explorer", then there still won't be a "Windows Explorer" on the Ubuntu server, so you can't run it remotely.
[131098060060] |But if "Windows Explorer" had something like TRAMP as a feature (for remote access via SSH), you could continue to happily use your local "Windows Explorer", if that's all you need.
[131098070010] |You should consider sticking to the command line, because a) most servers don't even have a GUI installed, and b) all GUIs are kind of slow to use over networks.
[131098070020] |That being said, I would suggest having a look at VNC. There are native clients for Windows and servers for Linux, so you would not have to set up X11 on your Windows box.
[131098080010] |How to download all the tweets from a twitter user?
[131098080020] |I thought this was a method with which I could list all the tweets from a twitter user:
[131098080030] |But it seems that if the twitter user exceeds 200 tweets then it doesn't work.
[131098080040] |Does anyone have any idea how to download all the tweets from a twitter user?
[131098090010] |Problem with Crontab and PHP
[131098090020] |Hi,
[131098090030] |I'm with some problems in running the Crontab with PHP files.
[131098090040] |I have had these same PHP scripts running in Crontab for more than 6 months, but some days ago they mysteriously stopped executing the code correctly.
[131098090050] |The Crontab works but the script produces an error.
[131098090060] |The other interesting thing is that the same script runs without errors in the SSH Terminal.
[131098090070] |What should be the problem here?
[131098090080] |Is there a way to configure the Crontab to run with the same configuration of the SSH Terminal?
[131098090090] |Best Regards,
[131098100010] |The most common reason why a script works from the command line but not from a crontab is that the script depends on an environment variable.
[131098100020] |Crontabs only have a few environment variables set: typically only HOME, USER, SHELL (set to /bin/sh) and PATH (set to a system default).
[131098100030] |If you need more, you must define them in the crontab file, or source ~/.profile from the command.
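For instance, a crontab entry that pulls in your login environment before running the script might look like this (the schedule and script path are made up):

```
# m h dom mon dow   command
0 * * * *   . "$HOME/.profile"; /usr/bin/php /path/to/script.php
```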
[131098100040] |Perhaps you have two versions of PHP installed, one that comes first in your command-line $PATH and one that comes first in your system default $PATH, and the system default PHP changed recently.
[131098100050] |But it's impossible to make more than an educated guess since you don't say what error you're getting.
[131098110010] |Make sure that your crontab executes your scripts as the same user as the one you are logged in as.
[131098110020] |Some env var or file rights have probably changed for one of the users.
[131098120010] |How can I run a script immediately after connecting via SSH?
[131098120020] |I started to ask this question but answered it while I had it open.
[131098120030] |I'm going to post this question, follow it up with my solution and leave it open to other potential solutions.
[131098120040] |<backstory>
[131098120050] |I'm a tmux and vim user.
[131098120060] |I like remote vim work as I don't have to worry about Ubuntu development machines kirking out when a flash movie gives me a kernel panic.
[131098120070] |Running tmux means that open files are waiting for me after I reboot and I can carry on from where I left off.
[131098120080] |I've had problems with vim running in a tmux session when I connect like so:
[131098120090] |UTF-8 issues crop up that don't crop up when shelling in normally and just attaching to a tmux session manually.
[131098120100] |</backstory>
[131098120110] |So I want a reusable method of starting something on ssh login that doesn't affect any of the other things I have configured in my .zshrc (or your .bashrc if you still use bash) that may be required for my development environment, and that doesn't appear when I'm occasionally working locally on that same machine.
[131098130010] |I previously advised setting PermitUserEnvironment yes and adding an environment variable in your ~/.ssh/environment, until Eli Heady chipped in with a better suggestion in the comments below.
[131098130020] |Open your .zlogin (bash: .bash_profile etc.) and put the following:
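The snippet itself did not survive in this copy; a hedged reconstruction following the description given below (zsh syntax, since it goes in .zlogin):

```shell
# Hedged reconstruction: only prompt on SSH logins; MY_SSH_CONNECTION
# guards against the shell that tmux opens re-running this block.
if [[ -n $SSH_TTY && -z $MY_SSH_CONNECTION ]]; then
  if read -q "REPLY?Attach to tmux? [y/n] "; then
    MY_SSH_CONNECTION="yes" tmux attach
  fi
fi
```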
[131098130030] |Inspiration taken from How do I prompt for input in a Linux shell script?
[131098130040] |Note that I've used the .zlogin file, but you could use your .zshrc file; I like to keep my dotfiles tidy, and keeping this separate means I can use it on other machines.
[131098130050] |Replace the question with something appropriate for yourself and replace MY_SSH_CONNECTION="yes" tmux attach with whatever you wish to run at that point.
[131098130060] |Note how the script sets MY_SSH_CONNECTION="yes" before tmux attach to pass it through to tmux, since tmux will also open a shell that runs the very same script above; the variable prevents any recursion.
[131098140010] |Myself, I add this to my .bash_profile files:
[131098140020] |This gives me some time to abort reattaching to or creating a screen session.
[131098140030] |It won't work on 'ssh system command' formats (which does not call ~/.*profile).
[131098140040] |A shell function is set up to reattach if I abort.
[131098150010] |When you run ssh example.com, the ssh daemon starts a login shell for you, and the login shell reads your ~/.profile (or ~/.bash_profile or ~/.zprofile or ~/.login, depending on your login shell).
[131098150020] |When you specify a command to run remotely (with or without -t), the ssh daemon starts an ordinary shell, so your .profile is not read.
[131098150030] |Remedy:
[131098150040] |Most ssh daemons are configured to refuse transmitting environment variables except for LC_*.
[131098150050] |If the ssh daemon on example.com allows it, you can abuse a custom LC_* variable to start tmux automatically — put this in your ~/.profile:
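The snippet was lost from this copy; a hedged reconstruction of the idea (the default session name is a guess):

```shell
# If an (abused) LC_tmux_session variable came in over ssh, attach to
# the named tmux session (or a default), creating it if needed.
if [ "${LC_tmux_session+set}" = set ]; then
  session=${LC_tmux_session:-main}
  unset LC_tmux_session
  tmux attach -t "$session" || tmux new-session -s "$session"
fi
```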
[131098150060] |then log in with LC_tmux_session= ssh example.com or LC_tmux_session=session_name ssh example.com.
[131098150070] |This answer has more information about passing environment variables over ssh.
[131098160010] |You might consider running screen and running your terminal session there.
[131098160030] |You can then detach (^A^D) and reattach later (from a different client as well).
[131098160040] |It will make the problem with non-interactive initialization go away, as screen keeps full interactive terminal sessions (optionally login shells as well; see man screen(1) or ^A?).
[131098170010] |Where is CONFIG_COMPAT_VDSO in make menuconfig?
[131098170020] |I'm trying to compile a Linux Kernel to run light and paravirtualized on XenServer 5.6 fp1.
[131098170030] |I'm using the guide given here: http://www.mad-hacking.net/documentation/linux/deployment/xen/pv-guest-basics.xml
[131098170040] |But I'm stumped when I reach the option CONFIG_COMPAT_VDSO.
[131098170050] |Where is it exactly in make menuconfig?
[131098170060] |The site indicated that the options is in the Processor type and features group, but I don't see it:
[131098170070] |FYI, I'm configuring Gentoo's Kernel v2.6.36-hardened-r9
[131098180010] |As you had already said, it IS under "Processor Types and Features".
[131098180020] |You are compiling Gentoo's hardened kernel source, so the code would have undergone many patches.
[131098180030] |A quick search on Google returned this: Gentoo kernel VDSO.
[131098180040] |It looks like Gentoo has had it disabled for several versions already.
[131098180050] |Why don't you download directly from kernel.org?
[131098190010] |OpenWrt ssh host identification with dyndns
[131098190020] |If I ssh to my remote OpenWrt router (via DynDNS)
[131098190030] |I get the message
[131098190040] |Ok, I delete the old line in ~/.ssh/known_hosts and add the new one.
[131098190050] |But my password wasn't accepted (even though I copy+pasted the good password)
[131098190060] |But: if someone reboots the remote router it says:
[131098190070] |Ok, again, I delete the line in ~/.ssh/known_hosts, add the new one, and presto!
[131098190080] |I can log in!
[131098190090] |Why?
[131098190100] |Is this because a dyndns update failed, and I'm trying to log in to the wrong IP?
[131098190110] |And if the router is rebooted, the dyndns IP will be updated, so that the IP is correct, and I can log in?
[131098190120] |Is that why it says “host identification failed”?
[131098190130] |It happened for the 3rd time...
[131098190140] |I don't know what exactly is going on.
[131098200010] |Your theory is basically right, in that your issue is caused by dyndns.
[131098200020] |More precisely, your issue is caused by having a dynamic IP address; dyndns makes things easier but not completely seamless.
[131098200030] |When your router receives a new dynamic IP address (because it rebooted, or because your provider disconnected you for any reason¹) [step 1, step 4], you need to log in to the new IP.
[131098200040] |When your router connects, it sends a message to your dyndns provider notifying it of the address change.
[131098200050] |When you run ssh johnny8888.dyndns.example.com, the ssh client looks up the IP address corresponding to johnny8888.dyndns.example.com.
[131098200060] |DNS information takes some time to propagate, because it is heavily cached.
[131098200070] |For typical dyndns use, the delay is a few minutes.
[131098200080] |So if you try to connect soon after an IP address change, you might still reach the old IP address [step 3], which is now attributed to a different machine.
[131098200090] |If this machine runs an ssh server, you get the remote host identification changed warning.
[131098200100] |Add CheckHostIP no to your ~/.ssh/config, under the section for your router.
[131098200110] |Then ssh won't check the key associated with your IP address (which is useless since you have a dynamic address), only the key associated with your host name (which won't change).
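The entry would look something like this (using the host name from the example above):

```
Host johnny8888.dyndns.example.com
    CheckHostIP no
```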
[131098200120] |Note that most of the time, you will not get the remote host identification change warning.
[131098200130] |You only get it if you try to log into somebody else's machine, or tried to in the past.
[131098200140] |(Your tries would be accidental, when you were unlucky to try connecting after an IP address change and before the DNS update had propagated.)
[131098200150] |¹ Many ISPs disconnect each client every day or every few days, because this facilitates their load balancing.
[131098200160] |You can reconnect instantly, but may get a different IP address.
[131098210010] |Hi johnny.
[131098210020] |Yes, I believe you answered your own question.
[131098210030] |See Gilles' answer above.
[131098210040] |But most importantly, I'd recommend making your dyndns updates run more frequently.
[131098210050] |If possible, it would be ideal for a dyndns update to run as soon as your IP changes, but I'm not sure if OpenWrt provides this functionality.
[131098220010] |Drawing a state machine from logs
[131098220020] |I have logs in the following format:
[131098220030] |Now, I would like to graphically reconstruct the state machine, but I'm kind of hesitating on how to approach this problem.
[131098220040] |Cutting out the transitions shouldn't be a problem, but I'm not sure how to reconstruct a graphical representation from them.
[131098230010] |I'm not positive I know what you mean, but are you looking for something like this?
[131098230020] |I used Graphviz, which takes text input files describing transitions, and figures out the graph automatically.
[131098230030] |Here's the exact command:
[131098230040] |Explanation
[131098230050] |sed 's/-/_/g' input -- Dot doesn't like hyphens in node names, so I converted them to underscores
[131098230060] |gawk -- Standard awk doesn't have the match function that gawk has; you can do the string manipulation any way you like though (e.g. perl is another good choice)
[131098230070] |BEGIN {print "digraph g {"} -- Dot specifications start with this line (the name of the graph, "g", doesn't really matter)
[131098230080] |END {print "}"} -- Ends the digraph g started in the BEGIN block
[131098230090] |match($0, /from ([^ ]*) to ([^ ]*) \((.*)\)$/, groups) -- A regular expression that matches your log file format; it stores the results in the groups variable
[131098230100] |print groups[1] " -> " groups[2] " [label = \"" groups[3] "\"];" -- Outputs a dot-compatible line (for example, A -> B [label = "C"]; will show two nodes, A and B, with a transition between them labeled C)
[131098230110] |dot -Tpng output.dot >output.png -- Tells graphviz to convert the dot file to a PNG
[131098230120] |Resulting dot file
[131098230130] |The PNG you get when running that file through dot is above.
[131098240010] |How to cd to a Windows file share?
[131098240020] |At work, I spend a lot of time manipulating files on a networked computer that's running SME Server (but that's set up for Windows filesharing, if that somehow makes a difference).
[131098240030] |I have been wondering how to cd to the network drive's root from bash so that I don't have to keep calling up Finder / nautilus every time I want to copy a file.
[131098240040] |Any suggestions?
[131098240050] |In Ubuntu, I connect to the drive as a Windows share via Places - Connect to Server.
[131098240060] |In OSX, well, I logged in to the drive once, and it just shows up in Finder.
[131098250010] |An easy way is to mount the share using SAMBA.
[131098250020] |After installing samba you can mount the share as follows:
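The command itself was lost from this copy; a typical invocation looks something like this (the server, share, mount point and user name are placeholders, and it needs root plus the cifs mount helper installed):

```
sudo mount -t cifs //server/share /mnt/share -o user=winuser
```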
[131098250030] |There is a guide for openSUSE that appears via Google.
[131098250040] |See man mount.cifs for more options.
[131098260010] |For OS X look for your share name under /Volumes (it may have a "-digit" at the end if you have many mounts with the same name).
[131098260020] |The same goes for mounted CD/DVDs and disk images.
[131098270010] |While technically using Samba (Nautilus uses gvfs, which uses FUSE to mount the SMB share using libsmbclient), you aren't actually mounting the filesystem as you would with the mount command.
[131098270020] |When you use Nautilus with SMB mounts, a background process gvfsd-smb is started.
[131098270030] |You can access the mountpoint in ~/.gvfs/, where there's a directory in there with a name based on the mount settings you used to connect.
[131098270040] |Check out /proc/mounts to see the details of the gvfs-fuse-daemon's mountpoint.
[131098310010] |How to overwrite target files with mv?
[131098310020] |I have a ton of files and dirs in a subdirectory I want to move to the parent directory.
[131098310030] |There are already some files and dirs in the target directory which need to be overwritten.
[131098310040] |Files that are only present in the target should be left untouched.
[131098310050] |Can I force mv
to do that?
[131098310060] |It (mv * ..) complains
[131098310070] |What am I missing?
[131098320010] |You will have to copy them to the destination and then delete the source, using the commands cp -r * .. followed by rm -rf *.
[131098320020] |I don't think you can "merge" directories using mv.
[131098330010] |Here's a script that moves files from under /path/to/source/root to the corresponding path under /path/to/destination/root.
[131098330020] |If a directory exists in both the source and the destination, the contents are moved-and-merged recursively.
[131098330030] |If a file or directory exists in the source but not in the destination, it is moved.
[131098330040] |Any file or directory that already exists in the destination is left behind.
[131098330050] |(In particular merged directories are left behind in the source.
[131098330060] |This is not easy to fix.)
[131098330070] |Beware, untested code.
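The script itself is missing from this copy; here is a hedged sketch that follows the behaviour described above (the paths in the example call are placeholders — try it on scratch data first):

```shell
#!/bin/sh
# Move-and-merge sketch: recreate the source's directory tree under the
# destination, then move each file unless the destination already has
# one at that path (existing destination files are left untouched, and
# their source counterparts are left behind).
merge_move() {
  src=$1 dst=$2
  ( cd "$src" || exit 1
    # Recreate every directory of the source tree in the destination.
    find . -type d -exec sh -c 'mkdir -p "$0/$1"' "$dst" {} \;
    # Move each non-directory unless the destination path already exists.
    find . ! -type d -exec sh -c '[ -e "$0/$1" ] || mv "$1" "$0/$1"' "$dst" {} \;
  )
}
# Example: merge_move /path/to/source/root /path/to/destination/root
```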
[131098340010] |rsync would probably be a better option here.
[131098340020] |It's as simple as rsync -a subdir/ ./.
[131098340030] |My test tree in filename:contents format:
[131098340040] |Running rsync:
[131098340050] |Gives:
[131098340060] |And then, to emulate mv, you probably want to remove the source directory:
[131098340070] |Giving:
[131098340080] |If this is wrong, can you please provide a similar example (e.g. using my test tree from near the top of this answer) with the desired result?
[131098350010] |Best way run a command on each file in a directory tree
[131098350020] |There appear to be a number of ways to do this, including loops in shell script, find, and xargs.
[131098350030] |Which of these is best, and which is most portable?
[131098360010] |for the current directory as the root of the tree or in general:
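The commands were stripped from this copy; presumably something like the following, with `wc -c` standing in for whatever command you want to run on each file:

```shell
mkdir -p demo/sub && printf abc > demo/sub/f   # tiny sample tree
find demo -type f -exec wc -c {} \;            # runs wc -c on every file
# For the current directory as the root:  find . -type f -exec wc -c {} \;
# For an arbitrary root:                  find /path/to/tree -type f -exec wc -c {} \;
```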
[131098370010] |If portability is an issue I would stay away from shell-specific ways of doing it (there are lots of people using lots of different shells, but in general find and xargs are really basic tools that no one would dare change in a radical way).
[131098370020] |Using basic arguments of find and xargs should give you a more portable solution.
[131098380010] |If grouping arguments together is acceptable, find | xargs will probably give better performance, since it will execute the command a lot fewer times than find -exec.
[131098380020] |If you want to execute the command each time for each file or execute in the subdirectory of the file you should probably use find -exec or -execdir.
[131098380030] |As a rule, it's preferable to stay away from shell-specific loops; find & xargs are enough for most scenarios.
[131098390010] |Works best for me.
[131098400010] |Use the -print0 option to find and the -0 option to xargs if you have file or directory names with spaces:
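Concretely, the pipeline looks like this, with `cat` standing in for the command you actually want to run:

```shell
mkdir -p demo2 && printf hello > 'demo2/name with spaces.txt'
# -print0 emits NUL-separated names; -0 splits on NUL; -r skips the run
# entirely if there is no input.
find demo2 -type f -print0 | xargs -0 -r cat   # prints: hello
```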
[131098400020] |The -print0 option to find prints out the filenames as a NUL-separated rather than whitespace separated list, while the -0 option to xargs instructs it to break its input on NUL rather than whitespace.
[131098400030] |Since NUL is one of the characters that is not allowed in Unix filenames, there is no way for it to be mistaken for part of a filename.
[131098400040] |The -r option is there so that xargs won't give you an error if it has no input.
[131098410010] |Make sure the version of the command you're using doesn't already have a flag for recursive operation. :)
[131098420010] |Books/Guides for Securing a Server
[131098420020] |I have a website idea I want to build and launch, and I'm thinking of getting a small VPS to host it (I like Linode for their price, and they seem to be widely recommended).
[131098420030] |The problem is that I'm really very new to Linux in general, and I'm pretty broke so I can't afford help/a managed server.
[131098420040] |I downloaded Ubuntu Lucid Server and have it running in a VBox, to help me learn and to act as a close approximation to the eventual production server.
[131098420050] |I'm committed to learning, but I'm pretty afraid I'm going to miss something dumb and get compromised.
[131098420060] |As such, I'd like to know of any good guides/books explaining the main points of securing a LAMP server.
[131098420070] |I've worked through the basic stuff in Linode and Slicehost's respective tutorials, but I want to be as prepared as possible.
[131098420080] |The site isn't written yet, and I'm likely to deploy to a shared host first as a trial run, so I do have time to learn at least the basics.
[131098420090] |I know to keep everything up to date, to configure iptables to only allow the holes I need (which appears to be just TCP port 22 for ssh/scp/sftp - I'll change it from the default port for the (very minor) security-through-obscurity bonus - and port 80 for http) - though I am confused by some tutorials which say to block ICMP, since I don't know why I wouldn't want to respond to ping - and to only install software I need / remove software I don't need.
[131098420100] |Any advice beyond this, and especially any recommendations for guides, are well appreciated.
[131098420110] |Thanks
[131098430010] |Only a partial answer, but I've written an IPtables tutorial which may be of some use to you. http://www.ellipsix.net/geninfo/firewall/index.html
[131098430020] |Besides IPtables, you'll need to configure SSH and Apache, but those come with default configurations that are sort of secure already, so there are only a couple things you'll probably have to change.
[131098430030] |Of course, as you add more features to your website, you'll have to keep the configuration up to date accordingly.
[131098430040] |Somebody else can probably recommend good references for that.
[131098430050] |In fact, I'll make this community wiki so that if anyone else feels like adding in links, they can do so.
[131098440010] |As you're already using Ubuntu, I recommend their Server Guide, which offers a basic overview of a common set of default services.
[131098440020] |Also have a look at Linux Server Security from O'Reilly.
[131098440030] |Actually, just search Amazon for quite a few offerings.
[131098440040] |Googling server hardening checklist seems to return some good, practical, ways of quickly figuring out whether something's blatantly wrong with your setup.
[131098440050] |Finally, head over to serverfault's security section and ask away.
[131098440060] |Edit: also, ICMP should be blocked based on message type.
[131098440070] |See ICMP Packet Filtering for details.
[131098450010] |Read the Gentoo Security Handbook.
[131098450020] |Most of it should apply to any Linux distribution.
[131098460010] |Network via ieee1394 is unreachable (PC and laptop)
[131098460020] |Hi all.
[131098460030] |I decided to connect my PC to the laptop via firewire.
[131098460040] |All interfaces are up, but no packets can be sent/received.
[131098460050] |I'm using Linux Slackware64-current (kernel 2.6.37.4). The laptop (Dell Vostro 3700) has this adapter:
[131098460060] |The old ieee1394 stack was removed in newer kernels, which is why I use the new one:
[131098460070] |dmesg shows:
[131098460080] |The first line is fine, but the second one is strange to me. What does it mean? Maybe it is the source of the problem? OK, let's configure the interface:
[131098460090] |Same configuration steps were made on my PC.
[131098460100] |OS is the same.
[131098460110] |FireWire device -
[131098460120] |dmesg showed:
[131098460130] |interface configuration -
[131098460140] |Now let's ping the PC from the laptop -
[131098460150] |Yes, the cable is connected.
[131098460160] |The laptop has a 4-pin port, and the PC has a 6-pin port.
[131098460170] |AFAIK the 6-pin connector uses two extra pins for power supply.
[131098460180] |I have the corresponding cable, so I don't think the problem is there.
[131098460190] |That's all. Thanks.
[131098470010] |How do I determine LVM mapping on a physical device?
[131098470020] |I have a server (SUSE 11.5) that has two disks.
[131098470030] |There is only one volume group (vg01).
[131098470040] |How do I determine the physical device on which that vg exists?
[131098480010] |I think
[131098480020] |shows you the physical device(s) corresponding to all your volume groups.
[131098480030] |Inter alia, my system shows, for example
[131098480040] |This is the only one of my VGs that corresponds to a raw partition.
[131098480050] |The others are on top of software raid devices.
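As a concrete sketch, the standard lvm2 tools show this mapping (output depends on your system, and the commands typically need root; the block below is guarded so it degrades gracefully):

```shell
# pvs (from the lvm2 package) lists each physical volume together with the
# volume group it belongs to; "vgdisplay -v vg01" shows the same per VG.
if command -v pvs >/dev/null 2>&1; then
    pvs -o pv_name,vg_name 2>/dev/null || echo "pvs needs root or no LVM present"
else
    echo "lvm2 tools not installed"
fi
```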
[131098490010] |Printing unique lines
[131098490020] |Is there some better solution for printing unique lines other than a combination of sort and uniq?
[131098500010] |Some (most?) versions of sort have a -u flag that does the uniq part directly.
[131098500020] |Might be some line length restrictions depending on the implementation though, but you had those already with plain sort|uniq.
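For example, a small sketch of the equivalence (the temporary file name is generated with mktemp):

```shell
# sort -u deduplicates while sorting, equivalent to sort | uniq
f=$(mktemp)
printf 'b\na\nb\n' > "$f"
sort -u "$f"            # prints: a, then b
sort "$f" | uniq        # identical output
rm -f "$f"
```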
[131098510010] |To print each identical line only once, in any order:
[131098510020] |To print only the unique lines, in any order:
[131098510030] |To print each identical line only once, in the order of their first occurrence: (for each line, print the line if it hasn't been seen yet, then in any case increment the seen counter)
[131098510040] |To print only the unique lines, in the order of their first occurrence: (record each line in seen
, and also in lines
if it's the first occurrence; at the end of the input, print the lines in order of occurrence but only the ones seen only once)
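The four variants described above are commonly written with sort/uniq and awk; here is a sketch (the sample data is invented):

```shell
f=$(mktemp)
printf 'a\nb\na\nc\n' > "$f"              # "a" appears twice

sort -u "$f"                              # each identical line once, any order
sort "$f" | uniq -u                       # only the lines that occur exactly once
awk '!seen[$0]++' "$f"                    # each line once, first-occurrence order
# only the lines that occur exactly once, in first-occurrence order:
awk '{ if (!seen[$0]++) lines[++n] = $0 }
     END { for (i = 1; i <= n; i++) if (seen[lines[i]] == 1) print lines[i] }' "$f"
rm -f "$f"
```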
[131098520010] |Does Perl work for you?
[131098520020] |It can keep the lines in the original order, even if the duplicates are not adjacent.
[131098520030] |You could also code it in Python, or awk
.
[131098520040] |Given input file:
[131098520050] |It yields the output:
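The kind of Perl one-liner this answer alludes to is commonly written as follows (a sketch; assumes perl is installed):

```shell
f=$(mktemp)
printf 'a\nb\na\nc\n' > "$f"
# print each line only the first time it is seen, preserving input order
perl -ne 'print unless $seen{$_}++' "$f"
rm -f "$f"
```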
[131098530010] |Can I redirect stdout from a background application after starting it?
[131098530020] |Possible Duplicate: redirect output of a running program to /dev/null
[131098530030] |Is it possible to change stdout after starting something as a background application in the command line?
[131098530040] |Say I run test.py:
[131098530050] |and then do:
[131098530060] |Can I redirect the output to /dev/null
somehow?
[131098530070] |Relates to: redirect output of a running program to /dev/null
[131098530080] |With an answer on this site: redirect output of a running program to /dev/null by Mike Perdide
[131098530090] |It's also a direct duplicate of a StackOverflow question: Redirect STDERR / STDOUT of a process AFTER it's been started, using command line?
[131098540010] |Not unless you taught the program to do so somehow (say, on receipt of a particular signal such as SIGUSR1 it reopens sys.stdout and sys.stderr on /dev/null).
[131098540020] |Otherwise, once it's been started you have very little control over it.
[131098550010] |How to investigate cause of total hang?
[131098550020] |My Arch machine sometimes hangs, suddenly not responding in any way to the mouse or the keyboard.
[131098550030] |The cursor is frozen.
[131098550040] |Ctrl-Alt-Backspace won't stop X11, and Ctrl-Alt-Del does exactly nothing.
[131098550050] |The cpu, network, and disk activity plots in conky and icewm stop updating.
[131098550060] |In a few minutes the fan turns on.
[131098550070] |The only way to make the computer do anything at all is to turn off power.
[131098550080] |When it boots up, the CPU temperature monitors show 70 to 80C.
[131098550090] |Before the hang, I was usually doing low-intensity activity like web surfing, with temperatures around 50C.
[131098550100] |The logs show nothing special compared to a normal shutdown.
[131098550110] |Memory checker runs fine with zero defects.
[131098550120] |How can I investigate why it hung up?
[131098550130] |Is there extra information I can find for a clue?
[131098550140] |Is there anything less drastic than power-off to get some kind of action, if only some limited shell or just beeps, but might give a clue?
[131098550150] |The machine is a Gateway P6860 17" laptop (bulky but powerful) and it's running Arch 64bit, up to date (as of March 2011).
[131098550160] |I had Arch for a long time w/o this problem, switched to Ubuntu for about a week then retreated back to a fresh install of Arch.
[131098550170] |That's when the hangings started.
[131098560010] |Regarding the freeze, there are a few options:
[131098560020] |using a serial port if your box has one to get the dump there by adding console=ttyS0
to the boot options, as described here.
[131098560030] |You need a second machine with a serial port and a null modem cable to catch the dump file.
[131098560040] |using netconsole to get the dump over the network, see here: http://www.mjmwired.net/kernel/Documentation/networking/netconsole.txt
[131098560050] |Using kexec/kdump this way you get a local dump: http://www.mjmwired.net/kernel/Documentation/kdump/kdump.txt
[131098560060] |Regarding the clean power-off problem, I suggest you use the magic SysRq key to 'S'ync the discs, 'U'mount them, and then re'B'oot the box (the letters are the ones you should type along with Alt-SysRq).
[131098560070] |Edit: If you post the oops/trace to the lkml, you should use a recent (preferably the latest) version of the kernel and no proprietary modules.
[131098570010] |Frederik's answer involving magic SysRq and kernel dumps will work if the kernel is still running, and not truly hung.
[131098570020] |The kernel might just be busy-looping for some reason.
[131098570030] |The fact that it doesn't respond to Ctrl-Alt-Del tells me that probably isn't the case, and that the machine is locking up hard.
[131098570040] |That means hardware failure, or something closely related, like a bad driver.
[131098570050] |Your memory check test is good, if you let it run long enough.
[131098570060] |You should also try other tools to stress the system, like StressLinux.
[131098570070] |Long-running benchmarks are good, too.
[131098570080] |Another thing to try is booting the system with an Ubuntu live CD and trying to use the system as normal.
[131098570090] |If returning to Ubuntu temporarily like that doesn't cause the problem to recur, there's a good chance it's not actually broken hardware, but one of the related things like a bad driver or incorrectly configured kernel.
[131098570100] |It is quite possible that a more popular distribution like Ubuntu could have a more stable kernel configuration than one like Arch, simply due to the greater number of machines it's been tried on during the distro's test phase.
[131098580010] |ssh fails for an unix server due to dual layer of authentication
[131098580020] |I have a server which has dual authentication layer.
[131098580030] |Meaning first a user has to login to the box using his userid and then use a group id to access the development folder (say).
[131098580040] |Now I have a script which uses ssh to copy a file to the development folder.
[131098580050] |What should be the command for this?
[131098580060] |Normal ssh commands such as ssh $user@host $cmd, where $cmd is something like cp ~user/Test.txt ~grp/, won't work because we are using the user's id to copy to the group's directory, which is not allowed.
[131098580070] |And if we use group id to login it will be denied permission.
[131098580080] |Any suggestions?
[131098590010] |Could the problem be that you use ~/user and ~/grp (both being directories relative to the home directory of the logged-in user) instead of ~user and ~grp (meaning two different users' home directories)?
[131098590020] |It's a bit hard to tell from your problem description what the problem really is. Normally, it sounds like "user" should be part of the same group that "grp" is, and the directory should be group writable.
[131098590030] |Stay away from the r-commands.
[131098590040] |SSH can do everything better and more securely.
[131098600010] |The best solution would be to use groups to manage groups — that's what they're for.
[131098600020] |Instead of giving the team the password to the grp
account, make grp
a group and make sure all files that should be accessible to the team are owned by that group and have group read and write permissions (as applicable).
[131098600030] |Using groups has many advantages, including instantly solving your immediate problem.
[131098600040] |If you can't do that because it's a policy set by your boss and he's stubborn, can you at least arrange to use something other than password authentication to get to the su
account?
[131098600050] |For example, if you can add your ssh public key to the team account's ~/.ssh/authorized_keys
, you'll be able to ssh directly into it.
[131098600060] |(This is not a good way of managing authorizations, but you're abusing the system already by not using groups.)
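The setgid-directory part of this setup can be sketched as follows (the group and user names in the comments are hypothetical, and the commented commands need root):

```shell
# As root, one would typically do something like:
#   groupadd devteam
#   usermod -aG devteam alice
# Then make the shared directory group-writable, with the setgid bit so
# that new files created inside inherit the directory's group:
dir=$(mktemp -d)
chmod 2775 "$dir"
stat -c '%A' "$dir"     # the "s" in drwxrwsr-x is the setgid bit
```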
[131098610010] |Which version of X introduced feature Y?
[131098610020] |A big part of my daily job is developing software for use on machines with different versions of the same software, like bash
, find
, and grep
.
[131098610030] |When encountering a feature which would be useful for example to simplify code, it is important to know whether this feature is available in the oldest installed tools.
[131098610040] |For critical stuff, it would also be useful to know whether this feature was new or has existed for years in the oldest installed tools.
[131098610050] |What are quick ways to answer this authoritatively for Linux tools, especially the GNU Core Utils?
[131098610060] |Some possibilities in order of decreasing accuracy:
[131098610070] |Binary binary search (sic) by running the different versions is of course the ultimate answer, but is by far the most time consuming.
[131098610080] |Older installations are often not available for security reasons.
[131098610090] |Reading the code is almost as good, but it can be prohibitively time consuming if the feature is vaguely named, the name doesn't correspond directly to variable/function/object names, or it was implemented before it was enabled.
[131098610100] |Change logs, when available, usually connect feature changes to software versions.
[131098610110] |Commit logs can provide hints, but they do not tell you which version a change will be included in.
[131098610120] |man pages rarely mention dates.
[131098610130] |The same applies to Googling, and you'd also have a hard time excluding all the non-authoritative sources.
[131098620010] |Why not use the source control system for X to search for feature Y?
[131098620020] |The source repo is the best way to identify when a particular feature was introduced.
[131098620030] |For coreutils you can head to http://git.savannah.gnu.org/cgit/coreutils.git and search for keywords related to Y in log messages or particular lines of code you identified as belonging to Y. You can do that on the web interface directly, or even better, clone the source repo to your station and search using git-bisect, git-blame and git-log --grep.
[131098620040] |Then use git-describe to find out which tag is closest to a specific commit.
[131098620050] |Tags are used for version numbers mostly, thus it will give you the version that introduced the commit.
[131098620060] |You can adapt the above method depending on X and its source control.
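The tag-lookup step can be sketched with a throwaway repository (the repo, commit messages, and tag below are invented for illustration):

```shell
# Build a demo repo: one commit introducing "feature Y", tagged v1.0,
# followed by later work; then ask which tag first contains the commit.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'implement feature Y'
feat=$(git rev-parse HEAD)
git tag v1.0
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'later work'
git describe --contains "$feat"    # names the first tag containing the commit
```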
[131098630010] |Generally, there's a changelog.
[131098630020] |In fact, this (or other "prominent notices" of changes) is required by the GPL!
[131098630030] |(At least, effectively so for anything with multiple contributors — see GPLv2 section 2a.)
[131098630040] |For the GNU coreutils package — and for pretty much everything else from the GNU project directly — this file is definitely the first place to look, and should answer your question 95% of the time.
[131098640010] |iptables rule for local network with free internet blocking unrequested connection from internet to server ?
[131098640020] |I have a home server (with slackware 13) with a eth0 for the local network and a eth1 for the internet (cable modem with dynamic ip).
[131098640030] |While I do want to learn more about iptables, I am still in the process of doing so; I need some rules in place now, before I've learned to write them myself, as I don't want my server to be compromised at this stage.
[131098640040] |I currently have a VM where I play with my rules, and I would appreciate it if someone could put together iptables rules that do the following:
[131098640050] |Allow all users from my DHCP server on eth0 to have full access to the internet and the server; in other words, eth0 should have no restrictions within the network or toward the server.
[131098640060] |Allow all users to be able to host a server; for example, if they are playing a game such as Warcraft and they create a game, the firewall should allow those connections to be negotiated.
[131098640070] |Block any requests from the internet to the server unless the connection was initiated by the server or a user on the network.
[131098650010] |So, basically your Linux box acts as a firewall?
[131098650020] |First, enable IP forwarding.
[131098650030] |Then, add some forwarding rules:
[131098650040] |Secure the FORWARD chain:
[131098650050] |Create a NAT rule:
[131098650060] |Finally, don't forget to check that you have a default route:
[131098650070] |You should see something like:
[131098650080] |If not, add one:
[131098650090] |(Usually the DHCP client will automatically add one)
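A sketch of what such a ruleset might look like in iptables-save format (eth0 = LAN and eth1 = internet, as in the question; this is illustrative, not a drop-in config, and incoming connections for games hosted inside would additionally need DNAT/port-forwarding rules, omitted here):

```
# Enable forwarding first:  echo 1 > /proc/sys/net/ipv4/ip_forward
*filter
:FORWARD DROP [0:0]
# LAN machines may initiate anything:
-A FORWARD -i eth0 -o eth1 -j ACCEPT
# Replies to connections initiated from inside are allowed back in:
-A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
*nat
# Masquerade LAN traffic behind the dynamic cable-modem address:
-A POSTROUTING -o eth1 -j MASQUERADE
COMMIT
```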
[131098660010] |Convince apt-get *not* to use IPv6 method
[131098660020] |The ISP I work at is setting up an internal IPv6 network in preparation for eventually connecting to the IPv6 internet.
[131098660030] |As a result, several of the servers in this network now try to connect to security.debian.org via its IPv6 address by default when running apt-get update
, and that results in having to wait for a lengthy timeout whenever I'm downloading updates of any sort.
[131098660040] |Is there a way to tell apt to either prefer IPv4 or ignore IPv6 altogether?
[131098670010] |How about adding a line in /etc/hosts
overriding the relevant addresses? e.g.,
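For example (192.0.2.1 is a documentation-range placeholder; substitute the host's real IPv4 address, e.g. from `getent ahostsv4 security.debian.org`):

```
# /etc/hosts
192.0.2.1    security.debian.org
```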
[131098680010] |You could work around this by setting up a DNS proxy server that dropped ip6 responses.
[131098690010] |You could setup apt-cacher-ng on a spare machine to act as a proxy/cache for all of your hosts.
[131098690020] |You can force the configuration to only use specific hosts or use the /etc/hosts trick suggested by @badp on that one machine.
[131098690030] |Once you have apt-cache-ng setup you just need to drop the following line (with IP address/hostname altered to point at your cacher machine) in /etc/apt/apt.conf.d/90httpproxy
[131098690040] |I use that setup to reduce bandwidth usage but it should work around your problem.
[131098690050] |Unfortunately I'm not aware of a way to directly disable ipv6 lookups for apt-get itself.
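The proxy line referred to above is typically of this form (the hostname is illustrative; 3142 is apt-cacher-ng's default port):

```
Acquire::http::Proxy "http://apt-cacher-host:3142";
```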
[131098700010] |Disabling ipv6 altogether on your machine until it is actually ready?
[131098710010] |How does a kernel mount the root partition?
[131098710020] |My question is with regards to booting a Linux system from a separate /boot partition.
[131098710030] |If most configuration files are located on a separate / partition, how does the kernel correctly mount it at boot time?
[131098710040] |Any elaboration on this would be great.
[131098710050] |I feel as though I am missing something basic.
[131098710060] |I am mostly concerned with the process and order of operations.
[131098710070] |Thanks!
[131098710080] |EDIT: I think what I needed to ask was more along the lines of the dev file that is used in the root kernel parameter.
[131098710090] |For instance, say I give my root param as root=/dev/sda2.
[131098710100] |How does the kernel have a mapping of the /dev/sda2 file?
[131098720010] |Grub mounts the /boot
partition and then executes the kernel.
[131098720020] |In Grub's configuration, it tells the kernel what to use as the root device.
[131098720030] |For example in Grub's menu.lst
:
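An illustrative menu.lst stanza (device and file names are assumptions):

```
title   Linux
root    (hd0,0)                        # GRUB's name for the /boot partition
kernel  /vmlinuz root=/dev/sda2 ro     # root= tells the kernel its root device
initrd  /initrd.img
```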
[131098730010] |Linux initially boots with a ramdisk (called an initrd
, for "INITial RamDisk") as /
.
[131098730020] |This disk has just enough on it to be able to find the real root partition (including any driver and filesystem modules required).
[131098730030] |It mounts the root partition onto a temporary mount point on the initrd
, then invokes pivot_root(8)
to swap the root and temporary mount points, leaving the initrd
in a position to be umount
ed and the actual root filesystem on /
.
[131098740010] |C'mon, GRUB doesn't "mount" /boot; it just reads menu.lst and some modules, and it isn't part of the Linux kernel either.
[131098740020] |When you call the kernel, you will pass a "root" argument with the root partition.
[131098740030] |At worst, the kernel knows that just /boot has been mounted (LOL).
[131098740040] |Next: geekosaur is right, Linux uses an initial ramdisk in compressed image format, and then mounts the real root filesystem by calling pivot_root
.
[131098740050] |So Linux starts running from an image, and then from your local disk drive.
[131098750010] |The boot loader, be it grub or lilo or whatever, tells the kernel where to look with the root=
flag, and optionally loads an initial ramdisk into memory via initrd
before booting the kernel.
[131098750020] |The kernel then loads, tests its hardware and device drivers and looks around the system for what it can see (you can review this diagnostic info by typing dmesg
; nowadays it likely scrolls by way too fast to see) then attempts to mount the partition mentioned in the root=
parameter.
[131098750030] |If an initrd is present, it's mounted first and any modules/device drivers on it are loaded and probed out before the root filesystem is mounted.
[131098750040] |This way you can compile the drivers for your hard drives as modules and still be able to boot.
[131098760010] |Sounds like you're asking how does the kernel "know" which partition is the root partition, without access to configuration files on /etc.
[131098760020] |The kernel can accept command line arguments like any other program.
[131098760030] |GRUB, or most other bootloaders can accept command line arguments as user input, or store them and make various combinations of command line arguments available via a menu.
[131098760040] |The bootloader passes the command line arguments to the kernel when it loads it (I don't know the name or mechanics of this convention but it's probably similar to how an application receives command line arguments from a calling process in a running kernel).
[131098760050] |One of those command line options is root
, where you can specify the root filesystem, i.e. root=/dev/sda1
.
[131098760060] |If the kernel uses an initrd, the bootloader is responsible for telling the kernel where it is, or putting the initrd in a standard memory location (I think) - that's at least the way it works on my Guruplug.
[131098760070] |It's entirely possible to not specify one and then have your kernel panic immediately after starting complaining that it can't find a root filesystem.
[131098760080] |There might be other ways of passing this option to the kernel.
[131098770010] |Kernel - Starting the application
[131098770020] |I placed the U-Boot loader and the kernel into the raw flash image.
[131098770030] |This does not contain any root file system.
[131098770040] |(I copied U-Boot and the kernel image to the flash image using dd.)
[131098770050] |Now I have to change my kernel so that it starts my application at a particular address in the flash image.
[131098770060] |How can I change the kernel to start my application on its own?
[131098780010] |I think you're going to need to have a filesystem on that flash, or else modify the kernel.
[131098780020] |You'll need to mount that as your root fs, and then pass the path to your program as a kernel parameter — init=/bin/yourbinary
.
[131098790010] |zsh: trouble batch-renaming files with zmv
[131098790020] |I'm trying to mass rename the files from one extension to another (background: use haml instead of erb in my rails app).
[131098790030] |I get following output when issuing rename command:
[131098790040] |Can anyone point me to the right direction of fixing this?
[131098800010] |I think what you really want is something like this:
[131098800020] |You need to use the parentheses to create match groups, and you create a match group for the path to the file, and then a match group for the file name.
[131098800030] |Also, you need to make sure the second argument is also in single-quotes for zmv.
[131098800040] |Also, it's a really good idea to test zmv commands with '-n' before you run them (-n will tell you what will be renamed, but not actually rename anything.)
[131098810010] |You need to tell zsh
what $1
refers to.
[131098810020] |There are two possibilities:
[131098810030] |Use parentheses around parts of the source pattern that you want to use.
[131098810040] |For example, in zmv '(*)/(*).erb' '$1/$2.haml', $1 means what is matched by the first * and $2 means what is matched by the second *.
[131098810050] |[EDIT (thanks to clee for pointing out that (**/)
works)] Parentheses are a little awkward to use with multiple directory levels.
[131098810060] |If you write (**)
, the double asterisk loses its special meaning (only a single directory level is matched).
[131098810070] |And you mostly can't use /
inside parentheses, so (**/*)
is not a valid pattern.
[131098810080] |However, the special case (**/)
works, so you can write zmv '(**/)(*).erb' '$1$2.haml'
.
[131098810090] |Use the -w option, in which case each $N corresponds to the text matched by the Nth wildcard in the source pattern.
[131098810100] |For example, zmv -w '**/*.erb' '$1/$2.haml'
does what you want.
[131098810110] |Note that you must always use single quotes around the replacement text (or use \$
), otherwise the $
s are expanded by the shell before they get to the zmv
built-in command.
[131098820010] |Does IO prioritise by the very nature of renicing a task?
[131098820020] |The CFQ IO scheduler supports priorities though I am not sure that Deadline does (I believe not).
[131098820030] |The premise is that when I renice a task it gets a larger share of CPU under the Completely Fair Scheduler.
[131098820040] |Since this task is likely to run more often it would call for IO more often as well when needed, correct?
[131098820050] |I am wondering whether, even though the IO scheduler does not support priorities, the task would get more or less IO when reniced.
[131098820060] |Or is the disk/memory management completely separate?
[131098830010] |Disk and memory scheduling are entirely different.
[131098830020] |In the absence of an IO priority scheduler, IO will be handled on a first come first served basis.
[131098830030] |If the system is IO bound, then all processes run on a more or less round-robin basis until all are waiting for I/O. The nice priority of a process will have little impact on its scheduling frequency.
[131098830040] |Recent versions of Linux have added an ionice
facility.
[131098830050] |The idle priority is intended to prevent IO degradation which may occur when the heads are moved to a different area of the disk delaying writes for other processes.
[131098830060] |Renicing an I/O bound process is unlikely to significantly slow its I/O rate unless the load average exceeds the number of CPUs.
[131098830070] |If unused CPU cycles are available, the process will likely be scheduled frequently enough to keep its I/O rate close to what it would be at a regular priority.
[131098830080] |Recent Linux kernels will modify the IO priority of reniced processes which have not had an IO priority set.
[131098830090] |The 40 CPU priority levels are mapped to 8 IO priority levels, so a significant nice change may be required to change the IO priority.
[131098830100] |Having a significant number of CPU-bound processes running at or above the I/O-bound process's priority may slow its I/O rate.
[131098830110] |The process will still get time slices resulting in I/O occurring.
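For completeness, here is a small sketch with ionice from util-linux (assumes the tool is installed and the kernel honours IO priorities, as CFQ does):

```shell
ionice -p $$              # show this shell's IO class/priority, e.g. "none: prio 0"
ionice -c 2 -n 7 -p $$    # drop it to best-effort, lowest level (7)
ionice -p $$              # e.g. "best-effort: prio 7"
```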
[131098840010] |Why is my variable being localized in one 'while read' loop, but not in another seemingly similar loop?
[131098840020] |Why do I get different values for $x from the snippets below?
[131098850010] |You're running into a variable scope issue.
[131098850020] |The variables defined in the while loop that is on the right side of the pipe have their own local scope context, and changes to the variable will not be seen outside of the loop.
[131098850030] |The while loop is essentially a subshell which gets a COPY of the shell environment, and any changes to that environment are lost when the subshell exits.
[131098850040] |See this StackOverflow question.
[131098850050] |UPDATED: I neglected to point out the important fact that the while loop has its own subshell because it is the endpoint of a pipe; I've updated that in the answer.
[131098860010] |In order to get a value out of a subshell you have to echo the value to standard out and capture it, or else store it to an external file.
[131098860020] |There is no combination of export and/or variable properties that will get a variable back out of a subshell.
[131098860030] |This is one of the reasons I don't use shell for any non-trivial control structures.
[131098870010] |The right explanation has already been given by jsbillings and geekosaur, but let me expand on that a bit.
[131098870020] |In most shells, including bash, each side of a pipeline runs in a subshell, so any change in the shell's internal state (such as setting variables) remains confined to that segment of a pipeline.
[131098870030] |The only information you can get from a subshell is what it outputs (to standard output and other file descriptors) and its exit code (which is a number between 0 and 255).
[131098870040] |For example, the following snippet prints 0:
[131098870050] |In ksh (the variants derived from the AT&T code, not pdksh variants) and zsh, the last item in a pipeline is executed in the parent shell.
[131098870060] |(POSIX allows both behaviors.)
[131098870070] |So the snippet above prints 2.
[131098870080] |A useful idiom is to include the continuation of the while loop (or whatever you have on the right-hand side of the pipeline, but a while loop is actually common here) in the pipeline:
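A minimal demonstration of both behaviours (the counter variable is invented for illustration):

```shell
# The right-hand side of a pipe runs in a subshell: the increment is lost.
x=0
printf '1\n2\n' | while read -r line; do x=$((x+1)); done
echo "$x"                      # prints 0 in bash

# Keep the continuation inside the pipeline instead:
printf '1\n2\n' | { x=0; while read -r line; do x=$((x+1)); done; echo "$x"; }
                               # prints 2

# Or feed the loop without a pipe, e.g. via a here-document:
x=0
while read -r line; do x=$((x+1)); done <<EOF
1
2
EOF
echo "$x"                      # prints 2
```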
[131098880010] |How to check if bash can print colors
[131098880020] |Hi,
[131098880030] |I want to know if there's any way to check whether my program can produce colored terminal output or not.
[131098880040] |When running commands like less and looking at the output from a program that prints in color, the output is displayed wrong, like
[131098880050] |[ESC[0;32m0.052ESC[0m ESC[1;32m2,816.00 kbESC[0m]
[131098880060] |Thanks
[131098890010] |That would be the fault of less not being set to interpret ANSI escapes; look for R in the $LESS environment variable.
[131098890020] |As for determining if the system knows your terminal can deal with colors, tput colors
will output either the number of colors it supports or -1
if it doesn't support colors.
[131098890030] |(Note that some terminals may use xterm
instead of xterm-color
as their terminal description, but still support colors.)
[131098900010] |This should be enough:
[131098900020] |tput colors explained:
[131098900030] |If you look at the manpage, you'll notice this:
[131098900040] |And...
[131098900050] |The termcap colors is in the terminfo database, so you can ask for it.
[131098900060] |If you have a zero exit status, then the termcap is compiled in.
[131098900070] |But if you have something like:
[131098900080] |This shows that unknowntermcap doesn't exist.
[131098900090] |So, this:
[131098900100] |Shows that your command was right.
[131098900110] |Other useful ways:
[131098900120] |In C, you can just use isatty and see if it's a TTY
[131098900130] |See if it's a dumb terminal by looking at the $TERM variable
[131098900140] |Cheers
[131098910010] |If you want to add color to output but only when colors are supported, you can simply use tput
. http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x405.html
[131098920010] |Running commands like less and looking at the output from a program that outputs using colors, the output is displayed wrong, like
[131098920020] |[ESC[0;32m0.052ESC[0m ESC[1;32m2,816.00 kbESC[0m]
[131098920030] |Try using less --RAW-CONTROL-CHARS
.
[131098920040] |In this example I am using logtool, which prints output using colors.
[131098920050] |Without --RAW-CONTROL-CHARS :
[131098920060] |With --RAW-CONTROL-CHARS (imagine this is in pretty colors.
[131098920070] |Also, I am not sure why that ^G
is being displayed.):
[131098930010] |The idea is for my application to know not to color the output when it can't be displayed; say, when logging output through a cron job to a file, there's no need to log colored output, but when running manually, I like to view the output colored.
[131098930020] |For this use case, what programs typically do (e.g. GNU ls or GNU grep with --color=auto
) is to use colors if their output is going to a terminal, and no colors otherwise.
[131098930030] |Terminals that don't support ANSI color-changing sequences are rare enough that it's acceptable to make their users override the default choice.
[131098930040] |In any case, make sure your application has an option to force colors on or off.
[131098930050] |In a shell script, use [ -t 1 ]
to test if standard output is a terminal.
[131098930060] |From a program using the C API, call isatty(1)
.
[131098940010] |The idea is for my application to know not to color the output when it can't be displayed; say, when logging output through a cron job to a file, there's no need to log colored output, but when running manually, I like to view the output colored.
[131098940020] |What language are you writing your application in?
[131098940030] |The normal approach is to check if the output device is a tty, and if it is, check if that type of terminal supports colors.
[131098940040] |In bash
, that would look like
[131098940050] |In C, you have to do a lot more typing, but can achieve the same result using isatty and the functions listed in man 3 terminfo
.
[131098950010] |Do we have an undo in Linux?
[131098950020] |I renamed a few files in one batch script.
[131098950030] |Is there a way to undo the changes without having to rename them back?
[131098950040] |Does Linux provide some native way of undo
ing?
[131098960010] |Unfortunately, no.
[131098970010] |No there is no magical undo in any Unix.
[131098970020] |Unix assumes that you know what you are doing.
[131098970030] |For Undo support use a VCS (your text editor probably has it built in too).
[131098980010] |There is no undo in the command line.
[131098980020] |You can however, run commands as rm -i
and mv -i
.
[131098980030] |This will prompt you with an "are you sure?" question before executing the command.
[131098980040] |It's also possible to add an alias for it to a startup script (e.g. ~/.bashrc
or /etc/bash.bashrc
):
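Such startup-file lines might look like this (a sketch using new command names rather than overriding the defaults; the names are arbitrary):

```shell
# Interactive-confirmation variants: -i makes each command prompt before acting.
alias rmi='rm -i'
alias mvi='mv -i'
alias cpi='cp -i'
```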
[131098980050] |Edit: following the suggestions below, I've removed my advice to alias the default commands.
[131098980060] |Instead, it now introduces new commands.
[131098990010] |If you really want an undo feature, use source control.
[131098990020] |Subversion actually works very well on a single user machine.
[131098990030] |I use it to control all my personal files on my home system.
[131098990040] |It seems like overkill, until a disaster, a rogue script, or a command-line typo hits.
[131099000010] |Linux (like other unices) doesn't natively provide an undo feature.
[131099000020] |The philosophy is that if it's gone, it's gone.
[131099000030] |If it was important, it should have been backed up.
[131099000040] |There is a fuse filesystem that automatically keeps copies of old versions: copyfs, available in all good distributions.
[131099000050] |Of course, that can use a lot of resources.
[131099000060] |The best way to protect against such accidents is to use a version control system (cvs, bazaar, darcs, git, mercurial, subversion, ...).
[131099000070] |It takes a little time to learn, but it pays off awesomely in the medium and long term.
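A sketch of that workflow with git (any of the VCSes above would do; the directory, file names, and messages are illustrative):

```shell
#!/bin/sh
# Snapshot a directory with git, then undo a bad rename by restoring
# the last committed state. Run inside the directory you want tracked.
cd "$(mktemp -d)"                      # stand-in for your real directory
git init -q
git config user.email you@example.com  # needed once on a fresh machine
git config user.name  'Your Name'
echo 'important data' > notes.txt
git add -A
git commit -qm 'snapshot before batch rename'
mv notes.txt notes.bak                 # the accident
git checkout -- .                      # undo: bring back committed files
ls notes.txt
```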
[131099010010] |One thing that I like to add to my .bashrc is a copy and remove function.
[131099010020] |Something like:
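A sketch of such a function (the name cprm comes from the post; the ~/.deleted trash location and the body are my reconstruction):

```shell
# "Copy and remove": move targets into a trash directory under $HOME
# instead of deleting them outright.
cprm() {
    mkdir -p "$HOME/.deleted" && mv -- "$@" "$HOME/.deleted/"
}
```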
[131099010030] |But you do have to get into the habit of typing cprm instead of rm.
[131099010040] |Obviously you will need to keep on top of the deleted area if you have limited disk space.
[131099020010] |Best directory for shared scripts
[131099020020] |What is the conventional directory to keep shared scripts that might be used by more than one user?
[131099020030] |I've read through this overview of the standard file system hierarchy, but it doesn't seem to recommend a location for storing shared scripts.
[131099020040] |Creating a /opt/scripts directory seems like a reasonable option, but I'd like to know if there is a standard UNIX convention for this.
[131099030010] |I believe it's /usr/local/bin/: it's for installing custom executables that aren't maintained by the package manager.
[131099040010] |Depending on the purpose... but /usr/bin is probably a good place for most user programs.
[131099040020] |/opt is intended for third-party software.
[131099050010] |Make a directory bin in /usr/local/share.
[131099060010] |Reading audio stream data from internet radio and pushing it to a temporary file
[131099060020] |Hi,
[131099060030] |I'm in the process of setting up an audio processor on my remotely hosted CentOS box.
[131099060040] |The audio processor itself is command line based, and after speaking with the author he explained to me that it works by reading in a live .WAV stream, and it outputs a live .WAV too.
[131099060050] |Now basically, the scenario I have is this:
[131099060060] |I have a shoutcast server on this box using port 8000.
[131099060070] |This shoutcast server is the point at which the DJs connect.
[131099060080] |I have a secondary shoutcast server using port 8002 where the listeners will connect.
[131099060090] |In between these, I would like to use this audio processing tool.
[131099060100] |It would need to connect to the first shoutcast server on port 8000, process the audio, and then send it to the server on port 8002.
[131099060110] |The program cannot do this on its own, unfortunately, so I am told by the software author.
[131099060120] |He also stated that this scenario is workable, providing I use the right method.
[131099060130] |He suggested something like the following:
[131099060140] |Command line tool that reads the incoming stream, and pipes it to:
[131099060150] |Command line tool that extracts the MP3 data to WAV format, for example lame with option --decode.
[131099060160] |Stereo Tool.
[131099060170] |Program that encodes WAV to MP3 data, for example lame.
[131099060180] |Program that streams this, which can handle a pipe as input.
[131099060190] |Step 1+2 could be replaced by: 'arecord', linked using 'jack' to a program that receives and plays an incoming stream
[131099060200] |Similarly, step 5 could be replaced by: 'aplayer', linked using 'jack' to a program that streams audio data.
[131099060210] |I do understand what he has said, and I could probably do this if I were using a local install with a GUI and a sound card.
[131099060220] |But, not being majorly familiar with the Linux command line, and not having a sound card, I am at a loss as to how something like this could be implemented.
[131099060230] |I am totally lost, to be honest, and would appreciate some insight from you Linux gurus on how to configure something like this.
[131099060240] |It's mainly the input and output I'm struggling with.
[131099060250] |Thanks in advance for any help.
[131099060260] |Dave
[131099070010] |I haven't done this before, haven't tested it, and haven't thoroughly read the appropriate documentation.
[131099070020] |And I am not an expert in audio/video codecs and stuff.
[131099070030] |So this is more of a "this could work" guide and hopefully others can elaborate.
[131099070040] |I did a quick search on Google, trying to find tools that cover the requirements (command-line tools only).
[131099070050] |Getting the audio stream from the first server: icecream
[131099070060] |Decoding from mp3 to wav: lame
[131099070070] |Your Stereo Tool: stereo_tool (hypothetically)
[131099070080] |Encoding from wav to mp3: lame
[131099070090] |Forwarding audio to the second server: ezstream
[131099070100] |This assumes that your shoutcast servers are up and running on the same box.
[131099070110] |We will make a shell script stream2stream.sh that reads from the first server, processes the audio, and forwards it to the second.
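A sketch of such a script; the flags, the localhost URL, and the stereo_tool invocation are assumptions, and the whole pipeline is untested:

```shell
#!/bin/sh
# stream2stream.sh - sketch only; adjust tool flags to your setup.
# Bail out gracefully if any tool is missing.
for tool in icecream lame stereo_tool ezstream; do
    command -v "$tool" >/dev/null 2>&1 || {
        echo "missing tool: $tool (this script is only a sketch)"
        exit 0
    }
done
icecream -o - http://localhost:8000/ |  # pull MP3 from the DJ server
    lame --decode --mp3input - - |      # decode MP3 -> WAV on stdout
    stereo_tool - - |                   # process audio (hypothetical flags)
    lame - - |                          # re-encode WAV -> MP3
    ezstream -c ezstream.xml            # push to the listener server (8002)
```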
[131099070120] |Note that ezstream supports re-encoding by letting you define your own encoding/decoding programs.
[131099070130] |So my script above may be unnecessary and ezstream may be sufficient by itself.
[131099070140] |But I'm not familiar with this tool and so in this implementation we have the simplest configuration.
[131099070150] |You will have to adjust the parameters of lame and ezstream to your liking.
[131099070160] |You can execute the script with nohup or in screen.
[131099080010] |Configuring mouse for right+left button simulating middle click (for copy/paste)
[131099080020] |I was using the mouse copy-paste extensively, until recently, when some OpenSuSe upgrade reconfigured this on all my machines.
[131099080030] |Now the scroll button is the one that pastes (which I hate, since it's hard to click without scrolling, and I also sometimes click it accidentally).
[131099080040] |Where is this configured?
[131099080050] |Ideally I would love something that I can add to session start (for both Gnome and KDE).
[131099090010] |It is configured in /etc/X11/xorg.conf.
[131099090020] |You'll see a section that looks like
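For example (the identifier and device path vary per system; Emulate3Buttons is the option that maps a simultaneous left+right press to the middle button):

```
Section "InputDevice"
    Identifier "Mouse0"
    Driver     "mouse"
    Option     "Device"          "/dev/input/mice"
    Option     "Protocol"        "auto"
    Option     "Emulate3Buttons" "true"
    Option     "Emulate3Timeout" "50"
EndSection
```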
[131099090030] |Here is a random vaguely relevant link from SU.
[131099090040] |http://superuser.com/questions/258649/multi-button-mouse-on-x11-how-can-i-configure-several-buttons-to-act-as-the-midd
[131099100010] |You can set this property with xinput.
[131099100020] |Run xinput list to see the list of connected input devices.
[131099100030] |Note the exact name or the number of the device corresponding to your mouse (not the “Virtual core pointer”, but something like “Logitech USB-PS/2 Mouse M-BA47”).
[131099100040] |The name depends on your mouse model; I think the number is assigned dynamically, so you might need to do a bit of parsing to cope with multiple machines.
[131099100050] |Then (248 is the number of the property Evdev Middle Button Emulation; you can also use the name):
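A sketch of that command; the device name is the example from above and will differ on your machine, and the guard is only there so the snippet degrades gracefully without an X session:

```shell
#!/bin/sh
# Hypothetical device name; substitute yours from `xinput list`.
device='Logitech USB-PS/2 Mouse M-BA47'
if command -v xinput >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
    # 248 is the property number on the answerer's setup;
    # the property name is more portable across machines.
    xinput set-prop "$device" 'Evdev Middle Button Emulation' 1
else
    echo 'no X session available; command shown for illustration'
fi
```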
[131099100060] |While you're at it, you might want to tune other settings (run xinput list-props "$device_name_or_number" to see what settings exist).
[131099100070] |In particular, by default, I think the emulated middle button will be the same as the mouse wheel press, but that should be fixable by reassigning the wheel button (Evdev Wheel Emulation Button).
[131099100080] |See also Configuring Input Devices on the Ubuntu wiki.