[131000010010] |
iptables
tutorial.
[131000040030] |It says '2.4' however all but the kernel config will apply equally as well to '2.6'.
[131000040040] |Even though the article is hosted by gentoo (I could have linked to IBM developerworks too) it's distribution independent (except for emerge iptables
which should be read as use your package manager to get the iptables command).
[131000050010] |I'm a huge fan of the Wiki over at Rackspace Cloud Servers.
[131000050020] |Their page on IPTables is not as detailed as some of the other stuff out there, but it gets you off the ground without causing too much confusion.
[131000060010] |Here's an article I wrote about setting up iptables for a desktop.
[131000060020] |IPTables for the average desktop user, and another one if you need to connect to a Windows (SMB) file share network, called IPTables browsing Samba shares.
[131000070010] |nc(1)
to copy files using TCP/IP.
[131000100020] |In the above example, I cloned sdb1 from a source box to sda1 of the destination box.
[131000100030] |My choice of 8675 for a TCP port number was arbitrary; you could use any port you have access to.
[131000100040] |And it doesn't have to be a device; it can be any file.
[131000100050] |In the second example, I copied my RSA public key (~/.ssh/id_rsa.pub) and added it to the authorized keys file for the target host.
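The example commands themselves aren't preserved above; a sketch of the kind of nc pipelines being described (traditional netcat option syntax; host names are placeholders, and 8675 is the arbitrary port):

    # cloning a partition: on the destination box
    nc -l -p 8675 > /dev/sda1
    # on the source box
    nc destination-host 8675 < /dev/sdb1

    # appending a public key to the target's authorized_keys: on the target host
    nc -l -p 8675 >> ~/.ssh/authorized_keys
    # on the local box
    nc target-host 8675 < ~/.ssh/id_rsa.pub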
[131000110010] |Most of the serial console programs (minicom, HyperTerm, VanDyke CRT...) you'll use on the other end of the connection will have Zmodem support, and most Linux boxes have the lrzsz
package installed on them already.
[131000110020] |If not, lrzsz
is small enough that you could bootstrap the process with one of the other recommended methods.
[131000110030] |You could ASCII-upload the sources, either as C text files, or a uuencoded tarball.
[131000110040] |Once you have Zmodem on both ends, just type rz
to start the receive on the Linux box.
[131000110050] |One of the nice things about Zmodem, relative to other alternatives mentioned like nc
or kermit
, is that it sends out a unique string that the serial console program on the other end recognizes as its cue to start a Zmodem send.
[131000110060] |You usually get a file picker dialog at this point, letting you choose the file(s) to upload.
[131000110070] |Other nice things about Zmodem are the ability to transfer multiple files at once, automatic resume if a transfer aborts, etc.
[131000120010] |dos2unix
, perl
, tr
, and sed
are not present.
[131000120030] |How can you convert files from dos to unix format?
[131000130010] |I think you are referring to removing the caret-M at the end of lines.
[131000130020] |You can use search and replace in vi to do this.
[131000130030] |In vi I normally do the substitution sketched below (where "^" represents CTRL).
[131000130040] |Note that what you type and what then shows on the screen differ.
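A sketch of the usual command (press Ctrl+V then Ctrl+M to produce the ^M):

    :%s/^V^M//g

which the screen then displays as:

    :%s/^M//g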
[131000140010] |A server without tr
or sed
would have to be really old, or missing some basic commands.
[131000140020] |Hopefully ed
is there; it existed in Unix first edition.
[131000140030] |where ^V^M
means typing Ctrl+V then Ctrl+M (to enter a literal carriage return).
[131000140040] |If you know that all lines do end in CR LF, you can use 1,$s/.$//
instead (indiscriminately remove the last character on each line).
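For example, a small ed script for that indiscriminate variant (removing the last character of every line):

    printf '%s\n' '1,$s/.$//' w q | ed -s file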
[131000150010] |less file
, and then hit Shift-F
to forward forever; like "tail -f".
[131000150040] |I want less
for use of the --raw-control-chars
flag.
[131000160010] |use the command "F" while inside of less.
[131000160020] |or, to do so automatically, use the +cmd option:
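For example (the file name is a placeholder):

    less +F /var/log/messages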
[131000170010] |/var/log/messages
would be the first place to look. /var/log/dmesg
and the dmesg
command could also be helpful.
[131000180020] |Services started by RC scripts, such as Apache, SSH, and Postfix, also have their own separate log files.
[131000180030] |Check under /var/log
for the right log file based on the utility that is having trouble starting.
[131000190010] |If you add the -i option to the getty command in /etc/inittab, this will stop the screen from clearing.
[131000190020] |So something like:
[131000190030] |c1:2345:respawn:/sbin/agetty -i -8 38400 tty1 linux
[131000200010] |at
utility if you're running Linux.
[131000210020] |You could put this at the end of your script:
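The exact line isn't preserved here; a sketch of such a self-rescheduling call, using at's -f option to queue the running script again:

    at -f "$0" X + 2 weeks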
[131000210030] |where X is the time you want to run your script at.
[131000210040] |To initialize just call at
for the time you want to run it the first time and then the above statement keeps renewing your scheduled call.
[131000220010] |I'm not sure if there is a more elegant way to define "every other week", but this may work for you.
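The crontab entry itself isn't preserved; it was presumably something along these lines (the script path is a placeholder, and as the follow-up below notes, comparing the weekday name is locale-dependent):

    # 06:00 on days 1-7 and 15-21, but only when that day is a Thursday
    0 6 1-7,15-21 * * [ "$(date +\%a)" = "Thu" ] && /path/to/script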
[131000220020] |This will launch the script at 6:00am on the first and third Thursdays of the month.
[131000230010] |Could someone with comment rights please correct the posting to:
[131000230020] |Otherwise it will fail depending on the locale you are using.
[131000240010] |I used bash to do my math because I'm lazy; switch that to whatever you like.
[131000240020] |I take advantage of January 1, 1970 being a Thursday; for other days of the week you'd have to apply an offset.
[131000240030] |Cron needs the percent signs escaped.
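A sketch of the kind of entry being described (the script path is a placeholder; note that every % in a crontab must be escaped):

    # run at 06:00 every Thursday, but only in even-numbered weeks since the epoch
    0 6 * * 4 [ $(( $(date +\%s) / 86400 / 7 \% 2 )) -eq 0 ] && /path/to/script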
[131000240040] |Quick check:
[131000240050] |Note I've chosen random times to show this will work if run anytime on Thursday, and chosen dates which cross year boundaries plus include months with both 4 and 5 Thursdays.
[131000240060] |Output:
[131000250010] |If you can use anacron on the system, things will be much simpler.
[131000250020] |To use anacron you must have it installed and also you must have root access.
[131000250030] |With anacron, one can schedule jobs in a more flexible way, e.g. run X job once a week.
[131000250040] |Also, anacron runs jobs when the computer becomes available, i.e. you don't have to worry about whether the system is up at the scheduled time.
[131000250050] |To run a script every other week, you have to add a line similar to the following to /etc/anacrontab:
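For instance (the fields are the period in days, a delay in minutes, a job identifier, and the command; the path is a placeholder):

    14   5   fortnightly-job   /path/to/script.sh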
[131000250060] |Take a look at the man page for details.
[131000260010] |/dev/null
)
[131000270010] |you can mount a ramfs and store data there (as a file)
[131000280010] |/boot
partition).
[131000280050] |In order to boot I need to pass the parameters below to the kernel:
[131000280060] |Apparently it is using an initial ramdisk to do something (I guess loading the LVM things) before mounting root.
[131000280070] |Is there a way that I can put this code into the kernel itself so that no initrd is needed?
[131000280080] |If not, how can I make the initrd myself?
[131000280090] |It might be useful to add that I had tried compiling the kernel for non-LVM root, without initrd and it worked perfectly.
[131000280100] |Then I tried to put the whole thing under LVM and couldn't get the machine to boot (I guess it cannot deal with the LVM stuff).
[131000280110] |Then I used the genkernel
tool with the --lvm
option and it creates the working kernel and initrd that I am currently using.
[131000280120] |Now I want to skip genkernel
and do everything on my own, preferably without initrd so that the machine will boot somewhat faster (I don't need the flexibility anyway).
[131000290010] |Simple answer: No.
[131000290020] |If you want LVM you need an initrd.
[131000290030] |But as others have said before: LVMs don't slow your system down or do anything bad in another way, they just allow you to create an environment that allows your kernel to load and do its job.
[131000290040] |The initrd allows your kernel to be loaded: If your kernel is on an LVM drive the whole LVM environment has to be established before the binary that contains the kernel can be loaded.
[131000290050] |Check out the Wikipedia Entry on initrd which explains what the initrd does and why you need it.
[131000290060] |Another note: I see your point in wanting to do things yourself but you can get your hands dirty even with genkernel.
[131000290070] |Use genkernel --menuconfig all and you can basically set everything as if you were building your kernel completely without tool support; genkernel just runs make bzImage, make modules and make modules_install for you and takes care of that nasty initrd stuff.
[131000290080] |You can obviously build the initrd yourself as it is outlined here for initramfs or here for initrd.
[131000300010] |edit: just realized you're trying to boot on LVM; I've never set up LVM, never needed it, so the approach here may not work
[131000300020] |Here are the basic things you need to do to create an initrd-less kernel (from memory, so this may not be exact):
[131000300030] |Build in the drivers for your hardware (under Device Drivers), and build in the filesystems needed by /, /etc/*, and /lib/modules/* (under File systems). Use lspci and lshw to help identify your hardware.
[131000300160] |If you don't have these tools already, then emerge lshw pciutils
.
[131000310010] |Yes, you need an initrd.
[131000310020] |Here's why:
[131000310030] |The normal boot process starts with the bootloader, which knows just enough about your system to find the kernel and run it.
[131000310040] |(GRUB2 is smart enough to find a kernel that's located on an LVM2 or RAID partition, but GRUB1 isn't, so it's usually recommended that you create /boot as a separate partition with a simplified layout.)
[131000310050] |Once it's loaded, the kernel needs to be able to find the root filesystem, so it can start the boot process.
[131000310060] |However, LVM can't start without being triggered by some userspace tools, which exist on the root filesystem, which can't be loaded without the LVM tools, which exist on the root filesystem... ;)
[131000310070] |To break this cycle, an initrd or initramfs is a compressed filesystem that's stored with the kernel (either in /boot, or inside the kernel itself), which contains just enough of a Linux system to start services such as LVM or MD or whatever else you want.
[131000310080] |It's a temporary filesystem, and only acts as your root filesystem long enough for the real root to be loaded.
[131000310090] |As far as actually making one, most documentation on the topic is staggeringly obsolete - lvm2create_initrd, for instance, doesn't even work on Gentoo anymore.
[131000310100] |(I set up the same thing a few months ago, and I had to all but rewrite the script before I got a working initrd from it.)
[131000310110] |Creating your own initramfs can be fun, and it's the only way to get an absolutely minimal boot process (and learn the ins and outs about how Linux boots in the process), but it's a lot of work.
[131000310120] |The short answer: use Dracut.
[131000310130] |It's a new framework that's being created for generating an initramfs in a mostly automated way, and it's in portage.
[131000310140] |The documentation is a bit sparse, but there's enough of it out there to figure things out, and it's by far the easiest way to get a solid initramfs, and an LVM root.
[131000320010] |Yes, it is.
[131000320020] |The complications that arise from creating and handling initrds are rendered moot if you install and use grub2.
[131000320030] |The grub2 wiki http://grub.enbug.org/LVMandRAID describes how you can have your /boot on lvm with nothing more than an insmod lvm in grub.cfg, the grub configuration file, hence no need for an initrd.
[131000320040] |grub2 is now at version 1.98 but still in the experimental branch in Gentoo.
[131000320050] |However it can be installed in another slot and is perfectly usable.
[131000320060] |Enjoy!
[131000330010] |While it is not possible to avoid some sort of initrd, it is possible to avoid a separate initrd file.
[131000330020] |(I have never used genkernel so I cannot give instructions for it).
[131000330030] |For example, I have set the kernel option that builds the initramfs into the kernel from a file list; a sketch of such a setup follows below.
[131000330040] |In my case /usr/src/initrd.contents lists the files to include (I have LVM+tuxonice+fbsplash),
[131000330050] |and /usr/src/init is the init script.
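The poster's actual files aren't preserved here; a minimal sketch of such a setup (the option in question is presumably CONFIG_INITRAMFS_SOURCE, and the paths and file list are illustrative only):

    # in the kernel .config
    CONFIG_INITRAMFS_SOURCE="/usr/src/initrd.contents"

    # /usr/src/initrd.contents, in the kernel's gen_init_cpio list format
    dir  /dev 0755 0 0
    nod  /dev/console 0600 0 0 c 5 1
    file /init /usr/src/init 0755 0 0
    file /sbin/lvm /sbin/lvm.static 0755 0 0

    # /usr/src/init would then activate LVM (lvm vgscan; lvm vgchange -ay),
    # mount the real root and hand over to it with switch_root.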
[131000340010] |umount /dev/sdb1
or umount /mnt/usb
[131000410060] |See man umount for more details.
[131000410070] |For shutting down your system, you use the shutdown
command. -h
will "Halt or power off after shutdown".
[131000410080] |The manpage says:
[131000410090] |So you can use it to shutdown your system after a specific amount of time.
[131000410100] |The following command will halt your system after 30 minutes:
[131000410110] |shutdown -h 30
[131000410120] |Now you have one command which should only be executed after the other one was successful.
[131000410130] |This is done with &&, a shorthand conditional provided by your shell (note: || also exists).
[131000410140] |The second command will only be executed if the first one returned without any errors.
[131000410150] |This is indicated by a return code of 0.
[131000410160] |For example:
[131000410170] |umount /dev/sdb1 && shutdown -h 15
will detach your USB and halt your system after 15 minutes.
[131000410180] |If this doesn't answer your question, please be more specific.
[131000420010] |It sounds like you want your machine to shutdown automatically when you remove a USB pendrive.
[131000420020] |I haven't done this myself, but the new Upstart service (which is supported in Fedora 9 onwards) does have the ability to run scripts based on an event.
[131000420030] |Here's an article that discusses how an event can be triggered when a hotplug device, such as a USB printer, is plugged in.
[131000420040] |In theory, an event could also be generated when you unplug a USB device, and that event could call an arbitrary script, like shutdown -h now
or lock the screen
.
[131000420050] |I've seen Windows and Mac systems which automatically lock the screen when a Bluetooth device leaves the proximity of the computer, and this is probably possible in Linux using Upstart.
[131000430010] |ssh -f -L 5901:localhost:5901 server.dog.com -N
[131000450010] |top
command here, of course.
[131000500120] |[As an aside, -e'^\\\'
reassigns the Magick Screen Key from C-a (a bad default if there ever was one) to C-\.]
[131000510010] |Here's an imperfect solution I found.
[131000510020] |Then you can run:
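The exact commands aren't preserved here; the usual two-step recipe from the screen FAQ looks something like this (the session name and the command to run are placeholders, and ^M is a literal carriage return):

    screen -dmS mysession
    screen -S mysession -p 0 -X stuff 'top^M'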
[131000510030] |(Note that although you'll only see ^M
you should actually type C-v
followed immediately by C-m
.)
[131000510040] |It's imperfect because (I think) there is a race condition between the first and second invocations of screen.
[131000510050] |From http://aperiodic.net/screen/faq#how_to_send_a_command_to_a_window_in_a_running_screen_session_from_the_commandline, which has lots of other good information as well.
[131000520010] |Create a ~/.screenrc.top
like so:
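A sketch of such a file (it simply starts top in the session's first window; adjust as needed):

    # ~/.screenrc.top
    screen -t top top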
[131000520020] |Now run screen -c ~/.screenrc.top
.
[131000520030] |No race condition!
[131000530010] |top
or ps
commands, so I assume that they aren't showing all of the running processes.
[131000530040] |Is there another command which will show all running processes or is there any other parameters I can use with top
or ps
for this?
[131000540010] |Have you tried ps aux | grep postgres? It really should show up if postgres is running.
[131000540020] |If it doesn't... how do you know postgres is running?
[131000540030] |(note: it's a common misconception that it should be ps -aux, but that's not correct)
[131000550010] |From the ps
man page:
[131000550020] |-e Select all processes.
[131000550030] |Identical to -A.
[131000550040] |Thus, ps -e
will display all of the processes.
[131000550050] |The common options for "give me everything" are ps -ely
or ps aux
, the latter is the BSD-style.
[131000550060] |Often, people then pipe this output to grep
to search for a process, as in xenoterracide's answer.
[131000550070] |In order to avoid also seeing grep
itself in the output, you will often see something like:
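For example, the common bracket trick, which keeps grep's own command line from matching:

    ps aux | grep '[f]oo'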
[131000550080] |where foo is the process name you are looking for.
[131000550090] |However, if you are looking for a particular process, I recommend using the pgrep
command if it is available.
[131000550100] |I believe it is available on Ubuntu Server.
[131000550110] |Using pgrep
means you avoid the race condition mentioned above.
[131000550120] |It also provides some other features that would require increasingly complicated grep
trickery to replicate.
[131000550130] |The syntax is simple:
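For example (foo again being the process name):

    pgrep foo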
[131000550140] |where foo is the process for which you are looking.
[131000550150] |By default, it will simply output the Process ID (PID) of the process, if it finds one.
[131000550160] |See man pgrep
for other output options.
[131000550170] |I found the following page very helpful:
[131000550180] |http://mywiki.wooledge.org/ProcessManagement
[131000560010] |export
ing an environment variable makes it available to any processes spawned from the current one.
[131000600030] |But the only processes that will be interested in the HISTIGNORE
variable (and some related variables) are other instances of bash, which will read ~/.bashrc and pick up the value anyway.
[131000600040] |So should I use:
[131000600050] |or just:
[131000600060] |in my .bashrc file?
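Presumably the two alternatives are along these lines (the pattern value is only an example):

    export HISTIGNORE='ls:cd:exit'

versus simply

    HISTIGNORE='ls:cd:exit'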
[131000610010] |For shell settings, you don't need export
, for the reason you give.
[131000610020] |And it's better not to use it, in case some other application reacts to the same variable but doesn't interpret the value in the same way.
[131000610030] |I don't know any other application that uses HISTIGNORE
, but the issue arises with other variables.
[131000610040] |For example, PS1
should definitely not be exported since different shells use this variable but with different escape sequences.
[131000620010] |kickOffTests.sh
has the line ssh -t -t ServerA runTests.sh
[131000620150] |Server A: runTests.sh
calls a perl script which invokes minicom -S my.script ttyE1
[131000620160] |Target, after booting: Mounts a directory from Server B, where the tests are, and enters that directory.
[131000620170] |It invokes yet another bash script, which runs the tests, which are compiled C executables.
[131000620180] |Now, when I execute any of these scripts myself, they do what they should.
[131000620190] |However, when Hudson tries to do the same thing, over in the minicom session it complains about a line in the "yet another bash script" that invokes the C executable, ./executable
, with ./executable: cannot execute binary file
[131000620200] |I still have a lot to learn about linux, but I surmise this problem is a result of Hudson not connecting with a console.
[131000620210] |I don't know exactly what Hudson does to control its slave.
[131000620220] |I tried using the line export TERM=console
in the configuration just before running kickOffTests.sh, but the problem remains.
[131000620230] |Can anyone explain to me what is happening and how I can fix it?
[131000620240] |I cannot remove any of the servers from this equation.
[131000620250] |It may be possible to take minicom out of the equation but that would add an unknown amount of time to this project, so I'd much prefer a solution that uses what I already have.
[131000630010] |The message cannot execute binary file
has nothing to do with terminals (I wonder what led you to think that — and I recommend avoiding making such assumptions in a question, as they tend to drown your actual problem in a mess of red herrings).
[131000630020] |In fact, it's bash's way of expressing ENOEXEC
(more commonly reported as "Exec format error").
[131000630030] |First, make sure you didn't accidentally try to run this executable as a script.
[131000630040] |If you wrote . ./executable
, this tells bash to execute ./executable
in the same environment as the calling script (as opposed to a separate process).
[131000630050] |That can't be done if the file is not a script.
[131000630060] |Otherwise, this message means that ./executable
is not in a format that the kernel recognizes.
[131000630070] |I don't have any definite guess as to what is happening though.
[131000630080] |If you can run the script on that same machine by invoking it in a different way, it can't just be a corrupt file or a file for the wrong architecture (it might be that, but there's more to it).
[131000630090] |I wonder if there could be a difference in the way the target boots (perhaps a race condition).
[131000630100] |Here's a list of additional data that may help:
[131000630110] |file …/executable on server B; uname -a on the target if it's unix-like; cksum ./executable or md5sum ./executable or whatever method you have on the target, run just before yet-another-bash-script invokes ./executable .
[131000630140] |Check that the results are the same in the Hudson invocation, in your successful manual invocation and on server B. Put set -x at the top of yet-another-bash-script (just below the #!/bin/bash line).
[131000630160] |This will produce a trace of everything the script does.
[131000630170] |Compare the traces and report any difference or oddity. Perhaps ./executable doesn't get loaded (or is not loaded yet) in the Hudson invocations.
[131000630200] |You might want to use set -x in other scripts to help you there, and inspect the boot logs from the target.
Use M-x shell if you want your usual shell with Emacs's command line editing, or M-x eshell if you want a shell built into Emacs.
[131000670010] |One answer to your question is to use emacs with M-x eshell.
[131000670020] |This gives you a reasonably full shell functionality inside of emacs.
[131000670030] |Taking quick peeks at files can obviously be done by opening them in the editor, but more importantly you can use its search functionality to search back through the buffer for any earlier output (or any earlier prompts).
[131000670040] |Another answer is to use screen, I believe this also has a search functionality of the history, but it has been too long since I used it to remember what the key-combos are.
[131000680010] |To expand on xenoterracide's comment...
[131000680020] |Rather than run make
, I put this in my .bashrc
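The snippet itself isn't preserved here; a minimal sketch of such a wrapper (the log file name is whatever you prefer):

    m() {
        # stdout (the bulk of make's output) goes to make.log;
        # errors and warnings on stderr still appear on the console
        make "$@" > make.log
    }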
[131000680030] |then run m
instead of make
.
[131000680040] |This puts all output to make.log
, but only prints errors on the console.
[131000680050] |That way you don't have tonnes of output on the screen, can easily see errors, and can read make.log
to diagnose any problems if it failed.
[131000690010] |In eshell
in Emacs, there is a command (not yet mentioned in the other answers) that seems to tackle the task you are implicitly wondering about when asking your question -- eshell-show-output; its description (C-h f eshell-show-output):
[131000690020] |It is bound to C-c C-r, C-M-l.
[131000690030] |(eshell-show-output &optional arg)
[131000690040] |Display start of this batch of interpreter output at top of window.
[131000690050] |Sets mark to the value of point when this command is run.
[131000690060] |With a prefix argument, narrows region to last command output.
[131000690070] |The narrowing effect (with a prefix argument, i.e., C-u C-c C-r) could be also interesting to you given your task.
[131000700010] |dmesg
or /var/log/messages
(too much scroll) so...
[131000840040] |I'm thinking there's a way to use /dev
or /proc
to find out, but I don't know what it is. For some clarification: this is Linux.
[131000850010] |How about
[131000860010] |This is highly platform-dependent.
[131000860020] |Also different methods may treat edge cases differently (“fake” disks of various kinds, RAID volumes, …).
[131000860030] |Under Linux 2.6, each disk and disk-like device has an entry under /sys/block
.
[131000860040] |Under Linux since the dawn of time, disks and partitions are listed in /proc/partitions
.
[131000860050] |Alternatively, you can use lshw: lshw -class disk
.
[131000860060] |If you have an fdisk
or disklabel
utility, it might be able to tell you what devices it's able to work on.
[131000860070] |You will find utility names for many unix variants on the Rosetta Stone for Unix, in particular the “list hardware configuration” and “read a disk label” lines.
[131000870010] |@Gilles says this is highly platform-dependent.
[131000870020] |Here's one such example.
[131000870030] |I'm running a CentOS 5.5 system.
[131000870040] |This system has 4 disks and a 3ware RAID controller.
[131000870050] |In my case, lshw -class disk
, cat /proc/scsi/scsi
and parted --list
shows the RAID controller (3ware 9650SE-4LP).
[131000870060] |This doesn't show the actual disks:
[131000870070] |only shows the 3ware RAID controller which provides the /dev/sda volume:
[131000870080] |In order to see the disks which lie underneath, I had to install the tw_cli utility from 3ware, and ask the controller itself.
[131000880010] |yum -install samba-client
but since this is a trial version, I'm not subscribed to RHN and can't get the update.
[131000930120] |How else can I install the client?
[131000930130] |Final question: if I can't do this, am I still able to mount, for instance, another RHEL box? (What is that called? A regular mount or something?)
[131000930140] |Thanks in advance
[131000940010] |You do not need samba-client for that.
[131000940020] |What you need is the smbfs or cifs kernel module. smbfs is deprecated and should not be used (unless you can't use cifs for some reason. e.g. your distribution is too old or perhaps you're trying to connect to a Win95 box or something.)
[131000940030] |Try:
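Presumably something along the lines of the following, to check that the cifs module loads:

    modprobe cifs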
[131000940040] |Then try with mount -t cifs ...
as mentioned by Gilles.
[131000940050] |If that doesn't work, you can access the files using smbclient (e.g.) instead of mounting the filesystem. smbclient is in the samba-client package and gives you an interface similar to a command-line FTP client.
[131000940060] |To "mount [...] other RHEL", there are various options.
[131000940070] |You could use NFS (in which case you would have to set up an NFS server on the machine you want to mount.)
[131000940080] |Another possibility is sshfs, in which case all you need on the server is an SSH server, but the client will need sshfs, which needs fuse.
[131000940090] |I don't know if RHEL 5.5 supports fuse.
[131000940100] |It would also be possible to set up Samba on the other RHEL box and then mount using mount -t cifs ...
as if it were a Windows box.
[131000950010] |0xffffffff
(4'294'967'295
) linear addresses to access a physical location on top of the RAM.
[131000960020] |The kernel divides these addresses into user and kernel space.
[131000960030] |User space (high memory) can be accessed by the user and, if necessary, also by the kernel.
[131000960040] |The address range in hex and dec notation:
[131000960050] |Kernel space (low memory) can only be accessed by the kernel.
[131000960060] |The address range in hex and dec notation:
[131000960070] |Like this:
[131000960080] |Thus, the memory layout you saw in dmesg
corresponds to the mapping of linear addresses in kernel space.
[131000960090] |First, the .text, .data and .init sequences which provide the initialization of the kernel's own page tables (translate linear into physical addresses).
[131000960100] |The range where the kernel code resides.
[131000960110] |The range where the kernel data segments reside.
[131000960120] |The range where the kernel's initial page tables reside.
[131000960130] |(and another 128 kB for some dynamic data structures.)
[131000960140] |This minimal address space is just large enough to install the kernel in the RAM and to initialize its core data structures.
[131000960150] |Their used size is shown inside the parentheses; take for example the kernel code:
[131000960160] |In decimal notation, that's 3'255'914
(3179 kB).
[131000960170] |Second, the usage of kernel space after initialization
[131000960180] |The lowmem range can be used by the kernel to directly access physical addresses.
[131000960190] |This is not the full 1 GB, because the kernel always requires at least 128 MB of linear addresses to implement noncontiguous memory allocation and fix-mapped linear addresses.
[131000960200] |Virtual memory allocation can allocate page frames based on a noncontiguous scheme.
[131000960210] |The main advantage of this scheme is avoiding external fragmentation; it is used for swap areas, kernel modules, or allocating buffers for some I/O devices.
[131000960220] |The permanent kernel mapping allows the kernel to establish long-lasting mappings of high-memory page frames into the kernel address space.
[131000960230] |These are fix-mapped linear addresses which can refer to any physical address in the RAM, not just the last 1 GB like the lowmem addresses.
[131000960240] |Fix-mapped linear addresses are a bit more efficient than their lowmem-colleagues.
[131000960250] |If you want to dive deeper into the rabbit hole: Understanding the Linux Kernel
[131000970010] |echo "$WEBSITE.sql"
.
[131000980020] |So you could write it like:
[131000990010] |Use ${ }
to enclose a variable.
[131000990020] |Without curly brackets:
[131000990030] |would give
[131000990040] |and nothing, because the variable $VARbar
doesn't exist.
[131000990050] |With curly brackets:
[131000990060] |would give
[131000990070] |Enclosing the first $VAR
is not necessary, but a good practice.
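A small sketch of the difference:

    VAR=foo
    echo "$VARbar"     # prints an empty line: the shell looks for a variable named VARbar
    echo "${VAR}bar"   # prints foobar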
[131000990080] |For your example:
[131000990090] |This works for bash
, zsh
, ksh
, maybe others too.
[131001000010] |Just concatenate the variable contents to whatever else you want to concatenate, e.g.
[131001000020] |The double quotes are unrelated to concatenation: here >$WEBSITE.sql
would have worked too.
[131001000030] |They are needed around variable expansions when the value of the variable might contain some shell special characters (whitespace and \[?*
).
[131001000040] |I strongly recommend putting double quotes around all variable expansions and command substitutions, i.e., always write "$WEBSITE"
and "$(mycommand)"
.
[131001000050] |For more details, see $VAR vs ${VAR} and to quote or not to quote.
[131001010010] |-L -k --proxy [username:password]
I get the following error.
[131001010040] |What switches do need to add to get cURL to get the website?
[131001010050] |The url https://www.fleetagent.be is redirected to https://www.fleetagent.be/portal/pls/portal.
[131001010060] |Somehow the website is authenticating itself for the redirected portal. How can I simulate this behavior with cURL?
[131001010070] |Thanks.
[131001010080] |Darrell.
[131001020010] |I assume you have an account for this portal.
[131001020020] |Add it to your curl call:
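A sketch of such a call (the proxy host and all credentials are placeholders; -u supplies the site credentials, -U the proxy credentials):

    curl -L -k --proxy proxy.example.com:8080 -U proxyuser:proxypass \
         -u portaluser:portalpass https://www.fleetagent.be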
[131001020030] |You are only submitting username and password for the proxy, but not for the actual website.
[131001030010] |iwlist wlan0 scan
; it should show a list of access points in your area.
[131001070090] |If it does show a list of access points, then there is something wrong with Network Manager, and more work is required to either fix Network Manager or get an alternative to Network Manager.
[131001070100] |If it doesn't show the list, it would seem that the driver isn't working completely right, and there will be more work to figure out what's wrong.
[131001080010] |rm -r link
or, better still, rm link
.
[131001080050] |Regardless, that command did get the job done (i.e. removed the file named "link").
[131001080060] |Things are a bit different when doing such a thing on a mounted volume, where "dir" is replaced with something like "/media/my_movies".
[131001080070] |In such a case, the entire volume will be wiped, not just the symlink as in the previous example.
[131001080080] |Why is it like that?
[131001080090] |Is this some bug in rm
, or is this expected?
[131001080100] |Why the inconsistency?
[131001080110] |UPDATE: Maybe I was dizzy when I was experiencing this because when I try now, "dir" is not getting deleted while its contents are, and in both cases (mounted and local directory).
[131001080120] |I'm using Linux 2.6.32, and I think I was using 2.6.37 then.
[131001090010] |On my system (Debian; Linux 2.6; rm --version
reports GNU coreutils 8.5), whether or not dir is a mount point, the following removes file, but not dir or link, and gives me the same error you saw:
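The commands aren't preserved here; the test being described is presumably along these lines, reproducing the setup from the question:

    mkdir dir
    touch dir/file
    ln -s dir link
    rm -r link/   # removes dir/file, but not dir or link, and prints an error for link/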
[131001090020] |If I'm following you right, your rm -r link/
command doesn't remove file, unless dir/ is a mount point.
[131001090030] |If that's the case, I think you're seeing a bug in rm
. There's no good reason for a mount point to change its behavior like that.
[131001090040] |It would be interesting to know what version of rm
you're using.
[131001100010] |mv file1.txt newfilename.txt
on every one.
[131001100060] |I can find lots of tutorials online to change file names with brace expansion if you know all the parts to expand, but nothing to just replace file1 with newfilename no matter what the extension.
[131001100070] |Is this possible, or am I barking up the wrong tree?
[131001100080] |Thanks
[131001100090] |EDIT: I'm sorry, not moments after posting this I found a different page in my Google results that answered the question for me: for f in file1.*; do mv "$f" "${f/file1/newfilename}"; done
works perfectly.
[131001110010] |I found this just after I posted the question:
[131001110020] |for f in file1.*; do mv "$f" "${f/file1/newfilename}"; done
[131001110030] |Works like a charm.
[131001120010] |You can do that with bash, but there are other tools more suited for the job.
[131001120020] |On most distros:
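For example, with the util-linux rename (the arguments are the text to replace, the replacement, and the files):

    rename file1 newfilename file1.*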
[131001120030] |On Debian and Ubuntu, replace rename
with rename.ul
.
[131001130010] |grub-install /dev/sda
(replace /dev/sda with the boot device on your computer).
[131001140020] |You'll want to update your /boot/grub/menu.lst in SuSE to include an entry for Ubuntu.
[131001150010] |The generic procedure to restore Grub is
[131001150020] |(replace /mnt/suse and /dev/sda with your mount point and device): run grub-setup -d /mnt/suse /dev/sda, or chroot to /mnt/suse and from there execute:
[131001150060] |I find the Ubuntu help page very informative at this.
[131001160010] |dmesg
output:
[131001170010] |Looks like your USB stick has a hidden partition that acts as a CD-ROM drive.
[131001170020] |You might need to look for a Windows utility from the manufacturer that can remove that, unfortunately.
[131001180010] |So I went and tried mounting the USB stick with pmount /dev/sdb1 /mnt/blah
and it gives a more useful message than the GUI dialogue:
[131001180020] |This led me to find that "/etc/fstab" actually has an entry for /dev/sdb1:
[131001180030] |The reason for this is that my stick was actually attached while installing Debian Squeeze, and so got automatically added in there.
[131001180040] |That's what will happen when you install from the same stick, and now I'm curious how others avoid this problematic situation.
[131001190010] |umount
instead of Nautilus.
[131001200050] |You could also just call sync
to flush the filesystem buffers to the disk.
[131001200060] |Just found a thread which has more info : http://ubuntuforums.org/showthread.php?t=1477247
[131001200070] |So basically either a) Rebuild nautilus from source without that patch (and keep it up to date when you update your system...) or b) use another file manager (at least when unmounting ^^).
[131001210010] |pam
?
[131001230060] |Is there a log?
[131001230070] |Is there any common reason why libpam-umask
would not work?
[131001230080] |Do I have to install something?
[131001240010] |I think PAM reads the default umask from /etc/login.defs
as of Debian 6.0, but I do not currently have access to a system to check on.
[131001250010] |The last time I had a umask problem, I was trying to get all files in a directory to be group-readable no matter who created them.
[131001250020] |I got a bit stuck at first; I could set the setgid bit on the directory, so all files had the same group, but could find no way to set permissions consistently and correctly.
[131001250030] |The use of a cron job to regularly put it right did not seem satisfactory.
[131001250040] |But then someone told me a solution.
[131001250050] |POSIX ACLs: you can set default properties (users, groups, permissions) on a directory that new files inherit.
[131001250060] |You will probably need to install the ACL tools, and new backup tools (the default ones don't always know about ACLs).
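A sketch using the standard ACL tools (the group name and path are placeholders; -d sets a default ACL that newly created files inherit):

    setfacl -d -m g:mygroup:rX /shared/dir
    setfacl -m g:mygroup:rX /shared/dir    # also apply it to the directory itself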
[131001260010] |ls -lat
command or something similar?
[131001260030] |I want to use the result in another script.
[131001270010] |stat
from GNU coreutils can do this:
[131001270020] |Unfortunately, there are a number of versions of stat
, and there's not a lot of consistency in their syntax.
[131001270030] |For example, on FreeBSD, it would be
[131001270040] |If portability is a concern, you're probably better off using Gilles's suggestion of combining ls
and awk
.
[131001270050] |It has to start two processes instead of one, but it has the advantage of using only POSIX-standard functionality:
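For instance (the GNU and FreeBSD stat forms mentioned above are included for comparison; the file name is a placeholder):

    ls -ld myfile | awk '{print $3}'   # POSIX-portable
    stat -c %U myfile                  # GNU coreutils
    stat -f %Su myfile                 # FreeBSD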
[131001280010] |Parsing the output of ls
is rarely a good idea, but obtaining the first few fields is an exception, it actually works.
[131001280020] |Another option is to use a stat
command, but the problem with stat
from the shell is that there are multiple commands with different syntax, so stat
in a shell script is unportable (even across Linux installations).
[131001280030] |Note that testing whether a given user is the owner is a different proposition.
[131001290010] |One can also do this with GNU find:
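For example (GNU find's -printf format %u prints the owner's user name; -maxdepth 0 keeps find from descending into a directory argument):

    find /path/to/file -maxdepth 0 -printf '%u\n'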
[131001290020] |This isn't portable outside of the GNU system, but I'd be surprised to find a Linux distribution where it doesn't work.
[131001300010] |ls -lt
[131001320010] |Is that a text file you're trying to sort, or are you trying to view a directory listing by date?
[131001320020] |For the latter, use the -t
flag to ls
.
[131001320030] |For the former, see in particular the -M
flag to GNU sort, which sorts by three-letter month name abbreviations.
[131001320040] |The following command sorts by the 3rd, 1st, 2nd and 4th columns, treating the second sort key as a month name.
[131001320050] |Do consider always using Japanese/ISO style dates, always in the order YYYY-MM-DD-HH-MM-SS (i.e. most significant first, constant column width).
[131001320060] |This way sorting by date is identical to a lexicographic sort.
[131001330010] |If you are not fussy about date format, your best bet is to change it to something that's easy to sort, e.g.:
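For instance, with GNU ls you can ask for ISO-style timestamps:

    ls -l --time-style=full-iso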
[131001330020] |Add whichever parameter (recursive...) to ls and sort the output of this.
[131001330030] |If the date format matters to you, you could use awk or perl to read the date (both awk and perl have powerful date parsing functions) and sort on that basis, but the standard date format is a pain to parse as it is not always the same number of words, so I'll leave you to find that solution.
[131001340010] |/tmp
folder, and from time to time an administrator will come along and purge the /tmp
folder with roughly sudo rm -rf *
.
[131001340040] |Is there a way to give something like a prompt or alert that they are about to delete a specific folder?
[131001340050] |Something along the lines of:
[131001340060] |I know, the best solution is to move this folder elsewhere (the /tmp
folder is called temp for a reason after all!), but that has other problems.
[131001340070] |Hence my question.
[131001340080] |Asking this question makes me wonder, is it bad practice to actually blindly delete all the contents of the /tmp
folder?
[131001340090] |Isn't a better approach to only delete files that are more than a certain age?
[131001350010] |As the earlier question points out, there is no out of the box solution to this problem.
[131001350020] |But you can always opt to do this through a script.
[131001350030] |For example, the script first checks whatever conditions you set for deletion/delete protection and takes an appropriate action.
[131001350040] |In that case you could rename rm and rmdir to something else.
[131001350050] |Create scripts with the same names in their place.
[131001350060] |Those scripts can do whatever you want.
[131001360010] |chmod +t /tmp/perm_file
[131001360020] |source: http://oldfield.wattle.id.au/luv/permissions.html
[131001370010] |Moving your work folder is the solution.
[131001370020] |You're right that it is a little bit dangerous to wipe out files in /tmp
blindly — normally, it's done either on system boot/shutdown or by using an access-time based deletion program (like tmpwatch
).
[131001370030] |But by definition the space is volatile, and it's not reasonable to expect otherwise.
[131001370040] |If you really want to prevent this, though, SE Linux could do it.
[131001370050] |You would give the directory a particular label, and configure it so that root doesn't normally have the unlink permission for objects with that label.
[131001370060] |This seems like significantly more work than just moving the directory to a better shared location, though — and since it causes an SE Linux audit message rather than the nice "are you sure y/n" prompt you're imagining, it seems like it'll eventually cause frustrating confusion.
[131001380010] |You could change the directory attributes on the directory to be immutable:
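For example (the path is a placeholder; this requires root and a filesystem that supports the immutable attribute):

    chattr +i /tmp/protected_dir    # make immutable; chattr -i undoes it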
[131001380020] |Unfortunately, since the directory is immutable, you can't modify it either:
[131001380030] |You can have subdirectories that are mutable, but rm -rf will delete the files still.
[131001380040] |So this solution will only work if you want read-only content in /tmp.
[131001380050] |If you must have RW content in /tmp that's undeletable, why don't you just put it somewhere more permanent and create a symlink in /tmp, which can be easily restored (perhaps automatically if missing)?
[131001390010] |Move /bin/rm to another location (like /bin/original/rm) and replace /bin/rm with a script that, if $UID
is 0, checks the parameters for specific folders and takes appropriate action, calling /bin/original/rm if needed.
[131001390020] |You probably need to check somehow if an interactive user is calling the script, as /bin/rm could be used by system utilities.
[131001400010] |echo
which is introducing the space, and the shell is (perhaps) using \0?
[131001430010] |From man bash
[131001430020] |EXPANSION Expansion is performed on the command line after it has been split into words.
[131001430030] |There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.
[131001430040] |The order of expansions is: brace expansion, tilde expansion, parameter, variable and arithmetic expansion and command substitution (done in a left-to-right fashion), word splitting, and pathname expansion.
[131001430050] |The reason this gives you 4 entries is because an unquoted $names
parameter is further subject to word splitting based on the internal field separator IFS
which by default contains space, tab, and newline.
[131001430060] |If you were to quote "$names"
to inhibit word splitting, then you'll only get one array element with value f 1 f 2
, again not what you want.
[131001430070] |The above on the other hand is only subject to pathname expansion which happens to be the last expansion performed.
[131001430080] |The results are not subject to word splitting thus you get the desired 2 elements.
[131001430090] |If you want to make array=( $names )
work then you'll need to somehow separate the file names by a non-space character which also is not contained in the file names.
[131001430100] |You'll then need to set IFS to this character.
[131001430110] |A more elegant way to do this would be to use the NUL byte \0
as the filename delimiter, as that is guaranteed never to be part of a filename.
[131001430120] |To accomplish this we will need to use the find
command with its -print0
flag as well as the read
builtin delimited on NUL.
[131001430130] |We will also need to clear IFS so no word splitting on spaces is performed.
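A sketch of that approach in bash (the find invocation is just an example; adjust the path and tests):

    array=()
    while IFS= read -r -d '' file; do
        array+=("$file")
    done < <(find . -maxdepth 1 -type f -print0)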
[131001430140] |IFS
does not augment argument splitting, only word splitting.
[131001430210] |Consider this example:
[131001430220] |Notice how setting IFS
to #
did not alter the fact that the shell still only saw one argument f1#f2
; which by the way is then further subject to the various expansions.
[131001430230] |I would highly recommend you acquaint yourself with the BashFAQ if you haven't already.
[131001430240] |In particular, I would strongly suggest you read the following two supplemental entries:
[131001430250] |echo -n *
, the shell performs pathname expansion (also called filename generation or globbing) on *
, and replaces it by the list of matching file names.
[131001440040] |So after expansion, this command consists of four words: echo
, -n
, f 1
and f 2
.
[131001440050] |The command echo
is run with two arguments, and it prints its arguments with a space in between (and no terminating newline because of the -n
option).
[131001440060] |So the output is f 1 f 2
.
[131001440070] |Exercise: create another file whose name consists of two consecutive spaces, run echo -n *
, and make sure you understand the output.
[131001440080] |When you run names=$(echo -n * )
, the output from the command is stored in the names
variable.
[131001440090] |Here, that line is equivalent to names='f 1 f 2'
.
[131001440100] |Now we get to array=( $names )
.
[131001440110] |That's an array assignment, but it doesn't affect the expansion in this case.
[131001440120] |Since $names
is an unquoted variable expansion, it's subject to word splitting followed by pathname expansion.
[131001440130] |Word splitting means that the value of the variable (which is a string) is split into pieces at each whitespace sequence (for the precise rules, search IFS
in your shell's documentation).
[131001440140] |You can end up with zero, one or more words; here the string is split into 4 words: f
, 1
, f
and 2
.
[131001440150] |So the array contains four elements (each a one-character word).
[131001440160] |Exercise: with that extra file with two consecutive spaces in its name, what is now the exact contents of the array?
[131001440170] |Next, you tried array=( * )
.
[131001440180] |Here, there's a single word in the array, subject to the usual expansions, the last of which is pathname expansion.
[131001440190] |Since there are two matching files, the array contains two words, the names of each file: f 1
and f 2
.
[131001440200] |In terms of shell programming practice, what advice can we draw from this analysis?
[131001440210] |Well, first, there's the usual shell programming principle: always put double quotes around variable expansions, unless you have a good reason not to.
[131001440220] |Then, don't store a list in a string variable.
[131001440230] |If you want to store a list of file names, put it directly in an array:
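For example:

    files=( * )          # or a more specific pattern, e.g. files=( *.txt )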
[131001440240] |Further exercise: create a file whose name is a single asterisk (touch '*'
) and run these commands again.
[131001440250] |Do you understand the output?
[131001440260] |Aside: zsh does not perform word splitting or pathname expansion on variable expansions.
[131001440270] |This makes it quite a bit saner to program in.
[131001450010] |pr * | lpr
then is it both a process group as well a job?
[131001450040] |What exactly is the difference between a process group ID and a job ID?
[131001450050] |Edit: I know it appears similar to What is the difference between a job and a process?, but it is slightly different.
[131001450060] |Also, I didn't understand this concept from this thread.
[131001460010] |A process group is a unix kernel concept.
[131001460020] |It doesn't come up very often.
[131001460030] |You can send a signal to all the processes in a group, by calling the kill
system call or utility with a negative argument.
[131001460040] |When a process is created (with fork
), it remains in the same process group as its parent.
[131001460050] |A process can move into another group by calling setpgid
or setpgrp
.
[131001460060] |This is normally performed by the shell when it starts an external process, before it executes execve
to load the external program.
[131001460070] |The main use for process groups is that when you press Ctrl+C
, Ctrl+Z
or Ctrl+\
to kill or suspend programs in a terminal, the terminal sends a signal to a whole process group, the foreground process group.
[131001460080] |The details are fairly complex and mostly of interest to shell or kernel implementers; the General Terminal Interface chapter of the POSIX standard is a good presentation (you do need some unix programming background).
[131001460090] |Jobs are an internal concept to the shell.
[131001460100] |In the simple cases, each job in a shell corresponds to a process group in the kernel.
[131001470010] |rw-r--r--
to 644
.
[131001490030] |Is there a simple web based converter between the 2?
[131001500010] |http://permissions-calculator.org/
[131001510010] |Octal is used for permissions because it's an easy conversion.
[131001510020] |Each group of rwx
forms one octal digit.
[131001510030] |All you have to remember is the first 3 powers of 2: 4, 2, 1. r
= 4, w
= 2, x
= 1.
[131001510040] |rw-r--r--
= 110 100 100
= 4+2+0 4+0+0 4+0+0
= 644
[131001520010] |I have this little alias that you can put in your .bashrc (or equivalent).
[131001520020] |DISCLAIMER: I am not the author of the script, and I'm not sure who wrote it... but props to him/her for doing this.
[131001530010] |Why do you need octal number in the first place?
[131001530020] |I always use:
[131001530030] |ugo(a) is easy to remember.
[131001530040] |However you can confuse o:=owner? o:=other? but what would be u, if o=owner? u:=user, therefore o=other.
[131001530050] |Some commands accept numerical permissions only.
[131001530060] |Okay, it's not hard to calculate, if you remember the two sequences: ugo + rwx.
[131001530070] |Yes, very artificial.
[131001530080] |When it comes to s and S I have to consult the manual.
[131001530090] |Maybe google next time. :)
[131001540010] |speaker-test
is a handy utility here.
[131001570020] |If pulse is your default audio device then all audio programs including speaker-test
will go through it for audio.
[131001570030] |Try adding default-sample-channels = 6
to /etc/pulse/daemon.conf to tell pulse to use 5.1 audio. speaker-test -c6
will test all 6 channels individually.
[131001570040] |If you want to test your sound card directly instead of going through pulse you may need to call speaker-test -D hw:0,0
.
[131001580010] |umount /dev/sdb1
.
[131001610020] |If you cannot delete it in any way, create a similar partition in a loopback device and dd
it to that stick.
[131001610030] |Then, you must install the programs to create a FAT32 partition again.
[131001620010] |