[131090690010] |
xdg-open
fails to open the file and gives me the error:
[131090690030] |No application is registered as handling this file
[131090690040] |But xdg-mime query default ...
succeeds for the mime type.
[131090690050] |Why?
[131090690060] |Here is my process:
[131090690070] |I registered the new MIME type application/vnd.xx by running xdg-mime install mytype.xml.
[131090690080] |Then xdg-mime query filetype shows that the new MIME type is recognized.
I created a .desktop file for my application and put it in ~/Desktop.
[131090690110] |After I re-login, I saw the shortcut on the desktop and xdg-mime query default application/vnd.xx
printed out this desktop file. But xdg-open
still fails with the error:
[131090690130] |No application is registered as handling this file
[131090700010] |*.desktop files need to be in specific places to be fully recognized.
[131090700020] |Try moving your my-app.desktop
to ~/.local/share/applications/my-app.desktop
(create that directory first if needed: mkdir -p ~/.local/share/applications
).
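For example (my-app.desktop and its location under ~/Desktop come from the question):
    mkdir -p ~/.local/share/applications
    mv ~/Desktop/my-app.desktop ~/.local/share/applications/
    update-desktop-database ~/.local/share/applications   # refresh the cache, if the tool is installed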
[131090700030] |If you used a full pathname to the *.desktop
file, change it to just the basename; I don't think pathnames work as expected there.
[131090710010] |I start openvpn with the --management
and the --management-query-passwords
options so that the password can be input via a telnet session.
[131090720060] |This works fine when I do it manually, but when I try and do it automatically with a bash script it fails.
[131090720070] |My guess would be that either I am not doing the carriage return after the password line properly, or that some other garbage values are sneaking into the telnet session as inputs.
[131090720080] |Here are the relevant bits of code (xxx for stuff that is classified):
[131090720090] |Obviously this is not working -- the openvpn telnet management interface is still waiting for the password to be entered.
[131090730010] |I would expect all it is looking for is the password for the private key.
[131090730020] |Try using echo -e "xxxxxxxxx\r\n"
or echo "xxxxxxxx"
.
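For example, something along these lines (the management port 7505 and the passphrase are placeholders, and the exact prompt name may differ):
    { sleep 1; printf "password 'Private Key' xxxxxxxx\r\n"; sleep 1; } | telnet localhost 7505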
[131090730030] |You may want to try using expect
to respond to the password request.
[131090730040] |Some password programs look for the password on a tty
type device.
[131090730050] |The program expect handles this.
[131090730060] |You may be better off looking for an rc.d
init script to start your tunnel.
[131090730070] |This is the normal method for starting things at startup.
[131090740010] |OK, I have managed to get the openvpn tunnel password entered automatically and also managed to get the tunnel to run on boot. Hopefully this helps someone else who is trying to do the same thing, because it's taken me over 20 hours to figure out something which now looks pretty basic. Code:
[131090740020] |You may also want to redirect all output to a file so that if it fails you will be able to see why. I called the file ZZcreate_ovpn_tun.sh to make sure it was run last out of all of the scripts in the init.d dir. Ideally I would just have made sure that it only ran at level 6 or so, but this works fine for now.
[131090750010] |/etc/rc.local
?
[131090750050] |Does this make sense?
[131090760010] |I don't have experience with xmodmap
, but you can always make a .xmodmaprc
file and put it in /etc/skel
.
[131090760020] |The file will be copied to every new user's home directory, thus applying the settings.
[131090770010] |/etc/rc.local
won't work for this situation, because xmodmap requires an X server to talk to.
[131090770020] |I know that /etc/X11/Xmodmap is part of the xorg-x11-xinit package on RHEL and Fedora, so make your changes there.
[131090770030] |They will be used when any new X session starts.
[131090780010] |Is there a C API equivalent to running xdg-mime query default mime-type
on Linux?
[131090790010] |I don't believe there's a C API for querying mime-types in the same way that xdg-mime works. xdg-mime is just a shell script that queries your desktop environment (Gnome, KDE, or other), and runs the appropriate command to get the MIME type from that DE's internal configuration.
[131090790020] |You could replicate the behaviour of the shell script, or just call the shell script directly from C. The XDG Utils web page doesn't seem to show anything about a C API.
[131090800010] |glibc
doesn't know anything about MIME types; the API functions live at the level of desktop environment APIs, and the freedesktop.org
people recognize that harmonizing them is an impossible task, so they only specify the shell-level interface.
[131090800020] |You either use that via popen()
or code for a particular desktop environment.
[131090810010] |sched_autogroup_enabled
set to 1. I am not certain that I am seeing the benefits of this patch since:
[131090810030] |SEE ALSO
sections and elsewhere will become hyperlinks in the HTML representation of the man pages?
[131090820030] |I'm using Debian (6) and Ubuntu (10.04, 10.10) currently, so existing packages would be preferred, but I'll also go for any other solutions if they are clearly superior.
[131090830010] |The debian package dwww
gives access to all the documentation installed by packages, including the manual pages.
[131090830020] |After installing the package with your favorite package manager, you will be able to browse the local documentation with your web browser at http://localhost/dwww/
.
[131090830030] |By default, access to this URL is restricted to local connections but you can change this restriction in the configuration file /etc/dwww/apache.conf
(don't forget to reload apache after changing something in this file).
[131090840010] |traceroute
may be a good start: it shows you the number of hops between your own host and a remote.
[131090850020] |I don't know of a way to get this kind of info for a pair of remotes, except by running traceroute
on one of them.
[131090860010] |If your device list includes IP addresses and netmasks, you could create a basic layer 3 graph by creating a vertex for each subnet, a vertex for each device, and an edge between each (subnet, device) in your device list.
[131090860020] |This will result in a pure layer-3 topology which probably isn't a bad start.
[131090860030] |However, if your network is somewhat complex, this won't work too well.
[131090860040] |For example, if you have duplicate or overlapping subnets (perhaps with NAT or MPLS VPNs), the assumption that all devices within a particular IP range are connected may not be true.
[131090870010] |.ssh/config
file:
[131090950030] |Additionally, I was tempted to investigate repeating this for a headless development server that's located off-site that I'm regularly working on.
[131090950040] |alias initFakeDisplay='startx -- /usr/bin/Xvfb :2 -screen 0 1024x768x24 &'
192.168.17.*
to the wired network 192.168.250.*
?
[131090990090] |I don't know much about network set up or terminology, so if I'm not including any pertinent details, please ask.
[131091000010] |socat
for some similar use-case, but it appears that the use case is so arcane that documentation is virtually non-existent on the topic.
[131091000060] |Of course it's well possible that I simply used the wrong search terms so far.
[131091000070] |Reasoning: it's hard to come by devices that have a COM port nowadays, but most have an Ethernet port.
[131091000080] |Also, the board is in a rather inaccessible location, so to connect to it we've been using mobile devices.
[131091000090] |And then it's even harder to find machines with COM ports.
[131091000100] |NB: I'm aware of RS-232 to USB devices, but would prefer a solution as pointed out as it seems more universal.
[131091010010] |It's not clear exactly what you want.
[131091010020] |If you want to use your existing Ethernet port, that won't be an option for many reasons; the most fundamental being that Ethernet requires precise termination and voltage levels, the hardware on the interface (the PHY) is made to deal with that.
[131091010030] |Ethernet uses strictly +/- 0.85V and 50ohm termination impedance; RS-232 uses at a minimum +/- 3V, and could be as high as +/-25V, typically +/-12V. I imagine if you did try to connect your Ethernet port to an RS-232 line, it would fry your network interface.
[131091010040] |Socat is a whole other level, and definitely is not useful here: it's a TCP/IP communication tool: it doesn't know anything about the electrical characteristics of the underlying hardware - it could talk over an RS232 line, but it'll be talking TCP, and you'd need to talk TCP on the other side for it to work.
[131091010050] |Now, if what you're doing is designing a board, you could put an RJ45 jack with traces to a serial I/O port, which is exactly what the makers of your PCIX board have done.
[131091010060] |I've also seen Cisco routers like this.
[131091010070] |The tool you really need is an RS232->USB converter.
[131091020010] |Many devices use nonstandard connectors for serial ports.
[131091020020] |RJ-45 is probably the most common connector used for RS-232 serial after DB-9, but unlike with DB-9, there aren't even de facto standards for the pinout.
[131091020030] |I'm aware of 4 different RJ-45 RS-232 pinouts, and there are probably others I haven't seen yet.
[131091020040] |None of this means that people are somehow converting Ethernet to serial.
[131091020050] |They merely happen to use the same connector.
[131091020060] |There are many products that do provide that conversion, and in fact most of them do use the RJ-45 connector for their serial side.
[131091020070] |For an example of a single-port converter, there's the Digi One SP.
[131091020080] |More common are boxes that provide multiple serial ports, like the Digi PortServer and the Avocent (née Cyclades) Console Servers.
[131091020090] |These are just two examples out of many.
[131091020100] |Digi and Avocent are easily the two biggest players, but there are lots of smaller companies doing things like this.
[131091020110] |Some of these boxes present themselves to the OS as /dev/ttyWHATEVER
by installing a driver.
[131091020120] |These have the advantage that any program that knows how to talk to a serial port can talk to the remote device plugged into the converter.
[131091020130] |For the most part, the driver makes the converter appear no different from a local serial port.
[131091020140] |For example, if a program opens one of the converter's /dev/
nodes and calls cfsetospeed()
on it to set the serial port's bit rate, the driver forwards the command to the remote converter box, which changes the serial bit rate on that port.
[131091020150] |The main problem you run into with that type of converter is that it isn't always possible to find a working driver for your particular kernel.
[131091020160] |This problem is becoming more common as the popularity of RS-232 drops, since it means the companies providing these boxes have dwindling incentives to keep enhancing their driver to track kernel differences.
[131091020170] |The other major type of serial to Ethernet converter is purely a network appliance.
[131091020180] |For example, with the Cyclades boxes, if it gets the IP 10.1.2.3 from the DHCP server, you can connect to 10.1.2.3 on TCP port 7001 to connect to the first serial port.
[131091020190] |You'd use TCP port 7002 for the second serial port, and so forth.
[131091020200] |To set serial port parameters with this sort of converter, you typically have to use a web management UI hosted by the converter box.
[131091020210] |While this does mean you don't get features like automatic serial port parameter forwarding to the converter, you do get compatibility with any program that can open a TCP connection without needing a driver.
[131091030010] |sudo
.
[131091040020] |That way if the root account is hosed, you can just do sudo bash
or such to have root access to the system again.
[131091040030] |Although it is better to just use sudo
for individual commands...
[131091040040] |Some distros, such as Ubuntu, are actually configured this way out of the box, as a security measure.
[131091050010] |I've actually seen a system set up the way you describe.
[131091050020] |It had two lines in /etc/passwd for user ID 0 (root):
[131091050030] |Or something like that.
[131091050040] |I think it was a SunOS 4.1.x system, a long time ago, so maybe you can't do this on a modern Linux system.
[131091050050] |I'd say go ahead and give it a try.
[131091050060] |What can it hurt?
[131091060010] |xmodmap
lets you modify keymaps.
[131091080020] |Make a file to hold xmodmap commands (~/.xmodmaprc
is a common choice).
[131091080030] |The Win keys are called "Super" in xmodmap (Super_L and Super_R for the left and right ones).
[131091080040] |By default they're connected to mod4
, so you want to remove them from that modifier and add them to control
.
[131091080050] |Add this to the command file:
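For example, something like this (a sketch of the idea; the exact lines may differ):
    ! detach the Win keys from mod4 and attach them to control
    remove mod4 = Super_L Super_R
    add control = Super_L Super_R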
[131091080060] |Tell xmodmap
to load it with:
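For example:
    xmodmap ~/.xmodmaprc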
[131091080070] |It will only last as long as your X session does, so you'll need to rerun it each time, or put it in something like ~/.xinitrc
so it will be run automatically.
[131091090010] |Go into the keyboard settings, click "Options", expand "Alt/Win key behavior", and select "Control is mapped to Win keys".
[131091090020] |(Command line version: setxkbmap -option altwin:ctrl_win
, then edit /etc/X11/xorg.conf
and add XkbOptions "altwin:ctrl_win"
to the keyboard InputDevice
section.)
[131091090030] |(If there is already an XkbOptions
line, then add it to that line, separated by a comma: XkbOptions "grp:alt_shift_toggle,altwin:ctrl_win"
.)
[131091100010] |file Debian.raw
and fdisk -l Debian.raw
.
[131091110040] |The easiest way to access this partition is to associate it with a loop device.
[131091110050] |If you can, make sure your loop
driver supports and is loaded with the max_part
option; you may need to run rmmod loop; modprobe loop max_part=63
.
[131091110060] |Then associate the disk image with a loop device, and voilà:
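For example (a sketch; loop0 and the partition number are assumptions):
    losetup /dev/loop0 Debian.raw
    mount /dev/loop0p1 /mnt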
[131091110070] |If you can't get the loop driver to use partitions, you need to find out the offset of the partition in the disk image.
[131091110080] |Run fdisk -lu Debian.raw
to list the partitions and find out its starting sector S (a sector is 512 bytes).
[131091110090] |Then tell losetup
you want the loop device to start at this offset:
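For example, with S being that starting sector:
    losetup -o $((S * 512)) /dev/loop0 Debian.raw
    mount /dev/loop0 /mnt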
[131091110100] |If you want to copy the partition from the VM image to your system, determine its starting ($S
) and ending ($E
) offsets with fdisk -lu
as above.
[131091110110] |Then copy just the partition:
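For example (a sketch; /dev/sda5 is the target partition mentioned just below):
    dd if=Debian.raw of=/dev/sda5 bs=512 skip="$S" count=$((E - S + 1))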
[131091110120] |(If the source and the destination are not on the same disk, don't bother with dd
, just redirect tail
's output to /dev/sda5
.
[131091110130] |If they are on the same disk, dd
with a large bs
parameter is a lot faster.)
[131091120010] |S
column) show R
(again, R
here is often said to mean “running”, but this really means “runnable” as above).
[131091190030] |In practice, the number may not match because top obtains information for each task one by one and some of the runnable tasks may have fallen asleep or vice versa by the time it finishes.
[131091190040] |(Some implementations of top may just count tasks with the status R
to compute the “running” field; then the number will be exact.)
[131091190050] |Note that there is always a runnable task when top gathers its information, namely top itself.
[131091190060] |If you see a single runnable task, it means no other process is contending for CPU time.
[131091200010] |xinput
commands in order to enable two-finger scrolling on my touchpad.
[131091200040] |It has to be re-run every time the computer sleeps and wakes up again, and it has to be run from within the X11 session.
[131091200050] |How do I trigger a script to be run on my X11 desktop when the computer wakes up?
[131091200060] |P.S.
[131091200070] |I suspect there's a way to do this with DBUS, but I'm fuzzy on the specifics.
[131091200080] |Maybe someone with stronger DBUS-fu could point me in the right direction.
[131091210010] |Try exporting the DISPLAY variable for existing X session.
[131091210020] |Assuming it's :0, add the following to the start of your script (well, at least before you run any X-related commands).
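For example:
    export DISPLAY=:0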
[131091210030] |I think you may also need to grant authorization to the local host, by running the following in one of your existing X terminals.
[131091210040] |(I'm assuming it's you, and only you on this system, if not, this will give other local users access to your X display)
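For example (this opens the display to all local users, as cautioned):
    xhost +local: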
[131091210050] |It's been a while since I've really played with X, so hopefully someone can come along and correct me.
[131091220010] |If you have gnome-control-center
installed, run gnome-mouse-properties
and go to the Touchpad tab and click on Two-finger scrolling:
[131091220020] |It works very well, and across wake-ups too.
[131091220030] |So, unless you are using your scripts for something else too, throw them away :)
[131091230010] |vi
to another file, I have to highlight the text, Control-Shift-C it, quit the first file, open the second, and then paste it via Control-Shift-V.
[131091230030] |It feels like there must be an easier way to do this - that is, keyboard commands only.
[131091230040] |Any suggestions?
[131091240010] |Sure:
[131091240020] |:e filename
to open another file and :rew
to return to the original file like this:
[131091250020] |vi foo
yy (yank a line)
ESC :e bar
p
(then save with :w)
ESC :rew
"+y
will yank the selection to the X cut board, and "*y
will yank to the X clipboard.
[131091260040] |Similarly, "*p
will paste from the X clipboard and "+p
from the X selection.
[131091260050] |You can set the default (anonymous) buffer in vim to be the X clipboard by running set clipboard=unnamed
.
[131091260060] |Then you can just y
and p
from the X clipboard directly.
[131091270010] |~/.bashrc
:
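A minimal version of such a function might be:
    # create a directory (and any missing parents) and change into it
    mkcd () {
        mkdir -p "$1" && cd "$1"
    }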
[131091280020] |Then run mkcd longtitleproject
.
[131091280030] |You might want to omit -p
, use pushd
instead of cd
, give the function a different name, or other variants.
[131091280040] |There are also less specialized ways to not have to retype the word from the previous line:
[131091280050] |Type cd, then press Esc . (or Alt+.) to insert the last argument from the previous command.
cd !$ executes cd on the last argument of the previous command.
Press Up and edit the previous line, changing mkdir into cd.
x=longproject; mkdir $x; cd $x, which I admit is still longer than using a shell-script function :)
[131091310010] |Would never have occurred to me to script up this behaviour because I enter the following on a near daily basis ...
[131091310020] |where bash kindly substitutes !$ with the last word of the last line; i.e. the long directory name that you entered.
[131091310030] |In addition, filename completion is your friend in such situations.
[131091310040] |If your new directory was the only file in the folder a quick double TAB would give you the new directory without re-entering it.
[131091310050] |Although it's cool that bash allows you to script up such common tasks as the other answers suggest I think it is better to learn the command line editing features that bash has to offer so that when you are working on another machine you are not missing the syntactic sugar that your custom scripts provide.
[131091320010] |zsh
with setopt extendedglob
,
[131091340010] |$ find -type f -print0 | xargs -r0 grep foo
[131091340020] |-r
in xargs
avoids executing the command if there was no input.
[131091340030] |It's a GNU extension.
[131091350010] |There's also ack
, which is designed specifically for this kind of task and searches subfolders automatically.
[131091360010] |what's wrong with grep -r
(== grep --recursive
)?
[131091360020] |Am I missing something here?
[131091360030] |(+1 for ack
too -- I regularly use both)
[131091360040] |edit: I found an excellent article detailing the possibilities and pitfalls if you don't have GNU grep
here.
[131091360050] |But, seriously, if you don't have GNU grep
available, getting ack
is even more highly recommended.
[131091370010] |As an alternative to the find | xargs
responses, you might consider using ctags since you say you are searching not for text, but specifically for function names.
[131091370020] |To do this you would run ctags
against your source to create a TAGS
file, and then run your grep
against the TAGS
file which will spit out lines in the following format:
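For instance, with Exuberant Ctags (a sketch; my_function is a placeholder):
    ctags -R .              # writes the tag file in the current directory
    grep -w my_function tags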
[131091370030] |Where tagname
will contain the function name, tagfile
is the file it is in, and tagaddress
will be a vi command to get to that line.
[131091370040] |(Could be a just a line number.)
[131091370050] |(Is there an easy way to do something similar with the various indices that eclipse builds, or to just query the eclipse database?)
[131091380010] |find . | xargs grep
will fail on filenames with spaces:
[131091380020] |Note that even -print0 has this problem.
[131091380030] |It's better in my opinion to use -exec grep
with find which will handle all filenames internally and avoid this problem:
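For example:
    find . -type f -exec grep foo {} +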
[131091390010] |If your disks are fast you may want to parallelize the grep:
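For example, with GNU Parallel (foo being the search text from the other answers):
    find . -type f | parallel grep foo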
[131091390020] |Watch the intro video to learn more about GNU Parallel: http://www.youtube.com/watch?v=OpaiGYxkSuQ
[131091400010] |tmux attach
, my screen looks like:
[131091400030] |I was wondering if there is a command to get rid of the viewport
[131091410010] |Probably the width/height (columns/rows) of the "original" terminal from which you launched the tmux session is lower than that of the terminal you're attaching from.
[131091410020] |Personally I don't use tmux, but that happens to me with screen
when I launch from a 80x25 terminal and then I attach from another terminal with 80x50 columns/rows.
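If it's the same problem in tmux, detaching the other, smaller client when you attach should force the size to match (not something this answer covers, but worth a try):
    tmux attach -d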
[131091420010] |0
if there is only a single digit, e.g. 1
in the "day" part?
[131091420050] |I need this date format: YYYYMM DD.
[131091440010] |Another solution: awk '{$2 = sprintf("%02d", $2); print}'
[131091450010] |Here is a (non-sed) way to do it in bash with extended regexes.
[131091450020] |This method allows scope for more complex processing of individual lines (i.e. more than just regex substitutions).
[131091450030] |output:
[131091460010] |mail
.
[131091460030] |However, I cannot find the command on my system (Ubuntu 10.04 server).
[131091460040] |What do I need to install?
[131091470010] |Just install mailutils, which contains mail
:
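For example, on Ubuntu:
    sudo apt-get install mailutils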
[131091470020] |Read more about mail and GNU mailutils here
[131091480010] |Another program you can use is mutt
.
[131091480020] |I prefer using mutt
to mail
- it just has a nicer interface in my opinion.
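On Ubuntu, installing it is presumably:
    sudo apt-get install mutt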
[131091480030] |That should work, but I use Fedora, not Ubuntu, so I can't confirm this.
[131091490010] |You may already have mail
installed.
[131091490020] |If so, you can read your mail by entering mail
at the command line.
[131091490030] |Welcome to the world of choice.
[131091490040] |You can use pretty well any mail reader you choose. emacs
users can read mail from within their editor.
[131091490050] |Install a pop3
or imap
server and you can read your mail from your Windows PC, Mac, or other devices.
[131091490060] |If you setup a .forward
or .procmailrc
file then you may be able to forward your mail to another e-mail address and read it from there.
[131091500010] |On Debian and derived distributions, you can use the apt-file
command to search for a package containing a file.
[131091500020] |Install apt-file
(apt-get install apt-file
) and download its database (apt-file update
, Ubuntu does it automatically if you're online).
[131091500030] |Then search for bin/mail
:
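For example:
    apt-file search bin/mail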
[131091500040] |With the command-not-found
package installed, if you type a command that doesn't exist but can be installed from the Ubuntu repositories, you get an informative message:
[131091500050] |If you're not after mail
specifically, but after any program to read your local mail from the command line, there are much better alternatives.
[131091500060] |All mail user agents provide the mail-reader
virtual package, so browse the list of packages that provide mail-reader
and install one or more that looks good to you (and doesn't use a GUI, if it's for a server).
[131091500070] |mutt
's motto is “All mail clients suck.
[131091500080] |This one just sucks less.”, and I tend to agree, but in the end it's a very personal choice.
[131091510010] |M-x server-start
inside the Emacs session, then use emacsclient -n file1 file2 ...
to add files to the existing Emacs.
[131091560020] |There are additional options you might want to use, e.g. -c
to open the files in a new window (frame).
[131091570010] |Put (server-start)
in your .emacs
file.
[131091570020] |Add this to ~/.bashrc
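Something along these lines (a sketch; the exact definition is an assumption):
    # myedit: open files in the running Emacs without blocking the shell
    myedit () {
        emacsclient -n "$@"
    }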
[131091570030] |then use myedit
as your editor.
[131091570040] |You will have to use the -c
option to bring up a window.
[131091570050] |So you may do this, or call emacsclient directly with -c.
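Hypothetical usage of both forms (notes.txt is a placeholder):
    myedit -c notes.txt
    emacsclient -c -n notes.txt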
[131091580010] |xmessage Message -display :0 &
How does it work? There is no -display option in xmessage's man page.
[131091590010] |It's included by (obscure) reference.
[131091590020] |SEE ALSO
[131091590030] |X(7), echo(1), cat(1)
[131091590040] |And buried down a ways in X(7)
:
[131091590050] |OPTIONS
[131091590060] |Most X programs attempt to use the same names for command line options and arguments.
[131091590070] |All applications written with the X Toolkit Intrinsics automatically accept the following options:
[131091590080] |-display display: This option specifies the name of the X server to use.
These are among the X Toolkit Intrinsics (Xt) standard options.
[131091590110] |More modern toolkits have similar common options, which you can see with the --help-all
option.
[131091600010] |/boot
partition active.
[131091610050] |The Windows tool will probably refuse to mark any non-Microsoft partition active, so you'll have to use another tool.
[131091610060] |I recommend booting your system with the Ubuntu install disk and telling it to use rescue mode.
[131091610070] |I haven't used the Ubuntu rescue mode recently; it may have a menu option for fixing this sort of thing automatically.
[131091610080] |If not, you will have to get to a command prompt, then say something like this:
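For example, with fdisk (a sketch; /dev/sda1 is just the example used below):
    fdisk /dev/sda
    # at the fdisk prompt:
    #   a    toggle the bootable (active) flag
    #   1    select partition 1
    #   w    write the table and exit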
[131091610090] |That sets /dev/sda1
to be active.
[131091610100] |That's the most likely one to be /boot
, but isn't necessarily it.
[131091610110] |You can try rebooting now.
[131091610120] |If that didn't work, try repairing your GRUB boot loader.
[131091610130] |If that also fails, go back into rescue mode, get into fdisk
and look at the partition table again.
[131091610140] |If any look to be marked as something other than either NTFS, Linux, or Linux swap, and the odd one out is 5 GB, you may have found the "unallocated" partition.
[131091610150] |Say it's /dev/sda3
.
[131091610160] |Then in fdisk
:
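For example, at the fdisk prompt:
    t     # change a partition's type
    3     # select partition 3 (matching the /dev/sda3 example)
    83    # the type code for Linux
    w     # write the table and exit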
[131091610170] |That sets /dev/sda3
to partition type 83, which says it contains an ext2
filesystem, or one of its successors.
[131091610180] |Again, try booting.
[131091610190] |If that's still not doing it, there are other steps you can take, but we've run out of easy ones.
[131091610200] |It sounds like this was just a hobby install, so it's probably not worth going to heroic measures to fix it.
[131091610210] |If it comes to reinstalling, consider using Wubi this time around instead of installing Ubuntu in a separate partition.
[131091610220] |Wubi lets you create a virtual disk image inside your Windows partition, which is easier to manage and has less risk of a fight with Windows.
[131091620010] |Shift
and Ctrl
don't have standard assignments, so I can't tell you what the corresponding key is. xmodmap -pm
would tell you, but try pressing and releasing the Alt key or the Windows key.
[131091630030] |(I'm basing this on state 0x11
, which I think means Shift + mod2, but I'm not completely sure that's what it means.
[131091630040] |But the symptoms do look like a missed key release event.)
[131091630050] |Update: The output from xmodmap -pm
shows that mod2 is Num Lock, so try toggling Num Lock off.
[131091640010] |/
search command to verify my regular expressions (just to see what it matches).
[131091690030] |After that I usually use the :%s
replace command, where I use that regexp from the search as the string to be replaced, e.g. I first look for such a string:
[131091690040] |It matches exactly what I want, so I do my replace:
[131091690050] |But I have to write the entire regexp again here.
[131091690060] |Usually that regexp is much longer, which is why I'm looking for a solution:
[131091690070] |Is there any existing shortcut or vim script for pasting that search pattern directly into replace command?
[131091690080] |P.S.
[131091690090] |I use vim in terminal (no gvim).
[131091700010] |In general, an empty regular expression means to use the previously entered regular expression, so :%s//\1/g
should do what you want.
[131091710010] |EXTRA_FIRMWARE_DIR
kernel option, but I do not understand if it is used during compile time only or if it is effective after the new kernel is used.
[131091740040] |My WiFi adapter chip is Atheros, and according to this page, I have to put the firmware to the right place.
[131091740050] |On Ubuntu, I found the /lib/firmware
directory as it is indicated in that page, but I cannot find that directory on Gentoo.
[131091750010] |Take a look at this: http://www.kernel.org/doc/menuconfig/drivers-base-Kconfig.html
[131091750020] |In particular:
[131091750030] |code
harder to read.
[131091770060] |Also handy: hold the control
key and hit -
or +
to decrease / increase font size.
[131091780010] |cd root tar -cf - * | (cd /mnt ; tar -xpf -)
[131091780040] |I got this error message: "cowardly refusing to create an empty archive"
[131091780050] |When I do ls
to the same root directory- it is not empty at all- all my needed files are there.
[131091780060] |Why does this happen?
[131091790010] |Why don't you simply use cp -pr source destination
?
[131091790020] |Anyway:
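Presumably something along the lines of the pipeline from the question (using . rather than * also picks up dot-files):
    cd root && tar -cf - . | (cd /mnt && tar -xpf -)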
[131091790030] |works just fine.
[131091800010] |I find the best thing for copying whole directory structures is rsync
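For example (a sketch; the paths are placeholders):
    rsync -a /source/directory/ /destination/directory/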
[131091800020] |This also has the advantage that you can do it to or from a remote directory through ssh.
[131091810010] |If you want to copy the root filesystem and worry about special files and devices, the best way is:
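One common way to do that (a sketch, not necessarily what this answer originally showed):
    cp -ax / /mnt    # -a preserves owners, permissions, links and device nodes; -x stays on this filesystem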
[131091810020] |which
can't find it:
[131091860040] |which
output shows that you use the old which
written in csh.
[131091870020] |The PATH shows up quoted by parentheses, and the directories in PATH have entries like /opt/SUNWspro/bin
and /usr/ccs/bin
which only make sense in Solaris.
[131091870030] |That's consistent: Solaris used the csh which
.
[131091870040] |Here's my guess: you've got one PATH for bash, and another for csh.
[131091870050] |This might be a system problem.
[131091870060] |As I recall, Solaris keeps /etc/profile and /etc/cshrc files for system-wide PATH setting.
[131091870070] |Those two initialization files might set different PATH variables for different shells.
[131091870080] |Do "echo $PATH" under bash, and see if it agrees with what the which
command prints out as a PATH string.
[131091880010] |For bash use type -a assemble.sh
[131091890010] |You can use locate assemble.sh
to find the location of the file.
[131091900010] |Or split the path, and use it in find - the first match should be the solution:
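A sketch of that approach:
    ( IFS=:; for d in $PATH; do find "$d" -name assemble.sh; done )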
[131091900020] |'type' is of course easier.
[131091910010] |Mint Search Enhancer
and Stylish
- deactivating these makes no difference.
[131091920010] |It's probably a custom theme.
[131091920020] |Click Tools > Add-ons > Themes and select a different theme.
[131091930010] |sdc2
to occupy more space, and create a new logical partition sdc6
(and more if desired) inside the extended partition.
[131091950010] |sudo apt-get install kate
.
[131091950040] |Now I want to install this kate sql plugin and google is not helping me.
[131091950050] |I downloaded a bunch of files from here, but what should I do with these files?
[131091950060] |Where should I put them?
[131091950070] |Would you please tell me how I can install this?
[131091950080] |Thanks
[131091960010] |aclocal
to bring in all the relevant definitions?
[131091980010] |I found the root cause.
[131091980020] |In configure.ac, I should have added DBUS C/LD flags before I call AC_CONFIG_FILES([Makefile]) and AC_OUTPUT.
[131091980030] |Then the AM_CFLAGS and AM_LDFLAGS in the Makefile can get valid values.
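Roughly, the ordering should look like this (the PKG_CHECK_MODULES call is an assumption about how the D-Bus flags are obtained):
    # configure.ac (sketch)
    PKG_CHECK_MODULES([DBUS], [dbus-1])   # provides DBUS_CFLAGS and DBUS_LIBS

    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT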
[131091990010] |If you have a PAM module that fails immediately (PAM_AUTH_ERR
) as either the only configured required
module or as requisite
before anything else (or in a number of other possible configurations with similar effect), it will instantly return failure to sudo
, which will then try again, twice, getting three failures in quick succession.
[131092010030] |(You can configure passwd_tries
in /etc/sudoers
to a value other than 3 in order to get more or less failures, if for some reason you prefer.)
[131092010040] |This doesn't prompt for your password once first, but there's definitely some PAM configurations which could do that, locking you out after the first failure and then returning failures quickly for the next tries.
[131092010050] |So, I'm going to go ahead and guess that you've either messed up your PAM configuration, or else something pointed to by that configuration is failing (either correctly or not) in a way that doesn't introduce a delay.
[131092010060] |(The "normal" delay is usually actually introduced by the pam_unix.so
module, unless you give it the nodelay
argument.)
[131092010070] |One easy way to recreate this is to put
[131092010080] |right above any existing auth
lines in /etc/pam.d/sudo
.
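One line that behaves that way (an assumption; any module that fails immediately will do) is:
    auth requisite pam_deny.so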
[131092010090] |Again, that's insta-failure, not prompt-once-and-then-fail, but this should put you on the track for your specific configuration.
[131092010100] |(As I understand it, your setup works fine if you give the right password, so I'd look into the on-failure behaviors of your configured PAM modules.)
[131092020010] |Try sudo -K
to remove the timestamp.
[131092020020] |Also have a look at the timestamps directory (/var/run/sudo on debian systems), maybe something went wrong there.
[131092030010] |diff
[131092030050] |gives me this:
[131092030060] |I have ssh keys set up, so it's not prompting me for a password.
[131092030070] |What's a workaround for this?
[131092040010] |Piping into diff is equivalent to running
[131092040020] |diff path/file.name
[131092040030] |and then, once it's running, typing the entire contents of the file.
[131092040040] |As you can see, that's not what you expected.
[131092050010] |Try to use -
to represent the standard input.
[131092050020] |ssh user@remote-host "cat path/file.name" | diff path/file.name -
[131092060010] |Here's one workaround: diff
seems to accept <(expr) as arguments:
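For example, using process substitution:
    diff path/file.name <(ssh user@remote-host "cat path/file.name")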
[131092080010] |wget
like this:
[131092080030] |I get a bunch of these messages:
[131092080040] |I suppose that means that pages keep getting re-downloaded, even though I have them locally.
[131092080050] |NOTE: I want this so that I don't have to re-download existing files each time I run the mirror command.
[131092090010] |That means that the web server does not provide last modification info.
[131092090020] |Many servers hide that info for static content to manipulate the browser's cache.
[131092090030] |You have instructed wget to ask for that info with --timestamping
flag (which is redundant, it is implicitly enabled with --mirror
).
[131092090040] |If you don't want wget to re-download the same files on one run, try this (untested)
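Something along these lines (untested, as said; the URL is a placeholder):
    wget -r -l inf --no-clobber --page-requisites http://example.com/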
[131092090050] |It isn't a good way to update an already existing mirror though (it won't re-download the same files even if they're changed), but AFAIK, there is no other workaround for wget.
[131092090060] |edit: removed the -N that I accidentally left in the command line
[131092100010] |Did you try adding the -c
parameter?
[131092100020] |Excerpt from wget manual:
[131092100030] |-c --continue
[131092100040] |Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents.
[131092100050] |If you really want the download to start from scratch, remove the file.
[131092100060] |Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message.
[131092100070] |The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because ''continuing'' is not meaningful, no download occurs.
[131092100080] |On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file.
[131092100090] |This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file.
[131092100100] |To my knowledge it should skip files that are already downloaded and of the same size.
[131092110010] |3dd
but upwards.
[131092130010] |user.action
file to rewrite e.g.: http://foo.org
to https://foo.org
?
[131092130040] |Note that I want to rewrite, not redirect.
[131092130050] |So if I search google for foo.org
then on the search page there would be https://foo.org
.
[131092130060] |Would the rewrite work on e.g.: https://encrypted.google.com/
?
[131092130070] |Or is redirecting better because there could be e.g.:
?
[131092140010] |The reason why you need to redirect that URL rather than rewrite is because you are visiting an unencrypted web page with the http:// (plaintext) URL, and the proxy needs to tell the browser to talk to the https:// URL.
[131092140020] |If the connection was simply redirected at the SSL port, your browser wouldn't know what to do with an SSL response if it were somehow directed to the secure port using the HTTP protocol.
[131092140030] |(Sadly, I'm not sure if anyone uses http-starttls, which should be able to handle that, but that's a separate question)
[131092140040] |By using a redirect, the proxy uses HTTP return codes to tell the browser to open a new connection, using HTTPS instead of HTTP.
[131092150010] |/etc/network/interfaces
file:
[131092150060] |I can reboot my system and sometimes eth1 is accessible from SSH, and other times eth0 is accessible.
[131092150070] |Then sometimes eth1 will just stop being pingable altogether.
[131092150080] |This is a fairly fresh install of Debian, and the only thing I have running is VMWare Server 2.0, bridged to both of my network connections.
[131092160010] |You've defined a gateway on both interfaces.
[131092160020] |So there is a default route through both interfaces.
[131092160030] |I'm not sure what exactly happens in this case, but I doubt this is what you intended.
[131092160040] |I suspect that only a smaller network should be accessible through eth0
.
[131092160050] |You can do this by changing the corresponding stanza like this:
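For example (the addresses are placeholders, since the original file isn't quoted here):
    auto eth0
    iface eth0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        # no "gateway" line here, so the default route stays on eth1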
[131092170010] |-h
or --help
or -?
.
[131092190020] |Or sometimes man command
.
[131092200010] |Try command -h for help; it works 97%¹ of the time.
[131092200030] |Other possible help flags: --help, -? (and on some systems /? or /h).
[131092200040] |The general shape is command --flag1 --flag2 arg1 arg2 file1 file2
(mnemonic: ClOverleAF; try grep 'c.*o.*a.*f' /usr/share/dict/words)
[131092200070] |ClOverleAF (command, options, args, files)
[131092200080] |Examples: grep -ri text dir1 dir2
awk '{ print $2 }' file1 file2
find dir1 dir2 -name '*.bar'
Commands that take a source and a destination follow command source1 source2 destination, with the source and then the destination in that order on the line.
[131092200150] |Examples: ln -s source destination
dd if=source of=destination
A lone - in a pipeline usually stands for standard input, as in command1 | command2 -
Examples: ls | vim -
dd if=/dev/sda | file -
wget -q -O - http://unix.stackexchange.com | grep ''
¹ (100-1d6)%
[131092210010] |The most common syscalls, read(2) and write(2), take 3 parameters: descriptor, buffer and length.
[131092210020] |They return the number of bytes actually read or written. close(2), obviously, takes one parameter: the descriptor to close.
[131092210030] |Most syscalls return -1 in case of error and set errno
.
[131092210040] |Everything else I usually read in the corresponding man page.
[131092210050] |Just don't forget the command: man 2 syscall_name
[131092210060] |P.S.: do you have intro(2)?
[131092220010] |This is a common problem for most developers.
[131092220020] |If you write code often you will eventually find some patterns that you can use as mnemonics, for example file descriptors are usually the first parameter.
[131092220030] |But there will always be annoying exceptions hard to memorize.
[131092220040] |You are approaching the problem the wrong way.
[131092220050] |There is a good reason why so many sophisticated development tools exist.
[131092220060] |Instead of making your life harder, start using a specialized source code editor or an integrated development environment.
[131092220070] |Some of the standard features (Auto-completion lists, realtime syntax checking, documentation tooltips) will eliminate your problem, taking away a big overhead for you.
[131092220080] |After all, that's what computers are for, doing the boring repetitive tasks, so you can focus on the interesting stuff.
[131092230010] |sleep
?
[131092250020] |Or do you want to have something that waits for input before continuing?
[131092250030] |You can do that with a read
call.
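For example, in a shell script:
    read -p "Press Enter to continue..." dummy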
[131092260010] |He might also be looking for CTRL-Z, which pauses the current process.
[131092270010] |ls
and whoami
, but I get nothing back from the custom script.
[131092270100] |If I run the custom script as me (in an interactive shell), of course it works.
[131092270110] |Finally, my question: what's the right way to configure this?
[131092270120] |Have the webserver run as me?
[131092270130] |Or change permissions so that _www can run my custom scripts?
[131092270140] |Thanks in advance for any help.
[131092270150] |I'm not an advanced Unix user, so sorry if this is a dumb question!
[131092280010] |The first-best thing would be to put the script in a standard location (such as /usr/local/bin
) where the web server would have sufficient permissions to execute it.
[131092280020] |If that's not an option, you can change the group of the script using chgrp groupname path
, then make it executable for the group by chmod g+x path
.
[131092280030] |If the _www
user isn't already in that group, add it to the group by usermod -aG groupname _www
.
[131092290010] |To answer your question, it's better to give the _www group permission to execute your scripts.
[131092290020] |Use an ACL to extend the permissions on your *.sh scripts to allow a user in the _www group execute privilege:
[131092290030] |Also check each directory component of /Path/To/Custom and verify that apache has permission to access (i.e. 'see') the scripts in /Path/To/Custom:
[131092290040] |Each directory component above should grant apache a minimum of execute permission, apart from the final component (Custom), where apache needs both execute and read permission. E.g. if all the directory components above have other permissions of r-x, then apache has all the access rights it needs to find your scripts in the Custom directory.
[131092300010] |Here is the output of chkconfig | grep 5:on
on my laptop running Fedora 14.
[131092300040] |I don't use NM for connecting to the Internet.
[131092300050] |So I think that should be stopped right away.
[131092300060] |Also I have ext4 filesystem so I assume lvm2-monitor can be safely turned off.
[131092300070] |My primary usage is surfing net and coding in Python (newbie though).
[131092300080] |Which services should I disable so that unnecessarily resources don't remain busy?
[131092300090] |Thanks.
[131092310010] |It's possible (and likely, if you didn't specify otherwise in the installer) that you are still using LVM with ext4 on the logical volumes; however, lvm2-monitor is really only useful if you're using LVM snapshots and/or mirrors, so it is safe to turn it off.
[131092310020] |Are you using NFS in any way?
[131092310030] |If not, you can probably safely turn off the netfs, nfslock and rpc* services.
[131092310040] |Do you use any mDNS (or ZeroConf) devices?
[131092310050] |Avahi-daemon both registers your computer as a mdns device and enables your system to search for similar devices.
[131092310060] |If you don't plan on ever using that, you can disable it.
[131092310070] |The other services are fairly normal to have running (like rsyslog), or are simply startup processes that don't leave around running processes (like smolt and udev-post).
[131092320010] |You can do without NetworkManager, but I find it awfully handy for dealing with changing wifi on a laptop (which you say you're using).
[131092320020] |If you don't need it, though, no harm in turning this off.
[131092320030] |This is probably what's making your power button work, and what makes the system suspend when you close the lid.
[131092320040] |You can live without it, but probably don't want to.
[131092320050] |This is the userspace part of the Linux Auditing System, which is a more secure way of logging kernel-level events than syslog.
[131092320060] |Among other things, it records SELinux alerts.
[131092320070] |Strictly speaking, you don't need it.
[131092320080] |This is for autodiscovery of services on a network — printers being a big example.
[131092320090] |It's not required.
[131092320100] |This will probably just start the right in-kernel CPU frequency scaling driver as an on-start operation, and not run anything.
[131092320110] |(And if it can't for whatever reason and runs the daemon, you probably want it.)
[131092320120] |This runs hald
, which is in the process of being obsoleted but which is, as of Fedora 14, still used for a few things.
[131092320130] |Best to leave it on for now.
[131092320140] |This sets up the kernel-level packet filter and doesn't leave any user-space daemon running.
[131092320150] |Leave it on.
[131092320160] |This is for multi-cpu/multi-core systems.
[131092320170] |If you just have one, it will exit harmlessly after a few seconds.
[131092320180] |You can gain a few milliseconds of startup time by chkconfiging it off.
[131092320190] |If you're sure you're not using lvm (note that you can use ext4 on top of lvm!), you can turn off lvm2-monitor, and the same goes for md software RAID and mdmonitor.
[131092320200] |This is the d-bus system message bus.
[131092320210] |If you're using a modern desktop environment, you'll basically need this.
[131092320220] |If you're not, you can get away without it, but will probably have to hack things up.
[131092320230] |(I'm pretty sure gdm
needs it, for example.)
[131092320240] |This doesn't run any daemons, but starts any network filesystems in /etc/fstab
.
[131092320250] |It's harmless either way.
[131092320260] |If you're not using NFS, NIS, or some other RPC-based service, all of these can go off.
[131092320270] |You technically don't need to log anything, but you probably really want to.
[131092320280] |You could consider tuning it to work in a more lightweight way on your laptop.
[131092320290] |This sends anonymized usage statistics back to the Fedora Project.
[131092320300] |It doesn't run anything, but there's a cron file in /etc/cron.d/smolt
which checks the state here.
[131092320310] |If you don't want it, I suggest removing the entire smolt package.
[131092320320] |(But consider leaving it — the data is useful to the people putting the distro together for you, and it's only once a month.)
[131092320330] |Another run-and-done startup script, this one needed to keep rules generated during the boot process around once the system is up.
[131092320340] |Leave it on.
[131092330010] |c:
, I have chosen it to be 10 Gb.
[131092330060] |Now I want to increase the size of this drive.
[131092330070] |How to do this?
[131092330080] |I've configured the system for PHP and MySQL, installed a lot of software, and fixed wireless connection problems, among other things; I don't want to lose these things and start troubleshooting again.
[131092330090] |I heard of backups, but I think it will take too long.
[131092330100] |Is there any other simple and fast way?
[131092340010] |In other words, you're using Wubi, right?
[131092340020] |As far as I know, it is currently not possible to resize a Wubi installation of Ubuntu 10.04 or 10.10.
[131092340030] |What you can do is add another virtual disk and mount it on /home
or /srv
, wherever you need room.
[131092340040] |There are instructions in the Wubi guide.
[131092340050] |In a nutshell: download the wubi-add-virtual-disk
script, and run the following command in a terminal (the number is the size of the new virtual disk):
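For example (the mount point and the size in MB are examples, and the exact invocation may differ):
    sudo sh wubi-add-virtual-disk /home 15000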
[131092340060] |I recommend moving your installation to a real partition.
[131092340070] |It'll be less hassle in the long term.
[131092340080] |In your situation, the route I recommend is:
[131092340090] |/media/new
.
[131092340140] |Open a terminal and run the following commands to overwrite the new partition with your existing data from the Wubi installation, and set up the bootloader for the new partition.
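Roughly along these lines (a sketch; /dev/sda and the /media/new mount point are assumptions):
    sudo cp -ax / /media/new                                  # copy the running Wubi system onto the new partition
    sudo grub-install --root-directory=/media/new /dev/sda    # put GRUB for the new partition on the MBR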
[131092340150] |Open /media/new/etc/fstab
and /var/tmp/fstab.new
in an editor.
[131092340160] |In each file, there is a line with a single /
in the second column.
[131092340170] |Replace the line in /media/new/etc/fstab
with the one from /var/tmp/fstab.new.
I want to list files sorted by modification time using find, stat and sort,
but for some weird reason stat
is not installed on the box and it's unlikely that I can get it installed.
[131092350040] |Any other option?
[131092350050] |PS: gcc
is not installed either
[131092360010] |Assuming GNU find
:
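A sketch of the idea (%T@ is GNU find's modification time in seconds since the epoch):
    find . -type f -printf '%T@ %p\n' | sort -k 1n,1 | sed 's/^[^ ]* //'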
[131092360020] |Change 1n,1
to 1nr,1
if you want the files listed most recent first.
[131092360030] |If you don't have GNU find
it becomes more difficult because ls
's timestamp format varies so much (recently modified files have a different style of timestamp, for example).
[131092370010] |My shortest method uses zsh:
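A sketch (om is the zsh glob qualifier that orders matches by modification time, newest first; . restricts the match to regular files):
    print -rl -- **/*(.om)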
[131092370020] |If you have GNU find, make it print the file modification times and sort by that.
[131092370030] |I assume there are no newlines in file names.
[131092370040] |If you have Perl (again, assuming no newlines in file names):
[131092370050] |If you have Python (again, assuming no newlines in file names):
[131092370060] |If you have SSH access to that server, mount the directory over sshfs on a better-equipped machine:
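For example (host name and paths are placeholders):
    sshfs server:/path/to/dir /mnt/dir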
[131092370070] |With only POSIX tools, it's a lot more complicated, because there's no good way to find the modification time of a file.
[131092370080] |The only standard way to retrieve a file's times is ls
, and the output format is locale-dependent and hard to parse.
[131092370090] |If you can write to the files, and you only care about regular files, and there are no newlines in file names, here's a horrible kludge: create hard links to all the files in a single directory, and sort them by modification time.
[131092380010] |-g
flag to output only the property you're interested in, and -Pv
to print the value without any surrounding fluff.
[131092390020] |The result is easy to parse.
[131092390030] |It may also be helpful to change the file date to match the image date: exiv2 -T DSC_01234.NEF
.
[131092400010] |FOO='bar'
that's a shell variable.
[131092410100] |You can try creating 2 scripts:
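For example (a sketch; test3.sh and test4.sh are the names used further down):
    #!/bin/sh
    # test3.sh
    FOO=bar
    ./test4.sh

    #!/bin/sh
    # test4.sh
    echo "FOO is: $FOO"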
[131092410110] |When you execute the first script it sets an internal shell variable then calls fork().
[131092410120] |The parent shell process will wait() for the child to finish then the execution continues (if there are more commands). In the child process exec() is called to load a new shell.
[131092410130] |This new process does not know about FOO.
[131092410140] |If you modify the first script:
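For example:
    #!/bin/sh
    # test3.sh, now exporting FOO into the environment
    export FOO=bar
    ./test4.sh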
the FOO variable becomes part of the environment and is inherited by the forked process.
[131092410160] |It's important to note that the environment is not global.
[131092410170] |Child processes can't affect their parent's environment variables.
[131092410180] |Modifications in test4.sh are not visible in test3.sh.
[131092410190] |Information simply does not go that way.
[131092410200] |When the child process ends its environment is discarded.
[131092410210] |Let's change test3.sh:
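For example (a sketch, assuming test4.sh now does an assignment such as FOO=bar):
    #!/bin/sh
    # test3.sh, sourcing test4.sh instead of executing it
    . ./test4.sh
    echo "FOO is: $FOO"   # sees what test4.sh set, even without export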
[131092410220] |Source is a built-in shell command.
[131092410230] |It tells the shell to open a file then read and execute its content.
[131092410240] |There is only a single shell process.
[131092410250] |This way the caller can see the modifications to the environment variables and even shell variables.
[131092410260] |As you probably know PATH is a special environment variable which tells the shell where to look for other executables.
[131092410270] |When a new login shell is started it automatically sources .bash_profile.
[131092410280] |The variables declared in there will be visible.
[131092410290] |However, if in .bash_profile you call other scripts with sh, the PATH you set in those scripts will be lost.
[131092420010] |Perhaps, using "set" is improper?
[131092420020] |Yes, there's your problem. set
doesn't do what you might expect.
[131092420030] |From the documentation:
[131092420040] |This builtin is so complicated that it deserves its own section. set
allows you to change the values of shell options and set the positional parameters, or to display the names and values of shell variables.
[131092420050] |Note the conspicuous lack of "actually set shell variables" in that list of things it does.
[131092420060] |Buried in all the docs, you'll find that what it's doing is setting the shell's positional parameters to the arguments you've given.
[131092420070] |You're just giving one argument, all of MONGODB="/usr/local/mongodb/bin"
.
[131092420080] |So $1
gets set to that (and $#
gets set to 1, since there's just the one argument).
[131092420090] |Score one for anti-mnemonic Unix command names, huh?
[131092420100] |So anyway, try just:
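That is, a plain assignment with no set in front:
    MONGODB=/usr/local/mongodb/bin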
[131092420110] |and it'll work.
[131092430010] |This is not a variable assignment.
[131092430020] |(It is one in C shell (csh, tcsh), but not in Bourne-style shells (sh, ash, bash, ksh, zsh, …).) This is a call to the set
built-in, which sets the positional parameters, i.e. $1
, $2
, etc.
[131092430030] |Try running this command in a terminal, then echo $1
.
[131092430040] |To assign a value to a shell variable, just write
[131092430050] |This creates a shell variable (also called a (named) parameter), which you can access with $MONGODB
.
[131092430060] |The variable remains internal to the shell unless you've exported it with export MONGODB
.
[131092430070] |If exported, the variable is also visible to all processes started by that shell, through the environment.
[131092430080] |You can condense the assignment and the export into a single line:
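For example:
    export MONGODB=/usr/local/mongodb/bin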
[131092430090] |For what you're doing, there doesn't seem to be a need for MONGODB
outside the script, and PATH
is already exported (once a variable is exported, if you assign a new value, it is reflected in the environment).
[131092430100] |So you can write:
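Presumably something like (the path is the one from the question):
    MONGODB=/usr/local/mongodb/bin
    PATH=$PATH:$MONGODB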