[131016250010] |Can't reach DNS through wireless router
[131016250020] |I've got an Ubuntu 10.04 laptop, and recently it's had some odd networking problems.
[131016250030] |It's on the home wireless router, supplied by the phone company, and has no problem talking to anything on the wireless LAN, whether by IP number or /etc/hosts name.
[131016250040] |It's set to use the wireless connection with DHCP, and there really isn't much I can screw up after that for network entries.
[131016250050] |Right now, I can reach outside the LAN by IP number, but that doesn't do well for web surfing.
[131016250060] |When I do an nslookup, I'm getting non-authoritative answers, so I suspect I'm hitting a cache somewhere (probably the router).
[131016250070] |Any attempt to get outside the LAN with a domain name fails quietly, like a "can't find" using Firefox.
[131016250080] |The only things that might have changed since it did work right are 10.04 updates (and there have been quite a few of them) and a couple of lines added to /etc/hosts, to address fixed IPs on the LAN (in the 192.168.0.* range).
[131016250090] |The lines are in the same format as others, and it's nothing I haven't done before with no ill effects.
[131016250100] |Any ideas on what to try next?
[131016260010] |On Ubuntu 10.04, you can configure the networking so it gets only your machine's IP via DHCP, but lets you set everything else statically.
[131016260020] |In System > Network Connections, go into your wireless card's setup and select "Automatic (DHCP) addresses only" from the Method drop-down.
[131016260030] |Below, you will then be able to give static DNS server addresses.
[131016260040] |This feature is common on lots of OSes, though there is no agreement on what to call the feature or where to put it.
[131016260050] |The Arch Linux info in the comment below is one possibility.
[131016260060] |OS X and Windows can do it, too.
[131016260070] |If your system truly has no such feature, you can temporarily overwrite /etc/resolv.conf to try out a different set of DNS servers.
[131016260080] |Such changes will persist until the next DHCP lease renewal.
[131016260090] |Regardless, the way to debug a problem like this is to try using a public DNS service instead of your phone company's.
[131016260100] |I like to use Google's public DNS servers, since their addresses are easy to remember:
[131016260110] |8.8.8.8 8.8.4.4
[131016260120] |Another popular public DNS service is OpenDNS, whose servers are:
[131016260130] |208.67.222.222 208.67.220.220
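To try a set temporarily, you could point /etc/resolv.conf at them by hand; a sketch (back up the original first, and remember DHCP will overwrite this at the next lease renewal):
    sudo cp /etc/resolv.conf /etc/resolv.conf.bak
    printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' | sudo tee /etc/resolv.conf
    nslookup example.com    # should now resolve via the new servers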
[131016260140] |If that works, you can just keep using these servers, since they likely have advantages over the generic DNS services provided by your ISP.
[131016260150] |Or, you can then start from a position of being able to blame the phone company's DNS in some way and attack the problem from that direction.
[131016260160] |If this change doesn't help, you've exonerated the phone company's DNS servers, so you know the problem is inside the house.
[131016270010] |Are automake and autoconf the standard way to compile code?
[131016270020] |I sometimes compile apps from source and I've either been using:
[131016270030] |But recently, I came across ./autogen.sh, which generates the configure and make scripts for me and executes them.
[131016270040] |What other methods to streamline C/C++/C#(mono) compilation exist?
[131016270050] |Make seems a bit old.
[131016270060] |Are there new tools out there?
[131016270070] |Given the choice, which one should I use?
[131016280010] |Scons is one possible replacement, though I have no personal experience with it.
[131016280020] |It's also implemented in Python, which could be a problem, depending on the build environment.
[131016290010] |There are two "big players" in this area: CMake and GNU Autotools.
[131016290020] |GNU Autotools is the GNU way to do things, and is fairly focused on *nix.
[131016290030] |It's a sort of meta-build system, providing a set of tools that generate specific config and make files for what you're trying to do.
[131016290040] |This helps you make more changes in your code without having to directly manipulate your build system, and it helps others build your code in ways you hadn't designed for, under *nix.
[131016290050] |CMake is the cross-platform way to do things.
[131016290060] |The CMake team builds software in many, many different ways: with GCC, Visual Studio, Xcode, on Windows, OS X, Solaris, BSD, GNU/Linux, whatever.
[131016290070] |If you are at all concerned with portability of your code base, this is the way to go.
[131016290080] |As has been mentioned, some people appear to be fond of Scons.
[131016290090] |If you're familiar with Python this might provide more consistency in your working environment.
[131016290100] |Ruby also has a sort of meta-build system called Rake, which is pretty cool in its own right, and is very handy for those already familiar with Ruby.
[131016300010] |Autoconf and Automake set out to solve an evolutionary problem of Unix.
[131016300020] |As Unix evolved into different directions, developers that wanted portable code tended to write code like this:
[131016300030] |As Unix was forked into different implementations (BSD, SystemV, many vendor forks, and later Linux and other Unix-like systems) it became important for developers that wanted to write portable code to write code that depended not on a particular brand of operating system, but on features exposed by the operating system.
[131016300040] |This is important because a Unix version would introduce a new feature, for example the "send" system call, and later other operating systems would adopt it.
[131016300050] |Instead of having a spaghetti of code that checked for brands and versions, developers started probing by features, so code became:
[131016300060] |Back in the 90's, most README files for compiling source code pointed developers at editing a config.h file to comment in or out the features available on their system, or shipped standard config.h files for each operating system configuration that had been tested.
[131016300070] |This process was both cumbersome and error prone and this is how Autoconf came to be.
[131016300080] |You should think of Autoconf as a language made up of shell commands with special macros that was able to replace the human editing process of the config.h with a tool that probed the operating system for the functionality.
[131016300090] |You would typically write your probing code in the file configure.ac and then run the autoconf command which would compile this file to the executable configure command that you have seen used.
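As a rough illustration (my sketch, not the original author's), a minimal configure.ac probing for the send call mentioned above could look like:
    AC_INIT([myapp], [1.0])
    AC_PROG_CC
    AC_CONFIG_HEADERS([config.h])
    AC_CHECK_FUNCS([send])    dnl defines HAVE_SEND in config.h if the function exists
    AC_OUTPUT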
[131016300100] |So when you run ./configure && make, you are probing for the features available on your system and then building the executable with the configuration that was detected.
[131016300110] |When open source projects started using source code control systems, it made sense to check in the configure.ac file, but not the result of the compilation (configure).
[131016300120] |The autogen.sh is merely a small script that invokes the autoconf compiler with the right command arguments for you.
[131016300130] |--
[131016300140] |Automake also grew out of existing practices in the community.
[131016300150] |The GNU project standardized a regular set of targets for Makefiles:
[131016300160] |make all would build the project
[131016300170] |make clean would remove all compiled files from the project
[131016300180] |make install would install the software
[131016300190] |things like make dist and make distcheck would prepare the source for distribution and verify that the result was a complete source code package
[131016300200] |and so on...
[131016300210] |Building compliant makefiles became burdensome because there was a lot of boilerplate that was repeated over and over.
[131016300220] |So Automake was a new compiler that integrated with autoconf and processed "source" Makefiles (named Makefile.am) into Makefile templates (Makefile.in) that the generated configure script then turns into the final Makefiles.
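For illustration (again my own sketch), a Makefile.am often needs little more than:
    bin_PROGRAMS = myapp
    myapp_SOURCES = main.c util.c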
[131016300230] |The automake/autoconf toolchain actually uses a number of other helper tools and they are augmented by other components for other specific tasks.
[131016300240] |As the complexity of running these commands in order grew, the need for a ready-to-run script was born, and this is where autogen.sh came from.
[131016300250] |As far as I know, Gnome was the project that introduced the use of this helper script, autogen.sh.
[131016310010] |If you're using C#/Mono, you can use msbuild (the .sln/.csproj files used by MonoDevelop and Visual Studio) to manage your entire build process.
[131016310020] |You can then either build from MonoDevelop, or run the xbuild command in your favorite terminal (works best in Mono >= 2.6).
[131016310030] |This is extremely easy and requires pretty much no work on your part, because MonoDevelop will handle the msbuild files for you, and you won't need to edit them unless you want to tweak things beyond what MonoDevelop's UI can do for you.
[131016310040] |I'm not familiar with how people who depend on msbuild handle the installs for their projects, but you could always ask that. ;-)
[131016320010] |For C# you can use xbuild (and msbuild on windows), which will build the project from your project files.
[131016330010] |Screen doesn't always shut off on its own
[131016330020] |It seems like when I have certain web pages focussed my screensaver will kick in but the screen won't turn itself off.
[131016330030] |Does anyone have any ideas why this might be, and whether there is a way to disable whatever is keeping DPMS from kicking in?
[131016340010] |Break a large file into smaller pieces
[131016340020] |How do I break a large (4GB+) file into smaller files of about 500MB each?
[131016340030] |And how do I re-assemble them again to get the original file?
[131016350010] |You can use split and cat.
[131016350020] |E.g. something like
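    # cut the file into 500MB pieces; "largefile" and the "part_" prefix are arbitrary names
    split -b 500M largefile part_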
[131016350030] |Assuming that the shell sorts the results of shell globbing you could do something like this:
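    # the part_* pieces glob back in the order split created them
    cat part_* > largefile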
[131016350040] |Else, you could use a combination of find/sort/xargs.
[131016360010] |You can also do this with Archive Manager if you prefer a GUI.
[131016360020] |Look under 'Save->Other Options->Split into volumes of'.
[131016370010] |modify path globally
[131016370020] |Is there a standard for $PATH and the order of things that are supposed to be in there? Out of the box, Arch Linux doesn't have /usr/local/bin in the $PATH.
[131016370030] |I want to add it but I'm not sure if there's a predefined pecking order for system paths.
[131016370040] |Also, where is the right place to do this? For right now I modified /etc/profile, but I'm not sure that's the right place in Arch for user modifications.
[131016370050] |Anyone know if there's a better place?
[131016380010] |By "globally", do you mean for all users?
[131016380020] |I put my path modifications in ~/.profile, as it affects X applications as well.
[131016380030] |If you want it in the system profile, it's probably best to modify /etc/profile.
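For example, a line like this at the end of /etc/profile would do it (a sketch; whether the directory belongs at the front or the back of the pecking order is your call):
    export PATH="$PATH:/usr/local/bin"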
[131016390010] |Arch is a minimalistic Linux distribution, so normally there are no special configuration files getting included from strange places or modified by system configuration wizards.
[131016390020] |/etc/profile is the right place to do this for a system wide configuration.
[131016390030] |This file is intended to be used for ALL common Bourne-compatible shells.
[131016390040] |Shell specifics should be handled in /etc/profile.$SHELL, where $SHELL is the name of the binary being run (discounting symlinks).
[131016390050] |It is also mentioned in the official FAQ, for reloading if your shell can't find a newly installed binary: http://wiki.archlinux.org/index.php/FAQ#Q.29_I_just_installed_Package_X._How_do_I_start_it.3F
[131016400010] |Good tutorial for setting up a KVM/Xen box and advice on which would be better.
[131016400020] |I've got a dual-Xeon, 2GB RAM, 75GB HD server that I'd like to turn into my dedicated virtual environment.
[131016400030] |Currently I'm using VirtualBox locally to run a mock cluster for Cassandra and Nginx/Haproxy, but it's starting to overload my system.
[131016400040] |I'd like to run Arch for this box and have a minimal desktop environment with either KVM or Xen managing all the VMs. Anyone know of a good tutorial, or should I just do the base Arch install and then find a good tutorial for setting up Xen/KVM and managing the machines?
[131016400050] |Also, which would be better for this type of environment?
[131016400060] |I've read that KVM is the way to go because it's much easier to set up and manage, but I don't mind a more difficult setup if I can make better use of the hardware with Xen.
[131016410010] |This is not a complete answer to your question, but with regard to performance there is a KVM vs Xen question on Server Fault that might be helpful.
[131016420010] |Arch Linux KVM tutorial
[131016420020] |Installing Xen on Arch
[131016420030] |An opinion on Xen vs. KVM that I agree with.
[131016420040] |The performance is comparable with arguments on both sides at the moment.
[131016420050] |Time will tell which solution gets the most love (and improves most) in the long-run.
[131016420060] |My guess is KVM.
[131016420070] |Red Hat is investing quite a lot of time, energy, money, and risk to move away from Xen to KVM.
[131016420080] |Would they do that for no reason?
[131016430010] |As root can I launch a graphical program on another user's desktop?
[131016430020] |As root can I launch a graphical program on another user's desktop?
[131016430030] |Following are other questions whose answers I think I need to know:
[131016430040] |From a non X Session? (meaning root isn't logged into X)
[131016430050] |If multiple people were logged in on X, could I auto-detect who was on which screen, and thus programmatically detect which screen I need to launch the app on?
[131016430060] |Can I launch the app as the user? (OK, I'm 99.999% sure this is a yes.)
[131016430070] |Can I detect if users of group X are logged in to X?
[131016440010] |I can't completely try this since all my machines have root disabled.
[131016440020] |To find which display a user is on, you can use the who command.
[131016440030] |The last column of output is usually the DISPLAY that the user is logged in on.
[131016440040] |Something like this could be used to grab just the display (there is likely a far more efficient way to do this, feel free to offer edits):
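    # a rough sketch: assumes the display shows up as the last field, e.g. (:0)
    who | awk '$NF ~ /^\(:[0-9]/ { d = $NF; gsub(/[()]/, "", d); print $1, d }'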
[131016440050] |Then to launch a graphical X command on that display:
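    DISPLAY=:0 firefox &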
[131016440060] |where :0 would be replaced with whatever display you found in the first command and firefox would be replaced with whatever command you want to run.
[131016440070] |You could put this in a shell script and just use a variable.
[131016440080] |The next part is the part I haven't tested, but I don't see why it shouldn't be possible to do:
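    # untested, as noted above; "username" is a placeholder
    su - username -c 'DISPLAY=:0 firefox &'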
[131016440090] |to launch the X command as that user.
[131016450010] |You could look at how acpid does it.
[131016450020] |E.g. when it issues xscreensaver commands or blanks the screen for each user running an X session.
[131016450030] |For example under Ubuntu this file contains related stuff:
[131016450040] |Contains this loop:
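I can't quote it verbatim here, but the general shape is something like this sketch:
    # walk the local X sockets to find each display and its owner
    for socket in /tmp/.X11-unix/X*; do
        display=":${socket#/tmp/.X11-unix/X}"
        user=$(who | awk -v d="($display)" '$NF == d { print $1; exit }')
        [ -n "$user" ] && su "$user" -c "DISPLAY=$display xscreensaver-command -activate"
    done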
[131016460010] |To launch a graphical program on a user's desktop, you need to find two things: what display the user's desktop is on (the address) and what authorization cookie to use (the password).
[131016460020] |The following command should list the local displays that the user is logged on (one per line) on most unices:
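    # $user is the account you're interested in; prints displays such as :0, one per line
    who | awk -v u="$user" '$1 == u && $NF ~ /^\(:/ { gsub(/[()]/, "", $NF); print $NF }'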
[131016460030] |Finding the authorization cookie is a little harder.
[131016460040] |You have to look for the user's cookie file, which is ~/.Xauthority by default (all you need is the location of the cookie file, you don't need to extract the cookie from it).
[131016460050] |I can't think of a portable way to find out the actual X cookie file.
[131016460060] |The most accurate way is to find the pid of the X process and look for the argument to its -auth option.
[131016460070] |Another way is to find a process running on that X server and grab its XAUTHORITY environment variable.
[131016460080] |Simply using .Xauthority in the user's home directory works on many systems, but not on a default Ubuntu configuration.
[131016460090] |Once you have both pieces of information, put the chosen display in the DISPLAY environment variable, the chosen X authority cookie file in the XAUTHORITY environment variable, and you're set.
[131016460100] |It doesn't matter what user the program runs as; combine with su if you like.
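Putting it together, a sketch (the user, display, and cookie path are all illustrative):
    DISPLAY=:0 XAUTHORITY=/home/alice/.Xauthority su alice -c 'firefox &'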
[131016470010] |compile GNU/Linux with -O3 optimization
[131016470020] |It is said that compiling the GNU tools and the Linux kernel with the -O3 gcc optimization option will produce weird and funky bugs. Is that real? Has anyone tried it, or is it just a hoax?
[131016480010] |While you can get away with using -O3 and other optimization knobs on most applications (and it can result in speed improvements), I would hesitate to use such tweaks on the kernel itself or on the tool chain required for building it (compiler, binutils, etc.).
[131016480020] |Think about it: Is a 5% performance gain in the RAID and ext3 subsystems worth system crashes or potential data loss and/or corruption?
[131016480030] |Tweak all the knobs you want for that Quake port you're playing or the audio/video codecs you use for ripping your DVD collection to DivX files.
[131016480040] |You'll likely see an improvement.
[131016480050] |Just don't mess w/ the kernel unless you have time to waste and data you can bear to lose.
[131016490010] |-O3 uses some aggressive optimisations that are only safe if certain assumptions about register use, how stack frames are interacted with, and function reentrancy are true, and these assumptions are not guaranteed to hold in code like the kernel, especially where inline assembly is used (as it is in some very low-level parts of the kernel and its driver modules).
[131016500010] |I used it in Gentoo and didn't notice anything unusual.
[131016510010] |-O3 has several disadvantages:
[131016510020] |First of all, it often produces code that is no faster than -O2 or -Os.
[131016510030] |Sometimes it produces longer code due to loop unrolling, which may in fact be slower due to worse cache performance.
[131016510040] |As was said, it sometimes produces wrong code.
[131016510050] |It may be due either to an error in optimization or to an error in the code (like ignoring strict aliasing).
[131016510060] |As kernel code sometimes is, and sometimes has to be, 'smart', I'd say it is possible that some kernel developer made such an error.
[131016510070] |I experienced various strange problems, like crashing of user-space utilities, when I compiled the kernel with gcc 4.5, which at that point was stable.
[131016510080] |I still use gcc 4.4 for kernel and several selected userspace utilities due to various bugs.
[131016510090] |The same may apply to -O3.
[131016510100] |I don't think it has much benefit for the kernel.
[131016510110] |The kernel does not do heavy computations, and in the places where it does, it is optimized with assembler. The -O3 flag will not change the cost of context switching or the speed of I/O. I don't think something like a <0.1% speedup of overall performance is worth it.
[131016520010] |Note that large chunks of the toolchain (glibc in particular) flat out don't compile if you change optimization levels.
[131016520020] |The build system is set up to ignore your -O preferences for these sections on most sane distros.
[131016520030] |Simply put, certain fundamental library and OS features depend on the code actually doing what it says, not what would be faster in many cases. -fgcse-after-reload in particular (enabled by -O3) can cause odd issues.
[131016530010] |kernel memory allocator patch
[131016530020] |Is there any patch for the Linux kernel source to use a different memory allocator, such as the ned allocator or the TLSF allocator?
[131016540010] |The allocators you mention are userspace allocators, entirely different to a kernel allocator.
[131016540020] |Perhaps some of the underlying concepts could be used in the kernel, but it would have to be implemented from scratch.
[131016540030] |The kernel already has 3 allocators: SLAB, SLUB, and SLOB (and there was/is SLQB).
[131016540040] |SLUB in particular is designed to work well on multi-CPU systems.
[131016540050] |As always if you have ideas on how to improve the kernel your specific suggestions, preferably in the form of patches, are welcome on LKML :-)
[131016550010] |benefit of kernel module compiled inside kernel?
[131016550020] |What's the benefit of compiling kernel modules into the kernel (instead of as loadable modules)?
[131016560010] |Sometimes it's necessary.
[131016560020] |If you compile some vital driver (e.g. a SCSI driver) as a module, your system won't boot.
[131016560030] |Another great candidate for not compiling as a module is the filesystem type of the root partition.
[131016560040] |If the kernel doesn't understand ext3 to read /lib/modules/, how will it load modules from it?
[131016560050] |Think about it this way: to use modules, the kernel needs to know enough about your system to read and load kernel modules.
[131016560060] |Use that and trial and error :-)
[131016570010] |It depends.
[131016570020] |If you have a small amount of memory, the use of modules may improve resume, as modules are not reloaded every time (I felt it was significant with 2 GiB of RAM but not with 4 GiB, on traditional hard drives).
[131016570030] |Especially when, due to some bug, the battery module (regardless of being compiled-in or a module) took very long to start (read: several minutes).
[131016570040] |Also, when you don't know what hardware you are going to use, modules are clearly a benefit.
[131016570050] |PS.
[131016570060] |You can compile even vital drivers as modules as long as you include them in the initrd.
[131016570070] |For example, distros will include the filesystem driver for /, hard drive drivers, etc., in the initrd on installation.
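For instance, on Debian-style systems the initrd is regenerated with a command along these lines (the exact tool varies by distro):
    update-initramfs -u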
[131016580010] |I statically compile every driver for built-in hardware inside the kernel.
[131016580020] |An exception would be hardware which is not permanent (USB-connected hardware, for example).
[131016580030] |As my hardware configuration is not likely to change anytime soon, I don't bother with modules.
[131016590010] |A couple of potential benefits.
[131016590020] |Performance is an arguable one.
[131016590030] |You'd avoid some runtime overhead associated with a dynamic loader, but I doubt that's a big deal unless you're depending on a real-time scheduler.
[131016590040] |If you're taking advantage of large pages on your system, then perhaps creating a larger static kernel image means you make more efficient use of the page descriptor cache.
[131016590050] |Some systems will 'cage' the kernel so that it packs tightly into one memory locality, which can alleviate some amount of delay due to minor, and possibly major, page faults.
[131016590060] |It might suit you, architecturally, to deliver One Big Image, arguing that fewer independent modules is easier to maintain and the loss of flexibility is not important.
[131016590070] |A lot of this kind of reasoning ventures into matters of style and practice.
[131016600010] |As far as I know, there is no speed difference.
[131016600020] |I think you will gain a few kB of kernel memory, as the granularity of allocations is one page, so on a typical architecture each would-be module wastes an average of about 2kB (½ page).
[131016600030] |Even on embedded systems, that's hardly significant.
[131016600040] |You also gain a little disk space as the modules can be compressed in the same go as the kernel; that can be more relevant in embedded systems with little storage.
[131016600050] |If you can dispense with modules altogether, you save a little kernel memory (no need for the module loader), disk space (no need for the module utilities), and system complexity (no need to include module loading as a feature in your distribution).
[131016600060] |These points are quite attractive in some embedded designs where the hardware is not extensible.
[131016610010] |dash compatibility to bash
[131016610020] |Wikipedia says that dash executes faster than bash.
[131016610030] |My question is, if I set /bin/sh to dash, will all scripts that use /bin/sh in their shebang line and were intended for bash work under dash?
[131016620010] |No, not all scripts intended for bash work with dash.
[131016620020] |A number of 'bashisms' will not work in dash, such as C-style for loops and the double-bracket comparison operators.
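Two illustrative snippets that bash accepts and dash rejects:
    for ((i = 0; i < 3; i++)); do echo "$i"; done    # C-style for loop
    [[ $name == foo* ]] && echo match                # double-bracket test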
[131016620030] |If you have a set of bash scripts that you want to use for dash, you may consider using checkbashisms.
[131016620040] |This tool will check your script for bash-only features that aren't likely to work in dash.
[131016630010] |Not even sh scripts are compatible with dash.
[131016630020] |Pkgsrc's bootstrap, for example.
[131016640010] |Does the Linux PreemptRT patch benefit desktop users?
[131016640020] |Does the PREEMPT_RT patch (real-time kernel) have any benefit for regular desktop users?
[131016650010] |I don't think so.
[131016650020] |The patch seems to provide real-time scheduling, which is very important for some environments (planes, nuclear reactors, etc.) but overkill for a regular desktop.
[131016650030] |The current kernels, however, seem to be "real-time" and "preemptive" enough for regular desktop users[1].
[131016650040] |It may be useful if you work with high-quality audio recording and playback, in which even a small delay may dramatically reduce quality.
[131016650050] |[1] Technically both are 0/1 features but I guess it is clear what I mean ;)
[131016660010] |I'm guessing you misunderstand the concept of 'real-timeness'.
[131016660020] |If not, sorry, but it happens a lot, and I thought I'd throw a little clarification in here.
[131016660030] |The main point of a real-time kernel is to serve requests within a predictable deadline.
[131016660040] |That does not necessarily mean faster than a 'normal' kernel.
[131016660050] |So for desktop systems, a preemptive kernel is good, a real-time kernel much less so.
[131016660060] |That said, I am not familiar with this specific patch.
[131016660070] |Maybe this is different.
[131016670010] |Attempts to unify Linux and other free Unix?
[131016670020] |It's such a pain when you don't know what distribution to choose.
[131016670030] |It's such a waste of time and effort for developers to port stuff from one distribution to another.
[131016670040] |It makes Linux/Unix more complicated (and scary) than it should be.
[131016670050] |While I know there are certain reasons why the situation became the way it is now, I wonder if anyone has ever thought of reunifying the worlds of Linux and other (free) Unix?
[131016670060] |This is still a question: Have there been any (failed) attempts to unify Linux/Unix?
[131016680010] |If you unified the distributions' system configuration tools and general behavior, there would be no need for different distributions.
[131016680020] |An advantage would be to define some binary interface for the applications.
[131016680030] |The Linux Standard Base Workgroup tries to define some.
[131016680040] |Here is a list of the specifications which are the base of some iso standards: LSB Specs
[131016690010] |There was United Linux, which attempted to create a baseline for Linux distros.
[131016700010] |In addition to what echox said.
[131016700020] |Any attempt is an exercise in futility.
[131016700030] |Truthfully?
[131016700040] |I don't want to run my desktop the way I run a server, and the way I run my desktop, rolling bleeding edge, would not be good for everyone.
[131016700050] |What we can and should do is attempt to minimize the differences.
[131016700060] |I think things like the freedesktop notification API and systray API, which are now pseudo-standards (I think), are a good thing.
[131016700070] |The more we make things like that, which take duplication away from the devs, the better.
[131016700080] |Poppler is a good example of an app with a lot of split effort being pulled into one effort; now any app that needs to render PDFs has a good library to use on all platforms.
[131016700090] |In short, we should all try to share as much code and as many APIs as possible instead of creating a new library every time we want to do something.
[131016700100] |(Will someone create a standard API to access the 'system password manager' already (be that KWallet or whatever)?)
[131016710010] |Well, when you say "Linux" you are only talking about the kernel; the distro itself is composed of many GNU tools/apps and other applications, which are hard to unify since every developer and user has their own tastes and preferences. That's what makes Linux distros vary so much.
[131016710020] |But the kernel itself is more or less unified.
[131016710030] |As for unification or standardization itself, there are a bunch of efforts, such as the Linux Standard Base and, for example, the Filesystem Hierarchy Standard.
[131016720010] |In a way the Linux compatibility layer in FreeBSD comes close.
[131016720020] |The two are not really "merged", but it is a fairly painless way to run Linux applications without porting them to FreeBSD.
[131016720030] |From your followup comments, it sounds like you're mostly interested in unifying the package management, as many distros have come up with their own solutions.
[131016720040] |Actually, package management is itself an attempt at unification, but the spirit of competition still hasn't resolved which approach will "win".
[131016720050] |Perhaps it would be better for each distro to support as many package systems as possible, and time will tell which one has the right balance of flexibility, ease of use, etc., needed to become the de facto standard.
[131016730010] |Keeping config files synced across multiple pc's
[131016730020] |I have a few different linux machines and a lot of config files (and folders) on each.
[131016730030] |For example:
[131016730040] |Is there a simple and elegant method to keep these files synced between my machines (one has no internet access)?
[131016730050] |Also, some files will need a more advanced syncing process, as they will have to differ slightly... for example:
[131016730060] |My desktop keyboard has a range of hotkeys, whereas my laptop has almost none.
[131016730070] |I use XF86Mail to open thunderbird on my desktop, but Meta+M on my laptop.
[131016730080] |My Home Desktop and Work Desktop are both more "multiple user" orientated, whereas my Laptop is just for me.
[131016730090] |So on my laptop, I tend to keep the 'rc.xml' file for openbox at /etc/xdg/openbox/rc.xml, but on the desktops at ~/.config/openbox/rc.xml.
[131016740010] |Puppet http://www.puppetlabs.com/ and Cfengine http://www.cfengine.org/ are two good tools for syncing files (and a lot more..)
[131016750010] |Keep the files under version control.
[131016750020] |This has multiple benefits, including facilitating keeping files synchronized (commit on one machine, update on the others) and keeping a history of changes (so you can easily find out what broke a program that worked last month).
[131016750030] |I use CVS and synchronize the repositories with Unison or sneakernet, but that's because I've been doing this since a time before widely-available distributed version control.
[131016750040] |Anyone starting now should use a proper distributed version control tool, such as bazaar, darcs, git, mercurial, ...
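As a sketch of one common approach with git (the file names are just examples):
    cd ~
    git init
    git add .bashrc .config/openbox/rc.xml
    git commit -m "initial import of config files"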
[131016750050] |Managing files that need to differ between machines is always a bit of a pain.
[131016750060] |If the configuration language allows conditionals, use them.
[131016750070] |Otherwise, if there is an include mechanism, use it to split the configuration file into a machine-dependent part and a shared part.
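In a shell configuration, for instance, the include split can be as small as this (the .bashrc.local name is my own convention, nothing standard):
    # shared ~/.bashrc: pull in the machine-specific part if present
    [ -r ~/.bashrc.local ] && . ~/.bashrc.local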
[131016750080] |Keep all the machine-dependent parts in a separate directory (something like ~/.local/NAME/) which is always referred to through a symbolic link (~/.here -> local/NAME on each machine).
[131016750090] |I have a few files that are generated by a script in the shared part from parameters kept in the machine-specific part; this precludes modifying these files indirectly through a GUI configuration interface.
[131016750100] |Avoid configuring things in /etc; it's harder to synchronize between machines.
[131016760010] |I agree with the version control answer, but another method I've been experimenting with recently is Dropbox.
[131016760020] |It's essentially a version control system that automatically syncs between all your machines, so if you edit a file on one computer you'll see the changes reflected on your other computers in a couple seconds, without needing to commit on the former and update on the latter.
[131016760030] |Their free basic plan is 2GB, so I use it to version my configuration files and chat logs.
[131016770010] |Setting up a unix powered network.
[131016770020] |I want to set up several computers on a LAN, all connecting to one unix server.
[131016770030] |The primary goal being that the user accounts should only exist on the server.
[131016770040] |So that any user could access his normal interface through any of the given computers.
[131016770050] |What different ways are there of doing this, and what are pros and cons for these methods?
[131016770060] |Sidenote: I'm looking for a method that is simple and requires an absolute minimum of work on the individual terminals.
[131016770070] |If users can play games and do resource-intensive stuff, that would be a bonus.
[131016780010] |For account management, use LDAP.
[131016780020] |Simply install an LDAP client on all clients (e.g. the ldap-auth-client package on Ubuntu) and run an LDAP server on the server.
[131016780030] |Keep the home directories over NFS or Samba.
[131016780040] |The simplest setup is to mount the home filesystem as a whole on all clients at boot time.
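With NFS, that whole-filesystem mount is a single fstab line on each client; a sketch (the server name is illustrative):
    fileserver:/home  /home  nfs  defaults  0  0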
[131016780050] |This doesn't provide good security because anyone who plugs in a laptop can access all files; if that's a concern, Samba is the next simplest method.
[131016780060] |The major downside to LDAP and NFS or Samba is that users won't be able to do anything on the clients if the server or the network is down.
[131016780070] |I don't think any solution to this downside will come anywhere near your requirement for simplicity.
[131016790010] |As an alternative to the LDAP solution, you can use NIS for account information and NFS for file sharing.
[131016800010] |Red Hat has a project called FreeIPA that is producing an integrated LDAP, Kerberos, NTP and DNS server setup that's easier to set up.
[131016800020] |I haven't tried it yet, but it's been on my list of things to try out for a while now.
[131016810010] |What features does Darwin have that other Unixes don't, or vice versa?
[131016810020] |Does Darwin have any features that are specific to it?
[131016810030] |Do other Unixes have features that Darwin lacks?
[131016820010] |This isn't quite an answer, but DTrace is an awesome system debugging tool.
[131016820020] |It exists for Solaris, Darwin/OS X, and *BSD, but not Linux.
[131016830010] |First thing that comes to my mind is all the tools OS X has in the console; there are tons of useful commands beyond what other Unixes have.
[131016830020] |diskutil is like Partition Magic in a shell; this tool has so many options for disk operations that fdisk is really just 10% of what this beast can do. By the way, OS X has really great software RAID support: you can have JBOD, striped, and mirrored software RAID types. Does plain Unix have this? In your dreams!! ;D
[131016830030] |System Profiler is a great tool which displays all hardware IDs, names, models, serial numbers, and stuff like that in a VERY comfortable way.
[131016830040] |The Darwin kernel isn't totally transparent like in Unix.
[131016830050] |Darwin has a killall util =P; Unix doesn't, only skill.
[131016830060] |Different file systems also:
[131016830070] |HFS, HFS+
[131016830080] |Maybe later I'll remember more =)
[131016840010] |OS X is the only remaining operating system based on the Mach microkernel which is also still commercially relevant.
[131016840020] |There are a few ongoing research projects and obsolescent OSes that no doubt are still being used in production settings on old machines, but nothing you can go out and buy on a new machine today.
[131016840030] |OS X has the usual assortment of kernel feature incompatibilities that any *ix has.
[131016840040] |The biggest one I most recently had to work around is a lack of System V message queues (msgget(2) and friends).
[131016840050] |We had to replace our message queue code (which was written for a "real" System V variant and later ported to Linux) with TCP/IP to get our software to run on OS X. For our application, the differences between these two IPC methods mattered at the time we made the choice to go with message queues, but due to later architectural changes, it ended up not being a big deal to switch to TCP/IP.
[131016850010] |I think it's best to describe Darwin as just another flavour of UNIX.
[131016850020] |Solaris is one.
[131016850030] |HP/UX is another.
[131016850040] |There are lots more, maybe not as "high-profile" but they're there.
[131016850050] |And with every flavour comes its own specifics.
[131016850060] |That's why there are flavours in the first place.
[131016850070] |Some company thinks up something which would help sell it (or simply working with it, or even administering it), creates it, and gives it its own name.
[131016860010] |If I've heard right, Darwin, as released by Apple, no longer functions as an independent operating system, so I'd point out that the biggest difference it has is OS X on top of it. :D
[131016860020] |Although the integration between the old Mac OS, new Mac OS X, and NeXT stuff is sometimes laughable, little utilities like diskutil and hdiutil are great.
[131016860030] |Maybe it is some old Mach kernel architects left over from NeXT, who use these little things and care about them, who have made sure Xcode is such a good tool, too.
[131016870010] |Darwin is based on FreeBSD.
[131016870020] |One cool feature that is not present in other Unix operating systems (in my experience) is the Berkeley Packet Filter, aka /dev/bpf.
[131016870030] |This is a very versatile device you can use for packet capturing.
[131016880010] |When it comes right down to it, isn't Darwin just a thin BSD layer on top of Mach 2.0?
[131016880020] |I used to use NeXTStep, I don't know how much current MacOSX departs from NeXTStep, but...
[131016880030] |Mach 2.0 offered a different set of abstractions at the kernel level:
[131016880040] |A "task": that's an address space + a set of "ports", possibly with a thread running in it.
[131016880050] |Threads.
[131016880060] |This was the schedulable unit of execution.
[131016880070] |A task (address space) could have more than 1 running in it.
[131016880080] |I believe that Mach-O files (Mach's executable file format) could specify more than one thread at process run time: no main() function that started more C threads; the OS would start them.
[131016880090] |Ports.
[131016880100] |These aren't like TCP or UDP ports.
[131016880110] |They were typed, ordered streams of messages.
[131016880120] |Rather RPC-like.
[131016880130] |You made up a protocol spec file, then ran that through a compiler to get server and client side stubs, marshalling and unmarshalling routines, etc.
[131016880140] |User level memory pagers.
[131016880150] |You could set up a task+thread to handle paging of other tasks' address spaces.
[131016880160] |The original CMU Mach folks used these abstractions to emulate BSD Unix processes, MS-DOS processes, and in a fabulous fit of freakiness, VMS tasks.
[131016880170] |Each VMS task took 2 Mach tasks, plus many threads.
[131016880180] |Somebody used to sell a Mac OS (pre-OSX) emulator for NeXTStep that used the user-space-pagers to good effect.
[131016880190] |The old CMU Mach publications page: http://www.cs.cmu.edu/afs/cs/project/mach/public/www/doc/documents_top.html
[131016880200] |The VMS-on-Mach paper: http://www.sture.ch/vms/Usenix_VMS-on-Mach.pdf
[131016890010] |Darwin has a relatively small set of supported file system types.
[131016890020] |Apart from specials like devfs and network types like webdav, the list is:
[131016890030] |HFS/HFS+
[131016890040] |UFS/FFS
[131016890050] |ISO-9660/UDF/CDDA
[131016890060] |NTFS
[131016890070] |FAT
[131016890080] |Of this list, only UFS was designed for Unix, although HFS+ has been upgraded to support all necessary Unix functionality.
[131016890090] |As of Mac OS X 10.6, UFS cannot be used for the root partition, although this restriction probably doesn't apply to Darwin.
[131016890100] |By default, HFS+ is case-insensitive, although case sensitivity can be requested at creation time.
[131016890110] |Although Mac OS X will run on a case-sensitive partition, many high-profile Mac applications will not (eg Adobe CS).
[131016890120] |For a while, Apple was planning to supersede HFS with ZFS, and even shipped ZFS with some versions of Mac OS X, but sadly this experiment eventually failed because of unresolvable licensing issues.
[131016900010] |The main feature of Darwin that actually matters is that OS X uses it, really.
[131016910010] |Redirect stdout over ssh
[131016910020] |I would like to run
[131016910030] |on a remote system through ssh, but if I run
[131016910040] |the redirection is executed locally as ssh etc >file
[131016910050] |I've tried it with ' or '' or dd or with a pipe | instead, but I can't get it to work.
[131016910060] |How can this be done?
[131016920010] |Try:
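    # quote the command so the redirection runs on the remote side
    # (the command and file name are placeholders)
    ssh user@host 'some_command > /tmp/output'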
[131016930010] |Even simpler, instead of:
[131016930020] |do:
[131016940010] |How do I bind the tmux prefix to Super?
[131016940020] |I find even Ctrl+b to be very emacs-like, but I understand the point.
[131016940030] |I'm wondering if I could bind it to a single keypress of a key I don't otherwise use, namely Super_L (also known as the left Windows key; for why I say Super_L, start xev in a terminal and press that key).
[131016950010] |You can't. Binding a key will call the cmd_bind_key_parse function from cmd-bind-key.c, which in turn will (eventually) call key_string_get_modifiers from key-string.c:
[131016950020] |The tmux.c file contains the modifier key #define declarations, and from that file we have:
[131016950030] |On the surface though, it doesn't look too hard to modify; maybe a weekend (famous last words ;)) project?
[131016960010] |Super_L is an X keysym.
[131016960020] |Tmux runs in a terminal.
[131016960030] |It is up to your terminal emulator to transform a keysym into a character sequence.
[131016960040] |So you would have to configure both your terminal emulator and tmux.
[131016960050] |Looking at the tmux documentation, the prefix can only be a known key name with an optional modifier.
[131016960060] |So you can set the tmux prefix to a key combination you don't use, say M-F12, and get your terminal to send the character sequence for M-F12 when you press Super_L.
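On the tmux side that would then be a one-liner in ~/.tmux.conf, something like:
    set -g prefix M-F12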
[131016960070] |With a little more work, you could use a key that your keyboard probably doesn't have (tmux accepts F13 through F20 as key names, but they have to be declared in terminfo).
[131016960080] |On the terminal emulator side, you would have to arrange for Super_L to generate the key sequence \e\e[24~ (for M-F12) or \e[34~ (for F20), where \e is the escape character.
[131016960090] |How to do this depends on the terminal emulator (and some aren't configurable enough to do it).
[131016960100] |With xterm, it's done through X resources:
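Something along these lines (my sketch of the sort of resource meant; \033 is the escape character):
    XTerm*VT100.translations: #override \
        <Key>Super_L: string("\033\033[24~")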
[131016960110] |You may hit a snag: Super_L is normally a modifier, and modifier keys don't always work when a non-modifier is required.
[131016960120] |If you don't want Super_L to be a modifier, you can take its modifier away, or (less confusingly) use a different keysym for the physical key.
[131016960130] |This can be done through xmodmap (old-fashioned and simple to understand), through xkb (the modern, poorly-documented, powerful and complex way), or perhaps through your desktop environment's GUI configuration tool.
[131016970010] |I have not been able to set a prefix to a custom modifier key, but I did manage to define tmux bindings in combination with a custom modifier key under Gnome in combination with Metacity.
[131016970020] |For example, to map Mod4+k and Mod4+j to move to the panel above and below, respectively:
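The window-manager side varies; the global shortcuts would just run commands of roughly this shape (my sketch):
    tmux select-pane -U    # bound to Mod4+k
    tmux select-pane -D    # bound to Mod4+j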
[131016970030] |This allows for tmux bindings in combination with for example the Windows key.
[131016970040] |Something along those lines works for any window manager that allows you to define global keyboard shortcuts (Compiz, KWin, etc.).
[131016980010] |How to change background color of side pane of Kate and Kile?
[131016980020] |I am trying to customize the color scheme of Kile/Kate.
[131016980030] |I could do it, except I could not find any way to change the color of the side pane (the file list, etc.).
[131016980040] |I prefer a dark background, and having a dark background in the editing space and a white background in the left pane is not good for my eyes.
[131016990010] |I don't think you can change it from within Kate, because I think it's like a filebrowser_part or something.
[131016990020] |Go to System Settings > Application Appearance > Colors and try the Zion (Reversed) scheme.
[131016990030] |I think you can change those colors individually, and obviously you can create a scheme.
[131016990040] |I wish the makers of Kate would realize that devs like dark backgrounds and facilitate it more.
[131016990050] |Here's the resulting appearance of Kate.
[131016990060] |Please note some portions of this are not from the color scheme but from other settings.
[131017000010] |Is there any option with the 'ls' command so that I see only directories?
[131017000020] |Sometimes, I need to check only the directories, not the files.
[131017000030] |Is there any option with the command ls?
[131017000040] |Or is there any utility for doing that?
[131017000050] |ADDED
[131017000060] |I'm using Mac OS X, and 'ls -d' gives me '.' even though I have directories.
[131017010010] |No, but a simple find command will do it:
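    # directories directly under the current one
    find . -maxdepth 1 -type d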
[131017010020] |or grep
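    # keep only lines whose mode field starts with 'd'
    ls -l | grep '^d'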
[131017010030] |You could then alias either one if necessary.
[131017020010] |I think ls has a bug on Mac OS X. A workaround is to use grep... ls -l / | grep "^d"
[131017030010] |I know there is already a selected answer, but you can get the requested behavior with just ls:
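    # the trailing slash matches only directories; -d keeps ls from listing their contents
    ls -ld */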
[131017030020] |This will list all the directories in the current working directory where it is run.
[131017030030] |To get all the subdirectories of some other folder, just try:
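    ls -ld /some/other/folder/*/    # the path is illustrative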
[131017030040] |Note that the -l is optional.
[131017040010] |I like the tree utility to get an overview over the directory structure.
[131017040020] |It's available in MacPorts and all Linux distributions I've tried.
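For example (my reconstruction of the sort of invocation meant):
    tree -d -L 2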
[131017040030] |That would show all directories, two levels deep.
[131017050010] |+ or @ mark after running 'ls -al'
[131017050020] |In Mac OS X, running 'ls -al' gives me something like this:
[131017050030] |What does the + or @ at the end of the first column mean?
[131017050040] |Is this unique to Mac, or common in UNIX?
[131017050050] |ADDED
[131017050060] |After Michael Mrozek's answer, I ran 'ls -ale' to get the following.
[131017050070] |What do those appended messages mean?
[131017050080] |Why do I have them for some of the files?
[131017050090] |I don't remember doing anything particular for them.
[131017060010] |The @ suffix is unique to Mac OS and is covered by this question, so I copied this part of my answer from there; it means the file has extended attributes.
[131017060020] |You can use the xattr command-line utility to view and modify them:
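Typical invocations (placeholder names for the file and attribute):
    xattr file                  # list the names of file's attributes
    xattr -p name file          # print the value of one attribute
    xattr -w name value file    # write an attribute
    xattr -d name file          # delete an attribute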
[131017060030] |The + suffix means the file has an access control list, and is common in any *nix that supports ACLs.
[131017060040] |Giving ls the -e flag will make it show the associated ACLs after the file, and chmod can be used to modify them.
[131017060050] |Most of this is from the chmod man page:
[131017060060] |You add an ACL with chmod +a "type:name flag permission,...", and remove it with chmod -a (there's a worked example after the permission list below).
[131017060070] |The argument to chmod is fairly complicated:
[131017060080] |type is either user or group, to clarify whether name is referring to a username or a group name.
[131017060090] |If name is unambiguous, you can omit the type.
[131017060100] |name is the username or group the ACL applies to
[131017060110] |flag is allow if this ACL entry is granting a permission, or deny if it's denying a permission.
[131017060120] |permission is the actual permission being modified; you can list as many as you like, comma-separated
[131017060130] |delete -- Allow the file/directory to be deleted
[131017060140] |readattr -- Read basic attributes
[131017060150] |writeattr -- Write basic attributes
[131017060160] |readextattr -- Read extended attributes (using xattr, from above)
[131017060170] |writeextattr -- Write extended attributes
[131017060180] |readsecurity -- Read ACL info
[131017060190] |writesecurity -- Write ACL info
[131017060200] |chown -- Change owner
[131017060210] |Directory-specific permissions
[131017060220] |list -- Show the files/folders in the directory
[131017060230] |search -- Find a file/folder in the directory by name
[131017060240] |add_file -- Create a new file in the directory
[131017060250] |add_subdirectory -- Create a new directory in the directory
[131017060260] |delete_child -- Remove a file/directory in the directory
[131017060270] |Inheritance-control
[131017060280] |file_inherit -- ACLs on the directory are inherited by files
[131017060290] |directory_inherit -- ACLs on the directory are inherited by subdirectories
[131017060300] |limit_inherit -- Stops ACLs inherited by this directory from being inherited by subdirectories
[131017060310] |only_inherit -- Inherited by all newly created items but ignored
[131017060320] |File-specific permissions
[131017060330] |read -- Open the file for reading
[131017060340] |write -- Open the file for writing
[131017060350] |append -- Open the file for appending
[131017060360] |execute -- Run the file
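A worked example (the user name and file are hypothetical):
    # let user alice read and append to notes.txt, then inspect the ACL
    chmod +a "user:alice allow read,append" notes.txt
    ls -le notes.txt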
[131017060370] |In your particular example, most of the ACL entries are group:everyone deny delete.
[131017060380] |That is, all users in the everyone group (which is naturally everyone) are denied the permission to delete the folder.
[131017060390] |I believe, although I can't find any documentation about it, that these are default ACLs to stop you from removing essential root folders -- somebody correct this if that's not the case.
[131017060400] |The only other entry is group:com.apple.sharepoint.group.3 allow search, which allows Directory Services to search for files by name in the /Library folder.
[131017070010] |What are the (dis)advantages of ext4, ReiserFS, JFS, and XFS?
[131017070020] |What purpose is each suitable for?
[131017080010] |I'll just name a few pro and con points for each.
[131017080020] |This is by no means an exhaustive list, just an indication.
[131017080030] |If there are some big omissions that need to be in this list, leave a comment and I'll add them, so we get a nice, big list in one place.
[131017080040] |ext4
[131017080050] |Pro:
[131017080060] |supported by all distros, commercial and not, and based on ext3, so it's widely tested, stable and proven
[131017080070] |all kinds of nice features (like extents, subsecond timestamps) which ext3 does not have.
[131017080080] |Con:
[131017080090] |rumor has it that it is slower than ext3; also, the fsync data-loss soap opera
[131017080100] |XFS
[131017080110] |Pro:
[131017080120] |support for massive filesystems (up to 8 exabytes (yes, 'exa') on 64-bit systems)
[131017080130] |online defrag
[131017080140] |supported on upcoming RHEL6 as the 'large filesystem' option
[131017080150] |proven track record: xfs has been around for ages
[131017080160] |Con:
[131017080170] |none that I know of; wikipedia mentions slow metadata operations, but I wouldn't know about that
[131017080180] |JFS
[131017080190] |Pro:
[131017080200] |said to be fast (I have little experience with JFS)
[131017080210] |originated in AIX: proven technology
[131017080220] |Con:
[131017080230] |used and supported by virtually no-one, except IBM (correct me if I'm wrong; I have never seen or heard about JFS used in production, though it obviously must be, somewhere)
[131017080240] |ReiserFS
[131017080250] |Pro:
[131017080260] |fast with small files
[131017080270] |Con:
[131017080280] |ReiserFS's successor, Reiser4, has a very uncertain future and has not been merged yet
[131017080290] |main supporting distro (SuSE) dropped it years ago
[131017080300] |Hans Reiser's 'legal issues' are not really helping
[131017080310] |I recommend this page for further reading.
[131017090010] |How can I suppress unneeded output from emacsclient?
[131017090020] |I use emacs fairly regularly and am trying to configure my system to use emacsclient when possible.
[131017090030] |To do this I have done the following:
[131017090040] |In .bashrc:
[131017090050] |I then also installed an alternative for /usr/bin/editor that points to /usr/local/bin/emacsclient.wrapper that has the following:
[131017090060] |For the most part this works great.
[131017090070] |However, when emacsclient is launched from the terminal, it often spews out text I don't need.
[131017090080] |Notably:
[131017090090] |Or, when emacs isn't already started and emacclient has to fall back to starting emacs itself:
[131017090100] |What can I do to suppress this output?
[131017090110] |I've tried changing my emacs wrapper to
[131017090120] |but this didn't help.
[131017090130] |I didn't try editing EDITOR or VISUAL since I don't have any ideas other than this standard redirection.
[131017100010] |In general:
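    program > /dev/null 2>&1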
[131017100020] |will force the output (STDOUT) of program to /dev/null and redirect STDERR to STDOUT.
[131017100030] |So try:
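    # assuming the wrapper just forwards its arguments to emacsclient
    emacsclient "$@" > /dev/null 2>&1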
[131017110010] |Directory shortcuts in emacs buffer
[131017110020] |So I asked this question about directory shortcuts in a shell.
[131017110030] |The idea is I'd like to be able to open a file in /home/Documents/foo, say, by typing (C-x C-f) ~foo/file.
[131017110040] |So, is there a way to emulate zsh's "named directories" in the emacs buffer?
[131017110050] |(Incidentally, the shortcut definitions I want to use are already in my .zshrc; if there's a solution that reads them directly, that gets bonus points...)
[131017120010] |The Emacs component that's responsible for expanding ~ in file names is expand-file-name.
[131017120020] |Unfortunately, it's written in C, and deep inside its bowels is code that assumes that what comes after ~ is a user name.
[131017120030] |Fortunately, Emacs has a generic way of adding a wrapper around functions, so you can do what you want if you don't mind repeating some of the logic in the built-in function.
[131017120040] |Here's some completely untested code that should get you going.
[131017120050] |Look up “Advising Emacs Lisp Functions” in the Emacs Lisp manual for more information; the basic idea is that defadvice adds some code to run before the actual code of expand-file-name.
[131017120060] |Please signal the mistakes I've inevitably made in comments (whether you know how to fix them or not).
[131017120070] |I'll leave parsing the shortcuts in .zshrc to fill expand-file-name-custom-tilde-alist (or whatever technique you choose to keep the aliases in synch) as an exercise.
[131017130010] |is GNU/Linux still relevant
[131017130020] |Some guys (and of course Stallman) insist that the OS, which in most cases means a "Linux distro", should be called the GNU/Linux family, not just Linux. Is this still relevant today? To create a complete OS we need other tools/utilities/apps that are not Linux, but today maybe only half of them are GNU software.
[131017140010] |From my point of view, as long as we exclusively use the GNU toolchain for kernel compilation, it should be GNU/Linux.
[131017150010] |From my point of view, "GNU/Linux" label is pure marketing from GNU supporters.
[131017150020] |Many Unix (not only Linux) distributions have GNU tools installed, starting with gcc.
[131017150030] |Many tools with other licences are installed as well on these distributions.
[131017150040] |Should we say GNU/BSD/Ubuntu or BSD/GNU/Mac OS X?
[131017160010] |Well.
[131017160020] |It depends how exact you need to be.
[131017160030] |For example:
[131017160040] |If you are asked by someone who uses the browser "Windows" and the operating system "Google" (;)), use "Linux".
[131017160050] |If you are talking about embedded Linux (like the difference between GNU/Linux and embedded Linux) or the GNU operating system family (differences between GNU/Linux and GNU/kFreeBSD), use "GNU/Linux".
[131017160060] |If you are talking about differences between userland and kernel, use "GNU/Linux" for the system and "Linux" for the kernel.
[131017160070] |If none of the above apply and there is no need to disambiguate, use Linux, GNU/Linux, or my personal choice, (GNU/)Linux.
[131017170010] |My car is a Ford and needs tyres, at the moment I have Michelin tyres installed.
[131017170020] |Should I refer to my car as a Michelin/Ford?
[131017170030] |Presumably Stallman would say yes, as Bibendum obviously deserves credit for making the tyres.
[131017170040] |However, I would imagine that if I said that I drove a "Michelin/Ford" and tried to persuade other people to refer to it as such, everybody would think I was a complete arse.
[131017170050] |If I was Bibendum, everybody would also say that I was only insisting on the "Michelin/" because of my relationship with the company.
[131017170060] |OK, the analogy is only an analogy, but is it that bad?
[131017180010] |It's quite true that GNU is not the only organisation to contribute non-kernel software to a working GNU/Linux system, though GNU software is a very large and important part.
[131017180020] |So there isn't a cast-iron argument that you absolutely have to call it GNU/Linux.
[131017180030] |But consider what Richard Stallman and GNU have contributed, and how little they ask in return.
[131017180040] |You don't have to pay money, you don't have to sign a licence agreement, and you have all the Freedoms that Free Software gives you.
[131017180050] |If all they want for that is for you to call it "GNU/Linux", is that really so hard?
[131017190010] |Identity Management with ActiveDirectory
[131017190020] |What is the best, or most reliable, way to manage my Unix/Linux user accounts with ActiveDirectory?
[131017190030] |Or, is this even feasible?
[131017200010] |PAM LDAP against Active Directory should work fine.
[131017210010] |I highly highly highly (highly) recommend using Likewise Open to do this.
[131017210020] |Every time I talk about them, I sound like a paid shill, but I'm not.
[131017210030] |It's just really that good.
[131017210040] |Essentially, you install the software (painless, there's an RPM and DEB installer), run "domainjoin-cli join domain.com adminuser", type the password for "adminuser", and then your machine is part of the AD domain.
[131017210050] |The one thing that I do change is in the configuration: I turn on the "assume default domain" setting, because I don't want my users to have to type their domain every time they connect to the machine.
[131017210060] |The benefits are huge.
[131017210070] |When you log in with AD credentials, your UID and GIDs are assigned based on a hash, which means that they're the same across the entire infrastructure.
[131017210080] |This means that things like NFS work.
[131017210090] |In addition, it's simple to get things like Samba and Apache to authenticate, since Likewise configures PAM.
[131017210100] |Thanks to Likewise Open, there is not a single network-based service that I offer that isn't authenticated against AD.
[131017220010] |It's quite feasible, and already done.
[131017220020] |As someone has already mentioned, Likewise will give you direct integration.
[131017220030] |However...
[131017220040] |If you want to take the plunge, you could also install winbind from the Samba project, which would give you the same experience.
[131017220050] |Using winbind, your machine will become a domain member, and user accounts in Active Directory can be transparently mapped and assigned UID/GID settings.
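[131017220060] |A rough sketch of the winbind route, assuming smb.conf is already configured for ADS security and idmap (details vary by site, and the admin account name is a placeholder):
    # Join the machine to the domain (prompts for the admin password).
    net ads join -U Administrator
    # With winbindd running, verify that domain users and groups resolve.
    wbinfo -u
    wbinfo -g
    getent passwd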
[131017230010] |Not exactly AD, but I got a nice answer to a similar question over here:
[131017230020] |http://unix.stackexchange.com/questions/333/what-is-the-equivalent-of-active-directory-on-linux/338#338
[131017240010] |Because we are talking about AD, I am going to assume an enterprise environment here.
[131017240020] |I have a couple of hundred RHEL3, 4 and 5 boxes running with Active Directory based user-accounts.
[131017240030] |All of them run the same configuration, using nss_ldap and pam_krb5.
[131017240040] |It works brilliantly, it is supported by all commercial Linux vendors in the standard support option, because it uses out-of-the-box tools and it is rock solid.
[131017240050] |In the end, AD is just Kerberos and LDAP, and to vendors those are standardized, easily supportable protocols.
[131017240060] |I have yet to run into a problem with this way of using AD that I cannot solve.
[131017240070] |Scott Lowe's documentation here helped me quite a bit when initially designing our solution.
[131017240080] |It's not perfect, but it'll help you get underway.
[131017240090] |Scott's idea is to create a bind account for LDAP, which I'm not that fond of.
[131017240100] |A machine that is joined in AD can do LDAP queries with its own credentials, which is a lot saner, if you ask me.
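[131017240105] |For example, on a joined machine you can sanity-check both halves from the shell (realm, host, and account names below are placeholders):
    # Kerberos: get a ticket using the machine's keytab.
    kinit -k 'HOST$@EXAMPLE.COM'
    # LDAP: query AD with those credentials via GSSAPI.
    ldapsearch -Y GSSAPI -H ldap://dc1.example.com -b 'dc=example,dc=com' '(sAMAccountName=jdoe)'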
[131017240110] |Depending on your requirements, you might want to take a step back and consider whether you need a supported solution or not.
[131017240120] |Because nice as Likewise may be, it is fairly expensive.
[131017240130] |Using the tools that come with every Linux distro by default (and are thus supported) might be a tiny bit more complicated - but that shouldn't scare off a good Linux admin - and is just as good (or maybe better, depending on your requirements).
[131017240140] |I could write up in a bit more detail about how I did this, but I don't have time for that right now.
[131017240150] |Would that be of help?
[131017250010] |I'll spare you the marketing speak, but try Centrify Express.
[131017250020] |It's a free tool to join your Linux and Macs to AD. http://www.centrify.com/express/download-centrify-express.asp
[131017260010] |What is a "kernel panic"?
[131017260020] |What does it mean when your computer has a "kernel panic"?
[131017260030] |Does it equate to the windows BsoD?
[131017260040] |Also, what methods, tips, tricks are available to the user when a kernel panic strikes?
[131017270010] |It is unexpected program flow behavior (the kernel is a program in this case).
[131017270020] |In case of a panic, the program stops working.
[131017270030] |It IS equal to the Windows BSoD.
[131017270040] |A KP means something is wrong with the kernel or its modules.
[131017270050] |If it's a stable kernel, look at the drivers.
[131017270060] |If nothing is special and all the drivers are common ones, it could be a hardware problem.
[131017280010] |Kernel panic is the same as a BSOD and is non-rescuable IIRC.
[131017280020] |However, a smaller failure is an OOPS, which denotes some error in the kernel.
[131017280030] |You can use kexec, which switches to a new kernel on panic (you can treat it as a fast reboot) - possibly getting a meaningful dump of the system to debug the problem.
[131017280040] |You can use the panic= kernel parameter, which reboots the machine n seconds after a panic.
[131017280050] |You can instruct GRUB to switch to a fallback kernel in such a case.
[131017280060] |Use Magic SysRQ keys to print stack traces etc.
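[131017280070] |A small sketch of the reboot-on-panic setup (the 10-second value is just an example):
    # Reboot 10 seconds after a panic; takes effect immediately.
    sysctl -w kernel.panic=10
    # Or bake it into the kernel command line in your boot loader entry:
    #   linux /boot/vmlinuz-2.6.32 root=/dev/sda1 panic=10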
[131017290010] |Patching Linux kernel on-line (i.e. without rebooting)
[131017290020] |Sorry - I don't remember the exact name.
[131017290030] |I know there is a mechanism to patch the kernel at runtime by loading modules, without the need for a reboot, as long as the structures involved are not affected.
[131017290040] |It is used by servers for security patches, and recently by Ubuntu & Fedora.
[131017290050] |What is the name of the mechanism?
[131017290060] |Is there any how-to for hand-compiled kernels?
[131017290070] |Is it possible to automatically check whether the change x.y.z.a -> x.y.z.a+1 changed any structure or not?
[131017300010] |I think you are looking for Ksplice.
[131017300020] |I haven't really followed the technology so I'm not sure how freely available the how-to information is but they certainly have freely available support for some Fedora and Ubuntu versions.
[131017310010] |Defragging an ext partition?
[131017310020] |Why don't ext2/3/4 need to be defragmented?
[131017310030] |Is there no fragmentation at all?
[131017320010] |There is not much fragmentation as long as the disk isn't too full (so avoid filling a disk up too much if you can).
[131017330010] |Modern filesystems, particularly those designed to be efficient in multi-user and/or multi-tasking use cases, do a fairly good job of not fragmenting data until the filesystem becomes nearly full, the pattern of file creations and writes is unusual, or the filesystem is very old and has seen a lot of "action". (There is no exact figure for where the "nearly full" mark is, as it depends on how large the filesystem is, the distribution of file sizes, and your access patterns; figures between 85% and 95% are commonly quoted.)
[131017330020] |This includes ext2/3/4, reiser, btrfs, NTFS, ZFS, and others.
[131017330030] |There is currently no kernel- or filesystem-level way to defragment ext3 or ext4 (see http://en.wikipedia.org/wiki/Ext3#Defragmentation for a little more info), though ext4 is planned to gain online defragmentation soon.
[131017330040] |There are user-land tools (such as http://vleu.net/shake/ and others listed in that Wikipedia article) that try to defragment individual files or sets of files by copying/rewriting them - if there is a large enough block of free space, this generally results in the file being given a contiguous block.
[131017330050] |This in no way guarantees files are near each other, though, so if you run shake over a pair of large files you might find it results in the two files being defragmented themselves but not anywhere near each other on the disk.
[131017330060] |In a multi-user filesystem the locality of files to each other isn't often important (it is certainly less important than fragmentation of the files themselves), as the drive heads are flipping all over the place to serve different users' needs anyway, and this drowns out the latency bonus that locality of reference between otherwise unfragmented files would give; but on a mostly single-user system it can give measurable benefits.
[131017330070] |If you have a filesystem that has become badly fragmented over time and currently has a fair amount of free space, then running something like shake over all its files could have the effect you are looking for.
[131017330080] |Another method would be to copy all the data to a new filesystem, remove the original, and then copy it back on again.
[131017330090] |This helps in much the same way shake does but may be quicker for larger amounts of data.
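[131017330095] |To check whether a file is fragmented before bothering, filefrag from e2fsprogs reports the extent count, and the copy/rewrite trick can be done by hand (a sketch; the file name is an example):
    # 1 extent means the file is already contiguous.
    filefrag -v bigfile.iso
    # Crude per-file defragment by rewriting, given enough contiguous free space:
    cp bigfile.iso bigfile.iso.new && mv bigfile.iso.new bigfile.iso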
[131017330100] |For small amounts of fragmentation, just don't worry about it.
[131017330110] |I know people who spend more time sat watching defragmentation progress bars than they'll ever save (due to more efficient disk access) in several lifetimes of normal operation!
[131017340010] |How can I use ffmpeg to split mpeg video into 10 minute chunks for youtube upload?
[131017340020] |I intend to answer this myself once I get the process down but if anyone has insight.
[131017340030] |There is often a need in the open source or active developer community to publish large video segments online (Meetup videos, campouts, tech talks).
[131017340040] |Being that I am a developer and not a videographer I have no desire to fork out the extra scratch on a premium vimeo account.
[131017340050] |How then do I take a 12.5 GB video mpg (1:20:00) tech talk and slice it into 00:10:00 segments for easy uploading to youtube?
[131017350010] |Wrapping this up into a script to do it in a loop wouldn't be hard.
[131017350020] |Beware that if you try to calculate the number of iterations based on the duration output from an ffprobe call, that duration is estimated from the average bit rate at the start of the clip and the clip's file size. ffprobe doesn't scan the entire file, for speed reasons, so it can be quite inaccurate.
[131017350030] |I don't think you really want to be cutting at exactly 10 minutes for each clip.
[131017350040] |That will put cuts right in the middle of sentences, even words.
[131017350050] |I think you should be using a video editor or player to find natural cut points just shy of 10 minutes apart.
[131017350060] |Assuming your file is in a format that YouTube can accept directly, you don't have to reencode to get segments.
[131017350070] |Just pass the natural cut point offsets to ffmpeg, telling it to pass the encoded A/V through untouched by using the "copy" codec:
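[131017350075] |Along these lines (file names and offsets are examples; -vcodec copy and -acodec copy keep the original encoding):
    ffmpeg -i talk.mpg -ss 00:00:00 -t 00:10:00 -vcodec copy -acodec copy part1.mpg
    ffmpeg -i talk.mpg -ss 00:10:00 -t 00:10:00 -vcodec copy -acodec copy part2.mpg
    # ...and so on for the remaining segments.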
[131017350080] |The start point for every command after the first is the previous command's start point plus the previous command's duration.
[131017360010] |reduce ncurses terminfo size
[131017360020] |I notice that ncurses's terminfo database in /usr/share/terminfo is about 7MB (I compiled it myself). That is too large for my needs if I want to deploy onto an embedded Linux system with 64MB of disk space. Is there any way to reduce its size by deleting unneeded entries and keeping the most used ones? And what is this database actually for?
[131017360030] |Edit (added): is there any info or reference on which terminfo entries are commonly used by a regular PC or SSH client?
[131017370010] |I hate to give RTFM-type answers, but what information are you looking for that isn't contained in man 5 terminfo?
[131017370020] |It is often easy to overlook manual pages when there are multiple manual pages in different sections.
[131017370030] |Often the other manual pages to look at will be listed at the bottom of the first manual page that is found, but it is also helpful to remember that manual pages are divided into multiple sections (from man man):
[131017370040] |From man 5 terminfo it seems that how much you can get rid of from /usr/share/terminfo will depend on whether you have complete control over the terminal type (which you likely do).
[131017370050] |If you know that you will only ever be running on one terminal type, you can likely remove all but one of the terminal descriptions.
[131017370060] |On some systems, these terminfo files are also in /etc/terminfo/ or /lib/terminfo/, but I am unsure if this is the case when you compile directly from the upstream tarball.
[131017380010] |With ansi, cygwin, linux, vt100, vt220, and xterm terminfo definitions, I expect you'd be able to hit 98% of the terminal emulations that you'll encounter in the wild.
[131017380020] |Even terminal emulators that have a different native mode can likely be directed to emulate vt100/vt220 modes, often without user intervention.
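[131017380030] |A sketch of trimming the database down to just those entries (terminfo stores each entry in a one-letter subdirectory named after its first character; aliased entry names would need the same treatment):
    cd /usr/share/terminfo
    keep="ansi cygwin linux vt100 vt220 xterm"
    mkdir -p /tmp/terminfo
    for t in $keep; do
        d=${t%"${t#?}"}            # first letter = subdirectory name
        mkdir -p "/tmp/terminfo/$d"
        cp "$d/$t" "/tmp/terminfo/$d/"
    done
    # Inspect /tmp/terminfo, then install it in place of the full database.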
[131017390010] |Why won't my xmodmap command run on startup/login?
[131017390020] |I want to run this command every time I log in (or every time I start up, if that doesn't work): xmodmap -e 'keysym Delete = Menu' -e 'keysym Menu = Delete'
[131017390030] |I've tried many things.
[131017390040] |I put the command in System > Preferences > Startup Applications.
[131017390050] |I put it in a .sh file, marked it chmod +x, and put that file in System > Preferences > Startup Applications.
[131017390060] |I put the script in /etc/init.d.
[131017390070] |I put the commands in ~/.profile.
[131017390080] |Nothing seems to work.
[131017390090] |Finally, I put this in my ~/.profile:
[131017390100] |Both test1 and test2 get created, but the keys are still not remapped.
[131017390110] |If I just copy/paste the command and run it manually, it works fine.
[131017390120] |But it won't run on login.
[131017390130] |Any ideas?
[131017400010] |Depending on your distribution, a ~/.xsession shell script is executed when logging into X, or a ~/.Xmodmap file is sourced by an xmodmap process.
[131017400020] |~/.profile is only executed by a login shell (with or without X), thus it is not the right place.
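[131017400030] |For example, a minimal ~/.xsession might look like this (the session command is an assumption - use whatever your distribution normally starts):
    #!/bin/sh
    # Remap Delete and Menu, then hand control to the normal session.
    xmodmap -e 'keysym Delete = Menu' -e 'keysym Menu = Delete'
    exec gnome-session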
[131017410010] |Put it in ~/.xsessionrc and make sure that /etc/X11/Xsession.options contains allow-user-xsession.
[131017420010] |using cross-compiled Valgrind
[131017420020] |I have downloaded the sources from the Valgrind trunk and built them for use on an embedded Linux system (Montavista Linux) powered by an ARM926 (ARMv5t architecture).
[131017420030] |Everything went well until I tried to run it to check some application I've made; Valgrind tells me:
[131017420040] |"valgrind: failed to start tool 'memcheck' for platform 'arm-linux': No such file or directory".
[131017420050] |Trying to run any other tool gives the same message.
[131017420060] |For make install I used the --prefix option pointing to the embedded system's filesystem, which is mounted through NFS (e.g. --prefix=/home/NFS_mounted/opt/valgrind).
[131017420070] |I checked the installation location and the lib directory there contains the proper tools; I also checked the corresponding pkgconfig file for Valgrind and it seems to be OK so I'm kind of clueless right now and can't figure out what's going wrong.
[131017420080] |2 questions:
[131017420090] |Is it correct to use/install a cross-compiled Valgrind like that?
[131017420100] |Maybe I'm doing something wrong regarding the setup, or maybe there is something involving the host system that I'm missing (something gdb/gdbserver-like).
[131017420110] |Has anyone tried running it on ARM architectures?
[131017420120] |I'm aware that the support is fairly recent so I'm wondering if I could get decent results.
[131017430010] |The problem is that 'valgrind' is looking for a different executable to run the real checking.
[131017430020] |It uses the install path you specified when you configured it, which is not the same path as on the target.
[131017430030] |You should be able to confirm this by creating a symlink /home/NFS_mounted on your target that points to '/'.
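[131017430040] |A sketch of both workarounds (paths follow the --prefix from the question; ./myapp is a placeholder):
    # On the target: make the configure-time prefix resolve to the real root.
    ln -s / /home/NFS_mounted
    # Or point valgrind at its tool libraries explicitly at run time.
    VALGRIND_LIB=/opt/valgrind/lib/valgrind valgrind --tool=memcheck ./myapp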
[131017440010] |Invoke Syscalls from Java
[131017440020] |Is there a way to invoke syscalls directly from Java, or is it necessary to first call a native method?
[131017450010] |It is necessary to use a native method, or a library that does so for you.
[131017460010] |You need to use a native method, but you don't need to implement it yourself.
[131017460020] |There is a JNI alternative called JNA (Java Native Access), which lets you access shared libraries directly without needing a JNI interface wrapped around them, so you can use it to interface directly with glibc:
[131017470010] |Connecting to a network via another pc
[131017470020] |What's a simplistic recipe to connect my PC to my WLAN through my laptop?
[131017470030] |I have one wireless network, one desktop that (for practical reasons) has no connection to the WLAN, and one laptop with wireless and an ethernet port.
[131017470040] |I would like to run a cable from my desktop to my laptop, connect my laptop to my WLAN and forward the network to my desktop.
[131017470050] |Is there a solution?
[131017480010] |OK, I don't think I really have the answer here, but here goes:
[131017480020] |Connect your laptop to the WLAN,
[131017480030] |Connect your desktop to your laptop (make sure to configure the IPs properly, or have a DHCP server on your laptop),
[131017480040] |Use firestarter (should be available on your package manager) to create a bridge between the two connections.
[131017480050] |That's as far as I could get by researching the subject; hope this info serves as a good starting point.
[131017490010] |You can also connect the laptop to the WLAN and the desktop to the laptop, and merely bridge the two connections on the laptop, so you don't need to run anything extra (DHCP/NAT) on it.
[131017490020] |The desktop will get its config from the WLAN DHCP.
[131017500010] |Simple and platform agnostic:
[131017500020] |Ensure that the two networks to be bridged have different subnet addresses.
[131017500030] |Enable standard Linux IP forwarding in /etc/sysctl.conf.
[131017500040] |For different subnets, assuming you are using the allocated private class C space, 192.168.1.* and 192.168.2.* are different subnets.
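[131017500050] |A sketch of the forwarding-plus-NAT variant on the laptop (interface names are assumptions: wlan0 faces the WLAN, eth0 faces the desktop):
    # Enable forwarding now; add net.ipv4.ip_forward=1 to /etc/sysctl.conf to persist.
    sysctl -w net.ipv4.ip_forward=1
    # Masquerade the desktop's traffic out of the wireless interface.
    iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
    iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT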
[131017510010] |Assuming that:
[131017510020] |PC1 has a working internet connection which we want to share with PC2.
[131017510030] |PC1 is connected to PC2 with a cross-over cable or a switch
[131017510040] |192.168.0.1 is the IP address we assign to PC1
[131017510050] |192.168.0.2 is the IP address we assign to PC2
[131017510060] |10.0.0.2 is the IP address of the nameserver used by PC1 (cat /etc/resolv.conf on PC1)
[131017510070] |ON PC1:
[131017510080] |eth0 is the network interface that connects to PC2
[131017510090] |ON PC2:
[131017510100] |eth0 is the interface that connects to PC1
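[131017510105] |The PC2 side then boils down to something like this (a sketch matching the addresses assumed above):
    # On PC2: static address, default route via PC1, and PC1's nameserver.
    ifconfig eth0 192.168.0.2 netmask 255.255.255.0 up
    route add default gw 192.168.0.1
    echo 'nameserver 10.0.0.2' > /etc/resolv.conf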
[131017510110] |See Internet Share for reference.
[131017520010] |games directory?
[131017520020] |On a standard filesystem, we have:
[131017520030] |Is this a joke, or is there some history behind this?
[131017520040] |What is it for?
[131017520050] |Why do we have separate and specialized directories for something like games?
[131017530010] |At least partially, it's so the system can have a games group that certain users are members of, and they all have rights to execute games in the games folder.
[131017540010] |It's just a bit of historical cruft.
[131017540020] |A long time ago, games were an optional part of the system and might be installed by different people, so they lived in /usr/games rather than /usr/bin.
[131017540030] |Data such as high scores came to live in /var/games.
[131017540040] |As time went by, people variously put variable game data in /var/lib/games/NAME or /var/games/NAME, and static game data in /usr/lib/NAME or /usr/games/lib/NAME or /usr/games/NAME or /usr/lib/games/NAME (and the same with share instead of lib for architecture-independent data).
[131017540050] |Nowadays there isn't any compelling reason to keep games separate; it's just a matter of tradition.
[131017550010] |Linux-compatible 56k modem
[131017550020] |I have a client who wants to set up a fax server in their office.
[131017550030] |I'd like to use HylaFAX under Ubuntu, but I'm a little shy to spend money on hardware for Linux, particularly a modem given the whole Winmodem issue.
[131017550040] |So to ask two different questions in the same post:
[131017550050] |First, what are the most reliable ways to tell if a modem is supported in modern releases of Linux?
[131017550060] |It's nice when a manufacturer specifically mentions Linux support.
[131017550070] |Failing that, do you check the kernel docs?
[131017550080] |Chipset type?
[131017550090] |Other keywords on the modem's spec sheet?
[131017550100] |Second, what specific modem would you recommend?
[131017560010] |External modems that plug into a serial (RS232) or USB port usually have all the necessary electronics.
[131017560020] |Most Winmodems (requiring Windows software to supplement their deficiencies) are internal.
[131017560030] |“Hayes compatible” or a mention of compatibility with DOS, OS/2, Linux, SCO, or anything other than Windows is an indication that the modem will just work under any OS.
[131017560040] |If in doubt, look for a model that other people have used under Linux, perhaps second-hand.
[131017570010] |The HylaFAX Handbook has some documentation about making sure you have a compatible fax device.
[131017570020] |According to that document, external serial modems will work almost without exception, but some external USB modems may be softmodems and require a driver.
[131017570030] |Regarding internal modems, that document references some chipsets and model numbers that are known to work.
[131017570040] |Once you find a device that you intend to use, it probably wouldn't hurt to search for it in the hylafax-users mailing list archive to see if another user has encountered a problem with it.
[131017570050] |The HylaFAX website also has a Hardware Compatibility List.
[131017580010] |Capture multi-line output of a bash builtin
[131017580020] |Normally, bash is able to assign multi-line output of a command:
[131017580030] |This works from both interactive shell and scripts.
[131017580040] |But it appears the output of a builtin can't be captured in a variable:
[131017580050] |This works in the interactive shell, but in a script only the first line ends up in the variable, probably due to different treatment of builtins by bash.
[131017580060] |Why does it happen, and how can I do what I need?
[131017590010] |The output of commands can vary depending on whether it is run interactively or within a script.
[131017590020] |For example, "multiline output" goes fine with export:
[131017590030] |This gives you all the output, just the same as when you run it interactively.
[131017590040] |But not alias, which gives you nothing.
[131017600010] |There seems to be some ambiguity in the way that you worded your question.
[131017600020] |You said:
[131017600030] |but in a script only the first line ends up in the variable
[131017600040] |and then your comments suggest that you are writing a shell function - not a script.
[131017600050] |I suspect you know that scripts are useless for changing directories, since any cd within a script does not propagate out to the superior shell.
[131017600060] |If indeed you are trying to develop your directory changer in a script, you'll have a tough time, as the directory stack is not inherited by subordinate shells:
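[131017600065] |You can see the inheritance problem directly (a quick illustration):
    pushd /tmp > /dev/null
    dirs -v              # this shell's stack: /tmp plus the old directory
    bash -c 'dirs -v'    # a child shell starts with a fresh, one-entry stack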
[131017600070] |You could get a function to help out:
[131017600080] |But, of course, you'd need the function to also effect the cd in the current shell, in which case wrapping and unwrapping arguments through the command line is a quoting chore, so you are best off just doing it all in the function,
[131017600090] |which also switched to using $(...) as a synonym for backticks to reduce quoting confusion.
[131017610010] |recursively chmod
[131017610020] |I was trying to chmod folders and files with:
[131017610030] |But I was wondering:
[131017610040] |How to do it in one line using find, excluding the directories . and ..?
[131017610050] |Which method is better, find -exec or xargs?
[131017620010] |You can do this with the symbolic mode arguments to chmod:
[131017620020] |The capital X is the key part here: it grants execute permission only if the target is a directory (or already has an execute bit set for some user).
[131017620030] |See the manual page for more details.
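[131017620040] |For instance, a one-pass sketch (adjust the permission classes to taste):
    # Directories end up 755 and plain files 644; files that already
    # had an execute bit keep it, thanks to X's conditional behaviour.
    chmod -R u=rwX,go=rX .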
[131017630010] |Sherlock got (1); for (2), xargs is generally preferable, as it allows for more reasonable syntax and less shell invocation overhead.
[131017630020] |Be aware of the -print0 and -0 options to find and xargs respectively, as paths with spaces and special characters can gum up the works.
[131017630030] |For a one-time run, the difference really shouldn't matter unless you are chmodding a huge tree.
[131017630040] |If you do this often enough to formalize into a script, I'd use xargs.
[131017640010] |I think I'm comfortable with:
[131017640020] |Can this be minified into one line?
[131017640030] |Thanks.
[131017650010] |Is using find or xargs mandatory?
[131017650020] |If not, you can use:
[131017660010] |find ... -exec ... + is like find ... -exec ... \; except that the command is executed only once per large set of matching files.
[131017660020] |Once upon a time, find OPTIONS... -exec COMMAND... \; had to act on one file at a time.
[131017660030] |So xargs was invented to group actions for efficiency.
[131017660040] |But xargs introduced its own share of trouble: it expects input that is quoted in a way that find cannot produce.
[131017660050] |So find OPTIONS... | xargs COMMAND... is no good unless you know that your file names do not contain any of '"\ or whitespace.
[131017660060] |Then GNU invented find OPTIONS... -print0 | xargs -0 COMMAND..., which allows any character to appear in a file name.
[131017660070] |But it took a long time for anyone else to adopt it, and in the meantime Sun (I think) invented find OPTIONS... -exec COMMAND... +, which does the same grouping job as xargs without the added complications (longer command line, limit to one command per find).
[131017660080] |Now find ... -exec ... + is standard (it's in Single Unix v3), and more widely available than xargs -0.
[131017660090] |Unless you have to maintain compatibility with an old Linux system, just forget about xargs.
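[131017660100] |Side by side, for a recursive chmod of regular files (a sketch):
    # Portable grouped invocation:
    find . -type f -exec chmod 644 {} +
    # GNU-style equivalent via xargs:
    find . -type f -print0 | xargs -0 chmod 644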
[131017670010] |What is using this network socket?
[131017670020] |I'm trying to use NTP to update the time on my machine.
[131017670030] |However, it gives me an error:
[131017670040] |What does the error "socket is in use" mean?
[131017670050] |How can I see what is using this socket?
[131017670060] |This happens on my CentOS 4.x system, but I also see it on FreeBSD 7.x, Ubuntu 10.04 and Solaris 10.
[131017680010] |You can use lsof to find which application is using this socket.
[131017690010] |You can do
[131017690020] |to see all of your listening ports, but dollars to donuts that ntpd is running:
[131017690030] |And as for what "socket is in use" means?
[131017690040] |If I can be forgiven for smoothing over some wrinkles (and for the very basic explanation; apologies if most of this is remedial for you)... TCP/IP (the language of the internet) specifies that each computer has an IP address, which uniquely identifies that computer on the internet.
[131017690050] |In addition, there are 65,535 numbered ports on each IP address that can be connected to.
[131017690060] |When you want to connect to a web server, you open the site in your browser, but the machinery underneath is actually connecting you to port 80 on the web server's IP.
[131017690070] |The web server's daemon (the program listening for connections to port 80) uses a "socket" to hold open that port, reserving it for itself.
[131017690080] |Only one program can use the same port at a time.
[131017690090] |Since you had ntpd running, it was using that port. 'ntpdate' tried to access that port, but since it was already held open, you got the 'socket already in use' error.
[131017690100] |Edit: Changed to account for UDP as well.
[131017700010] |As root, do this:
[131017700020] |This will show you all processes that are listening on IPv4 sockets.
[131017700030] |You may want to add '-b' to prevent lsof from doing some things that might block it.
[131017700040] |If you do that you'll probably also want to redirect stderr to /dev/null.
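[131017700050] |One plausible invocation along those lines (the exact flag set is an assumption):
    # IPv4 sockets only, numeric hosts/ports, non-blocking, noise discarded.
    lsof -i 4 -n -P -b 2> /dev/null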
[131017710010] |You can also use netstat to look for open sockets--it's much cleaner than using lsof as the other posters have suggested.
[131017710020] |Try this command line as root
[131017710030] |netstat -lp -u -t
[131017710040] |to view all listening connections, including their associated pid's and programs.
[131017710050] |The -l parameter is what specifies listening connections, -p specifies that you want to see the PID/name and -t and -u tell netstat that you want only TCP and UDP connections (IPv4 and IPv6).
[131017710060] |If you want to see numeric ports and host names (i.e. hosts not resolved, and ports not transformed to service names), you can add -n to the command line above.
[131017710070] |EDIT: This works on Linux--I don't know how well it works on BSD, as I don't have any BSD-based boxes around.
[131017720010] |How can I set up Cygwin to automatically update and download without the GUI?
[131017720020] |How can I set up Cygwin to automatically update itself?
[131017720030] |How can I get Cygwin to download a package without having to go via the GUI thing?
[131017730010] |Cygwin : Unix :: Peaches : Trombone (that was on my GRE ;)
[131017730020] |Given how dramatic Cygwin changes can be, I'd be really wary of having it done without my explicit consent.
[131017730030] |If you are daring, you could invoke cron to run whatever update script you might choose.
[131017730040] |If you were looking for the ill-documented setup.exe --quiet-mode for unattended operation, there it is.
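[131017730050] |A sketch of an unattended run (the mirror URL and package list are examples; check setup.exe --help for the flags your version supports):
    setup.exe --quiet-mode --site http://mirrors.kernel.org/sourceware/cygwin/ --packages wget,rsync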
[131017740010] |bootstrapping a DSL installation onto a machine with no bios boot support
[131017740020] |I have a vintage 2001 laptop (Vaio R505) which is very hardware limited.
[131017740030] |Fortunately there is much that works, but I can't figure how to make it work better.
[131017740040] |The two biggest constraints are 256MB RAM and no floppy or CD, and it cannot boot from a USB drive because the BIOS is ancient.
[131017740050] |It does have enough disk for a shrunken WinXP partition, an Ubuntu Lucid partition, swap, and 60MB unallocated.
[131017740060] |Even a stripped-down Xubuntu installation with a custom-built minimal kernel is a little too heavyweight for the small core and ultra-slow swap.
[131017740070] |I'd like to install Damn Small Linux because it is designed for machines of this vintage and specs but I can't figure out how to get it loaded.
[131017740080] |To get Xubuntu on, I started WUBI in Windows, which is designed to then install Ubuntu.
[131017740090] |My bootloader is now GRUB2 and happily boots Linux or XP (which I keep around for no good reason).
[131017740100] |I'm almost certain that putting the right materials on my free partition and telling GRUB about the DSL installation is possible, I just don't know what the right materials are.
[131017740110] |As this is a pretty odd circumstance and I am capable of rolling a custom kernel, I'm mostly looking for pointers to information to demystify the boot process and what update-grub needs to see to add DSL to the boot-list.
[131017750010] |I would crack the case, remove the hard drive, purchase something like the "SABRENT USB-2535 USB 2.0 TO IDE CABLE FOR 2.5"/3.5"/ 5.25" DRIVE" (currently $15.29 from NewEgg) and do the setup all on a modern machine.
[131017750020] |Slip the drive back in when you are done.
[131017750030] |That way, you can also dump the drive contents you already have working and avoid ending up with a brick.
[131017760010] |You can install Linux in a chrooted environment (from your existing Ubuntu).
[131017760020] |I cannot find a DSL guide right now but this Gentoo guide may help.
[131017760030] |Adding the new install to the boot menu is as easy as running update-grub (there is a script that tries to probe your hard drive and adds things as it finds them).
[131017760040] |If that does not work, manually adding a new entry to Grub2 is just vim /etc/grub.d/40_custom and update-grub again (this Ubuntu guide came up first from googling).
[131017760050] |Good luck!
[131017770010] |Since you already have grub installed the hard part is already done.
[131017770020] |To proceed:
[131017770030] |create a partition in your 60 MB unallocated space, and create a filesystem on it
[131017770040] |Boot into ubuntu
[131017770050] |loop-back mount the iso
[131017770060] |cp the contents to your new filesystem
[131017770070] |add a grub entry
[131017770080] |boot ...
[131017770090] |1) For example via mkfs.ext3
[131017770100] |3) 4) See the frugal_liste.sh script available at the DSL mirrors - something along these lines:
[131017770110] |5) Check out this howto
[131017770120] |You have to adapt these lines:
[131017770130] |That means you have to adapt the root line, the root= parameter and the paths according to your setup.
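[131017770140] |A sketch of such an entry (the (hd0,3)/sda3 device and the DSL file names linux24/minirt24.gz are assumptions - match them to your partition and the files you copied):
    # Append to /etc/grub.d/40_custom, then run update-grub:
    menuentry "Damn Small Linux (frugal)" {
        set root=(hd0,3)
        linux /boot/linux24 root=/dev/sda3 quiet
        initrd /boot/minirt24.gz
    }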
[131017780010] |You say you are looking more for pointers on how to get information than for a concrete answer.
[131017780020] |Here is such a pointer: you face the same problem as people renting a dedicated server running a distribution they don't like.
[131017780030] |They also have access to the machine only over the network, and have to bootstrap another distro.
[131017780040] |Searching for "dedicated server bootstrap linux" on Google gives me plenty of hits...
[131017790010] |UNetbootin
[131017790020] |Yes, the question was already answered, but I just learned of UNetbootin, which gives just about any running Linux or Windows system with a network connection the ability to load and install a dozen or so distributions.
[131017790030] |This useful tool can be viewed as a more generalized WUBI, taking you from what you have running now to anything from Damn Small Linux to Ubuntu.
[131017790040] |This turns out to be really helpful when your upgrade to a new system revision reveals a regression of an ancient graphics driver and downgrades are effectively impossible.
[131017800010] |How to fix Ctrl + arrows in Vim?
[131017800020] |I am using a Putty -> Suse box -> vim 7.2 combo for editing and want to remap the Ctrl + arrows combo to a particular task.
[131017800030] |But for some reason, Vim ignores the shortcut, goes into insert mode, and inserts the character "D" (for left) or "C" (for right).
[131017800040] |Which part of my keyboard/terminal configuration is to blame and how to fix it?
[131017810010] |Your best bet is probably to look at PuTTY's Application Cursor Keys mode configuration.
[131017810020] |The default sequences send ESC as a prefix and [ followed by A(ppend) or C(hange) or other letters, throwing you into insert mode.
[131017810030] |added, following Gilles:
[131017810040] |A slightly more explicit version of the ^V escape can be seen with od(1).
[131017810050] |Here is me typing ^Up, ^Down, ^Right, ^Left at my terminal:
[131017810060] |So my terminal sends ^[[1;5A when I press Ctrl + ↑.
[131017820010] |Figure out exactly what escape sequence your terminal sends for Ctrl+arrow by typing Ctrl+V, Ctrl+arrow in insert mode: this will insert the leading ESC character (shown as ^[ in vim) literally, followed by the rest of the escape sequence.
[131017820020] |Then tell vim about these escape sequences with something like
[131017820030] |I seem to recall that Putty has a default setting for Application Cursor Keys mode that's inconvenient (I forget why); you might want to toggle this setting first.
[131017820040] |Note that although escape sequences vary between terminals, conflicts (i.e. an escape sequence that corresponds to different keys in different terminals) are rare, so there's no particular need to try to apply the mappings only on a particular terminal type.
[131017830010] |What call before a fork?
[131017830020] |A long while ago I went to a job interview with a thoroughly weird company in London who claimed to be expert Unix consultants.
[131017830030] |Anyway these people were complete nutters and not surprisingly the company doesn't seem to be there any more, but at the interview they asked me the following question:
[131017830040] |What is the first call you make to start a new process?
[131017830050] |So I answered that the first call is fork.
[131017830060] |The two people interviewing me said (very insistently, as if I'd missed something utterly obvious)
[131017830070] |What call do you do before you do the fork?
[131017830080] |Now, I have no idea about this question, and I have to say these people convinced me they were unmitigated idiots for reasons completely unrelated to this question, but for some reason this question still bugs me even now.
[131017830090] |Is there some call which is made before a fork?
[131017830100] |Is there some subtlety I missed out on?
[131017840010] |I hesitate to put this as an answer as one can only guess, but:
[131017840020] |flush stdout in case the child is going to do output prior to exec.
[131017840030] |close any open file descriptors greater than 2 assuming that the new process is expecting a "standard" environment as it would get from the shell.
[131017840040] |some obscure thing required by some obscure variant like Eunice that they thought very clever of themselves to know.
[131017840050] |Any way you slice it, asking an obscure factual question is no way to derive anything about a candidate except perhaps how they respond to stupid questions.
[131017850010] |My guess is that
[131017850020] |either they were thinking of something that's only applicable to a specific situation that they didn't bother to mention, like flushing buffers;
[131017850030] |or they meant something you'd do after forking;
[131017850040] |or they meant something you'd do before execve (which is the other half of spawning another program), expecting the answer fork, but they didn't really understand the whole thing and confused the two.
[131017860010] |Any funny *nix one-liners?
[131017860020] |I saw a t-shirt reading 'anything you say gets piped to /dev/null'; not incredibly funny, but amusing at least.
[131017860030] |Does anyone else have any good one-liner *nix jokes?
[131017870010] |FreeBSD make:
[131017870020] |or bsdmake on OS X:
[131017880010] |I read your mail.
[131017890010] |You don't exist.
[131017890020] |Go away.
[131017890030] |Message from various programs when they try to look up your user info by user ID and that fails.
[131017890040] |Reasons that can happen:
[131017890050] |/etc/passwd is corrupt
[131017890060] |the user got deleted between the program getting its doomed ID and attempting the lookup
[131017890070] |various weirdnesses involving the utmp file
[131017900010] |There are two major products of Berkeley, CA -- LSD and UNIX.
[131017900020] |We don't believe this to be strictly by coincidence.
[131017900030] |—Jeremy S. Anderson
[131017910010] |Found in early Unix sources:
[131017910020] |(Lightly edited.)
[131017920010] |Those who do not understand UNIX are condemned to reinvent it, poorly.
[131017920020] |—Henry Spencer
[131017930010] |I haven't seen this one in a while:
[131017930020] |Unix is sexy: who | grep -i blonde | date; cd ~; unzip; touch; strip; finger; mount; gasp; yes; uptime; umount; sleep
[131017940010] |From http://q4td.blogspot.com/
[131017940020] |“Unix never says ‘please’” — Rob Pike
[131017950010] |Here are a couple of T-shirt slogans:
[131017950020] |There is no place like ~
[131017950030] |My favorite: Thou shalt not kill -9
[131017960010] |These are all taken from here.
[131017960020] |A lot of these don't seem to work on newer shells, but they're still good for a laugh.
[131017970010] |A more traditional version of the dirty joke echox posted:
[131017970020] |TOUCH GREP UNZIP MOUNT FSCK FSCK FSCK UMOUNT
[131017980010] |I mount my soul at /dev/null - a colleague
[131017990010] |Unix is user-friendly.
[131017990020] |It's just picky about who its friends are.
[131017990030] |(I don't know the origin; there are several variant formulations floating around.)
[131017990050] |T-shirt (first Google hit)
[131018000010] |From the cover of the book 'A quarter Century of Unix' by Peter H. Salus
[131018010010] |From Linus Torvalds: "Software is like sex: it's better when it's free".
[131018020010] |Seen on a t-shirt a while back:
[131018030010] |If you have any trouble sounding condescending, find a UNIX user to show you how it's done.
[131018030020] |— Scott Adams, Dilbert Cartoonist
[131018040010] |Microsoft is not the answer.
[131018040020] |Microsoft is the question.
[131018040030] |NO (or Linux) is the answer.
[131018040040] |taken from here
[131018040050] |$ man woman
No manual entry for woman
[131018050010] |Here's a nickel, kid.
[131018050020] |Get yourself a better computer.
[131018050030] |That line, from an old Dilbert cartoon, is now famous enough that you don't even need the rest of the cartoon anymore.
[131018050040] |Here it is anyway:
[131018060010] |UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things.
[131018060020] |— Doug Gwyn
[131018080010] |when installed:
[131018080020] |fortune
[131018090010] |http://en.tiraecol.net/modules/comic/comic.php?content_id=161
[131018090020] |http://www.tiraecol.net/modules/comic/comic.php?content_id=162
[131018110010] |hm, maybe this one:
[131018110020] |In a world without walls or fences, who needs Windows or Gates?
[131018110030] |I know it's lame, but it's true too :)
[131018120010] |OK, this is cheating, but still a lot of fun!
[131018120020] |Play around with the command line version of xkcd webcomic: http://uni.xkcd.com/
[131018120030] |Go ahead, type in your commands!
[131018120040] |Here's what you can expect:
[131018130010] |[root@satish:~/Desktop]$ whatis linux
[131018130020] |linux: nothing appropriate.
[131018140010] |My favorites:
[131018140020] |And:
[131018150010] |Not specific to *nix but related to coders:
[131018150020] |Code is like a fart: you only support your own.
[131018160010] |The Sad Story
[131018160020] |The Contradiction
[131018160030] |less > more
[131018160040] |The name came from the joke of doing "backwards more."
[131018160050] |To help remember the difference between less and more, a common joke is to say "less > more", implying that less has greater functionality than more. - from Wikipedia