Hi,
I ran into some problems (disk access denied, bus errors, ...) that led me to
go into single-user mode and fsck my partitions. I noticed then that
some directories called "dev" and "etc" had been created in /tmp, and
that there was one zero-length file called "success" in /. After a
reboot, these files disappeared. Does anybody know whether this
could be a sign of a compromised computer? I mean, could someone have
installed a rootkit or some such?
I'm not paranoid, but this does seem strange.
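A quick, hedged way to re-check for those exact entries is a small shell loop; `check_dir` is just a hypothetical helper name, and the entry names (dev, etc, success) are the ones from the report above:

```shell
# Sketch, not a forensic tool: list which of the suspicious
# entries exist under a given directory.
check_dir() {
    base="$1"
    for name in dev etc success; do
        if [ -e "$base/$name" ]; then
            echo "found: $base/$name"
        fi
    done
}

workdir=$(mktemp -d)       # demonstrate on a scratch directory
check_dir "$workdir"       # clean directory: prints nothing
mkdir "$workdir/dev"
check_dir "$workdir"       # now reports the dev entry
rm -r "$workdir"
```

On an RPM-based system, `rpm -Va` (verify all packages against the package database) is a more systematic first check, and a dedicated rootkit scanner is more thorough still; neither proves a clean system, but both can turn up tampered files.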
Cheers
Damir
--
=====================================================================
| Damir Buskulic | Universite de Savoie/LAPP |
| | Chemin de Bellevue, B.P. 110 |
| Tel : +33 (0)450091600 | F-74941 Annecy-le-Vieux Cedex |
| e-mail: buskulic(a)lapp.in2p3.fr | FRANCE |
=====================================================================
I have uploaded an 80MB boot.iso for PReP systems.
boot/ppc/zImage.prep.initrd is the bootfile; it should also be possible
to load this file from hard disk or via TFTP.
A 10.2-beta2 test install was almost successful. But I forgot to replace
the installed kernel-default.rpm from the original install media with the
one on the uploaded boot.iso, so my MTX+ is stuck now until I reset it.
The bootloader install will not work out of the box. YaST will not
figure out a valid boot device. To work around this, go into the
bootloader config in YaST. There is a pulldown menu on the right side.
Select 'Edit configuration file'. Add 'default=linux' and
'boot=/dev/sda' as global options for lilo.conf.
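For reference, the top of the resulting /etc/lilo.conf might then look like this (a minimal sketch; the boot device /dev/sda is an assumption, adjust it to your disk):

```
# global options added via 'Edit configuration file' in YaST
default=linux
boot=/dev/sda

# existing image sections follow, e.g.:
# image = /boot/vmlinux
#     label = linux
```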
After package installation, YaST will reboot the system. It does not
know that the shipped kernel-default will not work on PReP. To work
around this, boot again from the boot.iso. Instead of fresh install or
upgrade, select 'Other -> boot into installed system', and continue with
the installation. Once you have done that, install the kernel-default.rpm
from the boot.iso.
sax2 is currently unusable; it will write an incomplete and unusable
xorg.conf. I don't know what shape xorg 7.1 is in for the graphics cards
in PReP systems. The mga driver may work, but better stick with fbdev.
'Xorg -configure' should be able to write an xorg.conf that can be used
as a starting point.
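If 'Xorg -configure' fails too, a minimal hand-written section pair forcing the fbdev driver might look like this (the Identifier names are arbitrary placeholders):

```
Section "Device"
    Identifier "Card0"
    Driver     "fbdev"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
EndSection
```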
---------------------------------------------------------------------
To unsubscribe, e-mail: opensuse-ppc+unsubscribe(a)opensuse.org
For additional commands, e-mail: opensuse-ppc+help(a)opensuse.org
I recently acquired a G4/700/1M upgrade for my 9600/300. This upgrade has a 256k on-chip L2 cache and a 1MB off-chip L3 cache that runs at 200MHz. When I start up the openSUSE v10.0 installer, it shows the L2 cache, but it doesn't mention the L3. I also ran into this on my Beige G3 when I installed a G4/400/1M; that chip is from a Yikes! PCI G4 machine. When I start Linux, the 1M L2 cache is not shown. BootX comes with a tool called GrabG3CacheSetting, but it doesn't work with the G4s. I have seen some hacks about editing the BootX settings manually, but I'm really not sure what to do. Any help would be appreciated.
I'm sure that the G4/700 will be very fast even without the L3 cache active, but the G4/400 won't be...
Thanx
Hi,
> by 12:35 everything should be migrated.
It is now. If you encounter any problems, let me know!
Henne
--
Henne Vogelsang, Core Services
"Rules change. The Game remains the same."
- Omar (The Wire)
Hi,
as announced a while ago on the big lists [1], we will move this
list to the new mailing list server. We have also decided to rename it
while moving it. All this will happen this Friday (2006.11.10). I can't say
exactly at what time, but I will send a short heads-up mail right before
we do it.
This means a few things for you.
1. This list, suse-ppc(a)suse.com, will be renamed to opensuse-ppc(a)opensuse.org.
This means that mails to you will come from the new address, and for
posting you also have to send mail to opensuse-ppc(a)opensuse.org.
The old list address will still work but is deprecated. I will turn
it off in the future (with notice).
2. Your subscription will be transferred to the new list server. No action
on your part is required to continue receiving mails from this list.
3. Because of the renaming, the mailing list headers will change.
If you filter on headers in your mail setup, please adapt
your setup accordingly. The new headers will look like this:
Delivered-To: opensuse-ppc(a)lists4.opensuse.org
Mailing-List: contact opensuse-ppc+help(a)opensuse.org; run by mlmmj
X-Mailinglist: opensuse-ppc
List-Post: <mailto:opensuse-ppc@opensuse.org>
List-Help: <mailto:opensuse-ppc+help@opensuse.org>
List-Subscribe: <mailto:opensuse-ppc+subscribe@opensuse.org>
List-Unsubscribe: <mailto:opensuse-ppc+unsubscribe@opensuse.org>
List-Owner: <mailto:opensuse-ppc+owner@opensuse.org>
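As an example of adapting a filter, a procmail recipe keyed on the new X-Mailinglist header could look like this (the target folder name is just a placeholder):

```
# deliver opensuse-ppc list mail to a dedicated folder
:0:
* ^X-Mailinglist: opensuse-ppc
opensuse-ppc/
```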
4. Mails will originate from the new server. If you accept list mails
only from the list server, adapt your setup accordingly. The mails
will originate from
DNS: lists4.suse.de
IP: 195.135.221.135
5. The web archive of this list is located at
http://lists.opensuse.org/opensuse-ppc/
6. We are using a new mailing list manager on the new server. This means
some aspects of using the list have changed. Some features are dropped and
some new features are added. For a complete list, check out this post:
http://lists.opensuse.org/opensuse-announce/2006-08/msg00005.html
7. If you have any problems at all with the new setup, please don't
hesitate to contact me!
Henne
[1] http://lists.suse.com/archive/suse-linux-e/2006-Nov/1464.html
    http://lists.suse.com/archive/suse-linux/2006-Nov/0440.html
--
ml-admin