Thursday, August 13, 2009

Diablo JDK 1.6: VM error

System information


> uname -rsm
FreeBSD 8.0-BETA2 amd64
The ports tree is up to date.

Installation and a failed first run


% portinstall -p java/diablo-jdk16
...
===> Registering installation for diablo-jdk-1.6.0.07.02_5
...
===> Building package for diablo-jdk-1.6.0.07.02_5
...
===> Cleaning for diablo-jdk-1.6.0.07.02_5
---> Cleaning out obsolete shared libraries
[Updating the pkgdb in /var/db/pkg ... - 444 packages found (-0 +1) . done]
% rehash
% java -version
Error occurred during initialization of VM
Unable to load ZIP library: /usr/local/diablo-jdk1.6.0/jre/lib/amd64/libzip.so
%

The solution

% echo "libz.so.4 libz.so.5 #for diablo-jdk1.6" >> /etc/libmap.conf
% rehash
% java -version
Diablo Java(TM) SE Runtime Environment (build 1.6.0_07-b02)
Diablo Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
%
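
The root cause: the Diablo JDK binaries are linked against libz.so.4, while FreeBSD 8.0 ships libz.so.5, so the JVM cannot load libzip.so. The libmap.conf(5) line above tells the runtime linker to substitute libz.so.5 whenever a program asks for libz.so.4. That mapping is global, though; as a sketch (the path below is the standard diablo-jdk1.6.0 install prefix), libmap.conf also accepts per-directory constraints that would limit the substitution to the JDK binaries only:

```
# /etc/libmap.conf -- hypothetical scoped variant of the global mapping
[/usr/local/diablo-jdk1.6.0/]
libz.so.4    libz.so.5
```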

Wednesday, August 12, 2009

GPT labels for ZFS

The goal is to create a ZFS pool whose mirror members are referenced by GPT labels, so that the pool does not depend on device names or controller port numbers.

Initial state


There is no mirror yet:

% zpool status
pool: amd64rio
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
amd64rio    ONLINE       0     0     0
  ad6p3     ONLINE       0     0     0

errors: No known data errors

Let's add a second disk to the pool (GPT partition 3 of device ad10). The second drive is partitioned exactly like the first, so the command sequence is not repeated here.

Creating a mirror in the pool


% zpool attach amd64rio /dev/ad6p3 /dev/ad10p3
% zpool scrub amd64rio
% zpool status
pool: amd64rio
state: ONLINE
scrub: scrub in progress for 0h0m, 0,00% done, 131h10m to go
config:

NAME          STATE     READ WRITE CKSUM
amd64rio      ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    ad6p3     ONLINE       0     0     0
    ad10p3    ONLINE       0     0     0

errors: No known data errors

Current device labeling


Preparatory steps for switching to the new scheme:

% echo 'geom_label_load="YES"' >> /boot/loader.conf
% shutdown -r now
% glabel list
Geom name: ad6p1
Providers:
1. Name: gpt/rio_boot
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 256
length: 131072
index: 0
Consumers:
1. Name: ad6p1
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad6p1
Providers:
1. Name: gptid/6e56389f-81a6-11de-8aa6-02508d92a2eb
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 256
length: 131072
index: 0
Consumers:
1. Name: ad6p1
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad6p2
Providers:
1. Name: gpt/rio_swap
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 4194304
length: 2147483648
index: 0
Consumers:
1. Name: ad6p2
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad6p2
Providers:
1. Name: gptid/e0fa02b3-81a6-11de-8aa6-02508d92a2eb
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 4194304
length: 2147483648
index: 0
Consumers:
1. Name: ad6p2
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad6p3
Providers:
1. Name: gpt/rio_zfs
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 620940000
length: 317921280000
index: 0
Consumers:
1. Name: ad6p3
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad6p3
Providers:
1. Name: gptid/1f26a2d6-81a7-11de-8aa6-02508d92a2eb
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 620940000
length: 317921280000
index: 0
Consumers:
1. Name: ad6p3
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad10p1
Providers:
1. Name: gptid/a01d172c-81a6-11de-8aa6-02508d92a2eb
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 256
length: 131072
index: 0
Consumers:
1. Name: ad10p1
Mediasize: 131072 (128K)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad10p2
Providers:
1. Name: gptid/e2aef92e-81a6-11de-8aa6-02508d92a2eb
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 4194304
length: 2147483648
index: 0
Consumers:
1. Name: ad10p2
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Mode: r0w0e0

Geom name: ad10p3
Providers:
1. Name: gptid/20db981e-81a7-11de-8aa6-02508d92a2eb
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 620940000
length: 317921280000
index: 0
Consumers:
1. Name: ad10p3
Mediasize: 317921280000 (296G)
Sectorsize: 512
Mode: r0w0e0

Detaching the disks from their "devices"


1. Detach one disk from the mirror and wipe it completely, "for the sake of a clean experiment"

% zpool detach amd64rio ad10p3
% zpool status
pool: amd64rio
state: ONLINE
scrub: scrub in progress for 0h0m, 0,00% done, 69h45m to go
config:

NAME        STATE     READ WRITE CKSUM
amd64rio    ONLINE       0     0     0
  ad6p3     ONLINE       0     0     0

errors: No known data errors
% dd if=/dev/zero of=/dev/ad10p3 bs=100m
dd: /dev/ad10p3: short write on character device
dd: /dev/ad10p3: end of device
3032+0 records in
3031+1 records out
317921280000 bytes transferred in 5836.782166 secs (54468587 bytes/sec)

2. Assigning the label

% gpart modify -i 3 -l rio_zfs2 ad10
ad10p3 modified
% shutdown -r now

3. Attaching the disk back into the mirror

% zpool attach amd64rio ad6p3 gpt/rio_zfs2
% zpool status
pool: amd64rio
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h3m, 17,22% done, 0h17m to go
config:

NAME              STATE     READ WRITE CKSUM
amd64rio          ONLINE       0     0     0
  mirror          ONLINE       0     0     0
    ad6p3         ONLINE       0     0     0  7,80M resilvered
    gpt/rio_zfs2  ONLINE       0     0     0  9,08G resilvered

errors: No known data errors

4. Once resilvering completes, repeat the same procedure for the other disk

% zpool detach amd64rio ad6p3
% zpool status
pool: amd64rio
state: ONLINE
scrub: none requested
config:

NAME            STATE     READ WRITE CKSUM
amd64rio        ONLINE       0     0     0
  gpt/rio_zfs2  ONLINE       0     0     0

errors: No known data errors
% gpart modify -i 3 -l rio_zfs1 ad6
ad6p3 modified
% dd if=/dev/zero of=/dev/ad6p3 bs=100m
dd: /dev/ad6p3: short write on character device
dd: /dev/ad6p3: end of device
3032+0 records in
3031+1 records out
317921280000 bytes transferred in 5950.955982 secs (53423564 bytes/sec)
% shutdown -r now
% zpool attach amd64rio gpt/rio_zfs2 gpt/rio_zfs1
% zpool status
pool: amd64rio
state: ONLINE
scrub: resilver completed after 0h21m with 0 errors on Wed Aug 12 17:44:06 2009
config:

NAME              STATE     READ WRITE CKSUM
amd64rio          ONLINE       0     0     0
  mirror          ONLINE       0     0     0
    gpt/rio_zfs2  ONLINE       0     0     0  123M resilvered
    gpt/rio_zfs1  ONLINE       0     0     0  52,7G resilvered

errors: No known data errors


That's all.

Sunday, August 9, 2009

GPT and ZFS for FreeBSD

I have a Western Digital Scorpio Blue drive
(WD3200BEVT, 320 GB; SATA 3 Gb/s; 8 MB cache; 5400 rpm).
This is the drive I am going to prepare for use with FreeBSD 8.

Preface


% echo 'zfs_load="YES"' >> /boot/loader.conf
% shutdown -r now


The beginning


% gpart create -s GPT ad6
% gpart add -b 34 -s 256 -t freebsd-boot -l rio_boot ad6
ad6p1 added
% gpart add -b 290 -s 4194304 -t freebsd-swap -l rio_swap ad6
ad6p2 added
% gpart add -b 4194594 -s 620940000 -t freebsd-zfs -l rio_zfs ad6
ad6p3 added
% gpart show
=>        34  625142381  ad6  GPT  (298G)
          34        256    1   freebsd-boot  (128K)
         290    4194304    2   freebsd-swap  (2.0G)
     4194594  620940000    3   freebsd-zfs  (296G)
   625134594       7821        - free -  (3.8M)
% gpart bootcode -b /boot/pmbr ad6
ad6 has bootcode
% gpart bootcode -p /boot/gptzfsboot -i 1 ad6


In the beginning was the Word...


% zpool create amd64rio /dev/ad6p3
% zpool set bootfs=amd64rio amd64rio
% zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
amd64rio  67,5K   291G    18K  /amd64rio


He said: "Let's go!"


% zfs set atime=off amd64rio
% zfs create -o atime=on amd64rio/var
% zfs create -o compression=gzip amd64rio/var/crash
% zfs create -o readonly=on amd64rio/var/empty
% zfs create amd64rio/var/tmp
% chmod 1777 /amd64rio/var/tmp
% zfs create amd64rio/var/db
% zfs create amd64rio/usr
% zfs create amd64rio/usr/home
% zfs create amd64rio/usr/local
% zfs create amd64rio/usr/obj
% zfs create -o compression=gzip amd64rio/usr/ports
% zfs create -o compression=off amd64rio/usr/ports/distfiles
% zfs create -o compression=gzip amd64rio/usr/src
% zfs create amd64rio/tmp


The result


% zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
amd64rio                      387K   291G    22K  /amd64rio
amd64rio/tmp                   18K   291G    18K  /amd64rio/tmp
amd64rio/usr                  114K   291G    23K  /amd64rio/usr
amd64rio/usr/home              18K   291G    18K  /amd64rio/usr/home
amd64rio/usr/local             18K   291G    18K  /amd64rio/usr/local
amd64rio/usr/obj               18K   291G    18K  /amd64rio/usr/obj
amd64rio/usr/ports             37K   291G    19K  /amd64rio/usr/ports
amd64rio/usr/ports/distfiles   18K   291G    18K  /amd64rio/usr/ports/distfiles
amd64rio/usr/src               18K   291G    18K  /amd64rio/usr/src
amd64rio/var                   95K   291G    23K  /amd64rio/var
amd64rio/var/crash             18K   291G    18K  /amd64rio/var/crash
amd64rio/var/db                18K   291G    18K  /amd64rio/var/db
amd64rio/var/empty             18K   291G    18K  /amd64rio/var/empty
amd64rio/var/tmp               18K   291G    18K  /amd64rio/var/tmp


Verification


% zpool export amd64rio
% zpool import amd64rio
% zpool status
pool: amd64rio
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
amd64rio    ONLINE       0     0     0
  ad6p3     ONLINE       0     0     0

errors: No known data errors


That's all for now.

Useful links

  • Solaris ZFS Administration Guide
  • ZFS Best Practice Guide
  • WHEN TO (AND NOT TO) USE RAID-Z
