tracker-extract dumps core thousands of times in 15.2
Hi,

I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract:

TIME                          PID  UID GID SIG COREFILE EXE
Fri 2021-01-01 12:20:44 CET 15416 1000 100  11 missing  /usr/lib/tracker-extract
Fri 2021-01-01 13:32:00 CET 17907 1000 100   6 missing  /usr/lib/tracker-extract
...
Sun 2021-01-03 12:27:21 CET 25059 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:22 CET 25088 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:24 CET 25113 1000 100  11 present  /usr/lib/tracker-extract

Telcontar:~ # coredumpctl | grep tracker-extract | wc -l
4563
Telcontar:~ #

Literally, thousands. Is this really not known? Is it worth a bugzilla? I reported this problem in the past and it keeps coming back... dump and gdb trace below.

Telcontar:~ # coredumpctl dump 25653
           PID: 25653 (tracker-extract)
           UID: 1000 (cer)
           GID: 100 (users)
        Signal: 11 (SEGV)
     Timestamp: Sun 2021-01-03 12:31:40 CET (2min 5s ago)
  Command Line: /usr/lib/tracker-extract
    Executable: /usr/lib/tracker-extract
 Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service
          Unit: user@1000.service
     User Unit: tracker-extract.service
         Slice: user-1000.slice
     Owner UID: 1000 (cer)
       Boot ID: 2ef60a9b78a14f8aa0ad0315a348b17c
    Machine ID: 2ce1d54548517a7307c1c2bc38206d00
      Hostname: Telcontar
       Storage: /var/lib/systemd/coredump/core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.25653.1609673500000000.lz4
       Message: Process 25653 (tracker-extract) of user 1000 dumped core.

                Stack trace of thread 25669:
                #0  0x00007fecd783da0c n/a (libexiv2.so.26)
                #1  0x00007fecd7818a0b n/a (libexiv2.so.26)
                #2  0x00007fecd7808344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26)
                #3  0x00007fecd7803ba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26)
                #4  0x00007fecd7806539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26)
                #5  0x00007fecd77ec576 _ZN5Exiv28Internal11TiffDecoder9decodeXmpEPKNS0_13TiffEntryBaseE (libexiv2.so.26)
                #6  0x00007fecd77d47a6 _ZN5Exiv28Internal13TiffDirectory8doAcceptERNS0_11TiffVisitorE (libexiv2.so.26)
                #7  0x00007fecd77dde9f _ZN5Exiv28Internal16TiffParserWorker6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhjjPFMNS0_11TiffDecoderEFvPKNS0_13TiffEntryBaseEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjNS0_5IfdIdEEPNS0_14TiffHeaderBaseE (libexiv2.so.26)
                #8  0x00007fecd77ddfe7 _ZN5Exiv210TiffParser6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhj (libexiv2.so.26)
                #9  0x00007fecd77de179 _ZN5Exiv29TiffImage12readMetadataEv (libexiv2.so.26)
                #10 0x00007fecd7b5c01d n/a (libgexiv2.so.2)
                #11 0x00007fecd7b5c26f gexiv2_metadata_open_path (libgexiv2.so.2)
                #12 0x00007fecd7d76848 tracker_extract_get_metadata (libextract-raw.so)
                #13 0x000055ed1e442eac get_file_metadata (tracker-extract)
                #14 0x000055ed1e44349b get_metadata (tracker-extract)
                #15 0x000055ed1e443530 single_thread_get_metadata (tracker-extract)
                #16 0x00007fed2220ddce g_thread_proxy (libglib-2.0.so.0)
                #17 0x00007fed21aed4f9 start_thread (libpthread.so.0)
                #18 0x00007fed21825fbf __clone (libc.so.6)

                Stack trace of thread 25653:
                #0  0x00007fed2181b6db __GI___poll (libc.so.6)
                #1  0x00007fed221e47b9 g_main_context_poll (libglib-2.0.so.0)
                #2  0x00007fed221e4b02 g_main_loop_run (libglib-2.0.so.0)
                #3  0x000055ed1e44007c main (tracker-extract)
                #4  0x00007fed2174e34a __libc_start_main (libc.so.6)
                #5  0x000055ed1e44016a _start (tracker-extract)

                [Stack traces of the 15 remaining threads (25654-25668) omitted: all are idle in g_main_context_poll, g_cond_wait, or g_thread_pool_wait_for_new_task.]
Refusing to dump core to tty (use shell redirection or specify --output).
Telcontar:~ #
Telcontar:~ # coredumpctl gdb 25653
           PID: 25653 (tracker-extract)
           UID: 1000 (cer)
           ...
       Message: Process 25653 (tracker-extract) of user 1000 dumped core.

                [Metadata and stack traces identical to the "coredumpctl dump" output above, omitted.]

GNU gdb (GDB; openSUSE Leap 15.2) 8.3.1
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/lib/tracker-extract...
Reading symbols from /usr/lib/debug/usr/lib/tracker-extract-2.3.2-lp152.1.3.x86_64.debug...
[New LWP 25669]
[New LWP 25656]
[New LWP 25654]
[New LWP 25658]
[New LWP 25657]
[New LWP 25653]
[New LWP 25659]
[New LWP 25660]
[New LWP 25668]
[New LWP 25655]
[New LWP 25667]
[New LWP 25666]
[New LWP 25663]
[New LWP 25662]
[New LWP 25661]
[New LWP 25665]
[New LWP 25664]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/lib/tracker-extract'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
warning: Source file is more recent than executable.
1187        { return _M_lower_bound(_M_begin(), _M_end(), __k); }
[Current thread is 1 (Thread 0x7feccb7fe700 (LWP 25669))]
(gdb) quit
Telcontar:~ #

--
Cheers
Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 1/3/21 5:47 AM, Carlos E. R. wrote:
Hi,
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract:
TIME                          PID  UID GID SIG COREFILE EXE
Fri 2021-01-01 12:20:44 CET 15416 1000 100  11 missing  /usr/lib/tracker-extract
Fri 2021-01-01 13:32:00 CET 17907 1000 100   6 missing  /usr/lib/tracker-extract
...
Sun 2021-01-03 12:27:21 CET 25059 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:22 CET 25088 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:24 CET 25113 1000 100  11 present  /usr/lib/tracker-extract

Telcontar:~ # coredumpctl | grep tracker-extract | wc -l
4563
Telcontar:~ #
Looks like a threading race condition: something believes it needs to start an additional thread and keeps starting new threads until some thread-pool limit is reached. As to why? A myriad of reasons, but see "5 Big Fat Reasons Mutexes Suck, Big Time" (the 2nd article):

https://accu.org/var/uploads/journals/Overload149.pdf

--
David C. Rankin, J.D., P.E.
On Monday, 2021-01-04 at 10:23 -0600, David C. Rankin wrote:
On 1/3/21 5:47 AM, Carlos E. R. wrote:
Hi,
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract:
TIME                          PID  UID GID SIG COREFILE EXE
Fri 2021-01-01 12:20:44 CET 15416 1000 100  11 missing  /usr/lib/tracker-extract
Fri 2021-01-01 13:32:00 CET 17907 1000 100   6 missing  /usr/lib/tracker-extract
...
Sun 2021-01-03 12:27:21 CET 25059 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:22 CET 25088 1000 100  11 present  /usr/lib/tracker-extract
Sun 2021-01-03 12:27:24 CET 25113 1000 100  11 present  /usr/lib/tracker-extract

Telcontar:~ # coredumpctl | grep tracker-extract | wc -l
4563
Telcontar:~ #
Looks like a threading race condition that believes it needs to start an additional thread and is continually starting a new thread until some thread-pool limit is reached. As to why? Myriad of reasons, but:
No, it dies, and some control process starts it again (via systemd). Then systemd chokes and waits for some minutes, then starts it again (search for <=== below).

It is probably trying to analyze files in my system that it can't. Same problem for years.

<3.4> 2021-01-04T19:42:48.285712+01:00 Telcontar systemd 6386 - - tracker-extract.service: Main process exited, code=killed, status=11/SEGV
<3.5> 2021-01-04T19:42:48.286092+01:00 Telcontar systemd 6386 - - tracker-extract.service: Unit entered failed state.
<3.4> 2021-01-04T19:42:48.286299+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<1.2> 2021-01-04T19:42:48.347439+01:00 Telcontar systemd-coredump 15818 - - Process 15796 (tracker-extract) of user 1000 dumped core.#012#012Stack trace of thread 15816:#012#0 0x00007f0b0a054a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stri>
<1.4> 2021-01-04T19:42:48.715643+01:00 Telcontar tracker-extract 15825 - - Locale 'LANG' is not set, defaulting to C locale
<0.6> 2021-01-04T19:42:49.136931+01:00 Telcontar kernel - - - [177057.452294] single[15841]: segfault at 10 ip 00007f5a646d8a0c sp 00007f5a5c926fb0 error 4 in libexiv2.so.26.0.0[7f5a644c3000+2d9000]
<0.6> 2021-01-04T19:42:49.136947+01:00 Telcontar kernel - - - [177057.452306] Code: 0f 87 af 03 00 00 48 39 dd 0f 87 15 ff ff ff 0f 1f 80 00 00 00 00 48 8d 05 71 53 31 00 4c 8b 74 24 48 4c 8b 6c 24 40 48 8b 00 <48> 8b 68 10 48 8d 58 08>
<1.4> 2021-01-04T19:42:48.763194+01:00 Telcontar tracker-extract 15825 - - Locale 'LANG' is not set, defaulting to C locale
<3.6> 2021-01-04T19:42:49.172093+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 15842/UID 0).
<3.6> 2021-01-04T19:42:49.228618+01:00 Telcontar systemd-coredump 15843 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.21522.1609771521000000.lz4.
<3.4> 2021-01-04T19:42:49.317781+01:00 Telcontar systemd 6386 - - tracker-extract.service: Main process exited, code=killed, status=11/SEGV
<3.5> 2021-01-04T19:42:49.318023+01:00 Telcontar systemd 6386 - - tracker-extract.service: Unit entered failed state.
<3.4> 2021-01-04T19:42:49.318145+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<1.2> 2021-01-04T19:42:49.378902+01:00 Telcontar systemd-coredump 15843 - - Process 15825 (tracker-extract) of user 1000 dumped core.#012#012Stack trace of thread 15841:#012#0 0x00007f5a646d8a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stri>
<3.4> 2021-01-04T19:42:49.587539+01:00 Telcontar systemd 6386 - - tracker-extract.service: Start request repeated too quickly. <========
<3.3> 2021-01-04T19:42:49.593167+01:00 Telcontar systemd 6386 - - Failed to start Tracker metadata extractor.
<3.5> 2021-01-04T19:42:49.593362+01:00 Telcontar systemd 6386 - - tracker-extract.service: Unit entered failed state.
<3.4> 2021-01-04T19:42:49.593522+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<3.6> 2021-01-04T19:42:49.903030+01:00 Telcontar dbus-daemon 6409 - - [session uid=1000 pid=6409] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.66' (uid=1>
<3.4> 2021-01-04T19:42:49.903252+01:00 Telcontar systemd 6386 - - tracker-extract.service: Start request repeated too quickly. <========
<3.3> 2021-01-04T19:42:49.903355+01:00 Telcontar systemd 6386 - - Failed to start Tracker metadata extractor.
<3.4> 2021-01-04T19:42:49.903437+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<4.5> 2021-01-04T19:44:49.903877+01:00 Telcontar dbus-daemon 6409 - - [session uid=1000 pid=6409] Failed to activate service 'org.freedesktop.Tracker1.Miner.Extract': timed out (service_start_timeout=120000ms)
<3.6> 2021-01-04T19:44:50.901917+01:00 Telcontar dbus-daemon 6409 - - [session uid=1000 pid=6409] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.66' (uid=1>
<1.4> 2021-01-04T19:44:51.022401+01:00 Telcontar tracker-extract 15948 - - Locale 'LANG' is not set, defaulting to C locale
<1.4> 2021-01-04T19:44:51.069686+01:00 Telcontar tracker-extract 15948 - - Locale 'LANG' is not set, defaulting to C locale
<3.6> 2021-01-04T19:44:51.070274+01:00 Telcontar dbus-daemon 6409 - - [session uid=1000 pid=6409] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract'
<0.6> 2021-01-04T19:44:51.416932+01:00 Telcontar kernel - - - [177179.734664] single[15965]: segfault at 10 ip 00007f54d8af0a0c sp 00007f54d0d3efb0 error 4 in libexiv2.so.26.0.0[7f54d88db000+2d9000]
<0.6> 2021-01-04T19:44:51.416949+01:00 Telcontar kernel - - - [177179.734671] Code: 0f 87 af 03 00 00 48 39 dd 0f 87 15 ff ff ff 0f 1f 80 00 00 00 00 48 8d 05 71 53 31 00 4c 8b 74 24 48 4c 8b 6c 24 40 48 8b 00 <48> 8b 68 10 48 8d 58 08>
<3.6> 2021-01-04T19:44:51.449303+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 15966/UID 0).
<3.6> 2021-01-04T19:44:51.509976+01:00 Telcontar systemd-coredump 15967 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.21619.1609771644000000.lz4.
<3.4> 2021-01-04T19:44:51.618322+01:00 Telcontar systemd 6386 - - tracker-extract.service: Main process exited, code=killed, status=11/SEGV
<3.5> 2021-01-04T19:44:51.618697+01:00 Telcontar systemd 6386 - - tracker-extract.service: Unit entered failed state.
<3.4> 2021-01-04T19:44:51.618960+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<1.2> 2021-01-04T19:44:51.680834+01:00 Telcontar systemd-coredump 15967 - - Process 15948 (tracker-extract) of user 1000 dumped core.#012#012Stack trace of thread 15965:#012#0 0x00007f54d8af0a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stri>
<1.4> 2021-01-04T19:44:51.936658+01:00 Telcontar tracker-extract 15975 - - Locale 'LANG' is not set, defaulting to C locale
<1.4> 2021-01-04T19:44:51.978328+01:00 Telcontar tracker-store 11853 - - tracker_writeback_transact: assertion 'private == NULL' failed
<1.4> 2021-01-04T19:44:51.978640+01:00 Telcontar tracker-extract 15975 - - Locale 'LANG' is not set, defaulting to C locale
<0.6> 2021-01-04T19:44:53.271279+01:00 Telcontar kernel - - - [177181.588657] single[15995]: segfault at 10 ip 00007ff62e49ea0c sp 00007ff621d4dfb0 error 4 in libexiv2.so.26.0.0[7ff62e289000+2d9000]
<0.6> 2021-01-04T19:44:53.271295+01:00 Telcontar kernel - - - [177181.588662] Code: 0f 87 af 03 00 00 48 39 dd 0f 87 15 ff ff ff 0f 1f 80 00 00 00 00 48 8d 05 71 53 31 00 4c 8b 74 24 48 4c 8b 6c 24 40 48 8b 00 <48> 8b 68 10 48 8d 58 08>
<3.6> 2021-01-04T19:44:53.306689+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 15996/UID 0).
<3.6> 2021-01-04T19:44:53.363472+01:00 Telcontar systemd-coredump 15997 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.21648.1609771646000000.lz4.
<3.4> 2021-01-04T19:44:53.487817+01:00 Telcontar systemd 6386 - - tracker-extract.service: Main process exited, code=killed, status=11/SEGV
<3.5> 2021-01-04T19:44:53.488172+01:00 Telcontar systemd 6386 - - tracker-extract.service: Unit entered failed state.
<3.4> 2021-01-04T19:44:53.488377+01:00 Telcontar systemd 6386 - - tracker-extract.service: Failed with result 'signal'.
<1.2> 2021-01-04T19:44:53.554287+01:00 Telcontar systemd-coredump 15997 - - Process 15975 (tracker-extract) of user 1000 dumped core.#012#012Stack trace of thread 15995:#012#0 0x00007ff62e49ea0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stri>
<1.4> 2021-01-04T19:44:53.955849+01:00 Telcontar tracker-extract 16005 - - Locale 'LANG' is not set, defaulting to C locale
<0.6> 2021-01-04T19:44:54.356933+01:00 Telcontar kernel - - - [177182.673681] single[16021]: segfault at 10 ip 00007f2daedada0c sp 00007f2da8aaafb0 error 4 in libexiv2.so.26.0.0[7f2daeb98000+2d9000]
<0.6> 2021-01-04T19:44:54.356947+01:00 Telcontar kernel - - - [177182.673687] Code: 0f 87 af 03 00 00 48 39 dd 0f 87 15 ff ff ff 0f 1f 80 00 00 00 00 48 8d 05 71 53 31 00 4c 8b 74 24 48 4c 8b 6c 24 40 48 8b 00 <48> 8b 68 10 48 8d 58 08>
<1.4> 2021-01-04T19:44:54.007161+01:00 Telcontar tracker-extract 16005 - - Locale 'LANG' is not set, defaulting to C locale
<3.6> 2021-01-04T19:44:54.390731+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 16022/UID 0).
<3.6> 2021-01-04T19:44:54.459191+01:00 Telcontar systemd-coredump 16023 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.21679.1609771647000000.lz4.

Telcontar:~ # coredumpctl | grep tracker-extract | wc -l
7980
Telcontar:~ #

Obviously I can kill the thing, or at worst delete the program, but I'm waiting to see if someone knows better.

--
Cheers,
Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar)
On 1/4/21 1:04 PM, Carlos E. R. wrote:
No, it dies, and some control process starts it again (via systemd). Then it chokes and waits for some minutes, then starts again (search for <=== below)
It is probably trying to analyze files in my system that it can't. Same problem for years.
Oh... Yep, the log times show it.

I see "Locale 'LANG' is not set, defaulting to C locale" -- I wonder if that figures into the issue. It says it's defaulting to C -- but how gracefully...

--
David C. Rankin, J.D., P.E.
On 05/01/2021 02.00, David C. Rankin wrote:
On 1/4/21 1:04 PM, Carlos E. R. wrote:
No, it dies, and some control process starts it again (via systemd). Then it chokes and waits for some minutes, then starts again (search for <=== below)
It is probably trying to analyze files in my system that it can't. Same problem for years.
Oh..
Yep the log times show it. I see "Locale 'LANG' is not set, defaulting to C locale" -- I wonder if that figures into the issue. It says it's defaulting to C -- but how gracefully...
That's irrelevant... a red herring. I did not set the "no locale" thing; that was some default of the package itself. My session has a specific locale.

In any case, no wrong setting, no data set fed to a program, must ever crash it. A program has to allow for any kind of error in its input and cope with it gracefully. If a program segfaults, that's sloppy programming and never, ever the user's fault.

tracker-extract has been crashing on this machine for maybe a decade, since it first appeared. I report it, maybe they correct the problem, and then it goes back to crashing on something else or on the next upgrade.

If you google "tracker-extract" you will see reports going back for years.

I'm tired of reporting issues on it. So, does somebody really want the bugzilla?

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
If you google "tracker-extract" you will see reports going back for years.
I'm tired of reporting issues on it. So, does somebody really want the bugzilla?
If you have reported it multiple times without anyone picking it up, it sounds to me like the answer has to be "no". Surely it can be disabled?

--
Per Jessen, Zürich (-0.4°C)
http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 05/01/2021 11.23, Per Jessen wrote:
Carlos E. R. wrote:
If you google "tracker-extract" you will see reports going back for years.
I'm tired of reporting issues on it. So, does somebody really want the bugzilla?
If you have reported it multiple times without anyone picking it up, it sounds to me like the answer has to be "no". Surely it can be disabled?
I did "killall tracker-store tracker-extract tracker-miner-apps tracker-miner-fs" hours ago.

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
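(A more persistent alternative: the miners are D-Bus-activated user units, so killing the processes only lasts until the next activation request. Masking the units should stop them coming back; a sketch, with unit names partly assumed:)

# Masking prevents both manual and D-Bus activation of the user units.
# Unit names are assumed to mirror the process names; tracker-extract.service
# is confirmed elsewhere in this thread.
systemctl --user stop tracker-extract.service tracker-miner-fs.service
systemctl --user mask tracker-extract.service tracker-miner-fs.service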
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 02.00, David C. Rankin wrote:
On 1/4/21 1:04 PM, Carlos E. R. wrote:
No, it dies, and some control process starts it again (via systemd). Then it chokes and waits for some minutes, then starts again (search for <=== below)
It is probably trying to analyze files in my system that it can't. Same problem for years.
Oh..
Yep the log times show it. I see "Locale 'LANG' is not set, defaulting to C locale" -- I wonder if that figures into the issue. It says it's defaulting to C -- but how gracefully...
That's irrelevant... a red herring. I did not set the no locale thing, that was some default of the package itself. My session has a specific locale.
In any case, no wrong setting, no data set fed to a program, must ever crash it. A program has to allow for any kind of error in its input and cope with it gracefully. If a program segfaults, that's sloppy programming and never, ever the user's fault.
tracker-extract has been crashing on this machine maybe for a decade, since it appeared. I report it, maybe they correct the problem, and then it goes crashing again on something else or on next upgrade.
If you google "tracker-extract" you will see reports going back for years.
I'm tired of reporting issues on it. So, does somebody really want the bugzilla?
On the man page, it says:

"The actual extraction is done by a separate process. This is done to isolate the calling process from any memory leaks or crashes in the libraries Tracker uses to extract metadata."

So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker? I don't use tracker and have no idea of what the logs show, but maybe it's worth trying to pin down the particular place the problem occurs.
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 02.00, David C. Rankin wrote:
On 1/4/21 1:04 PM, Carlos E. R. wrote:
No, it dies, and some control process starts it again (via systemd). Then it chokes and waits for some minutes, then starts again (search for <=== below)
It is probably trying to analyze files in my system that it can't. Same problem for years.
Oh..
Yep the log times show it. I see "Locale 'LANG' is not set, defaulting to C locale" -- I wonder if that figures into the issue. It says it's defaulting to C -- but how gracefully...
That's irrelevant... a red herring. I did not set the no locale thing, that was some default of the package itself. My session has a specific locale.
In any case, no wrong setting, no data set fed to a program, must ever crash it. A program has to allow for any kind of error in its input and cope with it gracefully. If a program segfaults, that's sloppy programming and never, ever the user's fault.
tracker-extract has been crashing on this machine maybe for a decade, since it appeared. I report it, maybe they correct the problem, and then it goes crashing again on something else or on next upgrade.
If you google "tracker-extract" you will see reports going back for years.
I'm tired of reporting issues on it. So, does somebody really want the bugzilla?
On the man page, it says:
"The actual extraction is done by a separate process. This is done to isolate the calling process from any memory leaks or crashes in the libraries Tracker uses to extract metadata."
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file format? If you can think of any, it might be worth omitting them from tracker.

--
Per Jessen, Zürich (-0.1°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 05/01/2021 12.36, Per Jessen wrote:
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file-format ? if you can think of any, might be worth omitting them from tracker.
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge; I don't know how to locate the filenames, if they are there.

cer@Telcontar:~> zgrep tracker /var/log/messages-2021*z | grep -i "jpg|jpeg"
cer@Telcontar:~> zgrep tracker /var/log/messages-2021*z | grep -i "/home"
cer@Telcontar:~>

Maybe the filenames only appear in the backtraces, and there are thousands of them; I cannot analyze that lot manually. If it is photos it is crashing on, you know I have thousands of them.

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
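(An aside on the empty grep results above: plain grep treats | as a literal character, so "jpg|jpeg" matches neither extension. Alternation needs extended regexps:)

# Use -E so the alternation in "jpg|jpeg" is interpreted:
zgrep tracker /var/log/messages-2021*z | grep -Ei "jpg|jpeg"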
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file-format ? if you can think of any, might be worth omitting them from tracker.
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported; they are hardly exotic. Maybe take a closer look at one of the core dumps, to see if a filename can be found?

--
Per Jessen, Zürich (-0.3°C)
http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file-format ? if you can think of any, might be worth omitting them from tracker.
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
I know.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
That's what I did, first post in the thread. To examine the lot I would need some automation that I don't know how to do.

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
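(One low-tech way to triage the pile without gdb: tally the first stack frame that coredumpctl itself recorded for each dump. A sketch; the awk field index assumes the four-field timestamp layout shown at the top of the thread:)

# Print frame #0 of every tracker-extract dump, then count crash sites.
coredumpctl list tracker-extract --no-legend | awk '{print $5}' |
while read -r pid; do
    coredumpctl info "$pid" 2>/dev/null | grep -m1 '#0'
done | sort | uniq -c | sort -rn | head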
Carlos E. R. wrote:
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file-format ? if you can think of any, might be worth omitting them from tracker.
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
I know.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
That's what I did, first post in the thread.
That was a backtrace, yes - a filename I would expect to find as an argument to a function.
To examine the lot I would need some automation that I don't know how to do.
There is probably not much need. If you're lucky, you'll see that it keeps aborting on files called .klop99 (just an example) and you can hopefully just exclude those in a config somewhere. I see some mention of tiff files in the first backtrace - maybe try excluding those.

--
Per Jessen, Zürich (-0.3°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
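(If excluding patterns turns out to be the way to go: Tracker 2.x keeps the miner's ignore list in GSettings. A sketch, assuming the stock org.freedesktop.Tracker.Miner.Files schema with its 'ignored-files' key of filename globs; verify the names first:)

# Show the current ignore list (schema and key names assumed from Tracker 2.x):
gsettings get org.freedesktop.Tracker.Miner.Files ignored-files
# Replace it, adding TIFF patterns. Note this overwrites the default list,
# so merge in whatever the previous command printed:
gsettings set org.freedesktop.Tracker.Miner.Files ignored-files "['*.tif', '*.tiff']"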
On 05/01/2021 16.23, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker?
Maybe a poorly written library for some obscure/exotic file-format ? if you can think of any, might be worth omitting them from tracker.
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
I know.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
That's what I did, first post in the thread.
That was a back trace, yes - a filename I would suspect to find as argument to a function.
To examine the lot I would need some automation that I don't know how to do.
There is probably not much need. If you're lucky, you'll see that it keeps aborting on files called .klop99 (just an example) and you can hopefully just exclude those in a config somewhere.
But I do not see the filename of whatever it is crashing on.
I see some mention of tiff files in the first backtrace - maybe try excluding those.
tiff? Ah, now I see. I don't have tiff files of my own, only system files - 122 in total.

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
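(A note on the tiff mention: the crashing extractor in the backtrace is libextract-raw.so, and camera RAW formats such as NEF, CR2, DNG, ARW, ORF and RW2 are TIFF-based containers, so Exiv2's TIFF parser gets exercised by RAW photos even with no *.tif files around. A quick, illustrative check for RAW candidates:)

# List camera RAW files under $HOME (extension list is illustrative, not exhaustive):
find ~ -iregex '.*\.\(nef\|cr2\|dng\|arw\|orf\|rw2\)' 2>/dev/null | head -n 20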
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
I have obtained "coredumpctl info PID" for every PID in the coredump list. That is 120 MB in 8383 files.

I tried to automate the backtraces:

coredumpctl gdb 17114 > p

but it halts expecting some input, so I can't generate them. I tried -batch, no good:

cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb -batch 17114 > p
coredumpctl: invalid option -- 'b'
cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb --batch 17114 > p
coredumpctl: unrecognized option '--batch'
cer@Telcontar:~/tmp/coredump/core_info>

Now what?

Yes, they are photos, as I guessed. Look at one random info file:

Stack trace of thread 26906:
#0 0x00007f874bceaa0c n/a (libexiv2.so.26)
#1 0x00007f874bcc5a0b n/a (libexiv2.so.26)
#2 0x00007f874bcb5344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26)
#3 0x00007f874bcb0ba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26)
#4 0x00007f874bcb3539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26)

Now, what is libexiv2? Well, a photo information library.

cer@Telcontar:~> rpm -qfi /usr/lib64/libexiv2.so.26
Name        : libexiv2-26
Version     : 0.26
Release     : lp152.8.2
Architecture: x86_64
Install Date: 2021-01-01T00:49:42 CET
Group       : System/Libraries
Size        : 3211936
License     : GPL-2.0-or-later
Signature   : RSA/SHA256, 2020-05-16T18:41:15 CEST, Key ID b88b2fd43dbdc284
Source RPM  : exiv2-0.26-lp152.8.2.src.rpm
Build Date  : 2020-05-16T18:40:31 CEST
Build Host  : lamb03
Relocations : (not relocatable)
Packager    : https://bugs.opensuse.org
Vendor      : openSUSE
URL         : http://www.exiv2.org/
Summary     : Library to access image metadata
Description :
libexiv2 is a C++ library with a C compatibility interface to access image metadata, esp from Exif tags.
Distribution: openSUSE Leap 15.2
cer@Telcontar:~>

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
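(Since every crash sits inside libexiv2 0.26, it may be worth checking whether the repos carry a newer exiv2 before digging further; for example:)

# Compare installed vs. available versions of the exiv2 library:
zypper se -s libexiv2
# Or list any pending update for it:
zypper lu | grep -i exiv2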
On Tue, 5 Jan 2021 15:09:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
I have obtained
coredumpctl info PID
for every PID in the coredump list. That is 120 MB in 8383 files.
I tried to automate:
coredumpctl gdb 17114 > p
but it halts expecting some input. Can't generate them.
Tried -batch, no good:
cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb -batch 17114 > p
coredumpctl: invalid option -- 'b'
cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb --batch 17114 > p
coredumpctl: unrecognized option '--batch'
cer@Telcontar:~/tmp/coredump/core_info>
What were you trying to do? Best to read the man page before trying random options! Perhaps:

coredumpctl -o p gdb 17114

maybe?
On 05/01/2021 17.24, Dave Howorth wrote:
On Tue, 5 Jan 2021 15:09:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 13.26, Per Jessen wrote:
Carlos E. R. wrote:
On 05/01/2021 12.36, Per Jessen wrote:
Somewhere I saw it doing jpg files and crashing. But the crash logs are huge, I don't know how to locate the filenames if they are there.
I guess jpegs ought to be supported, they are hardly exotic.
Maybe take a closer look at one of the core dumps, to maybe see if a filename can be found ?
I have obtained
coredumpctl info PID
for every PID in the coredump list. That is 120 MB in 8383 files.
I tried to automate:
coredumpctl gdb 17114 > p
but it halts expecting some input. Can't generate them.
Tried -batch, no good:
cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb -batch 17114 > p
coredumpctl: invalid option -- 'b'
cer@Telcontar:~/tmp/coredump/core_info> coredumpctl gdb --batch 17114 > p
coredumpctl: unrecognized option '--batch'
cer@Telcontar:~/tmp/coredump/core_info>
What were you trying to do? Best to read the man page before trying random options!
--batch is in the gdb manual. The normal line is:

coredumpctl gdb 17114

which works but requires user input.
Perhaps:
coredumpctl -o p gdb 17114
maybe?
No, it still requires user input; it can't be automated.

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
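(For the record, a workaround that should sidestep coredumpctl's lack of option pass-through: write the core out to a file, then drive gdb directly in batch mode. Both commands are standard, though untested on this exact setup:)

# Extract the core, then run gdb non-interactively against it.
coredumpctl dump 17114 --output=core.17114
gdb --batch -ex 'thread apply all bt full' /usr/lib/tracker-extract core.17114 > bt-17114.txt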
On 05/01/2021 11.55, Dave Howorth wrote:
On Tue, 5 Jan 2021 04:17:57 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On the man page, it says:
"The actual extraction is done by a separate process. This is done to isolate the calling process from any memory leaks or crashes in the libraries Tracker uses to extract metadata."
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker? I don't use tracker and have no idea of what the logs show, but maybe it's worth trying to pin down the particular place the problem occurs.
To me it seems that they are aware that it crashes and do some mitigation, instead of solving the problem.

I have no idea how to investigate whether there are wrong libraries related to tracker.

cer@Telcontar:~> rpm -qa | grep tracker
tracker-debuginfo-2.3.2-lp152.2.4.x86_64
grilo-plugin-tracker-0.3.11-lp152.2.2.x86_64
tracker-miner-files-2.3.2-lp152.1.3.x86_64
libtracker-common-2_0-2.3.2-lp152.2.4.x86_64
tracker-miners-2.3.2-lp152.1.3.x86_64
tracker-2.3.2-lp152.2.4.x86_64
libtracker-sparql-2_0-0-2.3.2-lp152.2.4.x86_64
tracker-miners-debuginfo-2.3.2-lp152.1.3.x86_64
tracker-miners-lang-2.3.2-lp152.1.3.noarch
libtracker-control-2_0-0-2.3.2-lp152.2.4.x86_64
libtracker-common-2_0-debuginfo-2.3.2-lp152.2.4.x86_64
libtracker-miner-2_0-0-2.3.2-lp152.2.4.x86_64
tracker-debugsource-2.3.2-lp152.2.4.x86_64
libtracker-sparql-2_0-0-debuginfo-2.3.2-lp152.2.4.x86_64
libxatracker2-1.0.0-lp152.27.1.x86_64
libtracker-miner-2_0-0-debuginfo-2.3.2-lp152.2.4.x86_64
libfolks-tracker25-0.13.1-lp152.2.4.x86_64
tracker-miners-debugsource-2.3.2-lp152.1.3.x86_64
tracker-lang-2.3.2-lp152.2.4.noarch

All are Leap 15.2 libraries.

I did a gdb backtrace of one of the crashes, selected at random; I have 9605 crashes to choose from this minute. I can do more if somebody wants them. Heck, I can trace hundreds of them if somebody tells me how to automate the process. But the system will erase the backtraces automatically via systemd-tmpfiles after some unknown number of days, so there is a hurry.

Telcontar:~ # coredumpctl gdb 25653
           PID: 25653 (tracker-extract)
...
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/lib/tracker-extract'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  std::_Rb_tree<...>::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
warning: Source file is more recent than executable.
1187        { return _M_lower_bound(_M_begin(), _M_end(), __k); }
[Current thread is 1 (Thread 0x7feccb7fe700 (LWP 25669))]
(gdb) quit
Telcontar:~ #

--
Cheers / Saludos,
Carlos E. R. (from 15.2 x86_64 at Telcontar)
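(Putting the pieces together, a sketch of the automation being asked for: loop over every stored dump, extract it, and batch-run gdb. Paths are illustrative, and the awk field index assumes the coredumpctl list layout shown earlier in the thread:)

#!/bin/bash
# For every stored tracker-extract coredump, produce a text backtrace.
mkdir -p ~/tmp/coredump/bt
coredumpctl list tracker-extract --no-legend | awk '{print $5}' |
while read -r pid; do
    core=~/tmp/coredump/bt/core.$pid
    coredumpctl dump "$pid" --output="$core" 2>/dev/null || continue
    gdb --batch -ex 'thread apply all bt full' \
        /usr/lib/tracker-extract "$core" > ~/tmp/coredump/bt/bt.$pid.txt 2>&1
    rm -f "$core"    # the cores are large; keep only the text backtraces
done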
On Tue, 5 Jan 2021 13:09:36 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 11.55, Dave Howorth wrote:
On the man page, it says:
"The actual extraction is done by a separate process. This is done to isolate the calling process from any memory leaks or crashes in the libraries Tracker uses to extract metadata."
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker? I don't use tracker and have no idea of what the logs show, but maybe it's worth trying to pin down the particular place the problem occurs.
To me it seems that they are aware that it crashes and do some mitigation, instead of solving the problem.
But since you don't say on what basis you believe that, I have no way to know whether it is plausible or not. Maybe a link to whatever makes you think that would help.
I have no idea how to investigate whether there are wrong libraries related to tracker.
AIUI, running a process under valgrind can be a good way to debug a segv.
cer@Telcontar:~> rpm -qa | grep tracker
tracker-debuginfo-2.3.2-lp152.2.4.x86_64
grilo-plugin-tracker-0.3.11-lp152.2.2.x86_64
tracker-miner-files-2.3.2-lp152.1.3.x86_64
libtracker-common-2_0-2.3.2-lp152.2.4.x86_64
tracker-miners-2.3.2-lp152.1.3.x86_64
tracker-2.3.2-lp152.2.4.x86_64
libtracker-sparql-2_0-0-2.3.2-lp152.2.4.x86_64
tracker-miners-debuginfo-2.3.2-lp152.1.3.x86_64
tracker-miners-lang-2.3.2-lp152.1.3.noarch
libtracker-control-2_0-0-2.3.2-lp152.2.4.x86_64
libtracker-common-2_0-debuginfo-2.3.2-lp152.2.4.x86_64
libtracker-miner-2_0-0-2.3.2-lp152.2.4.x86_64
tracker-debugsource-2.3.2-lp152.2.4.x86_64
libtracker-sparql-2_0-0-debuginfo-2.3.2-lp152.2.4.x86_64
libxatracker2-1.0.0-lp152.27.1.x86_64
libtracker-miner-2_0-0-debuginfo-2.3.2-lp152.2.4.x86_64
libfolks-tracker25-0.13.1-lp152.2.4.x86_64
tracker-miners-debugsource-2.3.2-lp152.1.3.x86_64
tracker-lang-2.3.2-lp152.2.4.noarch

All are Leap 15.2 libraries.
I did a gdb backtrace of one of the crashes selected at random; I have 9605 crashes to choose from this minute. I can do more if somebody wants them. Heck, I can trace hundreds of them if somebody tells me how to automate the process. But the system will erase the backtraces automatically via systemd-tmpfiles after some unknown number of days, so there is a hurry.
Telcontar:~ # coredumpctl gdb 25653
           PID: 25653 (tracker-extract)
...
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/lib/tracker-extract'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  std::_Rb_tree<...>::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
warning: Source file is more recent than executable.
1187        { return _M_lower_bound(_M_begin(), _M_end(), __k); }
[Current thread is 1 (Thread 0x7feccb7fe700 (LWP 25669))]
(gdb) quit
Telcontar:~ #
On 05/01/2021 13.36, Dave Howorth wrote:
On Tue, 5 Jan 2021 13:09:36 +0100 "Carlos E. R." <robin.listas@telefonica.net> wrote:
On 05/01/2021 11.55, Dave Howorth wrote:
On the man page, it says:
"The actual extraction is done by a separate process. This is done to isolate the calling process from any memory leaks or crashes in the libraries Tracker uses to extract metadata."
So it seems that maybe your problems are caused by some library you have installed that is borked or not compatible with tracker? I don't use tracker and have no idea what the logs show, but maybe it's worth trying to pin down the particular place where the problem occurs.
To me it seems that they are aware that it crashes and apply some mitigation instead of solving the problem.
But since you don't say on what basis you believe that, I have no way to know whether it is plausible or not. Maybe a link to whatever makes you think that would help.
Educated guess based on my past experience as a paid programmer.
I have no idea how to investigate whether there are wrong libraries related to tracker.
AIUI, running a process under valgrind can be a good way to debug a segv.
How exactly? I don't start those processes myself; they are started by the system "somehow". The only thing I have are the coredumps, with debug info packages installed. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 1/5/21 2:30 PM, Carlos E. R. wrote:
Educated guess based on my past experience as a paid programmer.
If you were a paid programmer, then it should be much easier to debug this issue ;)
I have no idea how to investigate whether there are wrong libraries related to tracker.
AIUI, running a process under valgrind can be a good way to debug a segv.
How exactly?
I don't start those processes myself; they are started by the system "somehow".
The only thing I have are the coredumps, with debug info packages installed.
I think there is a good clue in the coredumpctl output; it says,

Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service

which looks like a systemd service. Those you can easily override. On my laptop, I've just found

/usr/lib/systemd/user/tracker-extract.service

So, what you can do is create an override file. The nice thing is that this is not system-level systemd but user-level, so to see it, run as the user (not root)

# systemctl --user status tracker-extract.service

then you can override the exec to run valgrind with,

mkdir -p ~/.config/systemd/user/tracker-extract.service.d
cd ~/.config/systemd/user/tracker-extract.service.d
cat > override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract
EOF
systemctl --user daemon-reload
systemctl --user status tracker-extract.service

And then you should have overridden the system file with your own that runs it under valgrind.

My hunch is you have some file on your system that looks or is like a media file that is then crashing this program.

- Adam

PS. I found very nice documentation about user systemd at https://wiki.archlinux.org/index.php/systemd/User
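One caveat (an assumption on my part about the user manager's behavior): the override only takes effect the next time the unit starts, so after creating it either log out and back in, or restart the unit by hand:

systemctl --user restart tracker-extract.service

To undo the experiment later, remove the override and reload again:

rm ~/.config/systemd/user/tracker-extract.service.d/override.conf
systemctl --user daemon-reload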
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Wednesday, 2021-01-06 at 16:41 +0100, Adam Majer wrote:
On 1/5/21 2:30 PM, Carlos E. R. wrote:
Educated guess based on my past experience as a paid programmer.
If you were a paid programmer, then it should be much easier to debug this issue ;)
I used Borland C and other tools for that other operating system. Not Linux. I'm not familiar with the Linux programming environment. I never really tried :-)
I have no idea how to investigate whether there are wrong libraries related to tracker.
AIUI, running a process under valgrind can be a good way to debug a segv.
How exactly?
I don't start those processes myself; they are started by the system "somehow".
The only thing I have are the coredumps, with debug info packages installed.
I think there is a good clue in the coredumpctl output; it says,
Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service
Which looks like a systemd service. Those you can easily override. On my laptop, I've just found,
/usr/lib/systemd/user/tracker-extract.service
Yes, that's the one.
So, what you can do is create an override file. The nice thing is this is not system level systemd, but user level, so to see it, run as the user (not root)
# systemctl --user status tracker-extract.service
then you can override the exec to run valgrind with,
mkdir -p ~/.config/systemd/user/tracker-extract.service.d cd ~/.config/systemd/user/tracker-extract.service.d
cat > override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract
EOF
systemctl --user daemon-reload
systemctl --user status tracker-extract.service
And then you should have overridden the system file with your own that runs it under valgrind.
Yes, I have done that, and restarted the session to make it run (I had killed it previously). Now I have:
Wed 2021-01-06 20:29:19 CET 3752 1000 100 11 present /usr/lib/tracker-extract
Wed 2021-01-06 20:43:20 CET 5546 1000 100 11 present /usr/lib/tracker-extract
Wed 2021-01-06 20:43:50 CET 6905 1000 100 11 present /usr/lib/tracker-extract
Wed 2021-01-06 20:44:29 CET 9101 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:45:04 CET 9240 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:45:35 CET 9668 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:46:07 CET 9717 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:46:41 CET 9976 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:47:12 CET 10033 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:47:43 CET 10082 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:48:18 CET 10132 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:48:49 CET 10205 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Wed 2021-01-06 20:49:20 CET 10249 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Now what?
My hunch is you have some file on your system that looks or is like a media file that is then crashing this program.
One hunch is that it is working on tiff files, and those are system files, not mine. Another hunch is that it is working on photo files, of which I have thousands.
- Adam
PS. I found very nice documentation about user's systemd at https://wiki.archlinux.org/index.php/systemd/User
Yes, they are. - -- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar) -----BEGIN PGP SIGNATURE----- iHoEARECADoWIQQZEb51mJKK1KpcU/W1MxgcbY1H1QUCX/YVMxwccm9iaW4ubGlz dGFzQHRlbGVmb25pY2EubmV0AAoJELUzGBxtjUfVR4AAoIr6vNrMGEbqvCYl1QvO 0HBzj/OmAJ9VgJ0aEPtBu/5nVdl86P2eNwI1GA== =CZ0c -----END PGP SIGNATURE-----
On 1/6/21 8:53 PM, Carlos E. R. wrote:
On Wednesday, 2021-01-06 at 16:41 +0100, Adam Majer wrote:
On 1/5/21 2:30 PM, Carlos E. R. wrote:
Educated guess based on my past experience as a paid programmer.
If you were a paid programmer, then it should be much easier to debug this issue ;)
I used Borland C and other tools for that other operating system. Not Linux. I'm not familiar with the Linux programming environment. I never really tried :-)
Sure, but it's all the same ;) "The more things change, the more they stay the same" is not just a saying. I also actually started with Borland Turbo C 2.0 back in the day, with 16-bit registers and funny memory models.
Wed 2021-01-06 20:49:20 CET 10249 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Now what?
Can you look in the log at what valgrind has logged for one of these things? It would be great if you could attach one of the coredumps from valgrind AND the log it created in the journal. Valgrind logs have the procid prepended to every line, so you can map the errors from the journal to the event. By default, it seems the journal is only stored per user IF you have created the /var/log/journal directory... otherwise the journal is not persistent and not per user. If you have this directory, you can do

journalctl --user

and then grep for valgrind and the specific procid of the coredump. Compress that and attach it to the bug. If you don't have persistent logs (because /var/log/journal was never created), you have to look in the system journal with `sudo journalctl` and also grep for valgrind and the specific procid.
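For example, taking PID 9101 from the earlier listing (a sketch; valgrind marks its error lines with ==PID== and its warning lines with --PID--):

journalctl --user | grep -E '(==|--)9101(==|--)' > /tmp/valgrind-9101.log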
My hunch is you have some file on your system that looks or is like a media file that is then crashing this program.
One hunch is that it is working on tiff files, and those are system files, not mine. Another hunch is that it is working on photo files, of which I have thousands.
Yes, and it would be very helpful to find which one is breaking it ;) That would be the reproducer. My guess is this is not a system file that is causing it or we would see a lot more reports. - Adam
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Thursday, 2021-01-07 at 09:42 +0100, Adam Majer wrote:
On 1/6/21 8:53 PM, Carlos E. R. wrote:
On Wednesday, 2021-01-06 at 16:41 +0100, Adam Majer wrote:
On 1/5/21 2:30 PM, Carlos E. R. wrote:
Educated guess based on my past experience as a paid programmer.
If you were a paid programmer, then it should be much easier to debug this issue ;)
I used Borland C and other tools for that other operating system. Not Linux. I'm not familiar with the Linux programming environment. I never really tried :-)
Sure, but it's all the same ;) "The more things change, the more they stay the same" is not just a saying. I also actually started with Borland Turbo C 2.0 back in the day, with 16-bit registers and funny memory models.
I debugged with the code running in the target machine, with a serial cable connected to my machine, which ran the debugger IDE. Thus the program had almost the full memory, except for a small stub :-) But most likely I would add printf lines to the code at every step to catch the culprit.
Wed 2021-01-06 20:49:20 CET 10249 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux
Now what?
Can you look in the log at what valgrind has logged for one of these things? It would be great if you could attach one of the coredumps from valgrind AND the log it created in the journal. Valgrind logs have the procid prepended to every line, so you can map the errors from the journal to the event.
By default, it seems the journal is only stored per user IF you have created the /var/log/journal directory... otherwise the journal is not persistent and not per user. If you have this directory, you can do
journalctl --user
and then grep for valgrind and the specific procid of the coredump. Compress that and attach to the bug.
If you don't have persistent logs (because /var/log/journal was never created), you have to look in the system journal with `sudo journalctl` and also grep for valgrind and the specific procid.
No problem, I have both persistent journal and syslog :-) 3.6> 2021-01-06T20:44:27.769716+01:00 Telcontar valgrind 9101 - - --9101-- WARNING: unhandled amd64-linux syscall: 317 <3.6> 2021-01-06T20:44:27.769977+01:00 Telcontar valgrind 9101 - - --9101-- You may be able to write your own handler. <3.6> 2021-01-06T20:44:27.770093+01:00 Telcontar valgrind 9101 - - --9101-- Read the file README_MISSING_SYSCALL_OR_IOCTL. <3.6> 2021-01-06T20:44:27.770209+01:00 Telcontar valgrind 9101 - - --9101-- Nevertheless we consider this a bug. Please report <3.6> 2021-01-06T20:44:27.770310+01:00 Telcontar valgrind 9101 - - --9101-- it at http://valgrind.org/support/bug_reports.html. <3.6> 2021-01-06T20:44:27.874092+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 9238/UID 0). <3.4> 2021-01-06T20:44:28.535715+01:00 Telcontar systemd 6477 - - tracker-extract.service: Main process exited, code=killed, status=31/SYS <3.5> 2021-01-06T20:44:28.536102+01:00 Telcontar systemd 6477 - - tracker-extract.service: Unit entered failed state. <3.4> 2021-01-06T20:44:28.536314+01:00 Telcontar systemd 6477 - - tracker-extract.service: Failed with result 'signal'. <3.6> 2021-01-06T20:44:28.751936+01:00 Telcontar valgrind 9240 - - ==9240== Memcheck, a memory error detector <3.6> 2021-01-06T20:44:28.752192+01:00 Telcontar valgrind 9240 - - ==9240== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T20:44:28.752338+01:00 Telcontar valgrind 9240 - - ==9240== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T20:44:28.752475+01:00 Telcontar valgrind 9240 - - ==9240== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T20:44:28.752599+01:00 Telcontar valgrind 9240 - - ==9240== <1.2> 2021-01-06T20:44:29.104988+01:00 Telcontar systemd-coredump 9239 - - Process 9101 (memcheck-amd64-) of user 1000 dumped core.#012#012Stack trace of thread 9237:#012#0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) <3.6> 2021-01-06T20:44:30.230514+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.440' (uid=1000 pid=5597 comm="/usr/lib/tracker-miner-fs ") <10.3> 2021-01-06T20:44:36.835679+01:00 Telcontar auth - - - gkr-pam: unable to locate daemon control file <1.4> 2021-01-06T20:44:39.112957+01:00 Telcontar memcheck-amd64- 9240 - - Locale 'LANG' is not set, defaulting to C locale <1.4> 2021-01-06T20:44:41.944715+01:00 Telcontar tracker-store 5578 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T20:44:42.000944+01:00 Telcontar memcheck-amd64- 9240 - - Locale 'LANG' is not set, defaulting to C locale <3.6> 2021-01-06T20:44:42.071461+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract' <3.6> 2021-01-06T20:45:03.169668+01:00 Telcontar valgrind 9240 - - --9240-- WARNING: unhandled amd64-linux syscall: 317 <3.6> 2021-01-06T20:45:03.170055+01:00 Telcontar valgrind 9240 - - --9240-- You may be able to write your own handler. <3.6> 2021-01-06T20:45:03.170180+01:00 Telcontar valgrind 9240 - - --9240-- Read the file README_MISSING_SYSCALL_OR_IOCTL. <3.6> 2021-01-06T20:45:03.170306+01:00 Telcontar valgrind 9240 - - --9240-- Nevertheless we consider this a bug. Please report <3.6> 2021-01-06T20:45:03.170431+01:00 Telcontar valgrind 9240 - - --9240-- it at http://valgrind.org/support/bug_reports.html. 
<3.6> 2021-01-06T20:45:03.627164+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 9666/UID 0). <3.6> 2021-01-06T20:45:03.693115+01:00 Telcontar systemd-coredump 9667 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.24397.1609802992000000.lz4. <3.6> 2021-01-06T20:45:03.694113+01:00 Telcontar systemd-coredump 9667 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.24422.1609802993000000.lz4. <3.6> 2021-01-06T20:45:03.694978+01:00 Telcontar systemd-coredump 9667 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.24447.1609802996000000.lz4. <3.6> 2021-01-06T20:45:03.695837+01:00 Telcontar systemd-coredump 9667 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.24507.16 Now, I don't know how to correlate the log entry with the coredump file. Maybe aiming for the last one... cer@Telcontar:~> coredumpctl TIME PID UID GID SIG COREFILE EXE ... Wed 2021-01-06 21:30:45 CET 20253 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux Syslog: 3.6> 2021-01-06T21:30:02.262883+01:00 Telcontar systemd-coredump 20252 - - Removed old coredump core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.11850.1 609963070000000.lz4. <3.4> 2021-01-06T21:30:02.729181+01:00 Telcontar systemd 6477 - - tracker-extract.service: Main process exited, code=killed, status=31/SYS <3.5> 2021-01-06T21:30:02.729663+01:00 Telcontar systemd 6477 - - tracker-extract.service: Unit entered failed state. <3.4> 2021-01-06T21:30:02.729825+01:00 Telcontar systemd 6477 - - tracker-extract.service: Failed with result 'signal'. <3.6> 2021-01-06T21:30:03.001642+01:00 Telcontar valgrind 20253 - - ==20253== Memcheck, a memory error detector <3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253== <1.2> 2021-01-06T21:30:03.213505+01:00 Telcontar systemd-coredump 20252 - - Process 20086 (memcheck-amd64-) of user 1000 dumped core.#012#012Stack trace of thread 20250:#012#0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) <3.6> 2021-01-06T21:30:06.639275+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.52' (uid=1030 pid=12987 comm="/usr/lib/tracker-miner-fs ") <1.4> 2021-01-06T21:30:06.769538+01:00 Telcontar tracker-extract 20269 - - Locale 'LANG' is not set, defaulting to C locale <1.4> 2021-01-06T21:30:06.807229+01:00 Telcontar tracker-extract 20269 - - Locale 'LANG' is not set, defaulting to C locale <3.6> 2021-01-06T21:30:06.807620+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract' <1.4> 2021-01-06T21:30:14.361915+01:00 Telcontar memcheck-amd64- 20253 - - Locale 'LANG' is not set, defaulting to C locale <3.4> 2021-01-06T21:30:15.303965+01:00 Telcontar systemd 6477 - - tracker-extract.service: Current command vanished from the unit file, execution of the command list won't be resumed. 
<3.6> 2021-01-06T21:30:17.640451+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.52' (uid=1030 pid=12987 comm="/usr/lib/tracker-miner-fs ") <3.6> 2021-01-06T21:30:17.673078+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Activating via systemd: service name='org.freedesktop.Tracker1' unit='tracker-store.service' requested by ':1.702' (uid=1000 pid=20253 comm="/usr/bin/valgrind --track-origins=yes /usr/lib/tra") <3.6> 2021-01-06T21:30:17.709536+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Successfully activated service 'org.freedesktop.Tracker1' <1.4> 2021-01-06T21:30:17.709833+01:00 Telcontar tracker-store 20320 - - Locale 'LANG' is not set, defaulting to C locale <1.4> 2021-01-06T21:30:17.743024+01:00 Telcontar tracker-extract 20317 - - Locale 'LANG' is not set, defaulting to C locale <1.4> 2021-01-06T21:30:17.796035+01:00 Telcontar tracker-extract 20317 - - Locale 'LANG' is not set, defaulting to C locale <3.6> 2021-01-06T21:30:17.796424+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract' <1.4> 2021-01-06T21:30:18.071227+01:00 Telcontar tracker-store 20320 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T21:30:18.136010+01:00 Telcontar memcheck-amd64- 20253 - - Locale 'LANG' is not set, defaulting to C locale <3.6> 2021-01-06T21:30:27.878423+01:00 Telcontar tracker-store 20320 - - Received signal:15->'Terminated' <3.6> 2021-01-06T21:30:27.878578+01:00 Telcontar tracker-store 20320 - - OK <3.6> 2021-01-06T21:30:28.640515+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.52' (uid=1030 pid=12987 comm="/usr/lib/tracker-miner-fs ") cer@Telcontar:~> coredumpctl info 20253 UID: 1000 (cer) GID: 100 (users) Signal: 31 (SYS) Timestamp: Wed 2021-01-06 21:30:44 CET (15h ago) Command Line: /usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract Executable: /usr/lib64/valgrind/memcheck-amd64-linux Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service Unit: user@1000.service User Unit: tracker-extract.service Slice: user-1000.slice Owner UID: 1000 (cer) Boot ID: 431e0a53560949238ef3840406768db8 Machine ID: 2ce1d54548517a7307c1c2bc38206d00 Hostname: Telcontar Storage: /var/lib/systemd/coredump/core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.20253.1609965044000000.lz4 Message: Process 20253 (memcheck-amd64-) of user 1000 dumped core. Stack trace of thread 20409: #0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) cer@Telcontar:~> That coredump file I have saved on home directory for preservation. It has 83M, so it can not be uploaded to bugzilla. cer@Telcontar:~> coredumpctl gdb 20253 ... ore was generated by `/usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract'. Program terminated with signal SIGSYS, Bad system call. #0 0x0000000058023ee2 in ?? () [Current thread is 1 (LWP 20409)] Missing separate debuginfos, use: zypper install valgrind-debuginfo-3.15.0-lp152.3.3.x86_64 Ok, doing so... but what would be the purpose of debugging valgrind iteself? Repeating. 
cer@Telcontar:~> coredumpctl gdb 20253 cer@Telcontar:~> coredumpctl gdb 20253 PID: 20253 (memcheck-amd64-) UID: 1000 (cer) GID: 100 (users) Signal: 31 (SYS) Timestamp: Wed 2021-01-06 21:30:44 CET (15h ago) Command Line: /usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract Executable: /usr/lib64/valgrind/memcheck-amd64-linux Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service Unit: user@1000.service User Unit: tracker-extract.service Slice: user-1000.slice Owner UID: 1000 (cer) Boot ID: 431e0a53560949238ef3840406768db8 Machine ID: 2ce1d54548517a7307c1c2bc38206d00 Hostname: Telcontar Storage: /var/lib/systemd/coredump/core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.20253.1609965044000000.lz4 Message: Process 20253 (memcheck-amd64-) of user 1000 dumped core. Stack trace of thread 20409: #0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) GNU gdb (GDB; openSUSE Leap 15.2) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-suse-linux". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://bugs.opensuse.org/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from /usr/lib64/valgrind/memcheck-amd64-linux... Reading symbols from /usr/lib/debug/usr/lib64/valgrind/memcheck-amd64-linux-3.15.0-lp152.3.3.x86_64.debug... warning: core file may not match specified executable file. [New LWP 20409] [New LWP 20265] [New LWP 20253] [New LWP 20267] [New LWP 20291] [New LWP 20286] [New LWP 20293] [New LWP 20290] [New LWP 20292] [New LWP 20285] [New LWP 20287] [New LWP 20408] [New LWP 20301] [New LWP 20289] [New LWP 20294] [New LWP 20295] [New LWP 20288] [New LWP 20388] Core was generated by `/usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract'. Program terminated with signal SIGSYS, Bad system call. #0 0x0000000058023ee2 in do_syscall_WRK () [Current thread is 1 (LWP 20409)] (gdb) q cer@Telcontar:~> I don't think that is of any use... :-? I suspect that it is valgrind itself which is crashing now. Let's try again. This produces the list of usables: cer@Telcontar:~> coredumpctl | grep present | less The first one with valgrind: Wed 2021-01-06 21:04:38 CET 13531 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux syslog with context - there is a bug with valgrind iteself, too, and another with tracker-store besides tracker-extract: <3.6> 2021-01-06T21:03:24.779547+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Successfully activated service 'org.freedesktop.Tracker1.Miner.Ex tract' <3.6> 2021-01-06T21:03:28.495260+01:00 Telcontar systemd 1 - - user-runtime-dir@482.service: Unit not needed anymore. Stopping. <3.6> 2021-01-06T21:03:28.502868+01:00 Telcontar systemd 1 - - Stopping User Manager for UID 482... <10.6> 2021-01-06T21:03:28.518095+01:00 Telcontar systemd - - - pam_unix(systemd-user:session): session closed for user sddm <3.6> 2021-01-06T21:03:28.518254+01:00 Telcontar systemd 1 - - user-runtime-dir@482.service: Unit not needed anymore. Stopping. 
<3.6> 2021-01-06T21:03:28.518992+01:00 Telcontar systemd 1 - - Stopped User Manager for UID 482. <3.6> 2021-01-06T21:03:28.519384+01:00 Telcontar systemd 1 - - user-runtime-dir@482.service: Unit not needed anymore. Stopping. <3.6> 2021-01-06T21:03:28.520581+01:00 Telcontar systemd 1 - - user-482.slice: Unit not needed anymore. Stopping. <3.6> 2021-01-06T21:03:28.520668+01:00 Telcontar systemd 1 - - Stopping User Runtime Directory /run/user/482... <3.6> 2021-01-06T21:03:28.555737+01:00 Telcontar systemd 1 - - Stopped User Runtime Directory /run/user/482. <3.6> 2021-01-06T21:03:28.555944+01:00 Telcontar systemd 1 - - user-482.slice: Unit not needed anymore. Stopping. <3.6> 2021-01-06T21:03:28.556143+01:00 Telcontar systemd 1 - - Removed slice User Slice of UID 482. <1.4> 2021-01-06T21:03:29.463630+01:00 Telcontar tracker-store 12949 - - Unable to insert multiple values for subject `http://www.tracker-project.org/temp/nmm#albu mTitle' and single valued property `rdfs:comment' (old_value: 'nmm:albumTitle is deprecated, use nie:title instead, extractors still need updating', new value: 'nmm :albumTitle is deprecated, use nie:title instead') <1.4> 2021-01-06T21:03:29.620918+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T21:03:44.885569+01:00 Telcontar tracker-store 12949 - - message repeated 4 times: [ tracker_writeback_transact: assertion 'private == NULL' failed ] <4.5> 2021-01-06T21:03:44.887983+01:00 Telcontar dbus-daemon 1980 - - [system] Failed to activate service 'org.bluez': timed out (service_start_timeout=25000ms) <3.6> 2021-01-06T21:03:44.888189+01:00 Telcontar pulseaudio 12748 - - E: [pulseaudio] bluez5-util.c: GetManagedObjects() failed: org.freedesktop.DBus.Error.NoReply : Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeou t expired, or the network connection was broken. <1.4> 2021-01-06T21:03:44.966923+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T21:03:46.661039+01:00 Telcontar tracker-store 12949 - - message repeated 5 times: [ tracker_writeback_transact: assertion 'private == NULL' failed ] 3.6> 2021-01-06T21:03:46.848575+01:00 Telcontar valgrind 12444 - - --12444-- WARNING: unhandled amd64-linux syscall: 317 <3.6> 2021-01-06T21:03:46.848914+01:00 Telcontar valgrind 12444 - - --12444-- You may be able to write your own handler. <3.6> 2021-01-06T21:03:46.849064+01:00 Telcontar valgrind 12444 - - --12444-- Read the file README_MISSING_SYSCALL_OR_IOCTL. <3.6> 2021-01-06T21:03:46.849200+01:00 Telcontar valgrind 12444 - - --12444-- Nevertheless we consider this a bug. Please report <============== <3.6> 2021-01-06T21:03:46.849324+01:00 Telcontar valgrind 12444 - - --12444-- it at http://valgrind.org/support/bug_reports.html. <============== <3.6> 2021-01-06T21:03:46.912952+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 13526/UID 0). <3.6> 2021-01-06T21:03:46.978902+01:00 Telcontar systemd-coredump 13527 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13193.1 609813528000000.lz4. <3.6> 2021-01-06T21:03:46.979225+01:00 Telcontar systemd-coredump 13527 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13217.1609813531000000.lz4. 
<3.6> 2021-01-06T21:03:46.979533+01:00 Telcontar systemd-coredump 13527 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13251.1609813532000000.lz4. <1.4> 2021-01-06T21:03:47.151093+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <3.4> 2021-01-06T21:03:47.568152+01:00 Telcontar systemd 6477 - - tracker-extract.service: Main process exited, code=killed, status=31/SYS <3.5> 2021-01-06T21:03:47.568376+01:00 Telcontar systemd 6477 - - tracker-extract.service: Unit entered failed state. <3.4> 2021-01-06T21:03:47.568489+01:00 Telcontar systemd 6477 - - tracker-extract.service: Failed with result 'signal'. <1.4> 2021-01-06T21:03:47.636451+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <3.6> 2021-01-06T21:03:47.750388+01:00 Telcontar valgrind 13531 - - ==13531== Memcheck, a memory error detector <3.6> 2021-01-06T21:03:47.750645+01:00 Telcontar valgrind 13531 - - ==13531== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:03:47.750790+01:00 Telcontar valgrind 13531 - - ==13531== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:03:47.750936+01:00 Telcontar valgrind 13531 - - ==13531== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:03:47.751065+01:00 Telcontar valgrind 13531 - - ==13531== <1.4> 2021-01-06T21:03:48.077338+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.2> 2021-01-06T21:03:48.155653+01:00 Telcontar systemd-coredump 13527 - - Process 12444 (memcheck-amd64-) of user 1000 dumped core.#012#012Stack trace of thread 13525:#012#0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) <1.4> 2021-01-06T21:03:48.483612+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T21:03:48.944769+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <3.6> 2021-01-06T21:03:49.230160+01:00 Telcontar dbus-daemon 6500 - - [session uid=1000 pid=6500] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.440' (uid=1000 pid=5597 comm="/usr/lib/tracker-miner-fs ") <1.4> 2021-01-06T21:03:49.342165+01:00 Telcontar tracker-store 12949 - - tracker_writeback_transact: assertion 'private == NULL' failed <1.4> 2021-01-06T21:03:52.430926+01:00 Telcontar tracker-store 12949 - - message repeated 13 times: [ tracker_writeback_transact: assertion 'private == NULL' failed] <3.6> 2021-01-06T21:03:52.434444+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Activating via systemd: service name='org.freedesktop.Tracker1.Miner.Extract' unit='tracker-extract.service' requested by ':1.52' (uid=1030 pid=12987 comm="/usr/lib/tracker-miner-fs ") <1.4> 2021-01-06T21:03:52.512227+01:00 Telcontar tracker-extract 13589 - - External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though. 
<1.4> 2021-01-06T21:03:52.632275+01:00 Telcontar tracker-extract 13589 - - Locale 'LANG' is not set, defaulting to C locale <1.4> 2021-01-06T21:03:52.685689+01:00 Telcontar tracker-extract 13589 - - Locale 'LANG' is not set, defaulting to C locale <3.6> 2021-01-06T21:03:52.686012+01:00 Telcontar dbus-daemon 12609 - - [session uid=1030 pid=12609] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract' <0.6> 2021-01-06T21:03:52.695939+01:00 Telcontar kernel - - - [30749.566788] single[13612]: segfault at 10 ip 00007fd21d131a0c sp 00007fd1fa54efb0 error 4 in libexiv2.so.26.0.0[7fd21cf1c000+2d9000] <0.6> 2021-01-06T21:03:52.695956+01:00 Telcontar kernel - - - [30749.566792] Code: 0f 87 af 03 00 00 48 39 dd 0f 87 15 ff ff ff 0f 1f 80 00 00 00 00 48 8d 05 71 53 31 00 4c 8b 74 24 48 4c 8b 6c 24 40 48 8b 00 <48> 8b 68 10 48 8d 58 08 48 89 44 24 18 48 89 5c 24 10 48 85 ed 75 <3.6> 2021-01-06T21:03:52.730187+01:00 Telcontar systemd 1 - - Started Process Core Dump (PID 13613/UID 0). <3.6> 2021-01-06T21:03:52.785226+01:00 Telcontar systemd-coredump 13614 - - Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13275.1 cer@Telcontar:~> coredumpctl info 13531 PID: 13531 (tracker-extract) UID: 1000 (cer) GID: 100 (users) Signal: 11 (SEGV) Timestamp: Sun 2021-01-03 00:49:15 CET (4 days ago) Command Line: /usr/lib/tracker-extract Executable: /usr/lib/tracker-extract Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service Unit: user@1000.service User Unit: tracker-extract.service Slice: user-1000.slice Owner UID: 1000 (cer) Boot ID: 2ef60a9b78a14f8aa0ad0315a348b17c Machine ID: 2ce1d54548517a7307c1c2bc38206d00 Hostname: Telcontar Storage: /var/lib/systemd/coredump/core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13531.1609631355000000.lz4 (inaccessible) Message: Process 13531 (tracker-extract) of user 1000 dumped core. 
Stack trace of thread 13547: #0 0x00007f6dd6295a0c n/a (libexiv2.so.26) #1 0x00007f6dd6270a0b n/a (libexiv2.so.26) #2 0x00007f6dd6260344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26) #3 0x00007f6dd625bba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26) #4 0x00007f6dd625e539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26) #5 0x00007f6dd6244576 _ZN5Exiv28Internal11TiffDecoder9decodeXmpEPKNS0_13TiffEntryBaseE (libexiv2.so.26) #6 0x00007f6dd622c7a6 _ZN5Exiv28Internal13TiffDirectory8doAcceptERNS0_11TiffVisitorE (libexiv2.so.26) #7 0x00007f6dd6235e9f _ZN5Exiv28Internal16TiffParserWorker6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhjjPFMNS0_11TiffDecoderEFvPKNS0_13TiffEntryBaseEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjNS0_5IfdIdEEPNS0_14TiffHeaderBaseE (libexiv2.so.26) #8 0x00007f6dd6235fe7 _ZN5Exiv210TiffParser6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhj (libexiv2.so.26) #9 0x00007f6dd6236179 _ZN5Exiv29TiffImage12readMetadataEv (libexiv2.so.26) #10 0x00007f6dd65b401d n/a (libgexiv2.so.2) #11 0x00007f6dd65b426f gexiv2_metadata_open_path (libgexiv2.so.2) #12 0x00007f6dd67ce848 tracker_extract_get_metadata (libextract-raw.so) #13 0x000055fa3efcceac get_file_metadata (tracker-extract) #14 0x000055fa3efcd49b get_metadata (tracker-extract) #15 0x000055fa3efcd530 single_thread_get_metadata (tracker-extract) #16 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #17 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #18 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13542: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13544: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13545: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13540: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13541: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 
g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13538: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13546: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9218a g_cond_wait_until (libglib-2.0.so.0) #2 0x00007f6e20c17c51 g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c707a6 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13534: #0 0x00007f6e2027d6db __GI___poll (libc.so.6) #1 0x00007f6e20c467b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007f6e20c468cc g_main_context_iteration (libglib-2.0.so.0) #3 0x00007f6e182bd5bd dconf_gdbus_worker_thread (libdconfsettings.so) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13543: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13536: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13533: #0 0x00007f6e2027d6db __GI___poll (libc.so.6) #1 0x00007f6e20c467b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007f6e20c46b02 g_main_loop_run (libglib-2.0.so.0) #3 0x00007f6e2148eb16 gdbus_shared_thread_func (libgio-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13539: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13537: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 
0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13531: #0 0x00007f6e2027d6db __GI___poll (libc.so.6) #1 0x00007f6e20c467b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007f6e20c46b02 g_main_loop_run (libglib-2.0.so.0) #3 0x000055fa3efca07c main (tracker-extract) #4 0x00007f6e201b034a __libc_start_main (libc.so.6) #5 0x000055fa3efca16a _start (tracker-extract) Stack trace of thread 13535: #0 0x00007f6e20282839 syscall (libc.so.6) #1 0x00007f6e20c9206f g_cond_wait (libglib-2.0.so.0) #2 0x00007f6e20c17c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007f6e20c70845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) Stack trace of thread 13532: #0 0x00007f6e2027d6db __GI___poll (libc.so.6) #1 0x00007f6e20c467b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007f6e20c468cc g_main_context_iteration (libglib-2.0.so.0) #3 0x00007f6e20c46911 glib_worker_main (libglib-2.0.so.0) #4 0x00007f6e20c6fdce g_thread_proxy (libglib-2.0.so.0) #5 0x00007f6e2054f4f9 start_thread (libpthread.so.0) #6 0x00007f6e20287fbf __clone (libc.so.6) PID: 13531 (memcheck-amd64-) UID: 1000 (cer) GID: 100 (users) Signal: 31 (SYS) Timestamp: Wed 2021-01-06 21:04:36 CET (16h ago) Command Line: /usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract Executable: /usr/lib64/valgrind/memcheck-amd64-linux Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service Unit: user@1000.service User Unit: tracker-extract.service Slice: user-1000.slice Owner UID: 1000 (cer) Boot ID: 431e0a53560949238ef3840406768db8 Machine ID: 2ce1d54548517a7307c1c2bc38206d00 Hostname: Telcontar Storage: /var/lib/systemd/coredump/core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.13531.1609963476000000.lz4 Message: Process 13531 (memcheck-amd64-) of user 1000 dumped core. Stack trace of thread 13827: #0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) cer@Telcontar:~> coredumpctl gdb 13531 cer@Telcontar:~> coredumpctl gdb 13531 PID: 13531 (memcheck-amd64-) UID: 1000 (cer) GID: 100 (users) Signal: 31 (SYS) Timestamp: Wed 2021-01-06 21:04:36 CET (16h ago) Command Line: /usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract Executable: /usr/lib64/valgrind/memcheck-amd64-linux Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service Unit: user@1000.service User Unit: tracker-extract.service Slice: user-1000.slice Owner UID: 1000 (cer) Boot ID: 431e0a53560949238ef3840406768db8 Machine ID: 2ce1d54548517a7307c1c2bc38206d00 Hostname: Telcontar Storage: /var/lib/systemd/coredump/core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.13531.1609963476000000.lz4 Message: Process 13531 (memcheck-amd64-) of user 1000 dumped core. Stack trace of thread 13827: #0 0x0000000058023ee2 n/a (/usr/lib64/valgrind/memcheck-amd64-linux) GNU gdb (GDB; openSUSE Leap 15.2) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-suse-linux". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://bugs.opensuse.org/>. 
Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from /usr/lib64/valgrind/memcheck-amd64-linux... Reading symbols from /usr/lib/debug/usr/lib64/valgrind/memcheck-amd64-linux-3.15.0-lp152.3.3.x86_64.debug... warning: core file may not match specified executable file. <==== check below [New LWP 13827] [New LWP 13572] [New LWP 13531] [New LWP 13546] [New LWP 13573] [New LWP 13569] [New LWP 13821] [New LWP 13552] [New LWP 13574] [New LWP 13576] [New LWP 13579] [New LWP 13585] [New LWP 13591] [New LWP 13604] [New LWP 13606] [New LWP 13581] [New LWP 13826] [New LWP 13794] Core was generated by `/usr/bin/valgrind --track-origins=yes /usr/lib/tracker-extract'. Program terminated with signal SIGSYS, Bad system call. #0 0x0000000058023ee2 in do_syscall_WRK () [Current thread is 1 (LWP 13827)] (gdb) gdb) bt #0 0x0000000058023ee2 in do_syscall_WRK () #1 0x000000005803072d in vgPlain_do_syscall (sysno=sysno@entry=128, a1=a1@entry=69115956600, a2=a2@entry=69115956608, a3=a3@entry=1478553584, a4=a4@entry=8, a5=a5@entry=0, a6=0, a8=0, a7=0) at m_syscall.c:932 #2 0x00000000581c8918 in vgPlain_sigtimedwait_zero (info=0x1017a1cd80, set=0x1017a1cd78) at m_libcsignal.c:416 #3 vgPlain_poll_signals (tid=tid@entry=18) at m_signals.c:2949 #4 0x00000000581d16bd in vgPlain_scheduler (tid=tid@entry=18) at m_scheduler/scheduler.c:1279 #5 0x00000000581d3327 in thread_wrapper (tidW=18) at m_syswrap/syswrap-linux.c:71 #6 run_a_thread_NORETURN (tidW=18) at m_syswrap/syswrap-linux.c:125 #7 0x00000000581d374b in vgModuleLocal_start_thread_NORETURN (arg=<optimized out>) at m_syswrap/syswrap-linux.c:315 #8 0x0000000058023f81 in do_syscall_clone_amd64_linux () #9 0xdeadbeefdeadbeef in ?? () #10 0xdeadbeefdeadbeef in ?? () #11 0xdeadbeefdeadbeef in ?? () #12 0xdeadbeefdeadbeef in ?? () #13 0x0000000000000000 in ?? () (gdb) (gdb) q cer@Telcontar:~> coredumpctl | grep 13531 Sun 2021-01-03 00:49:16 CET 13531 1000 100 11 missing /usr/lib/tracker-extract Wed 2021-01-06 21:04:38 CET 13531 1000 100 31 present /usr/lib64/valgrind/memcheck-amd64-linux cer@Telcontar:~>
My hunch is you have some file on your system that looks or is like a media file that is then crashing this program.
One hunch is that it is working on tiff files, and those are system files, not mine. Another hunch is that it is working on photo files, of which I have thousands.
Yes, and it would be very helpful to find which one is breaking it ;) That would be the reproducer. My guess is this is not a system file that is causing it or we would see a lot more reports.
Well, I don't see any file name above... :-? - -- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar) -----BEGIN PGP SIGNATURE----- iHoEARECADoWIQQZEb51mJKK1KpcU/W1MxgcbY1H1QUCX/b/Vxwccm9iaW4ubGlz dGFzQHRlbGVmb25pY2EubmV0AAoJELUzGBxtjUfV8woAn3UQuXAoAB2rfiOrAhtb Rq+tc5Z7AJwNbGogIK5P/fXAMZWn6kMLBz30tQ== =Jo52 -----END PGP SIGNATURE-----
On 1/7/21 1:32 PM, Carlos E. R. wrote:
<3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253==
These are the lines that you are looking for. You need to find the ==<PID>== lines with valgrind in them for one of the coredumps. The PID is the first column. Other log entries are not really important. Here we only see the start of the program. Valgrind will also generate backtraces with some additional information on errors. - Adam
On 07/01/2021 14.37, Adam Majer wrote:
On 1/7/21 1:32 PM, Carlos E. R. wrote:
<3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253==
These are the lines that you are looking for. You need to find the ==<PID>== lines with valgrind in them for one of the coredumps. The PID is the first column. Other log entries are not really important.
I did that; I included the info in the previous post. I can not attach the coredump because it is 80 MB in size. I can upload them to google drive and share with someone with a gmail account.
Here we only see the start of the program.
Valgrind will also generate backtraces with some additional information on errors.
Sorry, I'm not familiar with valgrind. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 1/7/21 2:42 PM, Carlos E. R. wrote:
On 07/01/2021 14.37, Adam Majer wrote:
On 1/7/21 1:32 PM, Carlos E. R. wrote:
<3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253==
These are the lines that you are looking for. You need to find the ==<PID>== lines with valgrind in them for one of the coredumps. The PID is the first column. Other log entries are not really important.
I did that, I included the info in the previous post.
I can not attach the coredump because it is 80 MB in size. I can upload them to google drive and share with someone with a gmail account.
You can share a link with me. As for the last post, it's just a few lines of journal. What we need is something that would look like,
valgrind ./a.out
==22002== Memcheck, a memory error detector
==22002== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==22002== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==22002== Command: ./a.out
==22002==
==22002== Invalid write of size 4
==22002==    at 0x4004AB: main (test.c:6)
==22002==  Address 0x8 is not stack'd, malloc'd or (recently) free'd
==22002==
==22002==
==22002== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==22002==  Access not within mapped region at address 0x8
==22002==    at 0x4004AB: main (test.c:6)
==22002==  If you believe this happened as a result of a stack
==22002==  overflow in your program's main thread (unlikely but
==22002==  possible), you can try to increase the size of the
==22002==  main thread stack using the --main-stacksize= flag.
==22002==  The main thread stack size used in this run was 8388608.
==22002==
==22002== HEAP SUMMARY:
==22002==     in use at exit: 0 bytes in 0 blocks
==22002==   total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==22002==
==22002== All heap blocks were freed -- no leaks are possible
==22002==
==22002== For lists of detected and suppressed errors, rerun with: -s
==22002== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Segmentation fault (core dumped)

which is for a program like,

adamm@adamm:~/t> cat test.c
int main() {
    int a;
    int *b = 0;
    b[2] = 2;
    return a;
}

To get this, you can do something like,

journalctl | grep ==22002== > /tmp/crash.log

and replace the number with the process id (PID) of the core that is listed in coredumpctl. Then look in /tmp/crash.log and send it with the coredump if it looks complete. If it doesn't exist or you don't see a backtrace in the log, maybe it was in one of the child processes (I'm not familiar with the program in question); then change the .service override to (one line)

ExecStart=/usr/bin/valgrind --track-origins=yes --trace-children=yes /usr/lib/tracker-extract

- Adam
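Spelled out, the modified override.conf would then read (the same file as before; only the ExecStart line changes, and this is a sketch of the end state rather than something tested here):

[Service]
ExecStart=
ExecStart=/usr/bin/valgrind --track-origins=yes --trace-children=yes /usr/lib/tracker-extract

followed by systemctl --user daemon-reload and a restart of the unit, so the new command line is picked up.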
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Thursday, 2021-01-07 at 15:14 +0100, Adam Majer wrote:
On 1/7/21 2:42 PM, Carlos E. R. wrote:
On 07/01/2021 14.37, Adam Majer wrote:
On 1/7/21 1:32 PM, Carlos E. R. wrote:
<3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253==
These are the lines that you are looking for. You need to find the ==<PID>== lines with valgrind in them for one of the coredumps. The PID is the first column. Other log entries are not really important.
I did that, I included the info in the previous post.
I can not attach the coredump because it is 80 MB in size. I can upload them to google drive and share with someone with a gmail account.
You can share a link with me.
But you don't have a gmail address; I'd have to share an open link with no security.
As for the last post, it's just a few lines of journal.
and "coredumpctl gdb PID"
What we need is something that would look like,
valgrind ./a.out
==22002== Memcheck, a memory error detector
==22002== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
The files I have are lz4-compressed binaries.
To get this, you can do something like,
journalctl | grep ==22002== > /tmp/crash.log
I posted that already from syslog. Reposting from journal: cer@Telcontar:~> journalctl | grep 13531 Jan 01 01:32:59 Telcontar os-prober[13531]: debug: running /usr/lib/os-probes/mounted/30utility on mounted /dev/sdb8 Stack trace of thread 13531: Jan 03 00:49:15 Telcontar tracker-extract[13531]: Locale 'LANG' is not set, defaulting to C locale Jan 03 00:49:15 Telcontar tracker-extract[13531]: Locale 'LANG' is not set, defaulting to C locale Jan 03 00:49:16 Telcontar systemd-coredump[13549]: Process 13531 (tracker-extract) of user 1000 dumped core. Stack trace of thread 13531: Stack trace of thread 13531: Jan 03 14:20:46 Telcontar systemd-coredump[16152]: Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13531.1609631355000000.lz4. Jan 06 21:03:46 Telcontar systemd-coredump[13527]: Removed old coredump core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.13217.1609813531000000.lz4. Jan 06 21:03:47 Telcontar valgrind[13531]: ==13531== Memcheck, a memory error detector Jan 06 21:03:47 Telcontar valgrind[13531]: ==13531== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. Jan 06 21:03:47 Telcontar valgrind[13531]: ==13531== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info Jan 06 21:03:47 Telcontar valgrind[13531]: ==13531== Command: /usr/lib/tracker-extract Jan 06 21:03:47 Telcontar valgrind[13531]: ==13531== Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== HEAP SUMMARY: Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== in use at exit: 3,041,676 bytes in 38,623 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== total heap usage: 125,003 allocs, 86,380 frees, 12,068,295 bytes allocated Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== LEAK SUMMARY: Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== definitely lost: 21,976 bytes in 56 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== indirectly lost: 2,560 bytes in 40 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== possibly lost: 4,652 bytes in 62 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== still reachable: 2,910,480 bytes in 38,069 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== of which reachable via heuristic: Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== length64 : 1,464 bytes in 30 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== newarray : 1,808 bytes in 33 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== suppressed: 0 bytes in 0 blocks Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== Rerun with --leak-check=full to see details of leaked memory Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== For lists of detected and suppressed errors, rerun with: -s Jan 06 21:03:57 Telcontar valgrind[13531]: ==13736== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) Jan 06 21:03:57 Telcontar memcheck-amd64-[13531]: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though. 
Jan 06 21:04:03 Telcontar valgrind[13531]: ==13531== Warning: invalid file descriptor -1 in syscall close() Jan 06 21:04:03 Telcontar valgrind[13531]: ==13531== Warning: invalid file descriptor -1 in syscall close() Jan 06 21:04:07 Telcontar memcheck-amd64-[13531]: Locale 'LANG' is not set, defaulting to C locale Jan 06 21:04:10 Telcontar memcheck-amd64-[13531]: Locale 'LANG' is not set, defaulting to C locale Jan 06 21:04:36 Telcontar valgrind[13531]: --13531-- WARNING: unhandled amd64-linux syscall: 317 Jan 06 21:04:36 Telcontar valgrind[13531]: --13531-- You may be able to write your own handler. Jan 06 21:04:36 Telcontar valgrind[13531]: --13531-- Read the file README_MISSING_SYSCALL_OR_IOCTL. Jan 06 21:04:36 Telcontar valgrind[13531]: --13531-- Nevertheless we consider this a bug. Please report Jan 06 21:04:36 Telcontar valgrind[13531]: --13531-- it at http://valgrind.org/support/bug_reports.html. Jan 06 21:04:38 Telcontar systemd-coredump[13832]: Process 13531 (memcheck-amd64-) of user 1000 dumped core. Jan 07 15:17:44 Telcontar systemd-coredump[29567]: Removed old coredump core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.13531.1609963476000000.lz4. cer@Telcontar:~>
and replace the number with the process ID (PID) of the core listed by coredumpctl. Then look in /tmp/crash.log and send it with the coredump if it looks complete. If it doesn't exist, or you don't see a backtrace in the log, maybe it was in one of the child processes (I'm not familiar with the program in question); in that case, change the .service override to (one line)
ExecStart=/usr/bin/valgrind --track-origins=yes --trace-children=yes /usr/lib/tracker-extract
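A minimal sketch of what that override could look like as a user-level drop-in, assuming the tracker-extract.service user unit that appears later in this thread (file and directory names here are illustrative):

# create a drop-in that wraps tracker-extract in valgrind
mkdir -p ~/.config/systemd/user/tracker-extract.service.d
cat > ~/.config/systemd/user/tracker-extract.service.d/valgrind.conf <<'EOF'
[Service]
# the empty ExecStart= clears the packaged command before replacing it
ExecStart=
ExecStart=/usr/bin/valgrind --track-origins=yes --trace-children=yes /usr/lib/tracker-extract
EOF
systemctl --user daemon-reload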
I can call tracker-extract directly on a single file and it crashes. So if you tell me what valgrind line to use, I'll try that. -- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar)
* Carlos E. R. <robin.listas@telefonica.net> [01-07-21 10:22]:
On Thursday, 2021-01-07 at 15:14 +0100, Adam Majer wrote:
On 1/7/21 2:42 PM, Carlos E. R. wrote:
On 07/01/2021 14.37, Adam Majer wrote:
On 1/7/21 1:32 PM, Carlos E. R. wrote:
<3.6> 2021-01-06T21:30:03.001937+01:00 Telcontar valgrind 20253 - - ==20253== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. <3.6> 2021-01-06T21:30:03.002093+01:00 Telcontar valgrind 20253 - - ==20253== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info <3.6> 2021-01-06T21:30:03.002235+01:00 Telcontar valgrind 20253 - - ==20253== Command: /usr/lib/tracker-extract <3.6> 2021-01-06T21:30:03.002398+01:00 Telcontar valgrind 20253 - - ==20253==
These are the lines that you are looking for. You need to find the ==<PID>== lines with valgrind in them for one of the coredumps. The PID is the first column. Other log entries are not really important.
I did that, I included the info in the previous post.
I can not attach the coredump because it is 80 MB in size. I can upload it to Google Drive and share it with someone who has a gmail account.
You can share a link with me.
But you don't have a gmail address, I'd have to share an open link with no security.
you can share it with me, I can put it on my local server for him to see.
-- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar)
-- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
* Adam Majer <amajer@suse.de> [01-07-21 11:08]:
On 1/7/21 4:50 PM, Patrick Shanahan wrote:
But you don't have a gmail address, I'd have to share an open link with no security.
you can share it with me, I can put it on my local server for him to see.
It's OK, we have sorted this now :)
Adam, how did you see this? I am under censorship and this post has not passed yet, timing is not convenient for the probable *single* appointed censor. notice: as I am currently under censorship, this message may spend a long time waiting for a time convenient for the censor to review. I have cc'd you. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On 07/01/2021 17.30, Patrick Shanahan wrote:
* Adam Majer <> [01-07-21 11:08]:
On 1/7/21 4:50 PM, Patrick Shanahan wrote:
But you don't have a gmail address, I'd have to share an open link with no security.
you can share it with me, I can put it on my local server for him to see.
It's OK, we have sorted this now :)
Adam, how did you see this? I am under censorship and this post has not passed yet, timing is not convenient for the probable *single* appointed censor.
Your posts are being posted just fine. The delay was one minute. Some posts of yesterday took 10 minutes. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
* Carlos E. R. <robin.listas@telefonica.net> [01-07-21 13:20]:
On 07/01/2021 17.30, Patrick Shanahan wrote:
* Adam Majer <> [01-07-21 11:08]:
On 1/7/21 4:50 PM, Patrick Shanahan wrote:
But you don't have a gmail address, I'd have to share an open link with no security.
you can share it with me, I can put it on my local server for him to see.
It's OK, we have sorted this now :)
Adam, how did you see this? I am under censorship and this post has not passed yet, timing is not convenient for the probable *single* appointed censor.
Your posts are being posted just fine. The delay was one minute. Some posts of yesterday took 10 minutes.
some have taken 6+ hours. notice: as I am currently under censorship, this message may spend a long time waiting for a time convenient for the censor to review. I have cc'd you. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On 1/5/21 6:09 AM, Carlos E. R. wrote:
Program terminated with signal SIGSEGV, Segmentation fault. #0 std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
That's not something you are doing; the STL uses _Rb_tree to implement std::map. So the above is just the normal operation of whatever container is using the _Rb_tree, with a node type that looks like:

std::_Rb_tree < std::string, std::pair<const std::string, std::string>, std::_Select1st<std::pair<const std::string, std::string>>, std::less<std::string>, std::allocator<std::pair<const std::string, std::string>> >

So basically it's a

std::map<std::string, std::string>

The _Select1st, less, and allocator are just standard template components: how to select the first part of the pair (the key), the sort function, and the allocator for making more nodes.

If the segfault begins there, then there is a pointer problem, either a nullptr or just a fsck'ed up address being returned for some part of that node. This is more a side-effect of the problem than an identification of the cause.

If it were me, I'd see if I could identify a process to strace (if that is possible). The thing gets started somehow. If it has a systemd unit file, take a look at the process that is being launched there; you can attach strace to the PID of the running process if you can get to it before it crashes. Some others more up on stracing may have more refined suggestions about what process to look for, or maybe even a way to have systemd start the strace for you when the process starts. If it's an /etc/xdg/autostart thing, then you may be able to replace the /etc/xdg/autostart/tracker... file with a shell script wrapper that starts strace on whatever executable is normally in /etc/xdg/autostart/.... I'd just mv, e.g., /etc/xdg/autostart/tracker to /etc/xdg/autostart/tracker_real, then write the shell script wrapper and save it as /etc/xdg/autostart/tracker (or something similar).

I don't know if that helps, but that is at least the path I'd start down if I had the problem. -- David C. Rankin, J.D., P.E.
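For what it's worth, the mangled frame #0 that appears in the traces later in this thread can be run through c++filt to confirm this reading; a sketch (the output shown is abbreviated by hand):

echo _ZNSt8_Rb_treeINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_S5_ESt10_Select1stIS8_ESt4lessIS5_ESaIS8_EE11lower_boundERS7_ | c++filt
# -> std::_Rb_tree<std::string, std::pair<std::string const, std::string>,
#      std::_Select1st<...>, std::less<...>, std::allocator<...>
#    >::lower_bound(std::string const&)
# i.e. std::map<std::string, std::string>::lower_bound, reached from
# exiv2's XMP namespace registration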
On 05/01/2021 23.08, David C. Rankin wrote:
On 1/5/21 6:09 AM, Carlos E. R. wrote:
Program terminated with signal SIGSEGV, Segmentation fault. #0 std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
That's not something you are doing; the STL uses _Rb_tree to implement std::map. So the above is just the normal operation of whatever container is using the _Rb_tree, with a node type that looks like:
std::_Rb_tree < std::string, std::pair<const std::string, std::string>, std::_Select1st<std::pair<const std::string, std::string>>, std::less<std::string>, std::allocator<std::pair<const std::string, std::string>> >
So basically it's a
std::map<std::string, std::string>
The _Select1st, less, and allocator are just standard template components: how to select the first part of the pair (the key), the sort function, and the allocator for making more nodes.
If the Segfault begins there, then there is a pointer problem, either a nullptr or just a fsck'ed up address being returned for some part of that node.
This is more a side-effect of the problem rather than an identification of the cause.
Thanks for the analysis.
If it were me, I'd see if I could identify a process to strace (if that is possible). The thing gets started somehow. If it has a systemd unit file, take a look at the process that is being launched there and you can attach a strace to the PID of the running process if you can get to it before it crashes.
Judging from the log message: <3.4> 2021-01-05T12:52:14.011729+01:00 Telcontar systemd 6503 - - tracker-extract.service: Failed with result 'signal'. I can locate the service file: /usr/lib/systemd/user/tracker-extract.service And it contains: cer@Telcontar:~/tmp/coredump/core_info> cat /usr/lib/systemd/user/tracker-extract.service [Unit] Description=Tracker metadata extractor [Service] Type=dbus BusName=org.freedesktop.Tracker1.Miner.Extract ExecStart=/usr/lib/tracker-extract Restart=on-abnormal # Don't restart after tracker daemon -k (aka tracker-control -k) RestartPreventExitStatus=SIGKILL cer@Telcontar:~/tmp/coredump/core_info> However: cer@Telcontar:~/tmp/coredump/core_info> systemctl status tracker-extract.service Unit tracker-extract.service could not be found. cer@Telcontar:~/tmp/coredump/core_info> (same result as root) I did "killall tracker-store tracker-extract tracker-miner-apps tracker-miner-fs" so it is not restarting, and I don't know how to restart it. Maybe it will start on next boot.
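The unit lives under /usr/lib/systemd/user, so it belongs to the per-user systemd instance; the system manager (even as root) cannot see it, which would explain the "could not be found". A small sketch, assuming the same unit name:

# talk to the user manager, not the system one
systemctl --user status tracker-extract.service
# restarting it without logging out should also work:
systemctl --user start tracker-extract.service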
Some others more up on stracing may have more refined suggestions about what process to look for, or maybe even a way to have systemd start the strace for you when the process starts. If it's an /etc/xdg/autostart thing, then you may be able to replace the /etc/xdg/autostart/tracker... file with a shell script that is a wrapper that starts strace on whatever executable is normally in /etc/xdg/autostart/....
I'd just mv, e.g. /etc/xdg/autostart/tracker /etc/xdg/autostart/tracker_real and then write the shell script wrapper and save it as /etc/xdg/autostart/tracker. (or something similar)
I don't know if that helps, but that is at least the path I'd start down if I had the problem.
I'm not very much tempted to do anything... but thanks, much appreciated :-) If someone is really interested in correcting the source, I will help by providing all the information they ask for. But I don't fancy doing anything myself. I'm tired of this tracker-extract thing crashing year after year and I don't believe it is worth it :-( See Bug 1155165, reported Oct 2019, for instance. Still "New". It is the same problem, thousands of coredumps (the machine then was less powerful). Previously I reported Bug 1123869, January 2019. That one was solved. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 1/5/21 5:15 PM, Carlos E. R. wrote:
I'm not very much tempted to do anything... but thanks, much appreciated :-)
If someone is really interested in correcting the source, I will help by providing all the information they ask for. But I don't fancy doing anything myself. I'm tired of this tracker-extract thing crashing year after year and I don't believe it is worth it :-(
See Bug 1155165, reported Oct 2019, for instance. Still "New". It is the same problem, thousands of coredumps (the machine then was less powerful).
Previously I reported Bug 1123869, January 2019. That one was solved.
Chuckling... Treat it like the dreaded dog: shoot it in the head and be done with it. I've never needed beagle or tracker to find a file for me (though it has been tempting to find something like silver-searcher or another well-behaved indexer). If you were interested in trying the strace approach, I'd try editing the unit file and adding strace with, e.g. ExecStart=strace /usr/lib/tracker-extract At least then strace is started when tracker-extract starts. You may need to quote it, e.g. "strace /usr/lib/tracker-extract", but I'll have to defer to those better at unit file internals for that. -- David C. Rankin, J.D., P.E.
On 07/01/2021 00.05, David C. Rankin wrote:
On 1/5/21 5:15 PM, Carlos E. R. wrote:
I'm not very much tempted to do anything... but thanks, much appreciated :-)
If someone is really interested in correcting the source, I will help by providing all the information they ask for. But I don't fancy doing anything myself. I'm tired of this tracker-extract thing crashing year after year and I don't believe it is worth it :-(
See Bug 1155165, reported Oct 2019, for instance. Still "New". It is the same problem, thousands of coredumps (the machine then was less powerful).
Previously I reported Bug 1123869, January 2019. That one was solved.
Chuckling...
Treat it like the dreaded dog: shoot it in the head and be done with it. I've never needed beagle or tracker to find a file for me (though it has been tempting to find something like silver-searcher or another well-behaved indexer)
I find a content searcher useful sometimes, when I can't find a file whose name I don't remember and it is not in the directories I expected. But it is useless if it crashes and cannot create the index :-(
If you were interested in trying the strace approach, I'd try editing the unit file and adding strace with, e.g.
ExecStart=strace /usr/lib/tracker-extract
At least then strace is started when tracker-extract starts. You may need to quote it, e.g. "strace /usr/lib/tracker-extract" but I'll have to defer to those better at unit file internals for that.
Interesting. Yes, I now know how to do that. The problem is adding a limit, say ten runs; otherwise it floods the system. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
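One possible way to add such a limit is systemd's start rate limiting, via another drop-in; a sketch, assuming the user unit shown earlier (directive names vary slightly between systemd versions):

# ~/.config/systemd/user/tracker-extract.service.d/limit.conf
[Unit]
# allow at most ten starts per hour, then leave the unit in the failed state
StartLimitIntervalSec=1h
StartLimitBurst=10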
On 1/3/21 12:47 PM, Carlos E. R. wrote:
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract:
Telcontar:~ # coredumpctl | grep tracker-extract | wc -l 4563 Telcontar:~ #
Literally, thousands.
Is this really not known? Is it worth a bugzilla? I reported this problem on the past and it keeps coming...
Is this the report? It's for 15.1. Can you update it for 15.2? https://bugzilla.suse.com/show_bug.cgi?id=1155165 As for your question: whatever the cause, a segfault is a serious problem that should always be addressed. It should *not* happen. Whether it can be solved depends on whether there is enough information to reproduce the issue on the developer's machine. Can you attach the actual coredump (or 2 or 3) to the bug report? The /var/lib/systemd/coredump/core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.25653.1609673500000000.lz4 as an example. I'm assuming that your system is updated, rebooted, etc., and this still remains? If you create a new test user and log in with that user (create user, reboot, log in with the new user, don't touch the uid=1000 user), does the problem start again, or does it no longer occur? - Adam
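A sketch of that isolation test; the account name here is made up:

# as root: create a throwaway user that cannot read the uid=1000 files
useradd -m trackertest
passwd trackertest
reboot
# after logging in as trackertest, watch whether new dumps appear:
coredumpctl list | grep tracker-extract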
On 2021-01-06 at 16:10 +0100, Adam Majer wrote:
On 1/3/21 12:47 PM, Carlos E. R. wrote:
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract:
Telcontar:~ # coredumpctl | grep tracker-extract | wc -l 4563 Telcontar:~ #
Literally, thousands.
Is this really not known? Is it worth a bugzilla? I reported this problem on the past and it keeps coming...
Is this the report? It's for 15.1. Can you update it for 15.2?
Thanks for your interest. Yes, that's the report. Sure, I can update it. Notice that the original report was with a different CPU. That was Intel, now I use AMD.
As for your question: whatever the cause, a segfault is a serious problem that should always be addressed. It should *not* happen. Whether it can be solved depends on whether there is enough information to reproduce the issue on the developer's machine.
IMHO a core dump should give enough information to backtrace the problem without needing to reproduce the issue. And if it triggers on a particular file of mine, I can provide the file unless it is confidential.
Can you attach the actual coredump (or 2 or 3) to the bug report? The
/var/lib/systemd/coredump/core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.25653.1609673500000000.lz4
That one is gone... purged, I suppose. I have 9000 to choose from, anyway. Hum, not that many. cer@Telcontar:~> l /var/lib/systemd/coredump/ | grep tracker-extract | wc -l 387 cer@Telcontar:~> l /var/lib/systemd/coredump/ | wc -l 416 cer@Telcontar:~> total 4274600 drwxr-xr-x 2 root root 81920 Jan 6 20:59 ./ drwxr-xr-x 8 root root 4096 Nov 24 00:21 ../ -rw-r-----+ 1 root root 86430951 Jan 6 20:47 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.10033.1609962431000000.lz4 -rw-r-----+ 1 root root 86427151 Jan 6 20:47 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.10082.1609962462000000.lz4 -rw-r-----+ 1 root root 86918327 Jan 6 20:48 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.10132.1609962497000000.lz4 -rw-r-----+ 1 root root 86522137 Jan 6 20:48 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.10205.1609962528000000.lz4 ... -rw-r-----+ 1 root root 103008025 Jan 6 20:44 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.9101.1609962267000000.lz4 -rw-r-----+ 1 root root 86801900 Jan 6 20:45 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.9240.1609962303000000.lz4 -rw-r-----+ 1 root root 86421574 Jan 6 20:45 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.9668.1609962334000000.lz4 -rw-r-----+ 1 root root 86429305 Jan 6 20:46 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.9717.1609962366000000.lz4 -rw-r-----+ 1 root root 86796268 Jan 6 20:46 core.memcheck-amd64-.1000.431e0a53560949238ef3840406768db8.9976.1609962400000000.lz4 -rw-r-----+ 1 root root 5178439 Jan 5 02:56 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10017.1609811795000000.lz4 -rw-r-----+ 1 root root 5186656 Jan 5 02:56 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10043.1609811796000000.lz4 -rw-r-----+ 1 root root 5647889 Jan 5 02:58 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10114.1609811922000000.lz4 -rw-r-----+ 1 root root 5187646 Jan 5 02:58 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10143.1609811923000000.lz4 -rw-r-----+ 1 root root 5188110 Jan 5 02:58 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10169.1609811924000000.lz4 -rw-r-----+ 1 root root 5681785 Jan 5 02:58 core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.10195.1609811926000000.lz4 ... -rw-r-----+ 1 root root 8158768 Jan 5 12:52 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.16896.1609847528000000.lz4 -rw-r-----+ 1 root root 5186722 Jan 5 12:52 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.16961.1609847529000000.lz4 -rw-r-----+ 1 root root 5428161 Jan 5 12:52 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.16987.1609847531000000.lz4 -rw-r-----+ 1 root root 5172133 Jan 5 12:52 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.17014.1609847532000000.lz4 -rw-r-----+ 1 root root 8148217 Jan 5 12:52 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.17114.1609847533000000.lz4 -rw-r-----+ 1 root root 8120605 Jan 5 12:48 core.tracker-extract.1000.434b6b93ca3f4478b7cd87e981f4f7c4.8835.1609847334000000.lz4
as an example. I'm assuming that your system is updated, rebooted, etc., and this still remains?
Yes. The system was running Leap 15.1, where I had the issue killed, and as soon as I upgraded to 15.2 it started again. The system has been forcefully rebooted several times because hibernation crashes.
If you create a new test user and log in with that user (create user, reboot, log in with the new user, don't touch the uid=1000 user), does the problem start again, or does it no longer occur?
I don't know. That user will not have the same files. [...trying...] It crashes. User cer-g, uid 1030. +++........................ ● tracker-extract.service - Tracker metadata extractor Loaded: loaded (/usr/lib/systemd/user/tracker-extract.service; static; vendor preset: disabled) Active: failed (Result: signal) since Wed 2021-01-06 21:03:56 CET; 1min 56s ago Process: 13709 ExecStart=/usr/lib/tracker-extract (code=killed, signal=SEGV) Main PID: 13709 (code=killed, signal=SEGV) Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Unit entered failed state. Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Failed with result 'signal'. Jan 06 21:03:56 Telcontar systemd-coredump[13729]: Process 13709 (tracker-extract) of user 1030 dumped core. Stack trace of thread 13727: #0 0x00007fe62ddc5a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_S5_ESt10_Select1stIS8_ESt4lessIS5_ESaIS8_EE11lower_boundERS7_ (libexiv2.so.26) #1 0x00007fe62dda0a0b WXMPMeta_RegisterNamespace_1 (libexiv2.so.26) #2 0x00007fe62dd90344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26) #3 0x00007fe62dd8bba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26) #4 0x00007fe62dd8e539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26) #5 0x00007fe62dd74576 _ZN5Exiv28Internal11TiffDecoder9decodeXmpEPKNS0_13TiffEntryBaseE (libexiv2.so.26) #6 0x00007fe62dd5c7a6 _ZN5Exiv28Internal13TiffDirectory8doAcceptERNS0_11TiffVisitorE (libexiv2.so.26) #7 0x00007fe62dd65e9f _ZN5Exiv28Internal16TiffParserWorker6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhjjPFMNS0_11TiffDecoderEFvPKNS0_13TiffEntryBaseEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjNS0_5IfdIdEEPNS0_14TiffHeaderBaseE (libexiv2.so.26) #8 0x00007fe62dd65fe7 _ZN5Exiv210TiffParser6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhj (libexiv2.so.26) #9 0x00007fe62dd66179 _ZN5Exiv29TiffImage12readMetadataEv (libexiv2.so.26) #10 0x00007fe62e0e401d gexiv2_metadata_open_internal (libgexiv2.so.2) #11 0x00007fe62e0e426f gexiv2_metadata_open_path (libgexiv2.so.2) #12 0x00007fe62e2fe848 tracker_extract_get_metadata (libextract-raw.so) #13 0x0000556b6e095eac get_file_metadata (tracker-extract) #14 0x0000556b6e09649b get_metadata (tracker-extract) #15 0x0000556b6e096530 single_thread_get_metadata (tracker-extract) #16 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #17 0x00007fe6680224f9 start_thread (libpthread.so.0) #18 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13710: #0 0x00007fe667d506db __GI___poll (libc.so.6) #1 0x00007fe6687197b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007fe6687198cc g_main_context_iteration (libglib-2.0.so.0) #3 0x00007fe668719911 glib_worker_main (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13712: #0 0x00007fe667d506db __GI___poll (libc.so.6) #1 0x00007fe6687197b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007fe6687198cc g_main_context_iteration (libglib-2.0.so.0) #3 0x00007fe65f58f5bd dconf_gdbus_worker_thread (libdconfsettings.so) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13711: #0 0x00007fe667d506db __GI___poll (libc.so.6) #1 0x00007fe6687197b9 
g_main_context_poll (libglib-2.0.so.0) #2 0x00007fe668719b02 g_main_loop_run (libglib-2.0.so.0) #3 0x00007fe668f61b16 gdbus_shared_thread_func (libgio-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13713: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13714: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13709: #0 0x00007fe667d506db __GI___poll (libc.so.6) #1 0x00007fe6687197b9 g_main_context_poll (libglib-2.0.so.0) #2 0x00007fe668719b02 g_main_loop_run (libglib-2.0.so.0) #3 0x0000556b6e09307c main (tracker-extract) #4 0x00007fe667c8334a __libc_start_main (libc.so.6) #5 0x0000556b6e09316a _start (tracker-extract) Stack trace of thread 13715: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13716: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13717: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13718: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13719: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread 
(libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13720: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13721: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13724: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13726: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876518a g_cond_wait_until (libglib-2.0.so.0) #2 0x00007fe6686eac51 g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe6687437a6 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Stack trace of thread 13722: #0 0x00007fe667d55839 syscall (libc.so.6) #1 0x00007fe66876506f g_cond_wait (libglib-2.0.so.0) #2 0x00007fe6686eac6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0) #3 0x00007fe668743845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0) #4 0x00007fe668742dce g_thread_proxy (libglib-2.0.so.0) #5 0x00007fe6680224f9 start_thread (libpthread.so.0) #6 0x00007fe667d5afbf __clone (libc.so.6) Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Start request repeated too quickly. Jan 06 21:03:56 Telcontar systemd[12586]: Failed to start Tracker metadata extractor. Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Unit entered failed state. Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Failed with result 'signal'. Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Start request repeated too quickly. Jan 06 21:03:56 Telcontar systemd[12586]: Failed to start Tracker metadata extractor. Jan 06 21:03:56 Telcontar systemd[12586]: tracker-extract.service: Failed with result 'signal'. ........................++- Hum. Now that I think, this cer-g user has access to the same photos. I will try another user. [...] Right, user cer-g2 doesn't crash. +++........................ 
● tracker-extract.service - Tracker metadata extractor Loaded: loaded (/usr/lib/systemd/user/tracker-extract.service; static; vendor preset: disabled) Active: inactive (dead) Jan 06 21:17:47 Telcontar tracker-extract[17650]: Locale 'LANG' is not set, defaulting to C locale Jan 06 21:17:47 Telcontar tracker-extract[17650]: Locale 'LANG' is not set, defaulting to C locale Jan 06 21:17:47 Telcontar tracker-extract[17650]: Unknown desktop entry type 'Service' Jan 06 21:17:48 Telcontar tracker-extract[17650]: Unknown desktop entry type 'Service' Jan 06 21:17:48 Telcontar tracker-extract[17650]: Unknown desktop entry type 'Application' ........................++- cer@Telcontar:~> tree ../cer-g/ ../cer-g/ ├── Desktop │ ├── MozillaFirefox.desktop │ ├── Office.desktop │ ├── SuSE.desktop │ ├── Support.desktop │ └── kinfocenter.desktop ├── Documents -> /home_aux/cer-g/Documents ├── Downloads -> /home_aux/cer-g/Downloads ├── Mail [error opening dir] ├── Music -> /home_aux/cer-g/Music ├── Pictures -> /home_aux/cer-g/Pictures ├── Public ├── Templates ├── Videos -> /home_aux/cer-g/Videos ├── bin ├── p └── public_html 11 directories, 6 files cer@Telcontar:~> cer@Telcontar:~> tree ../cer-g2/ ../cer-g2/ ├── Desktop ├── Documents ├── Downloads -> /home_aux/cer-g2/Downloads ├── Music ├── Pictures ├── Public ├── Templates ├── Videos ├── bin ├── p └── public_html 10 directories, 1 file cer@Telcontar:~> -- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar)
On 06/01/2021 21.25, Carlos E. R. wrote:
On 2021-01-06 at 16:10 +0100, Adam Majer wrote:
On 1/3/21 12:47 PM, Carlos E. R. wrote:
Is this the report? It's for 15.1. Can you update it for 15.2?
Thanks for your interest. Yes, that's the report. Sure, I can update it.
Done. I can not upload the coredumps with valgrind info; they are 80 MB each, compressed, and the maximum is 10 megs. So I uploaded others, actually belonging to a different user. The system is apparently constantly purging the files; the most recent is dated an hour and a half ago. I saved a few to another directory under home. I have no idea if the coredump says which files it is having problems with. If you find out, I can provide samples, unless they are confidential files. I suspect they are photos. [...] Ah, confirmed. I looked inside a valgrind dump with midnight commander, and saw them: /home/cer/Pictures/2018/07/DSC_5876_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5877_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5878_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5880_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5879_NEF_shotwell_12.jpg All have zero bytes. Well, you now have a wild guess: tracker doesn't check that the files have zero bytes and chokes on them. (the normal operation of shotwell can produce them by the hundreds) file:///home/cer/Pictures/2018/07/DSC_5882_NEF_shotwell_12.jpg...............b..p... ............... .file:///home/cer/Pictures/2018/07/DSC_5 883_NEF_shotwell_12.jpg...............b..o... ............... .file:///home/cer/Pictures/2018/07/DSC_5881_NEF_shotwell_12.jpg...............b..n... ............ ... .file:///home/cer/Pictures/2018/07 but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
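That wild guess is cheap to test by hand; a sketch (the pattern comes from the file names above, and the /tmp path is made up):

# count the zero-byte shotwell leftovers the indexer would meet
find ~/Pictures -name '*_shotwell_*.jpg' -size 0 | wc -l
# feed the extractor one synthetic zero-byte jpeg and see if it dies
truncate -s 0 /tmp/empty_test.jpg
/usr/lib/tracker-extract /tmp/empty_test.jpg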
* Carlos E. R. <robin.listas@telefonica.net> [01-06-21 16:49]: [...]
I can not upload the coredumps with valgrind info; they are 80 MB each, compressed, and the maximum is 10 megs. So I uploaded others, actually belonging to a different user. The system is apparently constantly purging the files; the most recent is dated an hour and a half ago. I saved a few to another directory under home.
I have no idea if the coredump says which files it is having problems with. If you find out, I can provide samples, unless they are confidential files. I suspect they are photos.
[...]
Ah, confirmed. I looked inside a valgrind dump with midnight commander, and saw them:
/home/cer/Pictures/2018/07/DSC_5876_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5877_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5878_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5880_NEF_shotwell_12.jpg file:///home/cer/Pictures/2018/07/DSC_5879_NEF_shotwell_12.jpg
All have zero bytes. Well, you now have a wild guess: tracker doesn't check that the files have zero bytes and chokes on them.
(the normal operation of shotwell can produce them by the hundreds)
file:///home/cer/Pictures/2018/07/DSC_5882_NEF_shotwell_12.jpg...............b..p... ............... .file:///home/cer/Pictures/2018/07/DSC_5 883_NEF_shotwell_12.jpg...............b..o... ............... .file:///home/cer/Pictures/2018/07/DSC_5881_NEF_shotwell_12.jpg...............b..n... ............ ... .file:///home/cer/Pictures/2018/07
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease? note: as I am currently under censorship and my posts suffer great time delays, I have copied to you personally. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package. On 15.1 I had the issue, then did something to kill it permanently, which I have forgotten (same with 15.0, I think). When I upgraded to 15.2, the issue came back. What I do is "killall tracker-store tracker-miner-fs tracker-extract", which works till I login again. This is acceptable while there are people interested in finding out what is going on.
note: as I am currently under censorship and my posts suffer great time delays, I have copied to you personally.
No need to mention it :-) -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
* Carlos E. R. <robin.listas@telefonica.net> [01-06-21 17:08]:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation. note: as I am currently under censorship and my posts suffer great time delays, I have copied to you personally. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On 06/01/2021 23.47, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 17:08]:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation.
Nope. I can perhaps make it not scan photos, but that is not really a help. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
* Carlos E. R. <robin.listas@telefonica.net> [01-06-21 19:13]:
On 06/01/2021 23.47, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 17:08]:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation.
Nope. I can perhaps make it not scan photos, but that is not really a help.
it is really a help if it helps to define your problem note: as I am currently under censorship and my posts suffer great time delays, I have copied to you personally. -- (paka)Patrick Shanahan Plainfield, Indiana, USA @ptilopteri http://en.opensuse.org openSUSE Community Member facebook/ptilopteri Photos: http://wahoo.no-ip.org/piwigo paka @ IRCnet freenode
On 07/01/2021 01.22, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 19:13]:
On 06/01/2021 23.47, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 17:08]:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 megs and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation.
Nope. I can perhaps make it not scan photos, but that is not really a help.
it is really a help if it helps to define your problem
But it does not. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On 07/01/2021 01.22, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 19:13]:
On 06/01/2021 23.47, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 17:08]:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
> but another valgrind file mentions > "file:///home/cer/Pictures/2018/08/DSC_6196.JPG" which has 13 > megs and was created by my camera. >
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation.
Nope. I can perhaps make it not scan photos, but that is not really a help.
it is really a help if it helps to define your problem
But it does not.
Can you perhaps elaborate on that? You have a suspect filetype; omitting it from indexing to see if that solves the problem sounds like a good idea. Maybe tracker can also be instructed to skip certain paths, I don't know, but that way it might be possible to steer it away from your 0-byte jpegs (assuming you need/want them). -- Per Jessen, Zürich (0.2°C) http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
On Thursday, 2021-01-07 at 09:27 +0100, Per Jessen wrote:
Carlos E. R. wrote:
On 07/01/2021 01.22, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 19:13]:
Sure, but then the issue can not be investigated. I can also remove and lock out the package.
but you *can* narrow the field where the problem exists and the smaller the scope, the closer to a possible solution. or mitigation.
Nope. I can perhaps make it not scan photos, but that is not really a help.
it is really a help if it helps to define your problem
But it does not.
Can you perhaps elaborate on that? You have a suspect filetype; omitting it from indexing to see if that solves the problem sounds like a good idea.
I'm not interested in making the coredumps stop, but in finding the cause. I can make them stop simply by killing the process.
Maybe tracker can also be instructed to skip certain paths, I don't know, but that way it might be possible to steer it away from your 0-byte jpegs (assuming you need/want them).
I have forgotten how to do that... cer@Telcontar:~> apropos tracker bonobo-activation-server (1) - GNOME component tracker git (1) - the stupid content tracker Hey, it says "stupid" ;-) Log::Log4perl::Util::TimeTracker (3pm) - Track time elapsed tracker-daemon (1) - Start, stop, restart and list daemons responsible for indexing content tracker-extract (1) - Extract metadata from a file. tracker-index (1) - List, pause, resume and command data miners indexing content tracker-info (1) - Retrieve all information available for a certain file. tracker-miner-fs (1) - Used to crawl the file system to mine data. tracker-reset (1) - Reset the index and configuration tracker-search (1) - Search for content by type or across all types tracker-sparql (1) - Use SparQL to query the Tracker databases. tracker-sql (1) - Use SQL to query the Tracker databases. tracker-status (1) - Provide status and statistics on the data indexed <==== tracker-store (1) - database indexer and query daemon tracker-tag (1) - Add, remove and list tags. tracker-writeback (1) - Used to write metadata set in Tracker back to physical files. cer@Telcontar:~> tracker-status If 'tracker-status' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-status cer@Telcontar:~> cer@Telcontar:~> cnf tracker-status tracker-status: command not found Good grief. There is a man page, but the command is missing. Anyway, there is nothing about configuration. cer@Telcontar:~> tracker-search hello If 'tracker-search' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-search cer@Telcontar:~> cnf tracker-search tracker-search: command not found cer@Telcontar:~> cer@Telcontar:~> tracker-tag If 'tracker-tag' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-tag cer@Telcontar:~> cnf tracker-tag tracker-tag: command not found cer@Telcontar:~> Huh? The tracker rpm contains the man pages, but not the binaries. Everything with "tracker" in the filename is installed. No, "request-tracker" is not installed (ticket system). -- Cheers, Carlos E. R. (from openSUSE 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On Thursday, 2021-01-07 at 09:27 +0100, Per Jessen wrote:
Carlos E. R. wrote:
On 07/01/2021 01.22, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 19:13]:
it is really a help if it helps to define your problem
But it does not.
Can you perhaps elaborate on that? You have a suspect filetype; omitting it from indexing to see if that solves the problem sounds like a good idea.
I'm not interested in making the coredumps stop, but in finding the cause.
Which is _precisely_ what the above will help do. a) omit suspect filetype, observe if dumps stop. b1) if yes, culprit found, efforts can be focused on that. b2) if no, suspect filetype is innocent, search continues.
tracker-sql (1) - Use SQL to query the Tracker databases. tracker-status (1) - Provide status and statistics on the data indexed <====
cer@Telcontar:~> cnf tracker-status tracker-status: command not found
Good grief. There is a man page, but the command is missing.
tracker status
Anyway, there is nothing about configuration.
Google suggests: https://askubuntu.com/questions/1012772/how-do-you-configure-tracker-search-...
cer@Telcontar:~> tracker-search hello If 'tracker-search' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-search cer@Telcontar:~> cnf tracker-search tracker-search: command not found
If you just enter 'tracker', it shows you that status, search, tag etc are options. -- Per Jessen, Zürich (0.7°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 07/01/2021 13.47, Per Jessen wrote:
Carlos E. R. wrote:
On Thursday, 2021-01-07 at 09:27 +0100, Per Jessen wrote:
tracker-sql (1) - Use SQL to query the Tracker databases. tracker-status (1) - Provide status and statistics on the data indexed <====
cer@Telcontar:~> cnf tracker-status tracker-status: command not found
Good grief. There is a man page, but the command is missing.
tracker status
Ouch.
Anyway, there is nothing about configuration.
Google suggests: https://askubuntu.com/questions/1012772/how-do-you-configure-tracker-search-...
Thanks. cer@Telcontar:~> tracker-gui If 'tracker-gui' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-gui cer@Telcontar:~> cnf tracker-gui tracker-gui: command not found cer@Telcontar:~> cer@Telcontar:~> tracker-preferences If 'tracker-preferences' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-preferences cer@Telcontar:~> tracker preferences “preferences” is not a tracker command. See “tracker --help” cer@Telcontar:~> Good grief, now you have to use dconf... aka regedit for GNOME. OK, I added *.JPG and *.NEF. But I have to log out/in to make it run; it is killed now. Trying "tracker daemon start". Nope, doesn't work. I could do a tracker reset...
cer@Telcontar:~> tracker-search hello If 'tracker-search' is not a typo you can use command-not-found to lookup the package that contains it, like this: cnf tracker-search cer@Telcontar:~> cnf tracker-search tracker-search: command not found
If you just enter 'tracker', it shows you that status, search, tag etc are options.
-- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
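For reference, that dconf change can also be made from a script; a sketch, assuming the GSettings schema used by the Tracker version shipped with Leap 15.2 (schema and key names may differ in other Tracker versions):

# show the current exclusion list
gsettings get org.freedesktop.Tracker.Miner.Files ignored-files
# setting replaces the whole list, so keep the existing entries and append:
gsettings set org.freedesktop.Tracker.Miner.Files ignored-files "['*.JPG', '*.NEF']"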
On 07/01/2021 14.11, Carlos E. R. wrote:
On 07/01/2021 13.47, Per Jessen wrote:
Carlos E. R. wrote:
On Thursday, 2021-01-07 at 09:27 +0100, Per Jessen wrote:
Anyway, there is nothing about configuration.
Google suggests: https://askubuntu.com/questions/1012772/how-do-you-configure-tracker-search-...
Thanks.
...
OK, I added *.JPG and *.NEF.
Log out, log in... keeps crashing. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 1/7/21 2:39 PM, Carlos E. R. wrote:
On 07/01/2021 14.11, Carlos E. R. wrote:
Log out, log in... keeps crashing.
You could always run `tracker extract` on all of the media in turn and see if it crashes. For example,
find -name \*.png -exec tracker extract {} \;
find -name \*.mp4 -exec tracker extract {} \;
etc.
- Adam
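A sketch of a loop built on that idea which records the file each failure happens on (the photo paths match this thread; the log file name is made up):

#!/bin/bash
# walk the photo tree and note every file on which the extractor fails
shopt -s globstar nullglob
for f in ~/Pictures/**/*.JPG ~/Pictures/**/*.NEF; do
    tracker extract "$f" > /dev/null 2>&1 || echo "extractor failed ($?) on: $f" >> /tmp/tracker-crashes.log
done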
On Thursday, 2021-01-07 at 15:23 +0100, Adam Majer wrote:
On 1/7/21 2:39 PM, Carlos E. R. wrote:
On 07/01/2021 14.11, Carlos E. R. wrote:
Log out, log in... keeps crashing.
You could always run the `tracker extract` on all of the media in turn and see if it crashes.
For example,
find -name \*.png -exec tracker extract {} \;
find -name \*.mp4 -exec tracker extract {} \;
Ok.

cer@Telcontar:~/Pictures/2020/12> /usr/lib/tracker-extract DSC_a2275.JPG

(tracker-extract:32527): GStreamer-WARNING **: 16:00:40.455: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.
Segmentation fault (core dumped)
cer@Telcontar:~/Pictures/2020/12>

cer@Telcontar:~> coredumpctl info 32527 > p

+++····················
           PID: 32527 (tracker-extract)
           UID: 1000 (cer)
           GID: 100 (users)
        Signal: 11 (SEGV)
     Timestamp: Sun 2021-01-03 19:08:22 CET (3 days ago)
  Command Line: /usr/lib/tracker-extract
    Executable: /usr/lib/tracker-extract
 Control Group: /user.slice/user-1000.slice/user@1000.service/tracker-extract.service
          Unit: user@1000.service
     User Unit: tracker-extract.service
         Slice: user-1000.slice
     Owner UID: 1000 (cer)
       Boot ID: 2ef60a9b78a14f8aa0ad0315a348b17c
    Machine ID: 2ce1d54548517a7307c1c2bc38206d00
      Hostname: Telcontar
       Storage: /var/lib/systemd/coredump/core.tracker-extract.1000.2ef60a9b78a14f8aa0ad0315a348b17c.32527.1609697302000000.lz4 (inaccessible)
       Message: Process 32527 (tracker-extract) of user 1000 dumped core.

Stack trace of thread 32543:
#0 0x00007f79906d8a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_S5_ESt10_Select1stIS8_ESt4lessIS5_ESaIS8_EE11lower_boundERS7_ (libexiv2.so.26)
#1 0x00007f79906b3a0b WXMPMeta_RegisterNamespace_1 (libexiv2.so.26)
#2 0x00007f79906a3344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26)
#3 0x00007f799069eba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26)
#4 0x00007f79906a1539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26)
#5 0x00007f7990687576 _ZN5Exiv28Internal11TiffDecoder9decodeXmpEPKNS0_13TiffEntryBaseE (libexiv2.so.26)
#6 0x00007f799066f7a6 _ZN5Exiv28Internal13TiffDirectory8doAcceptERNS0_11TiffVisitorE (libexiv2.so.26)
#7 0x00007f7990678e9f _ZN5Exiv28Internal16TiffParserWorker6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhjjPFMNS0_11TiffDecoderEFvPKNS0_13TiffEntryBaseEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjNS0_5IfdIdEEPNS0_14TiffHeaderBaseE (libexiv2.so.26)
#8 0x00007f7990678fe7 _ZN5Exiv210TiffParser6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhj (libexiv2.so.26)
#9 0x00007f7990679179 _ZN5Exiv29TiffImage12readMetadataEv (libexiv2.so.26)
#10 0x00007f79909f701d gexiv2_metadata_open_internal (libgexiv2.so.2)
#11 0x00007f79909f726f gexiv2_metadata_open_path (libgexiv2.so.2)
#12 0x00007f7990c11848 tracker_extract_get_metadata (libextract-raw.so)
#13 0x000055aec7931eac get_file_metadata (tracker-extract)
#14 0x000055aec793249b get_metadata (tracker-extract)
#15 0x000055aec7932530 single_thread_get_metadata (tracker-extract)
#16 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#17 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#18 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32527:
#0 0x00007f79d66586db __GI___poll (libc.so.6)
#1 0x00007f79d70217b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f79d7021b02 g_main_loop_run (libglib-2.0.so.0)
#3 0x000055aec792f07c main (tracker-extract)
#4 0x00007f79d658b34a __libc_start_main (libc.so.6)
#5 0x000055aec792f16a _start (tracker-extract)

Stack trace of thread 32531:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32529:
#0 0x00007f79d66586db __GI___poll (libc.so.6)
#1 0x00007f79d70217b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f79d7021b02 g_main_loop_run (libglib-2.0.so.0)
#3 0x00007f79d7869b16 gdbus_shared_thread_func (libgio-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32528:
#0 0x00007f79d66586db __GI___poll (libc.so.6)
#1 0x00007f79d70217b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f79d70218cc g_main_context_iteration (libglib-2.0.so.0)
#3 0x00007f79d7021911 glib_worker_main (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32530:
#0 0x00007f79d66586db __GI___poll (libc.so.6)
#1 0x00007f79d70217b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f79d70218cc g_main_context_iteration (libglib-2.0.so.0)
#3 0x00007f79cde975bd dconf_gdbus_worker_thread (libdconfsettings.so)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32542:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d18a g_cond_wait_until (libglib-2.0.so.0)
#2 0x00007f79d6ff2c51 g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b7a6 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32537:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32536:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32533:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32532:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32534:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32541:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32538:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32539:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32540:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

Stack trace of thread 32535:
#0 0x00007f79d665d839 syscall (libc.so.6)
#1 0x00007f79d706d06f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f79d6ff2c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f79d704b845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f79d704adce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f79d692a4f9 start_thread (libpthread.so.0)
#6 0x00007f79d6662fbf __clone (libc.so.6)

           PID: 32527 (tracker-extract)
           UID: 1000 (cer)
           GID: 100 (users)
        Signal: 11 (SEGV)
     Timestamp: Thu 2021-01-07 16:00:44 CET (5min ago)
  Command Line: /usr/lib/tracker-extract DSC_a2275.JPG
    Executable: /usr/lib/tracker-extract
 Control Group: /user.slice/user-1000.slice/session-327.scope
          Unit: session-327.scope
         Slice: user-1000.slice
       Session: 327
     Owner UID: 1000 (cer)
       Boot ID: 431e0a53560949238ef3840406768db8
    Machine ID: 2ce1d54548517a7307c1c2bc38206d00
      Hostname: Telcontar
       Storage: /var/lib/systemd/coredump/core.tracker-extract.1000.431e0a53560949238ef3840406768db8.32527.1610031644000000.lz4
       Message: Process 32527 (tracker-extract) of user 1000 dumped core.

Stack trace of thread 32676:
#0 0x00007f0bf0d19a0c _ZNSt8_Rb_treeINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_S5_ESt10_Select1stIS8_ESt4lessIS5_ESaIS8_EE11lower_boundERS7_ (libexiv2.so.26)
#1 0x00007f0bf0cf4a0b WXMPMeta_RegisterNamespace_1 (libexiv2.so.26)
#2 0x00007f0bf0ce4344 _ZN8TXMPMetaINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEE17RegisterNamespaceEPKcS8_ (libexiv2.so.26)
#3 0x00007f0bf0cdfba6 _ZN5Exiv29XmpParser10initializeEPFvPvbES1_ (libexiv2.so.26)
#4 0x00007f0bf0ce2539 _ZN5Exiv29XmpParser6decodeERNS_7XmpDataERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE (libexiv2.so.26)
#5 0x00007f0bf0cc8576 _ZN5Exiv28Internal11TiffDecoder9decodeXmpEPKNS0_13TiffEntryBaseE (libexiv2.so.26)
#6 0x00007f0bf0cb07a6 _ZN5Exiv28Internal13TiffDirectory8doAcceptERNS0_11TiffVisitorE (libexiv2.so.26)
#7 0x00007f0bf0cb9e9f _ZN5Exiv28Internal16TiffParserWorker6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhjjPFMNS0_11TiffDecoderEFvPKNS0_13TiffEntryBaseEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjNS0_5IfdIdEEPNS0_14TiffHeaderBaseE (libexiv2.so.26)
#8 0x00007f0bf0cb9fe7 _ZN5Exiv210TiffParser6decodeERNS_8ExifDataERNS_8IptcDataERNS_7XmpDataEPKhj (libexiv2.so.26)
#9 0x00007f0bf0cba179 _ZN5Exiv29TiffImage12readMetadataEv (libexiv2.so.26)
#10 0x00007f0bf103801d gexiv2_metadata_open_internal (libgexiv2.so.2)
#11 0x00007f0bf103826f gexiv2_metadata_open_path (libgexiv2.so.2)
#12 0x00007f0bf1252848 tracker_extract_get_metadata (libextract-raw.so)
#13 0x00005634eb949eac get_file_metadata (tracker-extract)
#14 0x00005634eb94a49b get_metadata (tracker-extract)
#15 0x00005634eb94a530 single_thread_get_metadata (tracker-extract)
#16 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#17 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#18 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32532:
#0 0x00007f0c3ac1b6db __GI___poll (libc.so.6)
#1 0x00007f0c3b5e47b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f0c3b5e48cc g_main_context_iteration (libglib-2.0.so.0)
#3 0x00007f0c3245a5bd dconf_gdbus_worker_thread (libdconfsettings.so)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32529:
#0 0x00007f0c3ac1b6db __GI___poll (libc.so.6)
#1 0x00007f0c3b5e47b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f0c3b5e48cc g_main_context_iteration (libglib-2.0.so.0)
#3 0x00007f0c3b5e4911 glib_worker_main (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32530:
#0 0x00007f0c3ac1b6db __GI___poll (libc.so.6)
#1 0x00007f0c3b5e47b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f0c3b5e4b02 g_main_loop_run (libglib-2.0.so.0)
#3 0x00007f0c3be2cb16 gdbus_shared_thread_func (libgio-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32541:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32534:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32535:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32640:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b5b6097 g_async_queue_pop (libglib-2.0.so.0)
#4 0x00005634eb94a528 single_thread_get_metadata (tracker-extract)
#5 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#6 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#7 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32536:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32537:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32533:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32538:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32639:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b5b6097 g_async_queue_pop (libglib-2.0.so.0)
#4 0x00005634eb94a528 single_thread_get_metadata (tracker-extract)
#5 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#6 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#7 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32624:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63018a g_cond_wait_until (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c51 g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e7a6 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32539:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32542:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32527:
#0 0x00007f0c3ac1b6db __GI___poll (libc.so.6)
#1 0x00007f0c3b5e47b9 g_main_context_poll (libglib-2.0.so.0)
#2 0x00007f0c3b5e4b02 g_main_loop_run (libglib-2.0.so.0)
#3 0x00005634eb94707c main (tracker-extract)
#4 0x00007f0c3ab4e34a __libc_start_main (libc.so.6)
#5 0x00005634eb94716a _start (tracker-extract)

Stack trace of thread 32540:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)

Stack trace of thread 32621:
#0 0x00007f0c3ac20839 syscall (libc.so.6)
#1 0x00007f0c3b63006f g_cond_wait (libglib-2.0.so.0)
#2 0x00007f0c3b5b5c6b g_async_queue_pop_intern_unlocked (libglib-2.0.so.0)
#3 0x00007f0c3b60e845 g_thread_pool_wait_for_new_task (libglib-2.0.so.0)
#4 0x00007f0c3b60ddce g_thread_proxy (libglib-2.0.so.0)
#5 0x00007f0c3aeed4f9 start_thread (libpthread.so.0)
#6 0x00007f0c3ac25fbf __clone (libc.so.6)
····················++-

gdb

[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/lib/tracker-extract DSC_a2275.JPG'.
Program terminated with signal SIGSEGV, Segmentation fault.
Missing separate debuginfos, use: zypper install Mesa-libva-debuginfo-19.3.4-lp152.27.1.x86_64 gstreamer-plugins-vaapi-debuginfo-1.16.2-lp152.1.2.x86_64 libLLVM9-debuginfo-9.0.1-lp152.1.4.x86_64 libgbm1-debuginfo-19.3.4-lp152.27.1.x86_64 libgstallocators-1_0-0-debuginfo-1.16.2-lp152.2.16.x86_64 libgstgl-1_0-0-debuginfo-1.16.2-lp152.2.16.x86_64 libva-wayland2-debuginfo-2.5.0-lp152.2.3.x86_64 libwayland-server0-debuginfo-1.18.0-lp152.6.3.x86_64 libxcb-xfixes0-debuginfo-1.13-lp152.5.3.1.x86_64
--Type <RET> for more, q to quit, c to continue without paging--c
#0 std::_Rb_tree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::_Select1st<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::lower_bound (__k=..., this=<optimized out>) at /usr/include/c++/7/bits/stl_tree.h:1187
warning: Source file is more recent than executable.
1187        { return _M_lower_bound(_M_begin(), _M_end(), __k); }
[Current thread is 1 (Thread 0x7f0bd2550700 (LWP 32676))]
(gdb) q
cer@Telcontar:~>

--
Cheers,
Carlos E. R.
(from openSUSE 15.2 x86_64 at Telcontar)
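Both dumps die in the same place: Exiv2's XMP namespace registration (TXMPMeta::RegisterNamespace via Exiv2::XmpParser::initialize), reached from gexiv2_metadata_open_path. One way to narrow this down independently of tracker is to poke the same file with the exiv2 command-line tool and then re-run the extractor under gdb with symbols for the faulting library. A sketch, with the debuginfo package name assumed rather than verified:

  # Does exiv2 itself choke on the file's XMP? (-pX dumps the raw XMP packet)
  exiv2 -pX ~/Pictures/2020/12/DSC_a2275.JPG

  # Re-run the extractor under gdb with exiv2 debug symbols installed
  # (package name assumed; gdb will suggest the exact ones it wants):
  sudo zypper install libexiv2-26-debuginfo
  cd ~/Pictures/2020/12
  gdb --args /usr/lib/tracker-extract DSC_a2275.JPG
  (gdb) run
  (gdb) bt full        # full backtrace of the faulting thread

Alternatively, "coredumpctl gdb 32527" opens the stored core directly instead of reproducing the crash.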
Carlos E. R. wrote:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG", which is 13 MB and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue cannot be investigated. I can also remove and lock out the package.
You're putting up unnecessary hurdles -

Disable indexing of jpegs.
Wait to see if the coredumps stop.
Whether yes or no, report it.
Re-enable indexing of jpegs (if you want to).
Resume investigation.

--
Per Jessen, Zürich (0.2°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 07/01/2021 09.24, Per Jessen wrote:
Carlos E. R. wrote:
On 06/01/2021 22.56, Patrick Shanahan wrote:
* Carlos E. R. <> [01-06-21 16:49]: [...]
but another valgrind file mentions "file:///home/cer/Pictures/2018/08/DSC_6196.JPG", which is 13 MB and was created by my camera.
since tracker appears to be bewildered by jpg/JPG files, why not disable the indexing of jpg/JPG files and see if the coredumps cease?
Sure, but then the issue cannot be investigated. I can also remove and lock out the package.
You're putting up unnecessary hurdles -
Disable indexing of jpegs.
Wait to see if the coredumps stop.
Whether yes or no, report it.
Re-enable indexing of jpegs (if you want to).
Resume investigation.
Ok, put that way, I buy it :-)

How? The executables are missing from the distribution (see my post of a minute ago), and I see no information on how to configure it.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
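For what it's worth, tracker 2.x has no classic config file; the file miner is configured through GSettings. A minimal sketch of Per's first step, assuming the org.freedesktop.Tracker.Miner.Files schema that Leap 15.2 ships and that these globs match how the camera names its files:

  # Current ignore patterns (note: 'set' below replaces this list, so
  # append to its output rather than start from scratch if the defaults matter):
  gsettings get org.freedesktop.Tracker.Miner.Files ignored-files

  # Tell the miner to skip jpegs so they never reach tracker-extract:
  gsettings set org.freedesktop.Tracker.Miner.Files ignored-files \
      "['*.jpg', '*.JPG', '*.jpeg', '*.JPEG']"

  # Stop the running miners; they pick up the new setting when restarted:
  tracker daemon --terminate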
On 03.01.21 at 12:47, Carlos E. R. wrote:
Hi,
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract
Is it this bug? https://bugzilla.gnome.org/show_bug.cgi?id=793210

I asked them to remember which files crash the extraction tool and skip them. The bug has been idle since Feb 2018.

Workarounds:

1. Configure the tool to skip all folders which contain the broken files.
2. Disable the desktop search.

Regards,

--
Aaron "Optimizer" Digulla a.k.a. Philmann Dark
"It's not the universe that's limited, it's our imagination. Follow me and I'll show you something beyond the limits."
http://blog.pdark.de/
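Both workarounds map onto the same GSettings schema as above (again assuming tracker 2.x; tracker-extract.service appears in the coredumpctl output, while the miner unit name here is an assumption):

  # Workaround 1: skip whole folders, e.g. stop indexing XDG Pictures
  # by removing '&PICTURES' from the default set:
  gsettings get org.freedesktop.Tracker.Miner.Files index-recursive-directories
  gsettings set org.freedesktop.Tracker.Miner.Files index-recursive-directories \
      "['&DESKTOP', '&DOCUMENTS', '&DOWNLOAD', '&MUSIC', '&VIDEOS']"

  # Workaround 2: disable the desktop search outright by masking the user units:
  systemctl --user stop tracker-miner-fs.service tracker-extract.service
  systemctl --user mask tracker-miner-fs.service tracker-extract.service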
On 07/01/2021 17.52, Aaron Digulla wrote:
On 03.01.21 at 12:47, Carlos E. R. wrote:
Hi,
I upgraded to 15.2 on 2020-12-31, and since then I have been flooded with coredumps from tracker-extract
Is it this bug?
https://bugzilla.gnome.org/show_bug.cgi?id=793210
I asked them to remember which files crash the extraction tool and skip them. Bug has been idle since Feb 2018.
I don't know if it is that bug, because if it is failing on my photos, I have many thousands of them. It could try them one after the other and crash on every one...
Workarounds:
1. Configure the tool to skip all folders which contain the broken files
2. Disable the desktop search
3. Remove the package.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
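A sketch of option 3 with zypper, assuming /usr/lib/tracker-extract belongs to the tracker-miners package (rpm -qf confirms the owner); the lock stops the next update from reinstalling it:

  # Which package owns the crashing binary?
  rpm -qf /usr/lib/tracker-extract

  # Remove it and lock it out (tracker-miners is an assumed name; use
  # whatever the query above reports):
  sudo zypper remove tracker-miners
  sudo zypper addlock tracker-miners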
participants (7)
- Aaron Digulla
- Adam Majer
- Carlos E. R.
- Dave Howorth
- David C. Rankin
- Patrick Shanahan
- Per Jessen