Hello community,

here is the log from the commit of package python-beautifulsoup for
openSUSE:Factory checked in at Tue Jul 6 10:00:07 CEST 2010.

--------
New Changes file:

--- /dev/null	2010-05-08 11:31:08.000000000 +0200
+++ /mounts/work_src_done/STABLE/python-beautifulsoup/python-beautifulsoup.changes	2010-07-06 09:56:10.000000000 +0200
@@ -0,0 +1,81 @@
+-------------------------------------------------------------------
+Tue Jul  6 07:56:02 UTC 2010 - coolo@novell.com
+
+- fix dates in changelog
+
+-------------------------------------------------------------------
+Sat Apr 10 19:20:07 UTC 2010 - alexandre@exatati.com.br
+
+- Update to 3.0.8.1;
+- Spec file cleaned with spec-cleaner.
+
+-------------------------------------------------------------------
+Fri Jan  8 18:38:49 UTC 2010 - alexandre@exatati.com.br
+
+- Update to 3.0.8;
+- Building as noarch for openSUSE >= 11.2.
+
+-------------------------------------------------------------------
+Fri Dec  9 00:00:00 BRST 2008 - cfarrell1980@gmail.com
+
+- Update to 3.0.7a
+  - Release 3.0.7a (2008/07/03)
+    - Added an import that makes BS work in Python 2.3.
+
+  - Release 3.0.7 (2008/06/22)
+    - Fixed a UnicodeDecodeError when unpickling documents that contain
+      non-ASCII characters.
+    - Fixed a TypeError that occurred in some circumstances when a tag
+      contained no text.
+    - Jump through hoops to avoid the use of chardet, which can be slow
+      in some circumstances. UTF-8 documents should never trigger the
+      use of chardet.
+    - Whitespace is preserved inside <pre> and <textarea> tags that
+      contain nothing but whitespace.
+    - Beautiful Soup can now parse a doctype that's scoped to an XML
+      namespace.
+
+- Update to 3.0.6
+  - Release 3.0.6 (2008/04/26)
+    - Added a Tag.decompose() method to disconnect a tree or subset,
+      breaking it into bite-sized pieces for the garbage collector to
+      collect.
+    - Got rid of a very old debug line that prevented chardet from
+      working.
+    - Tag.extract() now returns the tag that was extracted.
+    - Tag.findNext() now does something with the keyword arguments you
+      pass it instead of dropping them on the floor.
+    - Fixed a Unicode conversion bug.
+    - Fixed a bug that garbled some tags when rewriting them.
+
+-------------------------------------------------------------------
+Fri Dec 18 00:00:00 BRST 2007 - jfunk@funktronics.ca
+
+- Update to 3.0.5:
+  - Beautiful Soup is now licensed under a BSD-style license
+  - Soup objects can now be pickled, and copied with copy.deepcopy
+  - Tag.append now works properly on existing BS objects. (It wasn't
+    originally intended for outside use, but it can be now.)
+    (Giles Radford)
+  - Passing in a nonexistent encoding will no longer crash the parser
+    on Python 2.4 (John Nagle)
+  - Fixed an underlying bug in SGMLParser that thinks ASCII has 255
+    characters instead of 127 (John Nagle)
+  - Entities are converted more consistently to Unicode characters
+  - Entity references in attribute values are now converted to Unicode
+    characters when appropriate. Numeric entities are always converted,
+    because SGMLParser always converts them outside of attribute values
+  - ALL_ENTITIES happens to just be the XHTML entities, so I renamed it
+    to XHTML_ENTITIES
+  - The regular expression for bare ampersands was too loose. In some
+    cases ampersands were not being escaped. (Sam Ruby?)
+  - Non-breaking spaces and other special Unicode space characters are
+    no longer folded to ASCII spaces. (Robert Leftwich)
+  - Information inside a TEXTAREA tag is now parsed literally, not as
+    HTML tags. TEXTAREA now works exactly the same way as SCRIPT.
+    (Zephyr Fang)
+
+-------------------------------------------------------------------
+Sun Apr 23 00:00:00 BRT 2007 - jfunk@funktronics.ca
+
+- Update to 3.0.4:
+  - Fixed a bug that crashed Unicode conversion in some cases
+  - Fixed a bug that prevented UnicodeDammit from being used as a
+    general-purpose data scrubber
+  - Fixed some unit test failures when running against Python 2.5
+  - When considering whether to convert smart quotes, UnicodeDammit now
+    looks at the original encoding in a case-insensitive way
+
+-------------------------------------------------------------------
+Sun Aug 30 00:00:00 BRT 2006 - jfunk@funktronics.ca
+
+- Initial release
+

calling whatdependson for head-i586

New:
----
  BeautifulSoup-3.0.8.1.tar.gz
  python-beautifulsoup.changes
  python-beautifulsoup.spec

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-beautifulsoup.spec ++++++
#
# spec file for package python-beautifulsoup (Version 3.0.8.1)
#
# Copyright (c) 2010 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.
# Please submit bugfixes or comments via http://bugs.opensuse.org/
#

%define modname BeautifulSoup

Name:           python-beautifulsoup
Version:        3.0.8.1
Release:        1
License:        BSD
Summary:        HTML/XML parser for quick-turnaround applications like screen-scraping
Url:            http://www.crummy.com/software/BeautifulSoup/
Group:          Development/Libraries/Python
Source:         %{modname}-%{version}.tar.gz
BuildRequires:  python-devel
BuildRoot:      %{_tmppath}/%{name}-%{version}-build
%{py_requires}
%if %{?suse_version: %{suse_version} > 1110} %{!?suse_version:1}
BuildArch:      noarch
%endif

%description
Beautiful Soup is a Python HTML/XML parser designed for quick turnaround
projects like screen-scraping. Three features make it powerful:

* Beautiful Soup won't choke if you give it bad markup. It yields a parse
  tree that makes approximately as much sense as your original document.
  This is usually good enough to collect the data you need and run away.

* Beautiful Soup provides a few simple methods and Pythonic idioms for
  navigating, searching, and modifying a parse tree: a toolkit for
  dissecting a document and extracting what you need. You don't have to
  create a custom parser for each application.

* Beautiful Soup automatically converts incoming documents to Unicode and
  outgoing documents to UTF-8. You don't have to think about encodings,
  unless the document doesn't specify an encoding and Beautiful Soup can't
  autodetect one. Then you just have to specify the original encoding.

Beautiful Soup parses anything you give it, and does the tree traversal
stuff for you. You can tell it "Find all the links", or "Find all the
links of class externalLink", or "Find all the links whose urls match
'foo.com'", or "Find the table heading that's got bold text, then give me
that text." Valuable data that was once locked up in poorly-designed
websites is now within your reach. Projects that would have taken hours
take only minutes with Beautiful Soup.
%prep
%setup -q -n %{modname}-%{version}

%build
export CFLAGS="%{optflags}"
python setup.py build

%install
python setup.py install --prefix=%{_prefix} --root=%{buildroot} --record-rpm=INSTALLED_FILES

%clean
rm -rf %{buildroot}

%files -f INSTALLED_FILES
%defattr(-,root,root)

%changelog
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Remember to have fun...

--
To unsubscribe, e-mail: opensuse-commit+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse-commit+help@opensuse.org
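
P.S. The %description above promises idioms like "Find all the links"
even in bad markup. Beautiful Soup 3.0.8.1 itself targets Python 2 (its
API is `BeautifulSoup(html).findAll('a')`), so as a rough illustration of
the same link-finding idiom, here is a sketch using only the Python
standard library's html.parser; the class name and sample markup are
made up for the example.

```python
# Illustrative sketch only: shows the "find all the links" idiom the
# package description talks about, using the stdlib parser rather than
# Beautiful Soup (which, at version 3.x, is Python-2-only).
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect href values from every <a> tag, even in sloppy markup."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # html.parser lowercases tag names and tolerates unquoted
        # attribute values, so messy markup still yields links.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


# Deliberately bad markup: mismatched tag case, unquoted attribute,
# unclosed final <a>.
messy_html = (
    '<p>Hello <a href="http://foo.com/a">one</A>'
    '<a href=http://foo.com/b>two'
)
collector = LinkCollector()
collector.feed(messy_html)
print(collector.links)  # both hrefs are recovered despite the bad markup
```

Beautiful Soup goes much further than this sketch (encoding detection,
tree navigation and modification), but the core idea is the same:
tolerant parsing followed by a simple search over the resulting tags.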