openSUSE Commits
March 2017
Hello community,
here is the log from the commit of package geary for openSUSE:Factory checked in at 2017-03-02 19:39:21
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/geary (Old)
and /work/SRC/openSUSE:Factory/.geary.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "geary"
Thu Mar 2 19:39:21 2017 rev:29 rq:461125 version:0.11.3+20170228
Changes:
--------
--- /work/SRC/openSUSE:Factory/geary/geary.changes 2017-02-25 00:54:32.155473203 +0100
+++ /work/SRC/openSUSE:Factory/.geary.new/geary.changes 2017-03-02 19:39:22.790279318 +0100
@@ -1,0 +2,29 @@
+Tue Feb 28 07:06:25 UTC 2017 - zaitor(a)opensuse.org
+
+- Update to version 0.11.3+20170228:
+ + Fix error validating account details on second try (bgo#775511).
+ + Make both Engine and AccountInfo a bit more unit testable.
+ + Fix error when adding third account (bgo779048).
+ + Add unit tests for adding accounts.
+ + Allow using foreach loops over ConversationEmail's messages.
+ + Fix messages with search hits in bodies not being expanded
+ (bgo#778033).
+ + Fix matching message subject not being highlighted in
+ find/search.
+ + Rename archive/trash/delete actions to clearly be for
+ conversations
+ + Add a keyboard nav section to the user manual.
+ + Validate entered email address before allowing add a new
+ account.
+ + Fix print to file not working. Bug 778874.
+ + Remember print dir and reuse when printing again (bgo#713573).
+ + Remember attachments dir and reuse adding/saving attachments
+ and images.
+ + Close folders in reverse order (bgo#778968).
+ + Fix build with new vala.
+ + Fix a shell warning running configure under flatpak-builder.
+ + Use Gtk.show_uri_on_window when available (bgo#770884).
+ + Fix attachments not being opened when using flatpak
+ (bgo#770886).
+
+-------------------------------------------------------------------
Old:
----
geary-0.11.3+20170222.tar.xz
New:
----
geary-0.11.3+20170228.tar.xz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ geary.spec ++++++
--- /var/tmp/diff_new_pack.KThyqV/_old 2017-03-02 19:39:23.478181977 +0100
+++ /var/tmp/diff_new_pack.KThyqV/_new 2017-03-02 19:39:23.478181977 +0100
@@ -17,14 +17,14 @@
Name: geary
-Version: 0.11.3+20170222
+Version: 0.11.3+20170228
Release: 0
Summary: A lightweight email reader for the GNOME desktop
License: LGPL-2.0+
Group: Productivity/Networking/Email/Clients
Url: https://wiki.gnome.org/Apps/Geary
Source: %{name}-%{version}.tar.xz
-#Source: http://download.gnome.org/sources/geary/0.11/%{name}-%{version}.tar.xz
+#Source: http://download.gnome.org/sources/geary/0.11/%%{name}-%%{version}.tar.xz
BuildRequires: cmake
BuildRequires: fdupes
BuildRequires: intltool
++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.KThyqV/_old 2017-03-02 19:39:23.530174620 +0100
+++ /var/tmp/diff_new_pack.KThyqV/_new 2017-03-02 19:39:23.530174620 +0100
@@ -1,4 +1,4 @@
<servicedata>
<service name="tar_scm">
<param name="url">git://git.gnome.org/geary</param>
- <param name="changesrevision">7d4573a9cd284caec0ac05e50115cfc585381900</param></service></servicedata>
\ No newline at end of file
+ <param name="changesrevision">ca3452b840adeb1c715ca4b11173bf0a9ca5e979</param></service></servicedata>
\ No newline at end of file
++++++ geary-0.11.3+20170222.tar.xz -> geary-0.11.3+20170228.tar.xz ++++++
++++ 7557 lines of diff (skipped)
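Besides the version bump, the one substantive spec edit above is doubling the percent signs in the commented-out Source line. rpm expands macros even inside # comments of a spec file (rpmlint flags this as "macro in comment"), so writing %%{name}-%%{version} keeps the commented URL literal. A quick, illustrative way to see the escaping with rpm's macro evaluator (not part of the commit):

    rpm --eval '%{_bindir}'     # a defined macro expands, printing /usr/bin
    rpm --eval '%%{_bindir}'    # the doubled %% suppresses expansion, printing %{_bindir}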
Hello community,
here is the log from the commit of package gnucash-docs for openSUSE:Factory checked in at 2017-03-02 19:39:12
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/gnucash-docs (Old)
and /work/SRC/openSUSE:Factory/.gnucash-docs.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "gnucash-docs"
Thu Mar 2 19:39:12 2017 rev:34 rq:461117 version:2.6.15
Changes:
--------
--- /work/SRC/openSUSE:Factory/gnucash-docs/gnucash-docs.changes 2017-01-25 23:31:21.694180684 +0100
+++ /work/SRC/openSUSE:Factory/.gnucash-docs.new/gnucash-docs.changes 2017-03-02 19:39:15.751275371 +0100
@@ -1,0 +2,6 @@
+Fri Feb 24 22:29:14 UTC 2017 - zaitor(a)opensuse.org
+
+- Fixup sourceurl and add proper tarball, the previous sub had the
+ tarball named download.
+
+-------------------------------------------------------------------
Old:
----
download
New:
----
gnucash-docs-2.6.15.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ gnucash-docs.spec ++++++
--- /var/tmp/diff_new_pack.J7SJxy/_old 2017-03-02 19:39:19.186789229 +0100
+++ /var/tmp/diff_new_pack.J7SJxy/_new 2017-03-02 19:39:19.186789229 +0100
@@ -23,7 +23,7 @@
License: GFDL-1.1 and GPL-2.0+
Group: Productivity/Office/Finance
Url: http://www.gnucash.org/
-Source: https://sourceforge.net/projects/gnucash/files/gnucash-docs/2.6.15/gnucash-…
+Source: http://downloads.sourceforge.net/project/gnucash/gnucash-docs/%{version}/%{…
BuildRequires: fdupes
BuildRequires: sgml-skel
BuildRequires: xsltproc
Hello community,
here is the log from the commit of package soundconverter for openSUSE:Factory checked in at 2017-03-02 19:39:06
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/soundconverter (Old)
and /work/SRC/openSUSE:Factory/.soundconverter.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "soundconverter"
Thu Mar 2 19:39:06 2017 rev:2 rq:461013 version:2.9.0~beta2
Changes:
--------
--- /work/SRC/openSUSE:Factory/soundconverter/soundconverter.changes 2016-11-09 11:38:31.000000000 +0100
+++ /work/SRC/openSUSE:Factory/.soundconverter.new/soundconverter.changes 2017-03-02 19:39:07.336466109 +0100
@@ -1,0 +2,19 @@
+Sun Feb 26 03:00:27 UTC 2017 - zaitor(a)opensuse.org
+
+- Add global __requires_exclude typelib\\(Unity\\), make app
+ installable.
+
+-------------------------------------------------------------------
+Wed Feb 8 00:33:22 UTC 2017 - jengelh(a)inai.de
+
+- Description update
+
+-------------------------------------------------------------------
+Thu Jan 19 00:16:12 UTC 2017 - sor.alexei(a)meowr.ru
+
+- Update to version 2.9.0~beta2:
+ * No changelog available.
+- Separate locales to soundconverter-lang.
+- Own /usr/share/appdata/ unconditionally.
+
+-------------------------------------------------------------------
Old:
----
soundconverter-2.1.6.tar.xz
New:
----
soundconverter-2.9.0~beta2.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ soundconverter.spec ++++++
--- /var/tmp/diff_new_pack.lpAEPo/_old 2017-03-02 19:39:08.032367635 +0100
+++ /var/tmp/diff_new_pack.lpAEPo/_new 2017-03-02 19:39:08.036367069 +0100
@@ -1,7 +1,7 @@
#
# spec file for package soundconverter
#
-# Copyright (c) 2016 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2017 SUSE LINUX GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -15,58 +15,65 @@
# Please submit bugfixes or comments via http://bugs.opensuse.org/
#
+%global __requires_exclude typelib\\(Unity\\)
+
+%define _rev b101a833fbc55ec43811691338e1e67789d23b98
Name: soundconverter
-Version: 2.1.6
-Release: 1
+Version: 2.9.0~beta2
+Release: 0
+Summary: Sound Converter Application for the GNOME Desktop
License: GPL-3.0+
-Summary: Simple Sound Converter Application for the GNOME Desktop
-Url: http://soundconverter.org/
Group: Productivity/Multimedia/Video/Editors and Convertors
-Source: https://launchpad.net/soundconverter/trunk/%{version}/+download/soundconver…
+Url: http://soundconverter.org/
+Source: https://github.com/kassoulet/soundconverter/archive/%{_rev}.tar.gz#/%{name}…
+BuildRequires: autoconf
+BuildRequires: automake
+BuildRequires: fdupes
+BuildRequires: gobject-introspection-devel
+BuildRequires: hicolor-icon-theme
BuildRequires: intltool
-BuildRequires: mDNSResponder-lib
-BuildRequires: perl-XML-Parser
-BuildRequires: python
-BuildRequires: python-gnome
-BuildRequires: python-gtk-devel >= 2.24
+BuildRequires: python3
+BuildRequires: python3-gobject
BuildRequires: update-desktop-files
-BuildRequires: hicolor-icon-theme
-BuildRequires: fdupes
-Requires: gstreamer-0_10
-Recommends: gstreamer-0_10-plugins-bad
-Recommends: gstreamer-0_10-plugins-base
-Recommends: gstreamer-0_10-plugins-good
-Recommends: gstreamer-0_10-plugins-ugly
-Suggests: gstreamer-0_10-plugins-ugly-orig-addon
-Requires: python-gnome
-Requires: python-gstreamer-0_10
-Requires: python-gtk >= 2.24
-BuildRoot: %{_tmppath}/%{name}-%{version}-build
-%py_requires
+BuildRequires: typelib(Gst) = 1.0
+BuildRequires: typelib(Gtk) = 3.0
+Requires: gstreamer
+Requires: python3-gobject
+Recommends: %{name}-lang
+Recommends: gstreamer-plugins-bad
+Recommends: gstreamer-plugins-base
+Recommends: gstreamer-plugins-good
+Recommends: gstreamer-plugins-ugly
+Suggests: gstreamer-plugins-ugly-orig-addon
+%if 0%{?suse_version} > 1320 || 0%{?sle_version} >= 120200
+BuildRequires: python3-gobject-Gdk
+Requires: python3-gobject-Gdk
+%endif
%description
-A simple sound converter application for the GNOME environment.
+A sound converter application for the GNOME environment.
-It reads anything the GStreamer library can read, and writes WAV,
-FLAC, MP3, and Ogg Vorbis files.
+It reads anything the GStreamer library can read, and offers writing
+to WAV, FLAC, MP3, AAC, and Ogg Vorbis, also with the help of
+GStreamer.
+
+%lang_package
%prep
-%setup -q
+%setup -q -n %{name}-%{_rev}
%build
+NOCONFIGURE=1 ./autogen.sh
%configure
-make %{?_smp_mflags}
+make %{?_smp_mflags} V=1
%install
%make_install
+chmod a+x %{buildroot}%{_libdir}/%{name}/python/%{name}/*py
%find_lang %{name}
-
%suse_update_desktop_file -r %{name} AudioVideo AudioVideoEditing
-
-%fdupes %{buildroot}%{_prefix}
-
-chmod +x %{buildroot}%{_libdir}/soundconverter/python/soundconverter/*py
+%fdupes %{buildroot}%{_prefix}/
%post
%desktop_database_post
@@ -76,18 +83,19 @@
%desktop_database_postun
%icon_theme_cache_postun
-%files -f %{name}.lang
+%files
%defattr(-,root,root)
%doc COPYING ChangeLog README TODO
%{_bindir}/%{name}
-%{_libdir}/%{name}
+%{_libdir}/%{name}/
%{_datadir}/%{name}/
-%{_mandir}/man1/%{name}*
%{_datadir}/applications/%{name}.desktop
%{_datadir}/icons/hicolor/*/apps/%{name}.*
-%if 0%{?suse_version} < 1320
%dir %{_datadir}/appdata/
-%endif
%{_datadir}/appdata/%{name}.appdata.xml
+%{_mandir}/man1/%{name}.1%{?ext_man}
+
+%files lang -f %{name}.lang
+%defattr(-,root,root)
%changelog
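The `%global __requires_exclude typelib\\(Unity\\)` line added at the top of the spec is what the changelog entry means by "make app installable": it is a regular expression that rpmbuild's automatic dependency generator uses to filter out the auto-detected typelib(Unity) requirement that was blocking installation. A rough way to confirm the filter on a built package (the rpm filename below is a placeholder):

    # typelib(Unity) should no longer show up among the generated requirements
    rpm -qp --requires soundconverter-2.9.0~beta2-0.noarch.rpm | grep -i typelib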
Hello community,
here is the log from the commit of package python3-jupyter_nbformat for openSUSE:Factory checked in at 2017-03-02 19:38:55
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python3-jupyter_nbformat (Old)
and /work/SRC/openSUSE:Factory/.python3-jupyter_nbformat.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python3-jupyter_nbformat"
Thu Mar 2 19:38:55 2017 rev:5 rq:461011 version:4.3.0
Changes:
--------
--- /work/SRC/openSUSE:Factory/python3-jupyter_nbformat/python3-jupyter_nbformat-doc.changes 2016-07-14 09:45:48.000000000 +0200
+++ /work/SRC/openSUSE:Factory/.python3-jupyter_nbformat.new/python3-jupyter_nbformat-doc.changes 2017-03-02 19:38:56.709969816 +0100
@@ -1,0 +2,20 @@
+Tue Feb 28 20:34:06 UTC 2017 - toddrme2178(a)gmail.com
+
+- Update to 4.3.0
+ * A new pluggable ``SignatureStore`` class allows specifying different ways to
+ record the signatures of trusted notebooks. The default is still an SQLite
+ database. See :ref:`pluggable_signature_store` for more information.
+ * :func:`nbformat.read` and :func:`nbformat.write` accept file paths as bytes
+ as well as unicode.
+ * Fix for calling :func:`nbformat.validate` on an empty dictionary.
+ * Fix for running the tests where the locale makes ASCII the default encoding.
+- Update to 4.2.0
+ * Update nbformat spec version to 4.2, allowing JSON outputs to have any JSONable type, not just ``object``,
+ and mime-types of the form ``application/anything+json``.
+ * Define basics of ``authors`` in notebook metadata.
+ ``nb.metadata.authors`` shall be a list of objects with the property ``name``, a string of each author's full name.
+ * Update use of traitlets API to require traitlets 4.1.
+ * Support trusting notebooks on stdin with ``cat notebook | jupyter trust``
+- Merge documentation into single rpm
+
+-------------------------------------------------------------------
python3-jupyter_nbformat.changes: same change
Old:
----
nbformat-4.0.1.tar.gz
New:
----
nbformat-4.3.0.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ python3-jupyter_nbformat-doc.spec ++++++
--- /var/tmp/diff_new_pack.vnxj5B/_old 2017-03-02 19:38:58.305744004 +0100
+++ /var/tmp/diff_new_pack.vnxj5B/_new 2017-03-02 19:38:58.309743439 +0100
@@ -1,7 +1,7 @@
#
# spec file for package python3-jupyter_nbformat-doc
#
-# Copyright (c) 2016 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2017 SUSE LINUX GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -23,7 +23,7 @@
%endif
Name: python3-jupyter_nbformat-doc
-Version: 4.0.1
+Version: 4.3.0
Release: 0
Summary: Documentation for python3-jupyter_nbformat
License: BSD-3-Clause
@@ -37,28 +37,16 @@
%if %{build_pdf}
BuildRequires: python3-Sphinx-latex
%endif
+Provides: %{name}-html = %{version}
+Provides: %{name}-pdf = %{version}
+Obsoletes: %{name}-html < %{version}
+Obsoletes: %{name}-pdf < %{version}
BuildRoot: %{_tmppath}/%{name}-%{version}-build
BuildArch: noarch
%description
Documentation and help files for python3-jupyter_nbformat.
-%package html
-Summary: HTML documentation for python3-jupyter_nbformat
-Group: Documentation/HTML
-Recommends: python3-jupyter_nbformat = %{version}
-
-%description html
-Documentation and help files for python3-jupyter_nbformat in HTML format.
-
-%package pdf
-Summary: PDF documentation for python3-jupyter_nbformat
-Group: Documentation/Other
-Recommends: python3-jupyter_nbformat = %{version}
-
-%description pdf
-Documentation and help files for python3-jupyter_nbformat in PDF format.
-
%prep
%setup -q -n nbformat-%{version}
@@ -68,21 +56,17 @@
%install
# Build the documentation
pushd docs
-PYTHONPATH=%{buildroot}%{python3_sitelib} make html
%if %{build_pdf}
PYTHONPATH=%{buildroot}%{python3_sitelib} make latexpdf
%endif
+PYTHONPATH=%{buildroot}%{python3_sitelib} make html
rm -rf _build/html/.buildinfo
-%files html
+%files
%defattr(-,root,root,-)
%doc COPYING.md
%doc docs/_build/html/
-
%if %{build_pdf}
-%files pdf
-%defattr(-,root,root,-)
-%doc COPYING.md
%doc docs/_build/latex/*.pdf
%endif
++++++ python3-jupyter_nbformat.spec ++++++
--- /var/tmp/diff_new_pack.vnxj5B/_old 2017-03-02 19:38:58.329740609 +0100
+++ /var/tmp/diff_new_pack.vnxj5B/_new 2017-03-02 19:38:58.333740043 +0100
@@ -1,7 +1,7 @@
#
# spec file for package python3-jupyter_nbformat
#
-# Copyright (c) 2016 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2017 SUSE LINUX GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -17,7 +17,7 @@
Name: python3-jupyter_nbformat
-Version: 4.0.1
+Version: 4.3.0
Release: 0
Summary: The Jupyter Notebook format
License: BSD-3-Clause
@@ -29,13 +29,14 @@
BuildRequires: python3-jsonschema > 2.5.0
BuildRequires: python3-jupyter_core
BuildRequires: python3-setuptools
-BuildRequires: python3-traitlets
+BuildRequires: python3-traitlets >= 4.1
# Test requirements
-BuildRequires: python3-nose
+BuildRequires: python3-pytest
+BuildRequires: python3-testpath
Requires: python3-ipython_genutils
Requires: python3-jsonschema > 2.5.0
Requires: python3-jupyter_core
-Requires: python3-traitlets
+Requires: python3-traitlets >= 4.1
Requires(post): update-alternatives
Requires(postun): update-alternatives
BuildRoot: %{_tmppath}/%{name}-%{version}-build
@@ -73,7 +74,9 @@
fi
%check
-nosetests
+pushd docs
+PYTHONPATH=%{buildroot}%{python3_sitelib} py.test ../nbformat/tests
+popd
%files
%defattr(-,root,root,-)
++++++ nbformat-4.0.1.tar.gz -> nbformat-4.3.0.tar.gz ++++++
++++ 19226 lines of diff (skipped)
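Of the nbformat 4.3.0 items listed in the changelog above, the stdin-trust support is the one with a direct command-line face; spelled out as in the release notes (the notebook filename is a placeholder):

    # sign/trust a notebook read from stdin, as described in the 4.3.0 notes
    cat notebook.ipynb | jupyter trust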
Hello community,
here is the log from the commit of package criu for openSUSE:Factory checked in at 2017-03-02 19:38:46
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/criu (Old)
and /work/SRC/openSUSE:Factory/.criu.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "criu"
Thu Mar 2 19:38:46 2017 rev:29 rq:460904 version:2.11.1
Changes:
--------
--- /work/SRC/openSUSE:Factory/criu/criu.changes 2017-02-03 17:35:03.187188745 +0100
+++ /work/SRC/openSUSE:Factory/.criu.new/criu.changes 2017-03-02 19:38:47.543266956 +0100
@@ -1,0 +2,35 @@
+Tue Feb 28 15:35:27 CET 2017 - tiwai(a)suse.de
+
+- Update to criu 2.11:
+ New features:
+ * Added "pre-resume" to action scripts
+ * New --status-fd option for better control of page server
+ * C/R OFD file locks, RO root mount for mount namespaces
+ Optimizations/improvements:
+ * More strict checks for extra CLI options
+ * Report errors when probing locks
+ * Restorer logs now contain timestamps
+ Fixes:
+ * Regression: v2.10 was broken on ARM
+ * Use-after-free when restoring ghost directory
+ * Array out-of-bound access when restoring VETH device
+ * Page server exit code could be screwed up
+ * Clang over-optimized string.h routines resulting in random
+ crashes
+ * Parasite failed to send FDs via socket on Alpine Linux
+ * Restore of huge file tables could get stuck
+ * Restore of epoll in epoll could fail
+ * Errno value could be lost when reporting failure to restore
+ invisible files
+ * Dump of sched params didn't work on Alpine
+ * Restore of huge memory dumps (over 2G) failed
+ * Installation guessed /lib vs /lib64 with errors
+ * Migration between xsave and noxsave didn't work for wrong cpu
+ feature being checked
+- Update to criu 2.11.1:
+ Fixes:
+ * Page server start via RPC was broken
+ * Fedora build didn't work
+ * Ppc64LE restorer switch crashed
+
+-------------------------------------------------------------------
Old:
----
criu-2.10.tar.bz2
New:
----
criu-2.11.1.tar.bz2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ criu.spec ++++++
--- /var/tmp/diff_new_pack.x91B0w/_old 2017-03-02 19:38:48.127184329 +0100
+++ /var/tmp/diff_new_pack.x91B0w/_new 2017-03-02 19:38:48.127184329 +0100
@@ -17,7 +17,7 @@
Name: criu
-Version: 2.10
+Version: 2.11.1
Release: 0
Summary: Checkpoint/Restore In Userspace Tools
License: GPL-2.0
++++++ criu-2.10.tar.bz2 -> criu-2.11.1.tar.bz2 ++++++
++++ 9130 lines of diff (skipped)
Hello community,
here is the log from the commit of package dehydrated for openSUSE:Factory checked in at 2017-03-02 19:38:39
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/dehydrated (Old)
and /work/SRC/openSUSE:Factory/.dehydrated.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "dehydrated"
Thu Mar 2 19:38:39 2017 rev:3 rq:460891 version:0.4.0
Changes:
--------
--- /work/SRC/openSUSE:Factory/dehydrated/dehydrated.changes 2017-02-13 07:49:05.430491137 +0100
+++ /work/SRC/openSUSE:Factory/.dehydrated.new/dehydrated.changes 2017-03-02 19:38:40.344285655 +0100
@@ -1,0 +2,15 @@
+Tue Feb 21 13:12:19 UTC 2017 - daniel.molkentin(a)suse.com
+
+- Drop the (undocumented) dependency for mod_headers
+
+-------------------------------------------------------------------
+Sat Feb 18 16:51:10 UTC 2017 - daniel(a)molkentin.de
+
+- Unify configuration file source names
+
+-------------------------------------------------------------------
+Sat Feb 18 14:08:02 UTC 2017 - daniel(a)molkentin.de
+
+- Bump to 0.4.0
+
+-------------------------------------------------------------------
Old:
----
acme-challenge.conf.in
acme-challenge.in
dehydrated-0.3.1.tar.gz
New:
----
acme-challenge.conf.apache.in
acme-challenge.conf.nginx.in
dehydrated-0.4.0.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ dehydrated.spec ++++++
--- /var/tmp/diff_new_pack.QXanHK/_old 2017-03-02 19:38:41.056184917 +0100
+++ /var/tmp/diff_new_pack.QXanHK/_new 2017-03-02 19:38:41.060184351 +0100
@@ -46,15 +46,15 @@
%{!?_tmpfilesdir: %global _tmpfilesdir /usr/lib/tmpfiles.d }
Name: dehydrated
-Version: 0.3.1
+Version: 0.4.0
Release: 0
Summary: A client for signing certificates with an ACME server
License: MIT
Group: Productivity/Networking/Security
Url: https://github.com/lukas2511/dehydrated
Source0: %{name}-%{version}.tar.gz
-Source1: acme-challenge.conf.in
-Source2: acme-challenge.in
+Source1: acme-challenge.conf.apache.in
+Source2: acme-challenge.conf.nginx.in
Source3: acme-challenge.conf.lighttpd.in
Source4: dehydrated.cron.in
Source5: dehydrated.tmpfiles.d
++++++ acme-challenge.conf.apache.in ++++++
Alias /.well-known/acme-challenge @CHALLENGEDIR@
<Directory "@CHALLENGEDIR@">
Options None
AllowOverride None
Require all granted
ForceType text/plain
</Directory>
++++++ acme-challenge.conf.nginx.in ++++++
# This adds the acme challenge directory to
# your host's config file. You will only need
# this on port 80. The following snippet shows
# how to use it on an HTTP server that only
# redirects to HTTPS otherwise. It's important
# to wrap the rest into a "location /" block.
#
#server {
# listen 80 default_server;
# listen [::]:80 default_server;
#
# include "acme-challenge";
# location / {
# return 301 https://$host$request_uri;
# }
#}
location /.well-known/acme-challenge {
alias @CHALLENGEDIR@;
}
++++++ dehydrated-0.3.1.tar.gz -> dehydrated-0.4.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/.travis.yml new/dehydrated-0.4.0/.travis.yml
--- old/dehydrated-0.3.1/.travis.yml 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/.travis.yml 2017-02-05 15:33:17.000000000 +0100
@@ -1,6 +1,10 @@
sudo: false
language: shell
+os:
+ - linux
+ - osx
+
cache:
directories:
- ngrok
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/CHANGELOG new/dehydrated-0.4.0/CHANGELOG
--- old/dehydrated-0.3.1/CHANGELOG 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/CHANGELOG 2017-02-05 15:33:17.000000000 +0100
@@ -5,6 +5,22 @@
## Changed
- ...
+## [0.4.0] - 2017-02-05
+## Changed
+- dehydrated now asks you to read and accept the CAs terms of service before creating an account
+- Skip challenges for already validated domains
+- Removed need for some special commands (BusyBox compatibility)
+- Exported a few more variables for use in hook-scripts
+- fullchain.pem now actually contains the full chain instead of just the certificate with an intermediate cert
+
+## Added
+- Added private-key rollover functionality
+- Added `--lock-suffix` option for allowing parallel execution
+- Added `invalid_challenge` hook
+- Added `request_failure` hook
+- Added `exit_hook` hook
+- Added standalone `register` command
+
## [0.3.1] - 2016-09-13
## Changed
- Renamed project to `dehydrated`.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/LICENSE new/dehydrated-0.4.0/LICENSE
--- old/dehydrated-0.3.1/LICENSE 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/LICENSE 2017-02-05 15:33:17.000000000 +0100
@@ -1,6 +1,6 @@
The MIT License (MIT)
-Copyright (c) 2015 Lukas Schauer
+Copyright (c) 2015-2017 Lukas Schauer
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/README.md new/dehydrated-0.4.0/README.md
--- old/dehydrated-0.3.1/README.md 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/README.md 2017-02-05 15:33:17.000000000 +0100
@@ -2,11 +2,11 @@
![](docs/logo.jpg)
-This is a client for signing certificates with an ACME-server (currently only provided by letsencrypt) implemented as a relatively simple bash-script.
+This is a client for signing certificates with an ACME-server (currently only provided by Let's Encrypt) implemented as a relatively simple bash-script.
It uses the `openssl` utility for everything related to actually handling keys and certificates, so you need to have that installed.
-Other dependencies are: curl, sed, grep, mktemp (all found on almost any system, curl being the only exception)
+Other dependencies are: cURL, sed, grep, mktemp (all found on almost any system, cURL being the only exception)
Current features:
- Signing of a list of domains
@@ -14,19 +14,30 @@
- Renewal if a certificate is about to expire or SAN (subdomains) changed
- Certificate revocation
-Please keep in mind that this software and even the acme-protocol are relatively young and may still have some unresolved issues.
-Feel free to report any issues you find with this script or contribute by submitting a pullrequest.
+Please keep in mind that this software and even the acme-protocol are relatively young and may still have some unresolved issues. Feel free to report any issues you find with this script or contribute by submitting a pull request.
-### Getting started
+## Getting started
For getting started I recommend taking a look at [docs/domains_txt.md](docs/domains_txt.md), [docs/wellknown.md](docs/wellknown.md) and the [Usage](#usage) section on this page (you'll probably only need the `-c` option).
Generally you want to set up your WELLKNOWN path first, and then fill in domains.txt.
-**Please note that you should use the staging URL when experimenting with this script to not hit letsencrypts rate limits.** See [docs/staging.md](docs/staging.md).
+**Please note that you should use the staging URL when experimenting with this script to not hit Let's Encrypt's rate limits.** See [docs/staging.md](docs/staging.md).
If you have any problems take a look at our [Troubleshooting](docs/troubleshooting.md) guide.
+## Config
+
+dehydrated is looking for a config file in a few different places, it will use the first one it can find in this order:
+
+- `/etc/dehydrated/config`
+- `/usr/local/etc/dehydrated/config`
+- The current working directory of your shell
+- The directory from which dehydrated was ran
+
+Have a look at [docs/examples/config](docs/examples/config) to get started, copy it to e.g. `/etc/dehydrated/config`
+and edit it to fit your needs.
+
## Usage:
```text
@@ -35,6 +46,7 @@
Default command: help
Commands:
+ --register Register account key
--cron (-c) Sign/renew non-existant/changed/expiring certificates.
--signcsr (-s) path/to/csr.pem Sign a given CSR, output CRT on stdout (advanced usage)
--revoke (-r) path/to/cert.pem Revoke specified certificate
@@ -43,6 +55,7 @@
--env (-e) Output configuration variables for use in other scripts
Parameters:
+ --accept-terms Accept CAs terms of service
--full-chain (-fc) Print full chain when using --signcsr
--ipv4 (-4) Resolve names to IPv4 addresses only
--ipv6 (-6) Resolve names to IPv6 addresses only
@@ -50,6 +63,7 @@
--keep-going (-g) Keep going after encountering an error while creating/renewing multiple certificates in cron mode
--force (-x) Force renew of certificate even if it is longer valid than value in RENEW_DAYS
--no-lock (-n) Don't use lockfile (potentially dangerous!)
+ --lock-suffix example.com Suffix lockfile name with a string (useful for with -d)
--ocsp Sets option in CSR indicating OCSP stapling to be mandatory
--privkey (-p) path/to/key.pem Use specified private key instead of account key (useful for revocation)
--config (-f) path/to/config Use specified config file
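Taken together, the README and CHANGELOG hunks above describe the new 0.4.0 first-run flow: place a config in one of the searched locations, register the account key while accepting the CA's terms of service, then run cron mode. A minimal sketch, assuming the packaged binary is on PATH and the config lives at /etc/dehydrated/config, using variables that also appear in docs/examples/config further down in this diff (the values shown are illustrative):

    # /etc/dehydrated/config
    WELLKNOWN="/var/www/dehydrated"           # challenge dir served at /.well-known/acme-challenge
    CONTACT_EMAIL="hostmaster@example.com"    # optional, attached to the account registration

    # one-time registration against the CA, agreeing to its terms (new in 0.4.0)
    dehydrated --register --accept-terms

    # sign/renew everything listed in domains.txt
    dehydrated --cron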
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/dehydrated new/dehydrated-0.4.0/dehydrated
--- old/dehydrated-0.3.1/dehydrated 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/dehydrated 2017-02-05 15:33:17.000000000 +0100
@@ -34,8 +34,8 @@
openssl version > /dev/null 2>&1 || _exiterr "This script requires an openssl binary."
_sed "" < /dev/null > /dev/null 2>&1 || _exiterr "This script requires sed with support for extended (modern) regular expressions."
command -v grep > /dev/null 2>&1 || _exiterr "This script requires grep."
- _mktemp -u > /dev/null 2>&1 || _exiterr "This script requires mktemp."
- diff -u /dev/null /dev/null || _exiterr "This script requires diff."
+ command -v mktemp > /dev/null 2>&1 || _exiterr "This script requires mktemp."
+ command -v diff > /dev/null 2>&1 || _exiterr "This script requires diff."
# curl returns with an error code in some ancient versions so we have to catch that
set +e
@@ -81,7 +81,7 @@
if [[ "${CHALLENGETYPE}" = "dns-01" ]] && [[ -z "${HOOK}" ]]; then
_exiterr "Challenge type dns-01 needs a hook script for deployment... can not continue."
fi
- if [[ "${CHALLENGETYPE}" = "http-01" && ! -d "${WELLKNOWN}" ]]; then
+ if [[ "${CHALLENGETYPE}" = "http-01" && ! -d "${WELLKNOWN}" && ! "${COMMAND:-}" = "register" ]]; then
_exiterr "WELLKNOWN directory doesn't exist, please create ${WELLKNOWN} and set appropriate permissions."
fi
[[ "${KEY_ALGO}" =~ ^(rsa|prime256v1|secp384r1)$ ]] || _exiterr "Unknown public key algorithm ${KEY_ALGO}... can not continue."
@@ -105,7 +105,8 @@
# Default values
CA="https://acme-v01.api.letsencrypt.org/directory"
- LICENSE="https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf"
+ CA_TERMS="https://acme-v01.api.letsencrypt.org/terms"
+ LICENSE=
CERTDIR=
ACCOUNTDIR=
CHALLENGETYPE="http-01"
@@ -118,6 +119,7 @@
KEYSIZE="4096"
WELLKNOWN=
PRIVATE_KEY_RENEW="yes"
+ PRIVATE_KEY_ROLLOVER="no"
KEY_ALGO=rsa
OPENSSL_CNF="$(openssl version -d | cut -d\" -f2)/openssl.cnf"
CONTACT_EMAIL=
@@ -183,6 +185,7 @@
[[ -z "${DOMAINS_TXT}" ]] && DOMAINS_TXT="${BASEDIR}/domains.txt"
[[ -z "${WELLKNOWN}" ]] && WELLKNOWN="/var/www/dehydrated"
[[ -z "${LOCKFILE}" ]] && LOCKFILE="${BASEDIR}/lock"
+ [[ -n "${PARAM_LOCKFILE_SUFFIX:-}" ]] && LOCKFILE="${LOCKFILE}-${PARAM_LOCKFILE_SUFFIX}"
[[ -n "${PARAM_NO_LOCK:-}" ]] && LOCKFILE=""
[[ -n "${PARAM_HOOK:-}" ]] && HOOK="${PARAM_HOOK}"
@@ -219,7 +222,7 @@
_exiterr "Problem retrieving ACME/CA-URLs, check if your configured CA points to the directory entrypoint."
# Export some environment variables to be used in hook script
- export WELLKNOWN BASEDIR CERTDIR CONFIG
+ export WELLKNOWN BASEDIR CERTDIR CONFIG COMMAND
# Checking for private key ...
register_new_key="no"
@@ -231,6 +234,24 @@
else
# Check if private account key exists, if it doesn't exist yet generate a new one (rsa key)
if [[ ! -e "${ACCOUNT_KEY}" ]]; then
+ REAL_LICENSE="$(http_request head "${CA_TERMS}" | (grep Location: || true) | awk -F ': ' '{print $2}' | tr -d '\n\r')"
+ if [[ -z "${REAL_LICENSE}" ]]; then
+ printf '\n'
+ printf 'Error retrieving terms of service from certificate authority.\n'
+ printf 'Please set LICENSE in config manually.\n'
+ exit 1
+ fi
+ if [[ ! "${LICENSE}" = "${REAL_LICENSE}" ]]; then
+ if [[ "${PARAM_ACCEPT_TERMS:-}" = "yes" ]]; then
+ LICENSE="${REAL_LICENSE}"
+ else
+ printf '\n'
+ printf 'To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: %s\n\n' "${REAL_LICENSE}"
+ printf 'To accept these terms of service run `%s --register --accept-terms`.\n' "${0}"
+ exit 1
+ fi
+ fi
+
echo "+ Generating account key..."
_openssl genrsa -out "${ACCOUNT_KEY}" "${KEYSIZE}"
register_new_key="yes"
@@ -247,14 +268,22 @@
# If we generated a new private key in the step above we have to register it with the acme-server
if [[ "${register_new_key}" = "yes" ]]; then
echo "+ Registering account key with ACME server..."
- [[ ! -z "${CA_NEW_REG}" ]] || _exiterr "Certificate authority doesn't allow registrations."
- # If an email for the contact has been provided then adding it to the registration request
FAILED=false
- if [[ -n "${CONTACT_EMAIL}" ]]; then
- (signed_request "${CA_NEW_REG}" '{"resource": "new-reg", "contact":["mailto:'"${CONTACT_EMAIL}"'"], "agreement": "'"$LICENSE"'"}' > "${ACCOUNT_KEY_JSON}") || FAILED=true
- else
- (signed_request "${CA_NEW_REG}" '{"resource": "new-reg", "agreement": "'"$LICENSE"'"}' > "${ACCOUNT_KEY_JSON}") || FAILED=true
+
+ if [[ -z "${CA_NEW_REG}" ]]; then
+ echo "Certificate authority doesn't allow registrations."
+ FAILED=true
+ fi
+
+ # If an email for the contact has been provided then adding it to the registration request
+ if [[ "${FAILED}" = "false" ]]; then
+ if [[ -n "${CONTACT_EMAIL}" ]]; then
+ (signed_request "${CA_NEW_REG}" '{"resource": "new-reg", "contact":["mailto:'"${CONTACT_EMAIL}"'"], "agreement": "'"$LICENSE"'"}' > "${ACCOUNT_KEY_JSON}") || FAILED=true
+ else
+ (signed_request "${CA_NEW_REG}" '{"resource": "new-reg", "agreement": "'"$LICENSE"'"}' > "${ACCOUNT_KEY_JSON}") || FAILED=true
+ fi
fi
+
if [[ "${FAILED}" = "true" ]]; then
echo
echo
@@ -262,8 +291,10 @@
rm "${ACCOUNT_KEY}" "${ACCOUNT_KEY_JSON}"
exit 1
fi
+ elif [[ "${COMMAND:-}" = "register" ]]; then
+ echo "+ Account already registered!"
+ exit 0
fi
-
}
# Different sed version for different os types...
@@ -305,6 +336,13 @@
sed -n "${filter}"
}
+rm_json_arrays() {
+ local filter
+ filter='s/\[[^][]*\]/null/g'
+ # remove three levels of nested arrays
+ sed -e "${filter}" -e "${filter}" -e "${filter}"
+}
+
# OpenSSL writes to stderr/stdout even when there are no errors. So just
# display the output if the exit code was != 0 to simplify debugging.
_openssl() {
@@ -351,22 +389,31 @@
fi
if [[ ! "${statuscode:0:1}" = "2" ]]; then
- echo " + ERROR: An error occurred while sending ${1}-request to ${2} (Status ${statuscode})" >&2
- echo >&2
- echo "Details:" >&2
- cat "${tempcont}" >&2
- echo >&2
- echo >&2
- rm -f "${tempcont}"
+ if [[ ! "${2}" = "${CA_TERMS}" ]] || [[ ! "${statuscode:0:1}" = "3" ]]; then
+ echo " + ERROR: An error occurred while sending ${1}-request to ${2} (Status ${statuscode})" >&2
+ echo >&2
+ echo "Details:" >&2
+ cat "${tempcont}" >&2
+ echo >&2
+ echo >&2
- # Wait for hook script to clean the challenge if used
- if [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && [[ -n "${challenge_token:+set}" ]]; then
- "${HOOK}" "clean_challenge" '' "${challenge_token}" "${keyauth}"
- fi
+ # An exclusive hook for the {1}-request error might be useful (e.g., for sending an e-mail to admins)
+ if [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]]; then
+ errtxt=`cat ${tempcont}`
+ "${HOOK}" "request_failure" "${statuscode}" "${errtxt}" "${1}"
+ fi
- # remove temporary domains.txt file if used
- [[ -n "${PARAM_DOMAIN:-}" && -n "${DOMAINS_TXT:-}" ]] && rm "${DOMAINS_TXT}"
- exit 1
+ rm -f "${tempcont}"
+
+ # Wait for hook script to clean the challenge if used
+ if [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && [[ -n "${challenge_token:+set}" ]]; then
+ "${HOOK}" "clean_challenge" '' "${challenge_token}" "${keyauth}"
+ fi
+
+ # remove temporary domains.txt file if used
+ [[ -n "${PARAM_DOMAIN:-}" && -n "${DOMAINS_TXT:-}" ]] && rm "${DOMAINS_TXT}"
+ exit 1
+ fi
fi
cat "${tempcont}"
@@ -409,7 +456,7 @@
reqtext="$( <<<"${csr}" openssl req -noout -text )"
if <<<"${reqtext}" grep -q '^[[:space:]]*X509v3 Subject Alternative Name:[[:space:]]*$'; then
# SANs used, extract these
- altnames="$( <<<"${reqtext}" grep -A1 '^[[:space:]]*X509v3 Subject Alternative Name:[[:space:]]*$' | tail -n1 )"
+ altnames="$( <<<"${reqtext}" awk '/X509v3 Subject Alternative Name:/{print;getline;print;}' | tail -n1 )"
# split to one per line:
# shellcheck disable=SC1003
altnames="$( <<<"${altnames}" _sed -e 's/^[[:space:]]*//; s/, /\'$'\n''/g' )"
@@ -450,9 +497,9 @@
local idx=0
if [[ -n "${ZSH_VERSION:-}" ]]; then
- local -A challenge_uris challenge_tokens keyauths deploy_args
+ local -A challenge_altnames challenge_uris challenge_tokens keyauths deploy_args
else
- local -a challenge_uris challenge_tokens keyauths deploy_args
+ local -a challenge_altnames challenge_uris challenge_tokens keyauths deploy_args
fi
# Request challenges
@@ -461,6 +508,12 @@
echo " + Requesting challenge for ${altname}..."
response="$(signed_request "${CA_NEW_AUTHZ}" '{"resource": "new-authz", "identifier": {"type": "dns", "value": "'"${altname}"'"}}' | clean_json)"
+ challenge_status="$(printf '%s' "${response}" | rm_json_arrays | get_json_string_value status)"
+ if [ "${challenge_status}" = "valid" ]; then
+ echo " + Already validated!"
+ continue
+ fi
+
challenges="$(printf '%s\n' "${response}" | sed -n 's/.*\("challenges":[^\[]*\[[^]]*]\).*/\1/p')"
repl=$'\n''{' # fix syntax highlighting in Vim
challenge="$(printf "%s" "${challenges//\{/${repl}}" | grep \""${CHALLENGETYPE}"\")"
@@ -487,6 +540,7 @@
;;
esac
+ challenge_altnames[${idx}]="${altname}"
challenge_uris[${idx}]="${challenge_uri}"
keyauths[${idx}]="${keyauth}"
challenge_tokens[${idx}]="${challenge_token}"
@@ -494,56 +548,64 @@
deploy_args[${idx}]="${altname} ${challenge_token} ${keyauth_hook}"
idx=$((idx+1))
done
+ challenge_count="${idx}"
# Wait for hook script to deploy the challenges if used
- # shellcheck disable=SC2068
- [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" = "yes" ]] && "${HOOK}" "deploy_challenge" ${deploy_args[@]}
+ if [[ ${challenge_count} -ne 0 ]]; then
+ # shellcheck disable=SC2068
+ [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" = "yes" ]] && "${HOOK}" "deploy_challenge" ${deploy_args[@]}
+ fi
# Respond to challenges
+ reqstatus="valid"
idx=0
- for altname in ${altnames}; do
- challenge_token="${challenge_tokens[${idx}]}"
- keyauth="${keyauths[${idx}]}"
-
- # Wait for hook script to deploy the challenge if used
- # shellcheck disable=SC2086
- [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && "${HOOK}" "deploy_challenge" ${deploy_args[${idx}]}
+ if [ ${challenge_count} -ne 0 ]; then
+ for altname in "${challenge_altnames[@]:0}"; do
+ challenge_token="${challenge_tokens[${idx}]}"
+ keyauth="${keyauths[${idx}]}"
- # Ask the acme-server to verify our challenge and wait until it is no longer pending
- echo " + Responding to challenge for ${altname}..."
- result="$(signed_request "${challenge_uris[${idx}]}" '{"resource": "challenge", "keyAuthorization": "'"${keyauth}"'"}' | clean_json)"
+ # Wait for hook script to deploy the challenge if used
+ # shellcheck disable=SC2086
+ [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && "${HOOK}" "deploy_challenge" ${deploy_args[${idx}]}
- reqstatus="$(printf '%s\n' "${result}" | get_json_string_value status)"
+ # Ask the acme-server to verify our challenge and wait until it is no longer pending
+ echo " + Responding to challenge for ${altname}..."
+ result="$(signed_request "${challenge_uris[${idx}]}" '{"resource": "challenge", "keyAuthorization": "'"${keyauth}"'"}' | clean_json)"
- while [[ "${reqstatus}" = "pending" ]]; do
- sleep 1
- result="$(http_request get "${challenge_uris[${idx}]}")"
reqstatus="$(printf '%s\n' "${result}" | get_json_string_value status)"
- done
- [[ "${CHALLENGETYPE}" = "http-01" ]] && rm -f "${WELLKNOWN}/${challenge_token}"
+ while [[ "${reqstatus}" = "pending" ]]; do
+ sleep 1
+ result="$(http_request get "${challenge_uris[${idx}]}")"
+ reqstatus="$(printf '%s\n' "${result}" | get_json_string_value status)"
+ done
- # Wait for hook script to clean the challenge if used
- if [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && [[ -n "${challenge_token}" ]]; then
- # shellcheck disable=SC2086
- "${HOOK}" "clean_challenge" ${deploy_args[${idx}]}
- fi
- idx=$((idx+1))
+ [[ "${CHALLENGETYPE}" = "http-01" ]] && rm -f "${WELLKNOWN}/${challenge_token}"
- if [[ "${reqstatus}" = "valid" ]]; then
- echo " + Challenge is valid!"
- else
- break
- fi
- done
+ # Wait for hook script to clean the challenge if used
+ if [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && [[ -n "${challenge_token}" ]]; then
+ # shellcheck disable=SC2086
+ "${HOOK}" "clean_challenge" ${deploy_args[${idx}]}
+ fi
+ idx=$((idx+1))
+
+ if [[ "${reqstatus}" = "valid" ]]; then
+ echo " + Challenge is valid!"
+ else
+ [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" != "yes" ]] && "${HOOK}" "invalid_challenge" "${altname}" "${result}"
+ fi
+ done
+ fi
# Wait for hook script to clean the challenges if used
# shellcheck disable=SC2068
- [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" = "yes" ]] && "${HOOK}" "clean_challenge" ${deploy_args[@]}
+ if [[ ${challenge_count} -ne 0 ]]; then
+ [[ -n "${HOOK}" ]] && [[ "${HOOK_CHAIN}" = "yes" ]] && "${HOOK}" "clean_challenge" ${deploy_args[@]}
+ fi
if [[ "${reqstatus}" != "valid" ]]; then
# Clean up any remaining challenge_tokens if we stopped early
- if [[ "${CHALLENGETYPE}" = "http-01" ]]; then
+ if [[ "${CHALLENGETYPE}" = "http-01" ]] && [[ ${challenge_count} -ne 0 ]]; then
while [ ${idx} -lt ${#challenge_tokens[@]} ]; do
rm -f "${WELLKNOWN}/${challenge_tokens[${idx}]}"
idx=$((idx+1))
@@ -569,6 +631,51 @@
echo " + Done!"
}
+# grep issuer cert uri from certificate
+get_issuer_cert_uri() {
+ certificate="${1}"
+ openssl x509 -in "${certificate}" -noout -text | (grep 'CA Issuers - URI:' | cut -d':' -f2-) || true
+}
+
+# walk certificate chain, retrieving all intermediate certificates
+walk_chain() {
+ local certificate
+ certificate="${1}"
+
+ local issuer_cert_uri
+ issuer_cert_uri="${2:-}"
+ if [[ -z "${issuer_cert_uri}" ]]; then issuer_cert_uri="$(get_issuer_cert_uri "${certificate}")"; fi
+ if [[ -n "${issuer_cert_uri}" ]]; then
+ # create temporary files
+ local tmpcert
+ local tmpcert_raw
+ tmpcert_raw="$(_mktemp)"
+ tmpcert="$(_mktemp)"
+
+ # download certificate
+ http_request get "${issuer_cert_uri}" > "${tmpcert_raw}"
+
+ # PEM
+ if grep -q "BEGIN CERTIFICATE" "${tmpcert_raw}"; then mv "${tmpcert_raw}" "${tmpcert}"
+ # DER
+ elif openssl x509 -in "${tmpcert_raw}" -inform DER -out "${tmpcert}" -outform PEM 2> /dev/null > /dev/null; then :
+ # PKCS7
+ elif openssl pkcs7 -in "${tmpcert_raw}" -inform DER -out "${tmpcert}" -outform PEM -print_certs 2> /dev/null > /dev/null; then :
+ # Unknown certificate type
+ else _exiterr "Unknown certificate type in chain"
+ fi
+
+ local next_issuer_cert_uri
+ next_issuer_cert_uri="$(get_issuer_cert_uri "${tmpcert}")"
+ if [[ -n "${next_issuer_cert_uri}" ]]; then
+ printf "\n%s\n" "${issuer_cert_uri}"
+ cat "${tmpcert}"
+ walk_chain "${tmpcert}" "${next_issuer_cert_uri}"
+ fi
+ rm -f "${tmpcert}" "${tmpcert_raw}"
+ fi
+}
+
# Create certificate for domain(s)
sign_domain() {
domain="${1}"
@@ -596,6 +703,26 @@
prime256v1|secp384r1) _openssl ecparam -genkey -name "${KEY_ALGO}" -out "${CERTDIR}/${domain}/privkey-${timestamp}.pem";;
esac
fi
+ # move rolloverkey into position (if any)
+ if [[ -r "${CERTDIR}/${domain}/privkey.pem" && -r "${CERTDIR}/${domain}/privkey.roll.pem" && "${PRIVATE_KEY_RENEW}" = "yes" && "${PRIVATE_KEY_ROLLOVER}" = "yes" ]]; then
+ echo " + Moving Rolloverkey into position.... "
+ mv "${CERTDIR}/${domain}/privkey.roll.pem" "${CERTDIR}/${domain}/privkey-tmp.pem"
+ mv "${CERTDIR}/${domain}/privkey-${timestamp}.pem" "${CERTDIR}/${domain}/privkey.roll.pem"
+ mv "${CERTDIR}/${domain}/privkey-tmp.pem" "${CERTDIR}/${domain}/privkey-${timestamp}.pem"
+ fi
+ # generate a new private rollover key if we need or want one
+ if [[ ! -r "${CERTDIR}/${domain}/privkey.roll.pem" && "${PRIVATE_KEY_ROLLOVER}" = "yes" && "${PRIVATE_KEY_RENEW}" = "yes" ]]; then
+ echo " + Generating private rollover key..."
+ case "${KEY_ALGO}" in
+ rsa) _openssl genrsa -out "${CERTDIR}/${domain}/privkey.roll.pem" "${KEYSIZE}";;
+ prime256v1|secp384r1) _openssl ecparam -genkey -name "${KEY_ALGO}" -out "${CERTDIR}/${domain}/privkey.roll.pem";;
+ esac
+ fi
+ # delete rolloverkeys if disabled
+ if [[ -r "${CERTDIR}/${domain}/privkey.roll.pem" && ! "${PRIVATE_KEY_ROLLOVER}" = "yes" ]]; then
+ echo " + Removing Rolloverkey (feature disabled)..."
+ rm -f "${CERTDIR}/${domain}/privkey.roll.pem"
+ fi
# Generate signing request config and the actual signing request
echo " + Generating signing request..."
@@ -621,10 +748,7 @@
# Create fullchain.pem
echo " + Creating fullchain.pem..."
cat "${crt_path}" > "${CERTDIR}/${domain}/fullchain-${timestamp}.pem"
- http_request get "$(openssl x509 -in "${CERTDIR}/${domain}/cert-${timestamp}.pem" -noout -text | grep 'CA Issuers - URI:' | cut -d':' -f2-)" > "${CERTDIR}/${domain}/chain-${timestamp}.pem"
- if ! grep -q "BEGIN CERTIFICATE" "${CERTDIR}/${domain}/chain-${timestamp}.pem"; then
- openssl x509 -in "${CERTDIR}/${domain}/chain-${timestamp}.pem" -inform DER -out "${CERTDIR}/${domain}/chain-${timestamp}.pem" -outform PEM
- fi
+ walk_chain "${crt_path}" > "${CERTDIR}/${domain}/chain-${timestamp}.pem"
cat "${CERTDIR}/${domain}/chain-${timestamp}.pem" >> "${CERTDIR}/${domain}/fullchain-${timestamp}.pem"
# Update symlinks
@@ -636,13 +760,20 @@
ln -sf "cert-${timestamp}.pem" "${CERTDIR}/${domain}/cert.pem"
# Wait for hook script to clean the challenge and to deploy cert if used
- export KEY_ALGO
[[ -n "${HOOK}" ]] && "${HOOK}" "deploy_cert" "${domain}" "${CERTDIR}/${domain}/privkey.pem" "${CERTDIR}/${domain}/cert.pem" "${CERTDIR}/${domain}/fullchain.pem" "${CERTDIR}/${domain}/chain.pem" "${timestamp}"
unset challenge_token
echo " + Done!"
}
+# Usage: --register
+# Description: Register account key
+command_register() {
+ init_system
+ echo "+ Done!"
+ exit 0
+}
+
# Usage: --cron (-c)
# Description: Sign/renew non-existant/changed/expiring certificates.
command_sign_domains() {
@@ -662,7 +793,7 @@
# Generate certificates for all domains found in domains.txt. Check if existing certificate are about to expire
ORIGIFS="${IFS}"
IFS=$'\n'
- for line in $(<"${DOMAINS_TXT}" tr -d '\r' | tr '[:upper:]' '[:lower:]' | _sed -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*$//g' -e 's/[[:space:]]+/ /g' | (grep -vE '^(#|$)' || true)); do
+ for line in $(<"${DOMAINS_TXT}" tr -d '\r' | awk '{print tolower($0)}' | _sed -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*$//g' -e 's/[[:space:]]+/ /g' | (grep -vE '^(#|$)' || true)); do
reset_configvars
IFS="${ORIGIFS}"
domain="$(printf '%s\n' "${line}" | cut -d' ' -f1)"
@@ -705,7 +836,7 @@
config_var="$(echo "${cfgline:1}" | cut -d'=' -f1)"
config_value="$(echo "${cfgline:1}" | cut -d'=' -f2-)"
case "${config_var}" in
- KEY_ALGO|OCSP_MUST_STAPLE|PRIVATE_KEY_RENEW|KEYSIZE|CHALLENGETYPE|HOOK|WELLKNOWN|HOOK_CHAIN|OPENSSL_CNF|RENEW_DAYS)
+ KEY_ALGO|OCSP_MUST_STAPLE|PRIVATE_KEY_RENEW|PRIVATE_KEY_ROLLOVER|KEYSIZE|CHALLENGETYPE|HOOK|WELLKNOWN|HOOK_CHAIN|OPENSSL_CNF|RENEW_DAYS)
echo " + ${config_var} = ${config_value}"
declare -- "${config_var}=${config_value}"
;;
@@ -716,6 +847,7 @@
IFS="${ORIGIFS}"
fi
verify_config
+ export WELLKNOWN CHALLENGETYPE KEY_ALGO PRIVATE_KEY_ROLLOVER
if [[ -e "${cert}" ]]; then
printf " + Checking domain name(s) of existing cert..."
@@ -767,6 +899,7 @@
# remove temporary domains.txt file if used
[[ -n "${PARAM_DOMAIN:-}" ]] && rm -f "${DOMAINS_TXT}"
+ [[ -n "${HOOK}" ]] && "${HOOK}" "exit_hook"
exit 0
}
@@ -797,10 +930,13 @@
if [ -n "${PARAM_FULL_CHAIN:-}" ]; then
# get and convert ca cert
chainfile="$(_mktemp)"
- http_request get "$(openssl x509 -in "${certfile}" -noout -text | grep 'CA Issuers - URI:' | cut -d':' -f2-)" > "${chainfile}"
-
- if ! grep -q "BEGIN CERTIFICATE" "${chainfile}"; then
- openssl x509 -inform DER -in "${chainfile}" -outform PEM -out "${chainfile}"
+ tmpchain="$(_mktemp)"
+ http_request get "$(openssl x509 -in "${certfile}" -noout -text | grep 'CA Issuers - URI:' | cut -d':' -f2-)" > "${tmpchain}"
+ if grep -q "BEGIN CERTIFICATE" "${tmpchain}"; then
+ mv "${tmpchain}" "${chainfile}"
+ else
+ openssl x509 -in "${tmpchain}" -inform DER -out "${chainfile}" -outform PEM
+ rm "${tmpchain}"
fi
echo "# CHAIN #" >&3
@@ -965,6 +1101,16 @@
set_command sign_domains
;;
+ --register)
+ set_command register
+ ;;
+
+ # PARAM_Usage: --accept-terms
+ # PARAM_Description: Accept CAs terms of service
+ --accept-terms)
+ PARAM_ACCEPT_TERMS="yes"
+ ;;
+
--signcsr|-s)
shift 1
set_command sign_csr
@@ -1031,6 +1177,14 @@
PARAM_NO_LOCK="yes"
;;
+ # PARAM_Usage: --lock-suffix example.com
+ # PARAM_Description: Suffix lockfile name with a string (useful for with -d)
+ --lock-suffix)
+ shift 1
+ check_parameters "${1:-}"
+ PARAM_LOCKFILE_SUFFIX="${1}"
+ ;;
+
# PARAM_Usage: --ocsp
# PARAM_Description: Sets option in CSR indicating OCSP stapling to be mandatory
--ocsp)
@@ -1099,6 +1253,7 @@
case "${COMMAND}" in
env) command_env;;
sign_domains) command_sign_domains;;
+ register) command_register;;
sign_csr) command_sign_csr "${PARAM_CSR}";;
revoke) command_revoke "${PARAM_REVOKECERT}";;
cleanup) command_cleanup;;
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/docs/dns-verification.md new/dehydrated-0.4.0/docs/dns-verification.md
--- old/dehydrated-0.3.1/docs/dns-verification.md 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/docs/dns-verification.md 2017-02-05 15:33:17.000000000 +0100
@@ -4,7 +4,7 @@
You need a hook script that deploys the challenge to your DNS server!
-The hook script (indicated in the config file or the --hook/-k command line argument) gets four arguments: an operation name (clean_challenge, deploy_challenge, or deploy_cert) and some operands for that. For deploy_challenge $2 is the domain name for which the certificate is required, $3 is a "challenge token" (which is not needed for dns-01), and $4 is a token which needs to be inserted in a TXT record for the domain.
+The hook script (indicated in the config file or the --hook/-k command line argument) gets four arguments: an operation name (clean_challenge, deploy_challenge, deploy_cert, invalid_challenge or request_failure) and some operands for that. For deploy_challenge $2 is the domain name for which the certificate is required, $3 is a "challenge token" (which is not needed for dns-01), and $4 is a token which needs to be inserted in a TXT record for the domain.
Typically, you will need to split the subdomain name in two, the subdomain name and the domain name separately. For example, for "my.example.com", you'll need "my" and "example.com" separately. You then have to prefix "_acme-challenge." before the subdomain name, as in "_acme-challenge.my" and set a TXT record for that on the domain (e.g. "example.com") which has the value supplied in $4
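Concretely, for the "my.example.com" example in that paragraph, the record the deploy_challenge hook has to publish and a quick check from the shell would look roughly like this (the record value is a placeholder for the hook's fourth argument; the hook.sh example later in this diff shows the calling convention):

    # record to create in the example.com zone:
    #   _acme-challenge.my   TXT   "<value passed to the hook as $4>"
    # verify it is visible before the ACME server validates:
    dig +short TXT _acme-challenge.my.example.com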
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/docs/examples/config new/dehydrated-0.4.0/docs/examples/config
--- old/dehydrated-0.3.1/docs/examples/config 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/docs/examples/config 2017-02-05 15:33:17.000000000 +0100
@@ -18,8 +18,11 @@
# Path to certificate authority (default: https://acme-v01.api.letsencrypt.org/directory)
#CA="https://acme-v01.api.letsencrypt.org/directory"
-# Path to license agreement (default: https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf)
-#LICENSE="https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf"
+# Path to certificate authority license terms redirect (default: https://acme-v01.api.letsencrypt.org/terms)
+#CA_TERMS="https://acme-v01.api.letsencrypt.org/terms"
+
+# Path to license agreement (default: <unset>)
+#LICENSE=""
# Which challenge should be used? Currently http-01 and dns-01 are supported
#CHALLENGETYPE="http-01"
@@ -72,6 +75,9 @@
# Regenerate private keys instead of just signing new certificates on renewal (default: yes)
#PRIVATE_KEY_RENEW="yes"
+# Create an extra private key for rollover (default: no)
+#PRIVATE_KEY_ROLLOVER="no"
+
# Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1
#KEY_ALGO=rsa
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/docs/examples/hook.sh new/dehydrated-0.4.0/docs/examples/hook.sh
--- old/dehydrated-0.3.1/docs/examples/hook.sh 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/docs/examples/hook.sh 2017-02-05 15:33:17.000000000 +0100
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
-function deploy_challenge {
+deploy_challenge() {
local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"
# This hook is called once for every domain that needs to be
@@ -21,7 +21,7 @@
# be found in the $TOKEN_FILENAME file.
}
-function clean_challenge {
+clean_challenge() {
local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"
# This hook is called after attempting to validate each domain,
@@ -31,7 +31,7 @@
# The parameters are the same as for deploy_challenge.
}
-function deploy_cert {
+deploy_cert() {
local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}"
# This hook is called once for each certificate that has been
@@ -54,7 +54,7 @@
# Timestamp when the specified certificate was created.
}
-function unchanged_cert {
+unchanged_cert() {
local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}"
# This hook is called once for each certificate that is still
@@ -74,4 +74,45 @@
# The path of the file containing the intermediate certificate(s).
}
-HANDLER=$1; shift; $HANDLER $@
+invalid_challenge() {
+ local DOMAIN="${1}" RESPONSE="${2}"
+
+ # This hook is called if the challenge response has failed, so domain
+ # owners can be aware and act accordingly.
+ #
+ # Parameters:
+ # - DOMAIN
+ # The primary domain name, i.e. the certificate common
+ # name (CN).
+ # - RESPONSE
+ # The response that the verification server returned
+}
+
+request_failure() {
+ local STATUSCODE="${1}" REASON="${2}" REQTYPE="${3}"
+
+ # This hook is called when a HTTP request fails (e.g., when the ACME
+ # server is busy, returns an error, etc). It will be called upon any
+ # response code that does not start with '2'. Useful to alert admins
+ # about problems with requests.
+ #
+ # Parameters:
+ # - STATUSCODE
+ # The HTML status code that originated the error.
+ # - REASON
+ # The specified reason for the error.
+ # - REQTYPE
+ # The kind of request that was made (GET, POST...)
+}
+
+exit_hook() {
+ # This hook is called at the end of a dehydrated command and can be used
+ # to do some final (cleanup or other) tasks.
+
+ :
+}
+
+HANDLER="$1"; shift
+if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|deploy_cert|unchanged_cert|invalid_challenge|request_failure|exit_hook)$ ]]; then
+ "$HANDLER" "$@"
+fi
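
The rewritten dispatcher above only forwards handler names it recognises (and now quotes "$@" properly), so unknown hook names are silently ignored instead of being executed. A hook script following this pattern can be exercised by hand; a minimal sketch, assuming it is run from an unpacked dehydrated-0.4.0 tree and using made-up test values for the domain, token file name and token value:

# Invoke the example hook the way dehydrated calls it: handler name first,
# then the handler-specific arguments (deploy_challenge takes DOMAIN,
# TOKEN_FILENAME and TOKEN_VALUE).
bash docs/examples/hook.sh deploy_challenge example.org challenge.token token-value

# A name outside the allowed list does not match the dispatcher's pattern,
# so nothing is called and the script exits cleanly.
bash docs/examples/hook.sh not_a_real_hook foo bar
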
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/docs/staging.md new/dehydrated-0.4.0/docs/staging.md
--- old/dehydrated-0.3.1/docs/staging.md 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/docs/staging.md 2017-02-05 15:33:17.000000000 +0100
@@ -9,4 +9,5 @@
```bash
CA="https://acme-staging.api.letsencrypt.org/directory"
+CA_TERMS="https://acme-staging.api.letsencrypt.org/terms"
```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/docs/wellknown.md new/dehydrated-0.4.0/docs/wellknown.md
--- old/dehydrated-0.3.1/docs/wellknown.md 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/docs/wellknown.md 2017-02-05 15:33:17.000000000 +0100
@@ -60,9 +60,8 @@
With Lighttpd just add this to your config and it should work in any VHost:
```lighttpd
-modules += "alias"
-
+server.modules += ("alias")
alias.url += (
- "/.well-known/acme-challenge/" => "/var/www/dehydrated/"
+ "/.well-known/acme-challenge/" => "/var/www/dehydrated/",
)
```
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/dehydrated-0.3.1/test.sh new/dehydrated-0.4.0/test.sh
--- old/dehydrated-0.3.1/test.sh 2016-09-13 20:00:43.000000000 +0200
+++ new/dehydrated-0.4.0/test.sh 2017-02-05 15:33:17.000000000 +0100
@@ -69,7 +69,14 @@
(
mkdir -p ngrok
cd ngrok
- wget https://dl.ngrok.com/ngrok_2.0.19_linux_amd64.zip -O ngrok.zip
+ if [ "${TRAVIS_OS_NAME}" = "linux" ]; then
+ wget -O ngrok.zip https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
+ elif [ "${TRAVIS_OS_NAME}" = "osx" ]; then
+ wget -O ngrok.zip https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-darwin-amd64.zip
+ else
+ echo "No ngrok for ${TRAVIS_OS_NAME}"
+ exit 1
+ fi
unzip ngrok.zip ngrok
chmod +x ngrok
)
@@ -97,7 +104,7 @@
# Generate config and create empty domains.txt
echo 'CA="https://testca.kurz.pw/directory"' > config
-echo 'LICENSE="https://testca.kurz.pw/terms/v1"' >> config
+echo 'CA_TERMS="https://testca.kurz.pw/terms"' >> config
echo 'WELLKNOWN=".acme-challenges/.well-known/acme-challenge"' >> config
echo 'RENEW_DAYS="14"' >> config
touch domains.txt
@@ -110,6 +117,23 @@
_CHECK_LOG "--domain (-d) domain.tld"
_CHECK_ERRORLOG
+# Register account key without LICENSE set
+_TEST "Register account key without LICENSE set"
+./dehydrated --register > tmplog 2> errorlog && _FAIL "Script execution failed"
+_CHECK_LOG "To accept these terms"
+_CHECK_ERRORLOG
+
+# Register account key and agreeing to terms
+_TEST "Register account key without LICENSE set"
+./dehydrated --register --accept-terms > tmplog 2> errorlog || _FAIL "Script execution failed"
+_CHECK_LOG "Registering account key"
+_CHECK_FILE accounts/*/account_key.pem
+_CHECK_ERRORLOG
+
+# Delete accounts and add LICENSE to config for normal operation
+rm -rf accounts
+echo 'LICENSE="https://testca.kurz.pw/terms/v1"' >> config
+
# Run in cron mode with empty domains.txt (should only generate private key and exit)
_TEST "First run in cron mode, checking if private key is generated and registered"
./dehydrated --cron > tmplog 2> errorlog || _FAIL "Script execution failed"
@@ -120,7 +144,7 @@
# Temporarily move config out of the way and try signing certificate by using temporary config location
_TEST "Try signing using temporary config location and with domain as command line parameter"
mv config tmp_config
-./dehydrated --cron --domain "${TMP_URL}" --domain "${TMP2_URL}" -f tmp_config > tmplog 2> errorlog || _FAIL "Script execution failed"
+./dehydrated --cron --domain "${TMP_URL}" --domain "${TMP2_URL}" --accept-terms -f tmp_config > tmplog 2> errorlog || _FAIL "Script execution failed"
_CHECK_NOT_LOG "Checking domain name(s) of existing cert"
_CHECK_LOG "Generating private key"
_CHECK_LOG "Requesting challenge for ${TMP_URL}"
@@ -168,7 +192,7 @@
_CHECK_LOG "Requesting challenge for ${TMP_URL}"
_CHECK_LOG "Requesting challenge for ${TMP2_URL}"
_CHECK_LOG "Requesting challenge for ${TMP3_URL}"
-_CHECK_LOG "Challenge is valid!"
+_CHECK_LOG "Already validated!"
_CHECK_LOG "Creating fullchain.pem"
_CHECK_LOG "Done!"
_CHECK_ERRORLOG
@@ -197,7 +221,8 @@
_SUBTEST "Verifying file with full chain..."
openssl x509 -in "certs/${TMP_URL}/fullchain.pem" -noout -text > /dev/null 2>> errorlog && _PASS || _FAIL
_SUBTEST "Verifying certificate against CA certificate..."
-(openssl verify -verbose -CAfile "certs/${TMP_URL}/fullchain.pem" -purpose sslserver "certs/${TMP_URL}/fullchain.pem" 2>&1 || true) | (grep -v ': OK$' || true) >> errorlog 2>> errorlog && _PASS || _FAIL
+curl -s https://testca.kurz.pw/acme/issuer-cert | openssl x509 -inform DER -outform PEM > ca.pem
+(openssl verify -verbose -CAfile "ca.pem" -purpose sslserver "certs/${TMP_URL}/fullchain.pem" 2>&1 || true) | (grep -v ': OK$' || true) >> errorlog 2>> errorlog && _PASS || _FAIL
_CHECK_ERRORLOG
# Revoke certificate using certificate key
@@ -209,6 +234,26 @@
_CHECK_FILE "certs/${TMP_URL}/${REAL_CERT}-revoked"
_CHECK_ERRORLOG
+# Enable private key renew
+echo 'PRIVATE_KEY_RENEW="yes"' >> config
+echo 'PRIVATE_KEY_ROLLOVER="yes"' >> config
+
+# Check if Rolloverkey creation works
+_TEST "Testing Rolloverkeys..."
+_SUBTEST "First Run: Creating rolloverkey"
+./dehydrated --cron --domain "${TMP2_URL}" > tmplog 2> errorlog || _FAIL "Script execution failed"
+CERT_ROLL_HASH=$(openssl rsa -in certs/${TMP2_URL}/privkey.roll.pem -outform DER -pubout 2>/dev/null | openssl sha -sha256)
+_CHECK_LOG "Generating private key"
+_CHECK_LOG "Generating private rollover key"
+_SUBTEST "Second Run: Force Renew, Use rolloverkey"
+./dehydrated --cron --force --domain "${TMP2_URL}" > tmplog 2> errorlog || _FAIL "Script execution failed"
+CERT_NEW_HASH=$(openssl rsa -in certs/${TMP2_URL}/privkey.pem -outform DER -pubout 2>/dev/null | openssl sha -sha256)
+_CHECK_LOG "Generating private key"
+_CHECK_LOG "Moving Rolloverkey into position"
+_SUBTEST "Verifying Hash Rolloverkey and private key second run"
+[[ "${CERT_ROLL_HASH}" = "${CERT_NEW_HASH}" ]] && _PASS || _FAIL
+_CHECK_ERRORLOG
+
# Test cleanup command
_TEST "Cleaning up certificates"
./dehydrated --cleanup > tmplog 2> errorlog || _FAIL "Script execution failed"
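
Taken together, the test changes above exercise the new 0.4.0 registration flow: with only CA_TERMS configured, --register prints the terms and exits with an error, and the account key is only created once --accept-terms is passed. A minimal sketch of that flow, assuming the default Let's Encrypt URLs from the example config and a locally adjusted WELLKNOWN path:

# Write a small config next to the dehydrated script (URLs are the
# documented defaults; WELLKNOWN must point at your own webroot).
echo 'CA="https://acme-v01.api.letsencrypt.org/directory"' > config
echo 'CA_TERMS="https://acme-v01.api.letsencrypt.org/terms"' >> config
echo 'WELLKNOWN="/var/www/dehydrated"' >> config

# Without acceptance this prints "To accept these terms ..." and fails.
./dehydrated --register

# With acceptance the account key lands in accounts/*/account_key.pem.
./dehydrated --register --accept-terms
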
Hello community,
here is the log from the commit of package haproxy for openSUSE:Factory checked in at 2017-03-02 19:38:34
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/haproxy (Old)
and /work/SRC/openSUSE:Factory/.haproxy.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "haproxy"
Thu Mar 2 19:38:34 2017 rev:47 rq:460861 version:1.7.3
Changes:
--------
--- /work/SRC/openSUSE:Factory/haproxy/haproxy.changes 2017-02-03 17:42:18.141625423 +0100
+++ /work/SRC/openSUSE:Factory/.haproxy.new/haproxy.changes 2017-03-02 19:38:35.568961394 +0100
@@ -1,0 +2,20 @@
+Tue Feb 28 11:31:02 UTC 2017 - kgronlund(a)suse.com
+
+- Update to version 1.7.3:
+ * BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
+ * BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
+ * BUG/MINOR: unix: fix connect's polling in case no data are scheduled
+ * BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
+ * MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
+ * BUG/MAJOR: dns: restart sockets after fork()
+ * MINOR: chunks: implement a simple dynamic allocator for trash buffers
+ * BUG/MEDIUM: http: prevent redirect from overwriting a buffer
+ * BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
+ * BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
+ * BUG/MINOR: http: Return an error when a replace-header rule failed on the response
+ * BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
+ * BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
+ * BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
+ * MINOR: http: don't close when redirect location doesn't start with "/"
+
+-------------------------------------------------------------------
Old:
----
haproxy-1.7.2.tar.gz
New:
----
haproxy-1.7.3.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ haproxy.spec ++++++
--- /var/tmp/diff_new_pack.QrU3j0/_old 2017-03-02 19:38:36.372847639 +0100
+++ /var/tmp/diff_new_pack.QrU3j0/_new 2017-03-02 19:38:36.372847639 +0100
@@ -41,7 +41,7 @@
%bcond_without apparmor
Name: haproxy
-Version: 1.7.2
+Version: 1.7.3
Release: 0
#
#
++++++ _service ++++++
--- /var/tmp/diff_new_pack.QrU3j0/_old 2017-03-02 19:38:36.404843111 +0100
+++ /var/tmp/diff_new_pack.QrU3j0/_new 2017-03-02 19:38:36.408842545 +0100
@@ -3,8 +3,8 @@
<param name="url">http://git.haproxy.org/git/haproxy-1.7.git</param>
<param name="scm">git</param>
<param name="filename">haproxy</param>
- <param name="versionformat">1.7.2</param>
- <param name="revision">v1.7.2</param>
+ <param name="versionformat">1.7.3</param>
+ <param name="revision">v1.7.3</param>
<param name="changesgenerate">enable</param>
</service>
++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.QrU3j0/_old 2017-03-02 19:38:36.424840281 +0100
+++ /var/tmp/diff_new_pack.QrU3j0/_new 2017-03-02 19:38:36.428839716 +0100
@@ -3,4 +3,4 @@
<param name="url">http://git.haproxy.org/git/haproxy-1.6.git</param>
<param name="changesrevision">864bf78c3b6898eb12ece5f0a44032090f26f57f</param></service><service name="tar_scm">
<param name="url">http://git.haproxy.org/git/haproxy-1.7.git</param>
- <param name="changesrevision">ddb646ee9182df570017ddf280873a1360a28898</param></service></servicedata>
\ No newline at end of file
+ <param name="changesrevision">9cb532a34ae190b350cdeb8bbbae25d524b10949</param></service></servicedata>
\ No newline at end of file
++++++ haproxy-1.7.2.tar.gz -> haproxy-1.7.3.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/CHANGELOG new/haproxy-1.7.3/CHANGELOG
--- old/haproxy-1.7.2/CHANGELOG 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/CHANGELOG 2017-02-28 09:59:23.000000000 +0100
@@ -1,6 +1,27 @@
ChangeLog :
===========
+2017/02/28 : 1.7.3
+ - BUG/MINOR: stream: Fix how backend-specific analyzers are set on a stream
+ - BUILD: ssl: fix build on OpenSSL 1.0.0
+ - BUILD: ssl: silence a warning reported for ERR_remove_state()
+ - BUILD: ssl: eliminate warning with OpenSSL 1.1.0 regarding RAND_pseudo_bytes()
+ - BUG/MEDIUM: tcp: don't poll for write when connect() succeeds
+ - BUG/MINOR: unix: fix connect's polling in case no data are scheduled
+ - DOC: lua: improve links
+ - BUG/MINOR: lua: Map.end are not reliable because "end" is a reserved keyword
+ - MINOR: dns: give ability to dns_init_resolvers() to close a socket when requested
+ - BUG/MAJOR: dns: restart sockets after fork()
+ - MINOR: chunks: implement a simple dynamic allocator for trash buffers
+ - BUG/MEDIUM: http: prevent redirect from overwriting a buffer
+ - BUG/MEDIUM: filters: Do not truncate HTTP response when body length is undefined
+ - BUG/MEDIUM: http: Prevent replace-header from overwriting a buffer
+ - BUG/MINOR: http: Return an error when a replace-header rule failed on the response
+ - BUG/MINOR: sendmail: The return of vsnprintf is not cleanly tested
+ - BUG/MAJOR: lua segmentation fault when the request is like 'GET ?arg=val HTTP/1.1'
+ - BUG/MEDIUM: config: reject anything but "if" or "unless" after a use-backend rule
+ - MINOR: http: don't close when redirect location doesn't start with "/"
+
2017/01/13 : 1.7.2
- BUG/MEDIUM: lua: In some case, the return of sample-fetches is ignored (2)
- SCRIPTS: git-show-backports: fix a harmless typo
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/README new/haproxy-1.7.3/README
--- old/haproxy-1.7.2/README 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/README 2017-02-28 09:59:23.000000000 +0100
@@ -3,7 +3,7 @@
----------------------
version 1.7
willy tarreau
- 2017/01/13
+ 2017/02/28
1) How to build it
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/VERDATE new/haproxy-1.7.3/VERDATE
--- old/haproxy-1.7.2/VERDATE 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/VERDATE 2017-02-28 09:59:23.000000000 +0100
@@ -1,2 +1,2 @@
$Format:%ci$
-2017/01/13
+2017/02/28
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/VERSION new/haproxy-1.7.3/VERSION
--- old/haproxy-1.7.2/VERSION 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/VERSION 2017-02-28 09:59:23.000000000 +0100
@@ -1 +1 @@
-1.7.2
+1.7.3
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/doc/configuration.txt new/haproxy-1.7.3/doc/configuration.txt
--- old/haproxy-1.7.2/doc/configuration.txt 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/doc/configuration.txt 2017-02-28 09:59:23.000000000 +0100
@@ -4,7 +4,7 @@
----------------------
version 1.7
willy tarreau
- 2017/01/13
+ 2017/02/28
This document covers the configuration language as implemented in the version
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/doc/lua-api/index.rst new/haproxy-1.7.3/doc/lua-api/index.rst
--- old/haproxy-1.7.2/doc/lua-api/index.rst 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/doc/lua-api/index.rst 2017-02-28 09:59:23.000000000 +0100
@@ -165,6 +165,14 @@
This attribute is an integer, it contains the value of the loglevel "debug" (7).
+.. js:attribute:: core.proxies
+
+ **context**: task, action, sample-fetch, converter
+
+ This attribute is an array of declared proxies (frontend and backends). Each
+ proxy give an access to his list of listeners and servers. Each entry is of
+ type :ref:`proxy_class`
+
.. js:function:: core.log(loglevel, msg)
**context**: body, init, task, action, sample-fetch, converter
@@ -176,19 +184,20 @@
:param integer loglevel: Is the log level asociated with the message. It is a
number between 0 and 7.
:param string msg: The log content.
- :see: core.emerg, core.alert, core.crit, core.err, core.warning, core.notice,
- core.info, core.debug (log level definitions)
- :see: code.Debug
- :see: core.Info
- :see: core.Warning
- :see: core.Alert
+ :see: :js:attr:`core.emerg`, :js:attr:`core.alert`, :js:attr:`core.crit`,
+ :js:attr:`core.err`, :js:attr:`core.warning`, :js:attr:`core.notice`,
+ :js:attr:`core.info`, :js:attr:`core.debug` (log level definitions)
+ :see: :js:func:`core.Debug`
+ :see: :js:func:`core.Info`
+ :see: :js:func:`core.Warning`
+ :see: :js:func:`core.Alert`
.. js:function:: core.Debug(msg)
**context**: body, init, task, action, sample-fetch, converter
:param string msg: The log content.
- :see: log
+ :see: :js:func:`core.log`
Does the same job than:
@@ -204,7 +213,7 @@
**context**: body, init, task, action, sample-fetch, converter
:param string msg: The log content.
- :see: log
+ :see: :js:func:`core.log`
.. code-block:: lua
@@ -218,7 +227,7 @@
**context**: body, init, task, action, sample-fetch, converter
:param string msg: The log content.
- :see: log
+ :see: :js:func:`core.log`
.. code-block:: lua
@@ -232,7 +241,7 @@
**context**: body, init, task, action, sample-fetch, converter
:param string msg: The log content.
- :see: log
+ :see: :js:func:`core.log`
.. code-block:: lua
@@ -1097,8 +1106,8 @@
**warning** some sample fetches are not available in some context. These
limitations are specified in this documentation when theire useful.
- :see: TXN.f
- :see: TXN.sf
+ :see: :js:attr:`TXN.f`
+ :see: :js:attr:`TXN.sf`
Fetches are useful for:
@@ -1131,8 +1140,8 @@
HAProxy documentation "configuration.txt" for more information about her
usage. Its the chapter 7.3.1.
- :see: TXN.c
- :see: TXN.sc
+ :see: :js:attr:`TXN.c`
+ :see: :js:attr:`TXN.sc`
Converters provides statefull transformation. They are useful for:
@@ -1275,7 +1284,7 @@
:param class_http http: The related http object.
:returns: array of headers.
- :see: HTTP.res_get_headers()
+ :see: :js:func:`HTTP.res_get_headers`
This is the form of the returned array:
@@ -1296,7 +1305,7 @@
:param class_http http: The related http object.
:returns: array of headers.
- :see: HTTP.req_get_headers()
+ :see: :js:func:`HTTP.req_get_headers`
This is the form of the returned array:
@@ -1319,7 +1328,7 @@
:param class_http http: The related http object.
:param string name: The header name.
:param string value: The header value.
- :see: HTTP.res_add_header()
+ :see: :js:func:`HTTP.res_add_header`
.. js:function:: HTTP.res_add_header(http, name, value)
@@ -1329,7 +1338,7 @@
:param class_http http: The related http object.
:param string name: The header name.
:param string value: The header value.
- :see: HTTP.req_add_header()
+ :see: :js:func:`HTTP.req_add_header`
.. js:function:: HTTP.req_del_header(http, name)
@@ -1338,7 +1347,7 @@
:param class_http http: The related http object.
:param string name: The header name.
- :see: HTTP.res_del_header()
+ :see: :js:func:`HTTP.res_del_header`
.. js:function:: HTTP.res_del_header(http, name)
@@ -1347,7 +1356,7 @@
:param class_http http: The related http object.
:param string name: The header name.
- :see: HTTP.req_del_header()
+ :see: :js:func:`HTTP.req_del_header`
.. js:function:: HTTP.req_set_header(http, name, value)
@@ -1357,7 +1366,7 @@
:param class_http http: The related http object.
:param string name: The header name.
:param string value: The header value.
- :see: HTTP.res_set_header()
+ :see: :js:func:`HTTP.res_set_header`
This function does the same work as the folowwing code:
@@ -1377,7 +1386,7 @@
:param class_http http: The related http object.
:param string name: The header name.
:param string value: The header value.
- :see: HTTP.req_rep_header()
+ :see: :js:func:`HTTP.req_rep_header()`
.. js:function:: HTTP.req_rep_header(http, name, regex, replace)
@@ -1390,7 +1399,7 @@
:param string name: The header name.
:param string regex: The match regular expression.
:param string replace: The replacement value.
- :see: HTTP.res_rep_header()
+ :see: :js:func:`HTTP.res_rep_header()`
.. js:function:: HTTP.res_rep_header(http, name, regex, string)
@@ -1403,7 +1412,7 @@
:param string name: The header name.
:param string regex: The match regular expression.
:param string replace: The replacement value.
- :see: HTTP.req_replace_header()
+ :see: :js:func:`HTTP.req_rep_header()`
.. js:function:: HTTP.req_set_method(http, method)
@@ -1516,13 +1525,14 @@
:param integer loglevel: Is the log level asociated with the message. It is a
number between 0 and 7.
:param string msg: The log content.
- :see: core.emerg, core.alert, core.crit, core.err, core.warning, core.notice,
- core.info, core.debug (log level definitions)
- :see: TXN.deflog
- :see: TXN.Debug
- :see: TXN.Info
- :see: TXN.Warning
- :see: TXN.Alert
+ :see: :js:attr:`core.emerg`, :js:attr:`core.alert`, :js:attr:`core.crit`,
+ :js:attr:`core.err`, :js:attr:`core.warning`, :js:attr:`core.notice`,
+ :js:attr:`core.info`, :js:attr:`core.debug` (log level definitions)
+ :see: :js:func:`TXN.deflog`
+ :see: :js:func:`TXN.Debug`
+ :see: :js:func:`TXN.Info`
+ :see: :js:func:`TXN.Warning`
+ :see: :js:func:`TXN.Alert`
.. js:function:: TXN.deflog(TXN, msg)
@@ -1531,13 +1541,13 @@
:param class_txn txn: The class txn object containing the data.
:param string msg: The log content.
- :see: TXN.log
+ :see: :js:func:`TXN.log
.. js:function:: TXN.Debug(txn, msg)
:param class_txn txn: The class txn object containing the data.
:param string msg: The log content.
- :see: TXN.log
+ :see: :js:func:`TXN.log`
Does the same job than:
@@ -1552,7 +1562,7 @@
:param class_txn txn: The class txn object containing the data.
:param string msg: The log content.
- :see: TXN.log
+ :see: :js:func:`TXN.log`
.. code-block:: lua
@@ -1565,7 +1575,7 @@
:param class_txn txn: The class txn object containing the data.
:param string msg: The log content.
- :see: TXN.log
+ :see: :js:func:`TXN.log`
.. code-block:: lua
@@ -1578,7 +1588,7 @@
:param class_txn txn: The class txn object containing the data.
:param string msg: The log content.
- :see: TXN.log
+ :see: :js:func:`TXN.log`
.. code-block:: lua
@@ -1647,7 +1657,9 @@
:param class_txn txn: The class txn object containing the data.
:param integer loglevel: The required log level. This variable can be one of
- :see: core.<loglevel>
+ :see: :js:attr:`core.emerg`, :js:attr:`core.alert`, :js:attr:`core.crit`,
+ :js:attr:`core.err`, :js:attr:`core.warning`, :js:attr:`core.notice`,
+ :js:attr:`core.info`, :js:attr:`core.debug` (log level definitions)
.. js:function:: TXN.set_tos(txn, tos)
@@ -1851,7 +1863,7 @@
default = "usa"
-- Create and load map
- geo = Map.new("geo.map", Map.ip);
+ geo = Map.new("geo.map", Map._ip);
-- Create new fetch that returns the user country
core.register_fetches("country", function(txn)
@@ -1876,60 +1888,76 @@
return loc;
end);
-.. js:attribute:: Map.int
+.. js:attribute:: Map._int
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.ip
+ Note that :js:attr:`Map.int` is also available for compatibility.
+
+.. js:attribute:: Map._ip
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.str
+ Note that :js:attr:`Map.ip` is also available for compatibility.
+
+.. js:attribute:: Map._str
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.beg
+ Note that :js:attr:`Map.str` is also available for compatibility.
+
+.. js:attribute:: Map._beg
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.sub
+ Note that :js:attr:`Map.beg` is also available for compatibility.
+
+.. js:attribute:: Map._sub
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.dir
+ Note that :js:attr:`Map.sub` is also available for compatibility.
+
+.. js:attribute:: Map._dir
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.dom
+ Note that :js:attr:`Map.dir` is also available for compatibility.
+
+.. js:attribute:: Map._dom
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.end
+ Note that :js:attr:`Map.dom` is also available for compatibility.
+
+.. js:attribute:: Map._end
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
-.. js:attribute:: Map.reg
+.. js:attribute:: Map._reg
See the HAProxy configuration.txt file, chapter "Using ACLs and fetching
samples" ans subchapter "ACL basics" to understand this pattern matching
method.
+ Note that :js:attr:`Map.reg` is also available for compatibility.
+
.. js:function:: Map.new(file, method)
@@ -1939,7 +1967,10 @@
:param integer method: Is the map pattern matching method. See the attributes
of the Map class.
:returns: a class Map object.
- :see: The Map attributes.
+ :see: The Map attributes: :js:attr:`Map._int`, :js:attr:`Map._ip`,
+ :js:attr:`Map._str`, :js:attr:`Map._beg`, :js:attr:`Map._sub`,
+ :js:attr:`Map._dir`, :js:attr:`Map._dom`, :js:attr:`Map._end` and
+ :js:attr:`Map._reg`.
.. js:function:: Map.lookup(map, str)
@@ -2126,13 +2157,13 @@
.. js:function:: AppletHTTP.get_priv(applet)
- Return Lua data stored in the current transaction (with the
- `AppletHTTP.set_priv()`) function. If no data are stored, it returns a nil
- value.
+ Return Lua data stored in the current transaction. If no data are stored,
+ it returns a nil value.
:param class_AppletHTTP applet: An :ref:`applethttp_class`
:returns: the opaque data previsously stored, or nil if nothing is
avalaible.
+ :see: :js:func:`AppletHTTP.set_priv`
.. js:function:: AppletHTTP.set_priv(applet, data)
@@ -2141,6 +2172,7 @@
:param class_AppletHTTP applet: An :ref:`applethttp_class`
:param opaque data: The data which is stored in the transaction.
+ :see: :js:func:`AppletHTTP.get_priv`
.. _applettcp_class:
@@ -2207,13 +2239,13 @@
.. js:function:: AppletTCP.get_priv(applet)
- Return Lua data stored in the current transaction (with the
- `AppletTCP.set_priv()`) function. If no data are stored, it returns a nil
- value.
+ Return Lua data stored in the current transaction. If no data are stored,
+ it returns a nil value.
:param class_AppletTCP applet: An :ref:`applettcp_class`
:returns: the opaque data previsously stored, or nil if nothing is
avalaible.
+ :see: :js:func:`AppletTCP.set_priv`
.. js:function:: AppletTCP.set_priv(applet, data)
@@ -2222,6 +2254,7 @@
:param class_AppletTCP applet: An :ref:`applettcp_class`
:param opaque data: The data which is stored in the transaction.
+ :see: :js:func:`AppletTCP.get_priv`
External Lua libraries
======================
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/examples/haproxy.spec new/haproxy-1.7.3/examples/haproxy.spec
--- old/haproxy-1.7.2/examples/haproxy.spec 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/examples/haproxy.spec 2017-02-28 09:59:23.000000000 +0100
@@ -1,6 +1,6 @@
Summary: HA-Proxy is a TCP/HTTP reverse proxy for high availability environments
Name: haproxy
-Version: 1.7.2
+Version: 1.7.3
Release: 1
License: GPL
Group: System Environment/Daemons
@@ -74,6 +74,9 @@
%attr(0755,root,root) %config %{_sysconfdir}/rc.d/init.d/%{name}
%changelog
+* Tue Feb 28 2017 Willy Tarreau <w(a)1wt.eu>
+- updated to 1.7.3
+
* Fri Jan 13 2017 Willy Tarreau <w(a)1wt.eu>
- updated to 1.7.2
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/include/common/chunk.h new/haproxy-1.7.3/include/common/chunk.h
--- old/haproxy-1.7.2/include/common/chunk.h 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/include/common/chunk.h 2017-02-28 09:59:23.000000000 +0100
@@ -26,6 +26,7 @@
#include <string.h>
#include <common/config.h>
+#include <common/memory.h>
/* describes a chunk of string */
@@ -35,6 +36,8 @@
int len; /* current size of the string from first to last char. <0 = uninit. */
};
+struct pool_head *pool2_trash;
+
/* function prototypes */
int chunk_printf(struct chunk *chk, const char *fmt, ...)
@@ -50,6 +53,16 @@
int alloc_trash_buffers(int bufsize);
void free_trash_buffers(void);
struct chunk *get_trash_chunk(void);
+struct chunk *alloc_trash_chunk(void);
+
+/*
+ * free a trash chunk allocated by alloc_trash_chunk(). NOP on NULL.
+ */
+static inline void free_trash_chunk(struct chunk *chunk)
+{
+ pool_free2(pool2_trash, chunk);
+}
+
static inline void chunk_reset(struct chunk *chk)
{
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/include/proto/dns.h new/haproxy-1.7.3/include/proto/dns.h
--- old/haproxy-1.7.2/include/proto/dns.h 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/include/proto/dns.h 2017-02-28 09:59:23.000000000 +0100
@@ -30,7 +30,7 @@
int dns_hostname_validation(const char *string, char **err);
int dns_build_query(int query_id, int query_type, char *hostname_dn, int hostname_dn_len, char *buf, int bufsize);
struct task *dns_process_resolve(struct task *t);
-int dns_init_resolvers(void);
+int dns_init_resolvers(int close_socket);
uint16_t dns_rnd16(void);
int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, struct dns_response_packet *dns_p);
int dns_get_ip_from_response(struct dns_response_packet *dns_p,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/include/proto/openssl-compat.h new/haproxy-1.7.3/include/proto/openssl-compat.h
--- old/haproxy-1.7.2/include/proto/openssl-compat.h 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/include/proto/openssl-compat.h 2017-02-28 09:59:23.000000000 +0100
@@ -56,16 +56,7 @@
#if (OPENSSL_VERSION_NUMBER < 0x1000000fL)
-/*
- * Functions introduced in OpenSSL 1.0.1
- */
-static inline int SSL_SESSION_set1_id_context(SSL_SESSION *s, const unsigned char *sid_ctx, unsigned int sid_ctx_len)
-{
- s->sid_ctx_length = sid_ctx_len;
- memcpy(s->sid_ctx, sid_ctx, sid_ctx_len);
- return 1;
-}
-
+/* Functions introduced in OpenSSL 1.0.0 */
static inline int EVP_PKEY_base_id(const EVP_PKEY *pkey)
{
return EVP_PKEY_type(pkey->type);
@@ -86,6 +77,18 @@
#endif
+#if (OPENSSL_VERSION_NUMBER < 0x1000100fL)
+/*
+ * Functions introduced in OpenSSL 1.0.1
+ */
+static inline int SSL_SESSION_set1_id_context(SSL_SESSION *s, const unsigned char *sid_ctx, unsigned int sid_ctx_len)
+{
+ s->sid_ctx_length = sid_ctx_len;
+ memcpy(s->sid_ctx, sid_ctx, sid_ctx_len);
+ return 1;
+}
+#endif
+
#if (OPENSSL_VERSION_NUMBER < 0x1010000fL) || defined(LIBRESSL_VERSION_NUMBER)
/*
* Functions introduced in OpenSSL 1.1.0 and not yet present in LibreSSL
@@ -147,4 +150,25 @@
#define __OPENSSL_110_CONST__
#endif
+/* ERR_remove_state() was deprecated in 1.0.0 in favor of
+ * ERR_remove_thread_state(), which was in turn deprecated in
+ * 1.1.0 and does nothing anymore. Let's simply silently kill
+ * it.
+ */
+#if (OPENSSL_VERSION_NUMBER >= 0x1010000fL)
+#undef ERR_remove_state
+#define ERR_remove_state(x)
+#endif
+
+
+/* RAND_pseudo_bytes() is deprecated in 1.1.0 in favor of RAND_bytes(). Note
+ * that the return codes differ, but it happens that the only use case (ticket
+ * key update) was already wrong, considering a non-cryptographic random as a
+ * failure.
+ */
+#if (OPENSSL_VERSION_NUMBER >= 0x1010000fL)
+#undef RAND_pseudo_bytes
+#define RAND_pseudo_bytes(x,y) RAND_bytes(x,y)
+#endif
+
#endif /* _PROTO_OPENSSL_COMPAT_H */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/cfgparse.c new/haproxy-1.7.3/src/cfgparse.c
--- old/haproxy-1.7.2/src/cfgparse.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/cfgparse.c 2017-02-28 09:59:23.000000000 +0100
@@ -3984,6 +3984,12 @@
err_code |= warnif_cond_conflicts(cond, SMP_VAL_FE_SET_BCK, file, linenum);
}
+ else if (*args[2]) {
+ Alert("parsing [%s:%d] : unexpected keyword '%s' after switching rule, only 'if' and 'unless' are allowed.\n",
+ file, linenum, args[2]);
+ err_code |= ERR_ALERT | ERR_FATAL;
+ goto out;
+ }
rule = calloc(1, sizeof(*rule));
if (!rule) {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/checks.c new/haproxy-1.7.3/src/checks.c
--- old/haproxy-1.7.2/src/checks.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/checks.c 2017-02-28 09:59:23.000000000 +0100
@@ -3407,7 +3407,7 @@
len = vsnprintf(buf, sizeof(buf), format, argp);
va_end(argp);
- if (len < 0) {
+ if (len < 0 || len >= sizeof(buf)) {
Alert("Email alert [%s] could not format message\n", p->id);
return;
}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/chunk.c new/haproxy-1.7.3/src/chunk.c
--- old/haproxy-1.7.2/src/chunk.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/chunk.c 2017-02-28 09:59:23.000000000 +0100
@@ -29,6 +29,9 @@
static char *trash_buf1;
static char *trash_buf2;
+/* the trash pool for reentrant allocations */
+struct pool_head *pool2_trash = NULL;
+
/*
* Returns a pre-allocated and initialized trash chunk that can be used for any
* type of conversion. Two chunks and their respective buffers are alternatively
@@ -63,7 +66,8 @@
trash_size = bufsize;
trash_buf1 = (char *)my_realloc2(trash_buf1, bufsize);
trash_buf2 = (char *)my_realloc2(trash_buf2, bufsize);
- return trash_buf1 && trash_buf2;
+ pool2_trash = create_pool("trash", sizeof(struct chunk) + bufsize, MEM_F_EXACT);
+ return trash_buf1 && trash_buf2 && pool2_trash;
}
/*
@@ -78,6 +82,25 @@
}
/*
+ * Allocate a trash chunk from the reentrant pool. The buffer starts at the
+ * end of the chunk. This chunk must be freed using free_trash_chunk(). This
+ * call may fail and the caller is responsible for checking that the returned
+ * pointer is not NULL.
+ */
+struct chunk *alloc_trash_chunk(void)
+{
+ struct chunk *chunk;
+
+ chunk = pool_alloc2(pool2_trash);
+ if (chunk) {
+ char *buf = (char *)chunk + sizeof(struct chunk);
+ *buf = 0;
+ chunk_init(chunk, buf, pool2_trash->size - sizeof(struct chunk));
+ }
+ return chunk;
+}
+
+/*
* Does an snprintf() at the beginning of chunk <chk>, respecting the limit of
* at most chk->size chars. If the chk->len is over, nothing is added. Returns
* the new chunk size, or < 0 in case of failure.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/dns.c new/haproxy-1.7.3/src/dns.c
--- old/haproxy-1.7.2/src/dns.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/dns.c 2017-02-28 09:59:23.000000000 +0100
@@ -919,11 +919,13 @@
* parses resolvers sections and initializes:
* - task (time events) for each resolvers section
* - the datagram layer (network IO events) for each nameserver
+ * It takes one argument:
+ * - close_first takes 2 values: 0 or 1. If 1, the connection is closed first.
* returns:
* 0 in case of error
* 1 when no error
*/
-int dns_init_resolvers(void)
+int dns_init_resolvers(int close_socket)
{
struct dns_resolvers *curr_resolvers;
struct dns_nameserver *curnameserver;
@@ -961,7 +963,19 @@
curr_resolvers->t = t;
list_for_each_entry(curnameserver, &curr_resolvers->nameserver_list, list) {
- if ((dgram = calloc(1, sizeof(*dgram))) == NULL) {
+ dgram = NULL;
+
+ if (close_socket == 1) {
+ if (curnameserver->dgram) {
+ close(curnameserver->dgram->t.sock.fd);
+ memset(curnameserver->dgram, '\0', sizeof(*dgram));
+ dgram = curnameserver->dgram;
+ }
+ }
+
+ /* allocate memory only if it has not already been allocated
+ * by a previous call to this function */
+ if (!dgram && (dgram = calloc(1, sizeof(*dgram))) == NULL) {
Alert("Starting [%s/%s] nameserver: out of memory.\n", curr_resolvers->id,
curnameserver->id);
return 0;
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/haproxy.c new/haproxy-1.7.3/src/haproxy.c
--- old/haproxy-1.7.2/src/haproxy.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/haproxy.c 2017-02-28 09:59:23.000000000 +0100
@@ -1309,7 +1309,7 @@
exit(1);
/* initialize structures for name resolution */
- if (!dns_init_resolvers())
+ if (!dns_init_resolvers(0))
exit(1);
free(err_msg);
@@ -1685,6 +1685,7 @@
pool_destroy2(pool2_stream);
pool_destroy2(pool2_session);
pool_destroy2(pool2_connection);
+ pool_destroy2(pool2_trash);
pool_destroy2(pool2_buffer);
pool_destroy2(pool2_requri);
pool_destroy2(pool2_task);
@@ -2090,6 +2091,10 @@
fork_poller();
}
+ /* initialize structures for name resolution */
+ if (!dns_init_resolvers(1))
+ exit(1);
+
protocol_enable_all();
/*
* That's it : the central polling loop. Run until we stop.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/hlua.c new/haproxy-1.7.3/src/hlua.c
--- old/haproxy-1.7.2/src/hlua.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/hlua.c 2017-02-28 09:59:23.000000000 +0100
@@ -3551,22 +3551,24 @@
/* Get path and qs */
path = http_get_path(txn);
- end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
- p = path;
- while (p < end && *p != '?')
- p++;
-
- /* Stores the request path. */
- lua_pushstring(L, "path");
- lua_pushlstring(L, path, p - path);
- lua_settable(L, -3);
+ if (path) {
+ end = txn->req.chn->buf->p + txn->req.sl.rq.u + txn->req.sl.rq.u_l;
+ p = path;
+ while (p < end && *p != '?')
+ p++;
- /* Stores the query string. */
- lua_pushstring(L, "qs");
- if (*p == '?')
- p++;
- lua_pushlstring(L, p, end - p);
- lua_settable(L, -3);
+ /* Stores the request path. */
+ lua_pushstring(L, "path");
+ lua_pushlstring(L, path, p - path);
+ lua_settable(L, -3);
+
+ /* Stores the query string. */
+ lua_pushstring(L, "qs");
+ if (*p == '?')
+ p++;
+ lua_pushlstring(L, p, end - p);
+ lua_settable(L, -3);
+ }
/* Stores the request path. */
lua_pushstring(L, "length");
@@ -7008,6 +7010,10 @@
/* register pattern types. */
for (i=0; i<PAT_MATCH_NUM; i++)
hlua_class_const_int(gL.T, pat_match_names[i], i);
+ for (i=0; i<PAT_MATCH_NUM; i++) {
+ snprintf(trash.str, trash.size, "_%s", pat_match_names[i]);
+ hlua_class_const_int(gL.T, trash.str, i);
+ }
/* register constructor. */
hlua_class_function(gL.T, "new", hlua_map_new);
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/proto_http.c new/haproxy-1.7.3/src/proto_http.c
--- old/haproxy-1.7.2/src/proto_http.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/proto_http.c 2017-02-28 09:59:23.000000000 +0100
@@ -3419,13 +3419,22 @@
struct list *fmt, struct my_regex *re,
int action)
{
- struct chunk *replace = get_trash_chunk();
+ struct chunk *replace;
+ int ret = -1;
+
+ replace = alloc_trash_chunk();
+ if (!replace)
+ goto leave;
replace->len = build_logline(s, replace->str, replace->size, fmt);
if (replace->len >= replace->size - 1)
- return -1;
+ goto leave;
+
+ ret = http_transform_header_str(s, msg, name, name_len, replace->str, re, action);
- return http_transform_header_str(s, msg, name, name_len, replace->str, re, action);
+ leave:
+ free_trash_chunk(replace);
+ return ret;
}
/* Executes the http-request rules <rules> for stream <s>, proxy <px> and
@@ -3814,7 +3823,7 @@
rule->arg.hdr_add.name_len,
&rule->arg.hdr_add.fmt,
&rule->arg.hdr_add.re, rule->action))
- return HTTP_RULE_RES_STOP; /* note: we should report an error here */
+ return HTTP_RULE_RES_BADREQ;
break;
case ACT_HTTP_DEL_HDR:
@@ -4023,7 +4032,12 @@
struct http_msg *req = &txn->req;
struct http_msg *res = &txn->rsp;
const char *msg_fmt;
- const char *location;
+ struct chunk *chunk;
+ int ret = 0;
+
+ chunk = alloc_trash_chunk();
+ if (!chunk)
+ goto leave;
/* build redirect message */
switch(rule->code) {
@@ -4045,10 +4059,8 @@
break;
}
- if (unlikely(!chunk_strcpy(&trash, msg_fmt)))
- return 0;
-
- location = trash.str + trash.len;
+ if (unlikely(!chunk_strcpy(chunk, msg_fmt)))
+ goto leave;
switch(rule->type) {
case REDIRECT_TYPE_SCHEME: {
@@ -4087,40 +4099,40 @@
if (rule->rdr_str) { /* this is an old "redirect" rule */
/* check if we can add scheme + "://" + host + path */
- if (trash.len + rule->rdr_len + 3 + hostlen + pathlen > trash.size - 4)
- return 0;
+ if (chunk->len + rule->rdr_len + 3 + hostlen + pathlen > chunk->size - 4)
+ goto leave;
/* add scheme */
- memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
- trash.len += rule->rdr_len;
+ memcpy(chunk->str + chunk->len, rule->rdr_str, rule->rdr_len);
+ chunk->len += rule->rdr_len;
}
else {
/* add scheme with executing log format */
- trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+ chunk->len += build_logline(s, chunk->str + chunk->len, chunk->size - chunk->len, &rule->rdr_fmt);
/* check if we can add scheme + "://" + host + path */
- if (trash.len + 3 + hostlen + pathlen > trash.size - 4)
- return 0;
+ if (chunk->len + 3 + hostlen + pathlen > chunk->size - 4)
+ goto leave;
}
/* add "://" */
- memcpy(trash.str + trash.len, "://", 3);
- trash.len += 3;
+ memcpy(chunk->str + chunk->len, "://", 3);
+ chunk->len += 3;
/* add host */
- memcpy(trash.str + trash.len, host, hostlen);
- trash.len += hostlen;
+ memcpy(chunk->str + chunk->len, host, hostlen);
+ chunk->len += hostlen;
/* add path */
- memcpy(trash.str + trash.len, path, pathlen);
- trash.len += pathlen;
+ memcpy(chunk->str + chunk->len, path, pathlen);
+ chunk->len += pathlen;
/* append a slash at the end of the location if needed and missing */
- if (trash.len && trash.str[trash.len - 1] != '/' &&
+ if (chunk->len && chunk->str[chunk->len - 1] != '/' &&
(rule->flags & REDIRECT_FLAG_APPEND_SLASH)) {
- if (trash.len > trash.size - 5)
- return 0;
- trash.str[trash.len] = '/';
- trash.len++;
+ if (chunk->len > chunk->size - 5)
+ goto leave;
+ chunk->str[chunk->len] = '/';
+ chunk->len++;
}
break;
@@ -4149,38 +4161,38 @@
}
if (rule->rdr_str) { /* this is an old "redirect" rule */
- if (trash.len + rule->rdr_len + pathlen > trash.size - 4)
- return 0;
+ if (chunk->len + rule->rdr_len + pathlen > chunk->size - 4)
+ goto leave;
/* add prefix. Note that if prefix == "/", we don't want to
* add anything, otherwise it makes it hard for the user to
* configure a self-redirection.
*/
if (rule->rdr_len != 1 || *rule->rdr_str != '/') {
- memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
- trash.len += rule->rdr_len;
+ memcpy(chunk->str + chunk->len, rule->rdr_str, rule->rdr_len);
+ chunk->len += rule->rdr_len;
}
}
else {
/* add prefix with executing log format */
- trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+ chunk->len += build_logline(s, chunk->str + chunk->len, chunk->size - chunk->len, &rule->rdr_fmt);
/* Check length */
- if (trash.len + pathlen > trash.size - 4)
- return 0;
+ if (chunk->len + pathlen > chunk->size - 4)
+ goto leave;
}
/* add path */
- memcpy(trash.str + trash.len, path, pathlen);
- trash.len += pathlen;
+ memcpy(chunk->str + chunk->len, path, pathlen);
+ chunk->len += pathlen;
/* append a slash at the end of the location if needed and missing */
- if (trash.len && trash.str[trash.len - 1] != '/' &&
+ if (chunk->len && chunk->str[chunk->len - 1] != '/' &&
(rule->flags & REDIRECT_FLAG_APPEND_SLASH)) {
- if (trash.len > trash.size - 5)
- return 0;
- trash.str[trash.len] = '/';
- trash.len++;
+ if (chunk->len > chunk->size - 5)
+ goto leave;
+ chunk->str[chunk->len] = '/';
+ chunk->len++;
}
break;
@@ -4188,59 +4200,54 @@
case REDIRECT_TYPE_LOCATION:
default:
if (rule->rdr_str) { /* this is an old "redirect" rule */
- if (trash.len + rule->rdr_len > trash.size - 4)
- return 0;
+ if (chunk->len + rule->rdr_len > chunk->size - 4)
+ goto leave;
/* add location */
- memcpy(trash.str + trash.len, rule->rdr_str, rule->rdr_len);
- trash.len += rule->rdr_len;
+ memcpy(chunk->str + chunk->len, rule->rdr_str, rule->rdr_len);
+ chunk->len += rule->rdr_len;
}
else {
/* add location with executing log format */
- trash.len += build_logline(s, trash.str + trash.len, trash.size - trash.len, &rule->rdr_fmt);
+ chunk->len += build_logline(s, chunk->str + chunk->len, chunk->size - chunk->len, &rule->rdr_fmt);
/* Check left length */
- if (trash.len > trash.size - 4)
- return 0;
+ if (chunk->len > chunk->size - 4)
+ goto leave;
}
break;
}
if (rule->cookie_len) {
- memcpy(trash.str + trash.len, "\r\nSet-Cookie: ", 14);
- trash.len += 14;
- memcpy(trash.str + trash.len, rule->cookie_str, rule->cookie_len);
- trash.len += rule->cookie_len;
+ memcpy(chunk->str + chunk->len, "\r\nSet-Cookie: ", 14);
+ chunk->len += 14;
+ memcpy(chunk->str + chunk->len, rule->cookie_str, rule->cookie_len);
+ chunk->len += rule->cookie_len;
}
- /* add end of headers and the keep-alive/close status.
- * We may choose to set keep-alive if the Location begins
- * with a slash, because the client will come back to the
- * same server.
- */
+ /* add end of headers and the keep-alive/close status. */
txn->status = rule->code;
/* let's log the request time */
s->logs.tv_request = now;
- if (*location == '/' &&
- (req->flags & HTTP_MSGF_XFER_LEN) &&
+ if ((req->flags & HTTP_MSGF_XFER_LEN) &&
((!(req->flags & HTTP_MSGF_TE_CHNK) && !req->body_len) || (req->msg_state == HTTP_MSG_DONE)) &&
((txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL ||
(txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL)) {
/* keep-alive possible */
if (!(req->flags & HTTP_MSGF_VER_11)) {
if (unlikely(txn->flags & TX_USE_PX_CONN)) {
- memcpy(trash.str + trash.len, "\r\nProxy-Connection: keep-alive", 30);
- trash.len += 30;
+ memcpy(chunk->str + chunk->len, "\r\nProxy-Connection: keep-alive", 30);
+ chunk->len += 30;
} else {
- memcpy(trash.str + trash.len, "\r\nConnection: keep-alive", 24);
- trash.len += 24;
+ memcpy(chunk->str + chunk->len, "\r\nConnection: keep-alive", 24);
+ chunk->len += 24;
}
}
- memcpy(trash.str + trash.len, "\r\n\r\n", 4);
- trash.len += 4;
- FLT_STRM_CB(s, flt_http_reply(s, txn->status, &trash));
- bo_inject(res->chn, trash.str, trash.len);
+ memcpy(chunk->str + chunk->len, "\r\n\r\n", 4);
+ chunk->len += 4;
+ FLT_STRM_CB(s, flt_http_reply(s, txn->status, chunk));
+ bo_inject(res->chn, chunk->str, chunk->len);
/* "eat" the request */
bi_fast_delete(req->chn->buf, req->sov);
req->next -= req->sov;
@@ -4255,13 +4262,13 @@
} else {
/* keep-alive not possible */
if (unlikely(txn->flags & TX_USE_PX_CONN)) {
- memcpy(trash.str + trash.len, "\r\nProxy-Connection: close\r\n\r\n", 29);
- trash.len += 29;
+ memcpy(chunk->str + chunk->len, "\r\nProxy-Connection: close\r\n\r\n", 29);
+ chunk->len += 29;
} else {
- memcpy(trash.str + trash.len, "\r\nConnection: close\r\n\r\n", 23);
- trash.len += 23;
+ memcpy(chunk->str + chunk->len, "\r\nConnection: close\r\n\r\n", 23);
+ chunk->len += 23;
}
- http_reply_and_close(s, txn->status, &trash);
+ http_reply_and_close(s, txn->status, chunk);
req->chn->analysers &= AN_REQ_FLT_END;
}
@@ -4270,7 +4277,10 @@
if (!(s->flags & SF_FINST_MASK))
s->flags |= SF_FINST_R;
- return 1;
+ ret = 1;
+ leave:
+ free_trash_chunk(chunk);
+ return ret;
}
/* This stream analyser runs all HTTP request processing which is common to
@@ -6820,7 +6830,7 @@
}
skip_header_mangling:
- if ((msg->flags & HTTP_MSGF_XFER_LEN) || HAS_FILTERS(s) ||
+ if ((msg->flags & HTTP_MSGF_XFER_LEN) || HAS_DATA_FILTERS(s, rep) ||
(txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_TUN) {
rep->analysers &= ~AN_RES_FLT_XFER_DATA;
rep->analysers |= AN_RES_HTTP_XFER_BODY;
@@ -6971,8 +6981,8 @@
* keep-alive is set on the client side or if there are filters
* registered on the stream, we don't want to forward a close
*/
- if ((msg->flags & HTTP_MSGF_TE_CHNK) || !msg->body_len ||
- HAS_FILTERS(s) ||
+ if ((msg->flags & HTTP_MSGF_TE_CHNK) || !(msg->flags & HTTP_MSGF_XFER_LEN) ||
+ HAS_DATA_FILTERS(s, res) ||
(txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_KAL ||
(txn->flags & TX_CON_WANT_MSK) == TX_CON_WANT_SCL)
channel_dont_close(res);
@@ -7064,11 +7074,10 @@
goto missing_data_or_waiting;
}
- if (!(msg->flags & HTTP_MSGF_XFER_LEN) && !(chn->flags & CF_SHUTR) &&
- HAS_DATA_FILTERS(s, chn)) {
- /* The server still sending data that should be filtered */
+ /* The server still sending data that should be filtered */
+ if (!(msg->flags & HTTP_MSGF_XFER_LEN) && !(chn->flags & CF_SHUTR))
goto missing_data_or_waiting;
- }
+
msg->msg_state = HTTP_MSG_ENDING;
ending:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/proto_tcp.c new/haproxy-1.7.3/src/proto_tcp.c
--- old/haproxy-1.7.2/src/proto_tcp.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/proto_tcp.c 2017-02-28 09:59:23.000000000 +0100
@@ -474,10 +474,16 @@
if (global.tune.server_rcvbuf)
setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &global.tune.server_rcvbuf, sizeof(global.tune.server_rcvbuf));
- if ((connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) == -1) &&
- (errno != EINPROGRESS) && (errno != EALREADY) && (errno != EISCONN)) {
-
- if (errno == EAGAIN || errno == EADDRINUSE || errno == EADDRNOTAVAIL) {
+ if (connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) == -1) {
+ if (errno == EINPROGRESS || errno == EALREADY) {
+ /* common case, let's wait for connect status */
+ conn->flags |= CO_FL_WAIT_L4_CONN;
+ }
+ else if (errno == EISCONN) {
+ /* should normally not happen but if so, indicates that it's OK */
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ }
+ else if (errno == EAGAIN || errno == EADDRINUSE || errno == EADDRNOTAVAIL) {
char *msg;
if (errno == EAGAIN || errno == EADDRNOTAVAIL) {
msg = "no free ports";
@@ -514,6 +520,10 @@
return SF_ERR_SRVCL;
}
}
+ else {
+ /* connect() == 0, this is great! */
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ }
conn->flags |= CO_FL_ADDR_TO_SET;
@@ -523,7 +533,6 @@
conn_ctrl_init(conn); /* registers the FD */
fdtab[fd].linger_risk = 1; /* close hard if needed */
- conn_sock_want_send(conn); /* for connect status */
if (conn_xprt_init(conn) < 0) {
conn_force_close(conn);
@@ -531,6 +540,17 @@
return SF_ERR_RESOURCE;
}
+ if (conn->flags & (CO_FL_HANDSHAKE | CO_FL_WAIT_L4_CONN)) {
+ conn_sock_want_send(conn); /* for connect status, proxy protocol or SSL */
+ }
+ else {
+ /* If there's no more handshake, we need to notify the data
+ * layer when the connection is already OK otherwise we'll have
+ * no other opportunity to do it later (eg: health checks).
+ */
+ data = 1;
+ }
+
if (data)
conn_data_want_send(conn); /* prepare to send data if any */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/proto_uxst.c new/haproxy-1.7.3/src/proto_uxst.c
--- old/haproxy-1.7.2/src/proto_uxst.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/proto_uxst.c 2017-02-28 09:59:23.000000000 +0100
@@ -495,12 +495,12 @@
setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &global.tune.server_rcvbuf, sizeof(global.tune.server_rcvbuf));
if (connect(fd, (struct sockaddr *)&conn->addr.to, get_addr_len(&conn->addr.to)) == -1) {
- if (errno == EALREADY || errno == EISCONN) {
- conn->flags &= ~CO_FL_WAIT_L4_CONN;
- }
- else if (errno == EINPROGRESS) {
+ if (errno == EINPROGRESS || errno == EALREADY) {
conn->flags |= CO_FL_WAIT_L4_CONN;
}
+ else if (errno == EISCONN) {
+ conn->flags &= ~CO_FL_WAIT_L4_CONN;
+ }
else if (errno == EAGAIN || errno == EADDRINUSE || errno == EADDRNOTAVAIL) {
char *msg;
if (errno == EAGAIN || errno == EADDRNOTAVAIL) {
@@ -533,13 +533,9 @@
}
else {
/* connect() already succeeded, which is quite usual for unix
- * sockets. Let's avoid a second connect() probe to complete it,
- * but we need to ensure we'll wake up if there's no more handshake
- * pending (eg: for health checks).
+ * sockets. Let's avoid a second connect() probe to complete it.
*/
conn->flags &= ~CO_FL_WAIT_L4_CONN;
- if (!(conn->flags & CO_FL_HANDSHAKE))
- data = 1;
}
conn->flags |= CO_FL_ADDR_TO_SET;
@@ -550,8 +546,6 @@
conn_ctrl_init(conn); /* registers the FD */
fdtab[fd].linger_risk = 0; /* no need to disable lingering */
- if (conn->flags & CO_FL_HANDSHAKE)
- conn_sock_want_send(conn); /* for connect status or proxy protocol */
if (conn_xprt_init(conn) < 0) {
conn_force_close(conn);
@@ -559,6 +553,17 @@
return SF_ERR_RESOURCE;
}
+ if (conn->flags & (CO_FL_HANDSHAKE | CO_FL_WAIT_L4_CONN)) {
+ conn_sock_want_send(conn); /* for connect status, proxy protocol or SSL */
+ }
+ else {
+ /* If there's no more handshake, we need to notify the data
+ * layer when the connection is already OK otherwise we'll have
+ * no other opportunity to do it later (eg: health checks).
+ */
+ data = 1;
+ }
+
if (data)
conn_data_want_send(conn); /* prepare to send data if any */
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/haproxy-1.7.2/src/proxy.c new/haproxy-1.7.3/src/proxy.c
--- old/haproxy-1.7.2/src/proxy.c 2017-01-13 10:03:00.000000000 +0100
+++ new/haproxy-1.7.3/src/proxy.c 2017-02-28 09:59:23.000000000 +0100
@@ -1156,7 +1156,7 @@
* be more reliable to store the list of analysers that have been run,
* but what we do here is OK for now.
*/
- s->req.analysers |= be->be_req_ana & (strm_li(s) ? ~strm_li(s)->analysers : 0);
+ s->req.analysers |= be->be_req_ana & ~(strm_li(s) ? strm_li(s)->analysers : 0);
/* If the target backend requires HTTP processing, we have to allocate
* the HTTP transaction and hdr_idx if we did not have one.
Hello community,
here is the log from the commit of package kstars for openSUSE:Factory checked in at 2017-03-02 19:38:29
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kstars (Old)
and /work/SRC/openSUSE:Factory/.kstars.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "kstars"
Thu Mar 2 19:38:29 2017 rev:82 rq:460854 version:16.12.2
Changes:
--------
--- /work/SRC/openSUSE:Factory/kstars/kstars.changes 2017-02-16 16:58:39.070174735 +0100
+++ /work/SRC/openSUSE:Factory/.kstars.new/kstars.changes 2017-03-02 19:38:30.297707310 +0100
@@ -1,0 +2,5 @@
+Sat Feb 25 20:26:28 UTC 2017 - asterios.dramis(a)gmail.com
+
+- Enable wcslib-devel build requirement also for Leap >= 42.3.
+
+-------------------------------------------------------------------
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ kstars.spec ++++++
--- /var/tmp/diff_new_pack.MbWPGx/_old 2017-03-02 19:38:31.501536960 +0100
+++ /var/tmp/diff_new_pack.MbWPGx/_new 2017-03-02 19:38:31.501536960 +0100
@@ -46,7 +46,7 @@
BuildRequires: libraw-devel
BuildRequires: pkgconfig
BuildRequires: update-desktop-files
-%if 0%{?suse_version} > 1320
+%if 0%{?suse_version} > 1320 || (0%{?is_opensuse} && 0%{?sle_version} >= 120300)
BuildRequires: wcslib-devel
%endif
BuildRequires: xplanet
Hello community,
here is the log from the commit of package hawk2 for openSUSE:Factory checked in at 2017-03-02 19:38:24
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/hawk2 (Old)
and /work/SRC/openSUSE:Factory/.hawk2.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "hawk2"
Thu Mar 2 19:38:24 2017 rev:35 rq:460846 version:2.1.0+git.1488276154.57dd3268
Changes:
--------
--- /work/SRC/openSUSE:Factory/hawk2/hawk2.changes 2017-02-20 13:14:15.795474593 +0100
+++ /work/SRC/openSUSE:Factory/.hawk2.new/hawk2.changes 2017-03-02 19:38:25.786345698 +0100
@@ -1,0 +2,16 @@
+Tue Feb 28 10:14:40 UTC 2017 - kgronlund(a)suse.com
+
+- Update to version 2.1.0+git.1488276154.57dd3268:
+ * Add fencing topology support (fate#321133)
+ * Add documentation for fencing topology (fate#321133)
+ * UI: Revise copy for login screen
+ * UI: Anchor backlinks to correct tab
+ * UI: Fix footer position issues
+ * UI: Fix missing alerts attribute controls
+ * Check and warn re: DRBD disk/connection status (fate#322043)
+ * UI: Show recent event = running (master) as info, not danger
+ * Increase size of eventcontrol markers (bsc#1001357)
+ * Show transition node as DC for clarity (bsc#1010843)
+ * Reports: Display times as UTC as consistently as possible (bsc#1010831)
+
+-------------------------------------------------------------------
@@ -542 +558 @@
-- Update to version 1.0.1+git.1446048276.98c4de1:
+- Update to version 1.0.1+git.1446048276.98c4de1 (bsc#952441):
@@ -1106 +1122 @@
- - Integrated puma as a lighttpd replacement
+ - Integrated puma as a lighttpd replacement (fate#317078)
Old:
----
hawk2-2.0.0+git.1480940121.2c59e4e.tar.bz2
New:
----
hawk2-2.1.0+git.1488276154.57dd3268.tar.bz2
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ hawk2.spec ++++++
--- /var/tmp/diff_new_pack.b4L3I0/_old 2017-03-02 19:38:26.618227981 +0100
+++ /var/tmp/diff_new_pack.b4L3I0/_new 2017-03-02 19:38:26.618227981 +0100
@@ -33,13 +33,13 @@
%define gname haclient
%define uname hacluster
-%define version_unconverted 2.0.0+git.1480940121.2c59e4e
+%define version_unconverted 2.1.0+git.1488276154.57dd3268
Name: hawk2
Summary: HA Web Konsole
License: GPL-2.0
Group: %{pkg_group}
-Version: 2.0.0+git.1480940121.2c59e4e
+Version: 2.1.0+git.1488276154.57dd3268
Release: 0
Url: http://www.clusterlabs.org/wiki/Hawk
Source: %{name}-%{version}.tar.bz2
++++++ _service ++++++
--- /var/tmp/diff_new_pack.b4L3I0/_old 2017-03-02 19:38:26.658222321 +0100
+++ /var/tmp/diff_new_pack.b4L3I0/_new 2017-03-02 19:38:26.662221755 +0100
@@ -4,8 +4,8 @@
<param name="scm">git</param>
<param name="exclude">.git</param>
<param name="filename">hawk2</param>
- <param name="versionformat">2.0.0+git.%ct.%h</param>
- <param name="revision">hawk-2</param>
+ <param name="versionformat">2.1.0+git.%ct.%h</param>
+ <param name="revision">sle-12-sp3</param>
<param name="changesgenerate">enable</param>
</service>
++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.b4L3I0/_old 2017-03-02 19:38:26.686218360 +0100
+++ /var/tmp/diff_new_pack.b4L3I0/_new 2017-03-02 19:38:26.686218360 +0100
@@ -1,4 +1,4 @@
<servicedata>
<service name="tar_scm">
<param name="url">git://github.com/ClusterLabs/hawk.git</param>
- <param name="changesrevision">2c59e4ee0cf8d0818d3ddea62c9c93d969e4fbc8</param></service></servicedata>
\ No newline at end of file
+ <param name="changesrevision">57dd32683adca3932060cd82c383bd67c414fd32</param></service></servicedata>
\ No newline at end of file
++++++ hawk2-2.0.0+git.1480940121.2c59e4e.tar.bz2 -> hawk2-2.1.0+git.1488276154.57dd3268.tar.bz2 ++++++
/work/SRC/openSUSE:Factory/hawk2/hawk2-2.0.0+git.1480940121.2c59e4e.tar.bz2 /work/SRC/openSUSE:Factory/.hawk2.new/hawk2-2.1.0+git.1488276154.57dd3268.tar.bz2 differ: char 11, line 1
Hello community,
here is the log from the commit of package sysdig for openSUSE:Factory checked in at 2017-03-02 19:38:18
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/sysdig (Old)
and /work/SRC/openSUSE:Factory/.sysdig.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "sysdig"
Thu Mar 2 19:38:18 2017 rev:12 rq:460843 version:0.15.0
Changes:
--------
--- /work/SRC/openSUSE:Factory/sysdig/sysdig.changes 2017-01-25 23:36:30.379665582 +0100
+++ /work/SRC/openSUSE:Factory/.sysdig.new/sysdig.changes 2017-03-02 19:38:19.595221784 +0100
@@ -1,0 +2,15 @@
+Tue Feb 28 07:48:27 UTC 2017 - joop.boonen(a)opensuse.org
+
+- Update to version 0.15.0
+ * New Features
+ + Support for Linux Kernel 4.10
+ + Use /proc/<pid>/status instead of custom ioctl to get process vpid for kernels >= 4.1
+ * Bug fixes
+ + Various fixes on Kubernetes ingestion
+ + Fix some happening deadlocks in the driver when ioctl were exiting with error
+ + Fix mkdir and rmdir events, they were skipped in case of page faults
+ + Bugfix on topports_server chisel
+ + Avoid some cases of infinite loop when evaluating filters like proc.aname
+ * Fixed sysdig-no_return_random.patch https://github.com/draios/sysdig/issues/734
+
+-------------------------------------------------------------------
Old:
----
sysdig-0.14.0.tar.gz
sysdig-no_return_random.patch
New:
----
sysdig-0.15.0.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ sysdig.spec ++++++
--- /var/tmp/diff_new_pack.oSXj4p/_old 2017-03-02 19:38:21.618935415 +0100
+++ /var/tmp/diff_new_pack.oSXj4p/_new 2017-03-02 19:38:21.622934849 +0100
@@ -17,14 +17,13 @@
Name: sysdig
-Version: 0.14.0
+Version: 0.15.0
Release: 0
Summary: System-level exploration
License: GPL-2.0
Group: System/Monitoring
Url: http://www.sysdig.org/
Source0: https://github.com/draios/%{name}/archive/%{version}/sysdig-%{version}.tar.…
-Patch0: sysdig-no_return_random.patch
BuildRequires: %{kernel_module_package_buildreqs}
BuildRequires: cmake
BuildRequires: fdupes
@@ -50,7 +49,6 @@
%prep
%setup -q
-%patch0
%build
export SYSDIG_CHISEL_DIR=%{_datadir}%{name}/chisels
++++++ sysdig-0.14.0.tar.gz -> sysdig-0.15.0.tar.gz ++++++
++++ 7988 lines of diff (skipped)