
commit youtube-dl for openSUSE:Factory
Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory
checked in at 2019-04-02 09:23:52
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
and /work/SRC/openSUSE:Factory/.youtube-dl.new.25356 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Tue Apr 2 09:23:52 2019 rev:100 rq:690471 version:2019.04.01

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes 2019-03-18 10:43:53.319111835 +0100
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.25356/python-youtube-dl.changes 2019-04-02 09:23:59.656777897 +0200
@@ -1,0 +2,18 @@
+Mon Apr 1 19:23:11 UTC 2019 - Sebastien CHAVAUX <seb95passionlinux@xxxxxxxxxxxx>
+
+- Update to new upstream release 2019.04.01
+ * [utils] Improve int_or_none and float_or_none (#20403)
+ * Check for valid --min-sleep-interval when --max-sleep-interval is specified
+ (#20435)
+ * [weibo] Extend URL regular expression (#20496)
+ * [xhamster] Add support for xhamster.one (#20508)
+ * [mediasite] Add support for catalogs (#20507)
+ * [teamtreehouse] Add support for teamtreehouse.com (#9836)
+ * [ina] Add support for audio URLs
+ * [ina] Improve extraction
+ * [cwtv] Fix episode number extraction (#20461)
+ * [npo] Improve DRM detection
+ * [pornhub] Add support for DASH formats (#20403)
+ * [svtplay] Update API endpoint (#20430)
+
+-------------------------------------------------------------------
youtube-dl.changes: same change

Old:
----
youtube-dl-2019.03.18.tar.gz
youtube-dl-2019.03.18.tar.gz.sig

New:
----
youtube-dl-2019.04.01.tar.gz
youtube-dl-2019.04.01.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.BnMpi3/_old 2019-04-02 09:24:00.976778781 +0200
+++ /var/tmp/diff_new_pack.BnMpi3/_new 2019-04-02 09:24:00.980778784 +0200
@@ -19,7 +19,7 @@
%define modname youtube-dl
%{?!python_module:%define python_module() python-%{**} python3-%{**}}
Name: python-youtube-dl
-Version: 2019.03.18
+Version: 2019.04.01
Release: 0
Summary: A python module for downloading from video sites for offline watching
License: SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.BnMpi3/_old 2019-04-02 09:24:01.020778811 +0200
+++ /var/tmp/diff_new_pack.BnMpi3/_new 2019-04-02 09:24:01.024778814 +0200
@@ -17,7 +17,7 @@


Name: youtube-dl
-Version: 2019.03.18
+Version: 2019.04.01
Release: 0
Summary: A tool for downloading from video sites for offline watching
License: SUSE-Public-Domain AND CC-BY-SA-3.0

++++++ youtube-dl-2019.03.18.tar.gz -> youtube-dl-2019.04.01.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog 2019-03-17 19:36:38.000000000 +0100
+++ new/youtube-dl/ChangeLog 2019-04-01 18:55:14.000000000 +0200
@@ -1,3 +1,23 @@
+version 2019.04.01
+
+Core
+* [utils] Improve int_or_none and float_or_none (#20403)
+* Check for valid --min-sleep-interval when --max-sleep-interval is specified
+ (#20435)
+
+Extractors
++ [weibo] Extend URL regular expression (#20496)
++ [xhamster] Add support for xhamster.one (#20508)
++ [mediasite] Add support for catalogs (#20507)
++ [teamtreehouse] Add support for teamtreehouse.com (#9836)
++ [ina] Add support for audio URLs
+* [ina] Improve extraction
+* [cwtv] Fix episode number extraction (#20461)
+* [npo] Improve DRM detection
++ [pornhub] Add support for DASH formats (#20403)
+* [svtplay] Update API endpoint (#20430)
+
+
version 2019.03.18

Core
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md 2019-03-17 19:36:41.000000000 +0100
+++ new/youtube-dl/docs/supportedsites.md 2019-04-01 18:55:17.000000000 +0200
@@ -488,6 +488,7 @@
- **Medialaan**
- **Mediaset**
- **Mediasite**
+ - **MediasiteCatalog**
- **Medici**
- **megaphone.fm**: megaphone.fm embedded players
- **Meipai**: 美拍
@@ -869,6 +870,7 @@
- **teachertube:user:collection**: teachertube.com user and collection videos
- **TeachingChannel**
- **Teamcoco**
+ - **TeamTreeHouse**
- **TechTalks**
- **techtv.mit.edu**
- **ted**
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/test/test_utils.py new/youtube-dl/test/test_utils.py
--- old/youtube-dl/test/test_utils.py 2019-03-08 21:03:06.000000000 +0100
+++ new/youtube-dl/test/test_utils.py 2019-03-17 20:02:05.000000000 +0100
@@ -33,11 +33,13 @@
ExtractorError,
find_xpath_attr,
fix_xml_ampersands,
+ float_or_none,
get_element_by_class,
get_element_by_attribute,
get_elements_by_class,
get_elements_by_attribute,
InAdvancePagedList,
+ int_or_none,
intlist_to_bytes,
is_html,
js_to_json,
@@ -468,6 +470,21 @@
shell_quote(args),
"""ffmpeg -i 'ñ€ß'"'"'.mp4'""" if compat_os_name != 'nt' else
'''ffmpeg -i "ñ€ß'.mp4"''')

+ def test_float_or_none(self):
+ self.assertEqual(float_or_none('42.42'), 42.42)
+ self.assertEqual(float_or_none('42'), 42.0)
+ self.assertEqual(float_or_none(''), None)
+ self.assertEqual(float_or_none(None), None)
+ self.assertEqual(float_or_none([]), None)
+ self.assertEqual(float_or_none(set()), None)
+
+ def test_int_or_none(self):
+ self.assertEqual(int_or_none('42'), 42)
+ self.assertEqual(int_or_none(''), None)
+ self.assertEqual(int_or_none(None), None)
+ self.assertEqual(int_or_none([]), None)
+ self.assertEqual(int_or_none(set()), None)
+
def test_str_to_int(self):
self.assertEqual(str_to_int('123,456'), 123456)
self.assertEqual(str_to_int('123.456'), 123456)
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/__init__.py new/youtube-dl/youtube_dl/__init__.py
--- old/youtube-dl/youtube_dl/__init__.py 2019-03-08 21:03:06.000000000 +0100
+++ new/youtube-dl/youtube_dl/__init__.py 2019-03-17 20:02:05.000000000 +0100
@@ -166,6 +166,8 @@
if opts.max_sleep_interval is not None:
if opts.max_sleep_interval < 0:
parser.error('max sleep interval must be positive or 0')
+ if opts.sleep_interval is None:
+ parser.error('min sleep interval must be specified, use --min-sleep-interval')
if opts.max_sleep_interval < opts.sleep_interval:
parser.error('max sleep interval must be greater than or equal to min sleep interval')
else:
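
The added check closes a gap where --max-sleep-interval could be passed without --min-sleep-interval, so the 'max < min' comparison ran against None (a silent pass on Python 2, a TypeError on Python 3). A condensed standalone sketch of the resulting validation; the error callback stands in for optparse's parser.error, which exits rather than returns, hence the elif here:

  def validate_sleep_intervals(sleep_interval, max_sleep_interval, error):
      # Mirrors youtube_dl/__init__.py after this change.
      if max_sleep_interval is not None:
          if max_sleep_interval < 0:
              error('max sleep interval must be positive or 0')
          # New: --max-sleep-interval alone is now rejected up front.
          if sleep_interval is None:
              error('min sleep interval must be specified, use --min-sleep-interval')
          elif max_sleep_interval < sleep_interval:
              error('max sleep interval must be greater than or equal to min sleep interval')

  # --max-sleep-interval 10 with no --min-sleep-interval:
  validate_sleep_intervals(None, 10, lambda msg: print('error: %s' % msg))
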
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/cwtv.py new/youtube-dl/youtube_dl/extractor/cwtv.py
--- old/youtube-dl/youtube_dl/extractor/cwtv.py 2019-03-08 21:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/cwtv.py 2019-03-17 20:02:05.000000000 +0100
@@ -79,7 +79,7 @@
season = str_or_none(video_data.get('season'))
episode = str_or_none(video_data.get('episode'))
if episode and season:
- episode = episode.lstrip(season)
+ episode = episode[len(season):]

return {
'_type': 'url_transparent',
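
The cwtv fix above replaces str.lstrip() with slicing because lstrip() treats its argument as a set of characters to strip, not a prefix. A minimal illustration of the difference:

  season, episode = '20', '2005'
  # Old code: lstrip('20') strips every leading '2' and '0', eating into
  # the episode digits and leaving '5' instead of '05'.
  print(episode.lstrip(season))   # -> '5'
  # New code: slice off exactly len(season) characters.
  print(episode[len(season):])    # -> '05'
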
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py 2019-03-08 21:03:06.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/extractors.py 2019-03-17 20:02:05.000000000 +0100
@@ -632,7 +632,10 @@
from .matchtv import MatchTVIE
from .mdr import MDRIE
from .mediaset import MediasetIE
-from .mediasite import MediasiteIE
+from .mediasite import (
+ MediasiteIE,
+ MediasiteCatalogIE,
+)
from .medici import MediciIE
from .megaphone import MegaphoneIE
from .meipai import MeipaiIE
@@ -1114,6 +1117,7 @@
)
from .teachingchannel import TeachingChannelIE
from .teamcoco import TeamcocoIE
+from .teamtreehouse import TeamTreeHouseIE
from .techtalks import TechTalksIE
from .ted import TEDIE
from .tele5 import Tele5IE
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/ina.py new/youtube-dl/youtube_dl/extractor/ina.py
--- old/youtube-dl/youtube_dl/extractor/ina.py 2019-03-08 21:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/ina.py 2019-03-17 20:02:05.000000000 +0100
@@ -1,36 +1,83 @@
# coding: utf-8
from __future__ import unicode_literals

-import re
-
from .common import InfoExtractor
+from ..utils import (
+ determine_ext,
+ int_or_none,
+ strip_or_none,
+ xpath_attr,
+ xpath_text,
+)


class InaIE(InfoExtractor):
- _VALID_URL = r'https?://(?:www\.)?ina\.fr/video/(?P<id>I?[A-Z0-9]+)'
- _TEST = {
+ _VALID_URL = r'https?://(?:www\.)?ina\.fr/(?:video|audio)/(?P<id>[A-Z0-9_]+)'
+ _TESTS = [{
'url': 'http://www.ina.fr/video/I12055569/francois-hollande-je-crois-que-c-est-clair-video.html',
'md5': 'a667021bf2b41f8dc6049479d9bb38a3',
'info_dict': {
'id': 'I12055569',
'ext': 'mp4',
'title': 'François Hollande "Je crois que c\'est clair"',
+ 'description': 'md5:3f09eb072a06cb286b8f7e4f77109663',
}
- }
+ }, {
+ 'url': 'https://www.ina.fr/video/S806544_001/don-d-organes-des-avancees-mais-d-importants-besoins-video.html',
+ 'only_matching': True,
+ }, {
+ 'url': 'https://www.ina.fr/audio/P16173408',
+ 'only_matching': True,
+ }, {
+ 'url': 'https://www.ina.fr/video/P16173408-video.html',
+ 'only_matching': True,
+ }]

def _real_extract(self, url):
- mobj = re.match(self._VALID_URL, url)
-
- video_id = mobj.group('id')
- mrss_url = 'http://player.ina.fr/notices/%s.mrss' % video_id
- info_doc = self._download_xml(mrss_url, video_id)
-
- self.report_extraction(video_id)
-
- video_url = info_doc.find('.//{http://search.yahoo.com/mrss/}player').attrib['url']
+ video_id = self._match_id(url)
+ info_doc = self._download_xml(
+ 'http://player.ina.fr/notices/%s.mrss' % video_id, video_id)
+ item = info_doc.find('channel/item')
+ title = xpath_text(item, 'title', fatal=True)
+ media_ns_xpath = lambda x: self._xpath_ns(x, 'http://search.yahoo.com/mrss/')
+ content = item.find(media_ns_xpath('content'))
+
+ get_furl = lambda x: xpath_attr(content, media_ns_xpath(x), 'url')
+ formats = []
+ for q, w, h in (('bq', 400, 300), ('mq', 512, 384), ('hq', 768, 576)):
+ q_url = get_furl(q)
+ if not q_url:
+ continue
+ formats.append({
+ 'format_id': q,
+ 'url': q_url,
+ 'width': w,
+ 'height': h,
+ })
+ if not formats:
+ furl = get_furl('player') or content.attrib['url']
+ ext = determine_ext(furl)
+ formats = [{
+ 'url': furl,
+ 'vcodec': 'none' if ext == 'mp3' else None,
+ 'ext': ext,
+ }]
+
+ thumbnails = []
+ for thumbnail in content.findall(media_ns_xpath('thumbnail')):
+ thumbnail_url = thumbnail.get('url')
+ if not thumbnail_url:
+ continue
+ thumbnails.append({
+ 'url': thumbnail_url,
+ 'height': int_or_none(thumbnail.get('height')),
+ 'width': int_or_none(thumbnail.get('width')),
+ })

return {
'id': video_id,
- 'url': video_url,
- 'title': info_doc.find('.//title').text,
+ 'formats': formats,
+ 'title': title,
+ 'description': strip_or_none(xpath_text(item, 'description')),
+ 'thumbnails': thumbnails,
}
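
The rewritten extractor reads the MRSS feed through namespace-qualified lookups instead of a single hard-coded .// XPath. A standalone sketch of that namespace handling using only the standard library; mrss_xml is assumed to hold the notices/<id>.mrss payload, and youtube-dl's _xpath_ns helper produces the same '{ns}tag' form used here:

  import xml.etree.ElementTree as ET

  MEDIA_NS = '{http://search.yahoo.com/mrss/}'

  doc = ET.fromstring(mrss_xml)
  item = doc.find('channel/item')
  content = item.find(MEDIA_NS + 'content')
  # The extractor probes the bq/mq/hq children of media:content for format
  # URLs, falling back to the 'player' child or content's own url attribute.
  for quality in ('bq', 'mq', 'hq'):
      node = content.find(MEDIA_NS + quality)
      if node is not None and node.get('url'):
          print(quality, node.get('url'))
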
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/mediasite.py new/youtube-dl/youtube_dl/extractor/mediasite.py
--- old/youtube-dl/youtube_dl/extractor/mediasite.py 2019-03-08 21:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/mediasite.py 2019-03-17 20:02:05.000000000 +0100
@@ -13,6 +13,8 @@
ExtractorError,
float_or_none,
mimetype2ext,
+ str_or_none,
+ try_get,
unescapeHTML,
unsmuggle_url,
url_or_none,
@@ -20,8 +22,11 @@
)


+_ID_RE = r'[0-9a-f]{32,34}'
+
+
class MediasiteIE(InfoExtractor):
- _VALID_URL = r'(?xi)https?://[^/]+/Mediasite/(?:Play|Showcase/(?:default|livebroadcast)/Presentation)/(?P<id>[0-9a-f]{32,34})(?P<query>\?[^#]+|)'
+ _VALID_URL = r'(?xi)https?://[^/]+/Mediasite/(?:Play|Showcase/(?:default|livebroadcast)/Presentation)/(?P<id>%s)(?P<query>\?[^#]+|)' % _ID_RE
_TESTS = [
{
'url': 'https://hitsmediaweb.h-its.org/mediasite/Play/2db6c271681e4f199af3c60d1f82869b1d',
@@ -109,7 +114,7 @@
return [
unescapeHTML(mobj.group('url'))
for mobj in re.finditer(
- r'(?xi)<iframe\b[^>]+\bsrc=(["\'])(?P<url>(?:(?:https?:)?//[^/]+)?/Mediasite/Play/[0-9a-f]{32,34}(?:\?.*?)?)\1',
+ r'(?xi)<iframe\b[^>]+\bsrc=(["\'])(?P<url>(?:(?:https?:)?//[^/]+)?/Mediasite/Play/%s(?:\?.*?)?)\1' % _ID_RE,
webpage)]

def _real_extract(self, url):
@@ -221,3 +226,110 @@
'formats': formats,
'thumbnails': thumbnails,
}
+
+
+class MediasiteCatalogIE(InfoExtractor):
+ _VALID_URL = r'''(?xi)
+ (?P<url>https?://[^/]+/Mediasite)
+ /Catalog/Full/
+ (?P<catalog_id>{0})
+ (?:
+ /(?P<current_folder_id>{0})
+ /(?P<root_dynamic_folder_id>{0})
+ )?
+ '''.format(_ID_RE)
+ _TESTS = [{
+ 'url': 'http://events7.mediasite.com/Mediasite/Catalog/Full/631f9e48530d454381549f955d08c75e21',
+ 'info_dict': {
+ 'id': '631f9e48530d454381549f955d08c75e21',
+ 'title': 'WCET Summit: Adaptive Learning in Higher Ed: Improving Outcomes Dynamically',
+ },
+ 'playlist_count': 6,
+ 'expected_warnings': ['is not a supported codec'],
+ }, {
+ # with CurrentFolderId and RootDynamicFolderId
+ 'url': 'https://medaudio.medicine.iu.edu/Mediasite/Catalog/Full/9518c4a6c5cf4993b21cbd53e828a92521/97a9db45f7ab47428c77cd2ed74bb98f14/9518c4a6c5cf4993b21cbd53e828a92521',
+ 'info_dict': {
+ 'id': '9518c4a6c5cf4993b21cbd53e828a92521',
+ 'title': 'IUSM Family and Friends Sessions',
+ },
+ 'playlist_count': 2,
+ }, {
+ 'url': 'http://uipsyc.mediasite.com/mediasite/Catalog/Full/d5d79287c75243c58c50fef50174ec1b21',
+ 'only_matching': True,
+ }, {
+ # no AntiForgeryToken
+ 'url': 'https://live.libraries.psu.edu/Mediasite/Catalog/Full/8376d4b24dd1457ea3bfe4cf9163feda21',
+ 'only_matching': True,
+ }, {
+ 'url': 'https://medaudio.medicine.iu.edu/Mediasite/Catalog/Full/9518c4a6c5cf4993b21cbd53e828a92521/97a9db45f7ab47428c77cd2ed74bb98f14/9518c4a6c5cf4993b21cbd53e828a92521',
+ 'only_matching': True,
+ }]
+
+ def _real_extract(self, url):
+ mobj = re.match(self._VALID_URL, url)
+ mediasite_url = mobj.group('url')
+ catalog_id = mobj.group('catalog_id')
+ current_folder_id = mobj.group('current_folder_id') or catalog_id
+ root_dynamic_folder_id = mobj.group('root_dynamic_folder_id')
+
+ webpage = self._download_webpage(url, catalog_id)
+
+ # AntiForgeryToken is optional (e.g. [1])
+ # 1. https://live.libraries.psu.edu/Mediasite/Catalog/Full/8376d4b24dd1457ea3bfe4cf9163feda21
+ anti_forgery_token = self._search_regex(
+ r'AntiForgeryToken\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1',
+ webpage, 'anti forgery token', default=None, group='value')
+ if anti_forgery_token:
+ anti_forgery_header = self._search_regex(
+ r'AntiForgeryHeaderName\s*:\s*(["\'])(?P<value>(?:(?!\1).)+)\1',
+ webpage, 'anti forgery header name',
+ default='X-SOFO-AntiForgeryHeader', group='value')
+
+ data = {
+ 'IsViewPage': True,
+ 'IsNewFolder': True,
+ 'AuthTicket': None,
+ 'CatalogId': catalog_id,
+ 'CurrentFolderId': current_folder_id,
+ 'RootDynamicFolderId': root_dynamic_folder_id,
+ 'ItemsPerPage': 1000,
+ 'PageIndex': 0,
+ 'PermissionMask': 'Execute',
+ 'CatalogSearchType': 'SearchInFolder',
+ 'SortBy': 'Date',
+ 'SortDirection': 'Descending',
+ 'StartDate': None,
+ 'EndDate': None,
+ 'StatusFilterList': None,
+ 'PreviewKey': None,
+ 'Tags': [],
+ }
+
+ headers = {
+ 'Content-Type': 'application/json; charset=UTF-8',
+ 'Referer': url,
+ 'X-Requested-With': 'XMLHttpRequest',
+ }
+ if anti_forgery_token:
+ headers[anti_forgery_header] = anti_forgery_token
+
+ catalog = self._download_json(
+ '%s/Catalog/Data/GetPresentationsForFolder' % mediasite_url,
+ catalog_id, data=json.dumps(data).encode(), headers=headers)
+
+ entries = []
+ for video in catalog['PresentationDetailsList']:
+ if not isinstance(video, dict):
+ continue
+ video_id = str_or_none(video.get('Id'))
+ if not video_id:
+ continue
+ entries.append(self.url_result(
+ '%s/Play/%s' % (mediasite_url, video_id),
+ ie=MediasiteIE.ie_key(), video_id=video_id))
+
+ title = try_get(
+ catalog, lambda x: x['CurrentFolder']['Name'], compat_str)
+
+ return self.playlist_result(entries, catalog_id, title,)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/npo.py new/youtube-dl/youtube_dl/extractor/npo.py
--- old/youtube-dl/youtube_dl/extractor/npo.py 2019-03-08 21:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/npo.py 2019-03-17 20:02:05.000000000 +0100
@@ -181,10 +181,7 @@

def _real_extract(self, url):
video_id = self._match_id(url)
- try:
- return self._get_info(url, video_id)
- except ExtractorError:
- return self._get_old_info(video_id)
+ return self._get_info(url, video_id) or self._get_old_info(video_id)

def _get_info(self, url, video_id):
token = self._download_json(
@@ -206,6 +203,7 @@

player_token = player['token']

+ drm = False
format_urls = set()
formats = []
for profile in ('hls', 'dash-widevine', 'dash-playready', 'smooth'):
@@ -227,7 +225,8 @@
if not stream_url or stream_url in format_urls:
continue
format_urls.add(stream_url)
- if stream.get('protection') is not None:
+ if stream.get('protection') is not None or stream.get('keySystemOptions') is not None:
+ drm = True
continue
stream_type = stream.get('type')
stream_ext = determine_ext(stream_url)
@@ -246,6 +245,11 @@
'url': stream_url,
})

+ if not formats:
+ if drm:
+ raise ExtractorError('This video is DRM protected.', expected=True)
+ return
+
self._sort_formats(formats)

info = {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/pornhub.py new/youtube-dl/youtube_dl/extractor/pornhub.py
--- old/youtube-dl/youtube_dl/extractor/pornhub.py 2019-03-08 21:03:06.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/pornhub.py 2019-03-17 20:02:05.000000000 +0100
@@ -14,6 +14,7 @@
)
from .openload import PhantomJSwrapper
from ..utils import (
+ determine_ext,
ExtractorError,
int_or_none,
orderedSet,
@@ -275,6 +276,10 @@
r'/(\d{6}/\d{2})/', video_url, 'upload data', default=None)
if upload_date:
upload_date = upload_date.replace('/', '')
+ if determine_ext(video_url) == 'mpd':
+ formats.extend(self._extract_mpd_formats(
+ video_url, video_id, mpd_id='dash', fatal=False))
+ continue
tbr = None
mobj = re.search(r'(?P<height>\d+)[pP]?_(?P<tbr>\d+)[kK]', video_url)
if mobj:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/svt.py new/youtube-dl/youtube_dl/extractor/svt.py
--- old/youtube-dl/youtube_dl/extractor/svt.py 2019-03-08 21:02:58.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/svt.py 2019-03-17 20:02:05.000000000 +0100
@@ -185,7 +185,7 @@

def _extract_by_video_id(self, video_id, webpage=None):
data = self._download_json(
- 'https://api.svt.se/videoplayer-api/video/%s' % video_id,
+ 'https://api.svt.se/video/%s' % video_id,
video_id, headers=self.geo_verification_headers())
info_dict = self._extract_video(data, video_id)
if not info_dict.get('title'):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/teamtreehouse.py new/youtube-dl/youtube_dl/extractor/teamtreehouse.py
--- old/youtube-dl/youtube_dl/extractor/teamtreehouse.py 1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/teamtreehouse.py 2019-03-17 20:02:05.000000000 +0100
@@ -0,0 +1,140 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..utils import (
+ clean_html,
+ determine_ext,
+ ExtractorError,
+ float_or_none,
+ get_element_by_class,
+ get_element_by_id,
+ parse_duration,
+ remove_end,
+ urlencode_postdata,
+ urljoin,
+)
+
+
+class TeamTreeHouseIE(InfoExtractor):
+ _VALID_URL = r'https?://(?:www\.)?teamtreehouse\.com/library/(?P<id>[^/]+)'
+ _TESTS = [{
+ # Course
+ 'url': 'https://teamtreehouse.com/library/introduction-to-user-authentication-in-php',
+ 'info_dict': {
+ 'id': 'introduction-to-user-authentication-in-php',
+ 'title': 'Introduction to User Authentication in PHP',
+ 'description': 'md5:405d7b4287a159b27ddf30ca72b5b053',
+ },
+ 'playlist_mincount': 24,
+ }, {
+ # WorkShop
+ 'url': 'https://teamtreehouse.com/library/deploying-a-react-app',
+ 'info_dict': {
+ 'id': 'deploying-a-react-app',
+ 'title': 'Deploying a React App',
+ 'description': 'md5:10a82e3ddff18c14ac13581c9b8e5921',
+ },
+ 'playlist_mincount': 4,
+ }, {
+ # Video
+ 'url': 'https://teamtreehouse.com/library/application-overview-2',
+ 'info_dict': {
+ 'id': 'application-overview-2',
+ 'ext': 'mp4',
+ 'title': 'Application Overview',
+ 'description': 'md5:4b0a234385c27140a4378de5f1e15127',
+ },
+ 'expected_warnings': ['This is just a preview'],
+ }]
+ _NETRC_MACHINE = 'teamtreehouse'
+
+ def _real_initialize(self):
+ email, password = self._get_login_info()
+ if email is None:
+ return
+
+ signin_page = self._download_webpage(
+ 'https://teamtreehouse.com/signin',
+ None, 'Downloading signin page')
+ data = self._form_hidden_inputs('new_user_session', signin_page)
+ data.update({
+ 'user_session[email]': email,
+ 'user_session[password]': password,
+ })
+ error_message = get_element_by_class('error-message', self._download_webpage(
+ 'https://teamtreehouse.com/person_session',
+ None, 'Logging in', data=urlencode_postdata(data)))
+ if error_message:
+ raise ExtractorError(clean_html(error_message), expected=True)
+
+ def _real_extract(self, url):
+ display_id = self._match_id(url)
+ webpage = self._download_webpage(url, display_id)
+ title = self._html_search_meta(['og:title', 'twitter:title'], webpage)
+ description = self._html_search_meta(
+ ['description', 'og:description', 'twitter:description'], webpage)
+ entries = self._parse_html5_media_entries(url, webpage, display_id)
+ if entries:
+ info = entries[0]
+
+ for subtitles in info.get('subtitles', {}).values():
+ for subtitle in subtitles:
+ subtitle['ext'] = determine_ext(subtitle['url'], 'srt')
+
+ is_preview = 'data-preview="true"' in webpage
+ if is_preview:
+ self.report_warning(
+ 'This is just a preview. You need to be signed in with a Basic account to download the entire video.', display_id)
+ duration = 30
+ else:
+ duration = float_or_none(self._search_regex(
+ r'data-duration="(\d+)"', webpage, 'duration'), 1000)
+ if not duration:
+ duration = parse_duration(get_element_by_id(
+ 'video-duration', webpage))
+
+ info.update({
+ 'id': display_id,
+ 'title': title,
+ 'description': description,
+ 'duration': duration,
+ })
+ return info
+ else:
+ def extract_urls(html, extract_info=None):
+ for path in re.findall(r'<a[^>]+href="([^"]+)"', html):
+ page_url = urljoin(url, path)
+ entry = {
+ '_type': 'url_transparent',
+ 'id': self._match_id(page_url),
+ 'url': page_url,
+ 'id_key': self.ie_key(),
+ }
+ if extract_info:
+ entry.update(extract_info)
+ entries.append(entry)
+
+ workshop_videos = self._search_regex(
+ r'(?s)<ul[^>]+id="workshop-videos"[^>]*>(.+?)</ul>',
+ webpage, 'workshop videos', default=None)
+ if workshop_videos:
+ extract_urls(workshop_videos)
+ else:
+ stages_path = self._search_regex(
+ r'(?s)<div[^>]+id="syllabus-stages"[^>]+data-url="([^"]+)"',
+ webpage, 'stages path')
+ if stages_path:
+ stages_page = self._download_webpage(
+ urljoin(url, stages_path), display_id, 'Downloading stages page')
+ for chapter_number, (chapter, steps_list) in enumerate(re.findall(r'(?s)<h2[^>]*>\s*(.+?)\s*</h2>.+?<ul[^>]*>(.+?)</ul>', stages_page), 1):
+ extract_urls(steps_list, {
+ 'chapter': chapter,
+ 'chapter_number': chapter_number,
+ })
+ title = remove_end(title, ' Course')
+
+ return self.playlist_result(
+ entries, display_id, title, description)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/weibo.py new/youtube-dl/youtube_dl/extractor/weibo.py
--- old/youtube-dl/youtube_dl/extractor/weibo.py 2019-03-08 21:02:59.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/weibo.py 2019-03-17 20:02:05.000000000 +0100
@@ -19,7 +19,7 @@


class WeiboIE(InfoExtractor):
- _VALID_URL = r'https?://weibo\.com/[0-9]+/(?P<id>[a-zA-Z0-9]+)'
+ _VALID_URL = r'https?://(?:www\.)?weibo\.com/[0-9]+/(?P<id>[a-zA-Z0-9]+)'
_TEST = {
'url': 'https://weibo.com/6275294458/Fp6RGfbff?type=comment',
'info_dict': {
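
The only change to WeiboIE is the optional (?:www\.)? group in _VALID_URL, so www.weibo.com links now match as well. This can be checked directly; the pattern below is copied from the hunk above:

  import re

  pattern = r'https?://(?:www\.)?weibo\.com/[0-9]+/(?P<id>[a-zA-Z0-9]+)'
  for url in ('https://weibo.com/6275294458/Fp6RGfbff',
              'https://www.weibo.com/6275294458/Fp6RGfbff'):
      m = re.match(pattern, url)
      print(url, '->', m.group('id') if m else 'no match')
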
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/xhamster.py new/youtube-dl/youtube_dl/extractor/xhamster.py
--- old/youtube-dl/youtube_dl/extractor/xhamster.py 2019-03-08 21:02:59.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/xhamster.py 2019-03-17 20:02:05.000000000 +0100
@@ -20,7 +20,7 @@
class XHamsterIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
- (?:.+?\.)?xhamster\.com/
+ (?:.+?\.)?xhamster\.(?:com|one)/
(?:
movies/(?P<id>\d+)/(?P<display_id>[^/]*)\.html|
videos/(?P<display_id_2>[^/]*)-(?P<id_2>\d+)
@@ -91,6 +91,9 @@
# new URL schema
'url': 'https://pt.xhamster.com/videos/euro-pedal-pumping-7937821',
'only_matching': True,
+ }, {
+ 'url': 'https://xhamster.one/videos/femaleagent-shy-beauty-takes-the-bait-1509445',
+ 'only_matching': True,
}]

def _real_extract(self, url):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/utils.py new/youtube-dl/youtube_dl/utils.py
--- old/youtube-dl/youtube_dl/utils.py 2019-03-08 21:03:06.000000000 +0100
+++ new/youtube-dl/youtube_dl/utils.py 2019-03-17 20:02:05.000000000 +0100
@@ -1922,7 +1922,7 @@
return default
try:
return int(v) * invscale // scale
- except ValueError:
+ except (ValueError, TypeError):
return default


@@ -1943,7 +1943,7 @@
return default
try:
return float(v) * invscale / scale
- except ValueError:
+ except (ValueError, TypeError):
return default
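
Broadening the except clause matters because int() and float() raise TypeError, not ValueError, for non-string/non-number inputs such as lists or sets; this is exactly what the new test_int_or_none/test_float_or_none cases exercise. A simplified sketch of the helper after the change (scale/invscale omitted, otherwise the same shape as the hunk above):

  def int_or_none(v, default=None):
      if v is None:
          return default
      try:
          return int(v)
      except (ValueError, TypeError):
          return default

  print(int_or_none('42'))  # 42
  print(int_or_none(''))    # None - int('') raises ValueError
  print(int_or_none([]))    # None - int([]) raises TypeError, previously uncaught
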


diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py 2019-03-17 19:36:38.000000000 +0100
+++ new/youtube-dl/youtube_dl/version.py 2019-04-01 18:55:14.000000000 +0200
@@ -1,3 +1,3 @@
from __future__ import unicode_literals

-__version__ = '2019.03.18'
+__version__ = '2019.04.01'

