
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (13)
-
Emballe médias: what is it for?
4 February 2011, by — This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, that is: a "media" item is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be linked to a given "media" article;
-
Possible deployments
31 January 2010, by — Two types of deployment are possible, depending on two aspects: the intended installation method (standalone or as a farm); the expected number of daily encodes and the expected traffic.
Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
-
Managing creation and editing rights for objects
8 February 2011, by — By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (3756)
-
How to restream an IPTV playlist with Nginx RTMP, FFmpeg, and Python without recording, but getting an HTTP 403 error? [closed]
1 April, by boyuna1720 — I have an IPTV playlist from a provider that allows only one user to connect and watch. I want to restream this playlist through my own server, without recording it and in a lightweight manner. I'm using Nginx RTMP, FFmpeg, and Python TCP sockets for the setup, but I keep getting an HTTP 403 error when trying to access the stream.


Here’s a summary of my setup:

- Nginx RTMP: used for streaming.
- FFmpeg: used to handle the video stream.
- Python TCP: trying to handle the connection between my server and the IPTV source.


#!/usr/bin/env python3

import sys
import socket
import threading
import requests
import time

def accept_connections(server_socket, clients, clients_lock):
    """
    Continuously accept new client connections and hand each one to a
    handler thread that reads the client's HTTP request, sends back a
    valid HTTP/1.1 response header, and adds the socket to the
    broadcast list.
    """
    while True:
        client_socket, addr = server_socket.accept()
        print(f"[+] New client connected from {addr}")
        threading.Thread(
            target=handle_client,
            args=(client_socket, addr, clients, clients_lock),
            daemon=True
        ).start()

def handle_client(client_socket, addr, clients, clients_lock):
    """
    Read the client's HTTP request minimally, send back a proper
    HTTP/1.1 200 OK header, and then add the socket to our broadcast list.
    """
    try:
        # Read until we reach the end of the request headers
        request_data = b""
        while b"\r\n\r\n" not in request_data:
            chunk = client_socket.recv(1024)
            if not chunk:
                break
            request_data += chunk

        # Send a proper HTTP response header to satisfy clients like curl
        response_header = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: application/octet-stream\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        client_socket.sendall(response_header.encode("utf-8"))

        with clients_lock:
            clients.append(client_socket)
        print(f"[+] Client from {addr} is ready to receive stream.")
    except Exception as e:
        print(f"[!] Error handling client {addr}: {e}")
        client_socket.close()

def read_from_source_and_broadcast(source_url, clients, clients_lock):
    """
    Continuously connect to the source URL (following redirects) using custom
    headers so that the request mimics curl. On connection errors (e.g. a
    connection reset), wait a bit and try again.

    For each successful connection, stream data in chunks and broadcast each
    chunk to all connected clients.
    """
    # Set custom headers to mimic curl
    headers = {
        "User-Agent": "curl/8.5.0",
        "Accept": "*/*"
    }

    while True:
        try:
            print(f"[+] Fetching from source URL (with redirects): {source_url}")
            with requests.get(source_url, stream=True, allow_redirects=True, headers=headers) as resp:
                if resp.status_code >= 400:
                    print(f"[!] Got HTTP {resp.status_code} from the source. Retrying in 5 seconds.")
                    time.sleep(5)
                    continue

                # Stream data and broadcast each chunk
                for chunk in resp.iter_content(chunk_size=4096):
                    if not chunk:
                        continue
                    with clients_lock:
                        for c in clients[:]:
                            try:
                                c.sendall(chunk)
                            except Exception as e:
                                print(f"[!] A client disconnected or send failed: {e}")
                                c.close()
                                clients.remove(c)
        except requests.exceptions.RequestException as e:
            print(f"[!] Source connection error, retrying in 5 seconds: {e}")
            time.sleep(5)

def main():
    if len(sys.argv) != 3:
        print(f"Usage: {sys.argv[0]} <source_url> <port>")
        sys.exit(1)

    source_url = sys.argv[1]
    port = int(sys.argv[2])

    # Create a TCP socket to listen for incoming connections
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind(("0.0.0.0", port))
    server_socket.listen(5)
    print(f"[+] Listening on port {port}...")

    # List of currently connected client sockets
    clients = []
    clients_lock = threading.Lock()

    # Start a thread to accept incoming client connections
    t_accept = threading.Thread(
        target=accept_connections,
        args=(server_socket, clients, clients_lock),
        daemon=True
    )
    t_accept.start()

    # Continuously read from the source URL and broadcast to connected clients
    read_from_source_and_broadcast(source_url, clients, clients_lock)

if __name__ == "__main__":
    main()


When I run the command

python3 proxy_server.py 'http://channelurl' 9999

I get this error:

[+] Listening on port 9999...
[+] Fetching from source URL (with redirects): http://ate91060.cdn-akm.me:80/dc31a19e5a6a/fc5e38e28e/325973
[!] Got HTTP 403 from the source. Retrying in 5 seconds.
^CTraceback (most recent call last):
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 127, in <module>
 main()
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 124, in main
 read_from_source_and_broadcast(source_url, clients, clients_lock)
 File "/home/namepirate58/nginx-1.23.1/proxy_server.py", line 77, in read_from_source_and_broadcast
 time.sleep(5)
KeyboardInterrupt
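
A hedged diagnostic sketch (an editorial addition, not from the original post; 'http://channelurl' stands in for the real playlist URL): before changing the proxy, it can help to dump exactly what the source returns for the request the script sends, since IPTV panels often answer 403 over a disliked User-Agent, a missing Referer, or a second concurrent connection, and the response headers or body sometimes say which.

import requests

# Hypothetical check: issue the same request the proxy makes and inspect
# the provider's answer.
headers = {"User-Agent": "curl/8.5.0", "Accept": "*/*"}
resp = requests.get("http://channelurl", headers=headers,
                    allow_redirects=True, stream=True)
print(resp.status_code)
print(dict(resp.headers))      # a Server or custom header may hint at the block
if resp.status_code >= 400:
    print(resp.text[:500])     # some panels return an explanatory error body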


-
While using skvideo.io.FFmpegReader and skvideo.io.FFmpegWriter to pass a video through, the input and output video lengths differ
28 June 2024, by Kaesebrotus Anonymous — I have an h264-encoded mp4 video about 27.5 minutes long, and I am trying to create a copy of the video that excludes the first 5 frames. I am using scikit-video and ffmpeg in Python for this purpose. I do not have a GPU, so I am using the libx264 codec for the output video.


It generally works, and the output video excludes the first 5 frames. Somehow, though, the output video ends up about 22 minutes long. When visually comparing the videos, the shorter one does seem slightly faster, and I can identify the same frames at different timestamps. In Windows Explorer, under Properties and then Details, both videos' frame rates show as 20.00 fps.


So, my goal is to have both videos of the same length, except for the loss of the first 5 frames (which should give a duration difference of 0.25 seconds, i.e. 5 frames at 20 fps), using the same (or almost the same) codec and without losing quality.


Can anyone explain why this apparent loss of frames is happening?


Thank you for your interest in helping me; please find the details below.


Here is a minimal example of what I have done.


import skvideo.io

framerate = str(20)
reader = skvideo.io.FFmpegReader('inputvideo.mp4', inputdict={'-r': framerate})
writer = skvideo.io.FFmpegWriter('outputvideo.mp4', outputdict={'-vcodec': 'libx264', '-r': framerate})

# Skip the first 5 frames, copy the rest to the output video
for idx, frame in enumerate(reader.nextFrame()):
    if idx < 5:
        continue
    writer.writeFrame(frame)

reader.close()
writer.close()



When I read the output video again using FFmpegReader and check the .probeInfo, I can see that the output video has fewer frames in total. I have also managed to reproduce the same problem with shorter videos (not excluding the first 5 frames, just passing the video through), e.g. a 10-second input turns into an 8-second output with fewer frames. I have also tried playing around with further parameters of the outputdict, e.g. -pix_fmt, -b. I tried setting -time_base in the outputdict to the same value as in the inputdict, but that did not seem to have the desired effect. I am not sure if the name of the parameter is right.
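
One plausible explanation (a sketch, not verified against the original setup): the shrink ratio is exactly 0.8 in both cases (22/27.5 and 8/10), which equals 20/25, and ffmpeg assumes 25 fps for raw frames piped into it when no input rate is given. If the FFmpegWriter's ffmpeg process treats the incoming frames as 25 fps while being told to output 20 fps, it drops every fifth frame. Passing the frame rate in the writer's inputdict as well should rule this out:

import skvideo.io

framerate = str(20)

# Hypothetical fix: declare the rate of the raw frames fed to the writer,
# so ffmpeg does not assume the 25 fps default and drop frames to reach
# the 20 fps output rate.
writer = skvideo.io.FFmpegWriter(
    'outputvideo.mp4',
    inputdict={'-r': framerate},
    outputdict={'-vcodec': 'libx264', '-r': framerate},
)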


For additional info, I am providing the .probeInfo of the input video, of which I used 10 seconds, and the .probeInfo of the 8-second output video it produced.


**input video** .probeInfo:

{'video': OrderedDict([('@index', '0'),
 ('@codec_name', 'h264'),
 ('@codec_long_name',
 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
 ('@profile', 'High 4:4:4 Predictive'),
 ('@codec_type', 'video'),
 ('@codec_tag_string', 'avc1'),
 ('@codec_tag', '0x31637661'),
 ('@width', '4096'),
 ('@height', '3000'),
 ('@coded_width', '4096'),
 ('@coded_height', '3000'),
 ('@closed_captions', '0'),
 ('@film_grain', '0'),
 ('@has_b_frames', '0'),
 ('@sample_aspect_ratio', '1:1'),
 ('@display_aspect_ratio', '512:375'),
 ('@pix_fmt', 'yuv420p'),
 ('@level', '60'),
 ('@chroma_location', 'left'),
 ('@field_order', 'progressive'),
 ('@refs', '1'),
 ('@is_avc', 'true'),
 ('@nal_length_size', '4'),
 ('@id', '0x1'),
 ('@r_frame_rate', '20/1'),
 ('@avg_frame_rate', '20/1'),
 ('@time_base', '1/1200000'),
 ('@start_pts', '0'),
 ('@start_time', '0.000000'),
 ('@duration_ts', '1984740000'),
 ('@duration', '1653.950000'),
 ('@bit_rate', '3788971'),
 ('@bits_per_raw_sample', '8'),
 ('@nb_frames', '33079'),
 ('@extradata_size', '43'),
 ('disposition',
 OrderedDict([('@default', '1'),
 ('@dub', '0'),
 ('@original', '0'),
 ('@comment', '0'),
 ('@lyrics', '0'),
 ('@karaoke', '0'),
 ('@forced', '0'),
 ('@hearing_impaired', '0'),
 ('@visual_impaired', '0'),
 ('@clean_effects', '0'),
 ('@attached_pic', '0'),
 ('@timed_thumbnails', '0'),
 ('@non_diegetic', '0'),
 ('@captions', '0'),
 ('@descriptions', '0'),
 ('@metadata', '0'),
 ('@dependent', '0'),
 ('@still_image', '0')])),
 ('tags',
 OrderedDict([('tag',
 [OrderedDict([('@key', 'language'),
 ('@value', 'und')]),
 OrderedDict([('@key', 'handler_name'),
 ('@value', 'VideoHandler')]),
 OrderedDict([('@key', 'vendor_id'),
 ('@value', '[0][0][0][0]')])])]))])}

**output video** .probeInfo:
{'video': OrderedDict([('@index', '0'),
 ('@codec_name', 'h264'),
 ('@codec_long_name',
 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10'),
 ('@profile', 'High'),
 ('@codec_type', 'video'),
 ('@codec_tag_string', 'avc1'),
 ('@codec_tag', '0x31637661'),
 ('@width', '4096'),
 ('@height', '3000'),
 ('@coded_width', '4096'),
 ('@coded_height', '3000'),
 ('@closed_captions', '0'),
 ('@film_grain', '0'),
 ('@has_b_frames', '2'),
 ('@pix_fmt', 'yuv420p'),
 ('@level', '60'),
 ('@chroma_location', 'left'),
 ('@field_order', 'progressive'),
 ('@refs', '1'),
 ('@is_avc', 'true'),
 ('@nal_length_size', '4'),
 ('@id', '0x1'),
 ('@r_frame_rate', '20/1'),
 ('@avg_frame_rate', '20/1'),
 ('@time_base', '1/10240'),
 ('@start_pts', '0'),
 ('@start_time', '0.000000'),
 ('@duration_ts', '82944'),
 ('@duration', '8.100000'),
 ('@bit_rate', '3444755'),
 ('@bits_per_raw_sample', '8'),
 ('@nb_frames', '162'),
 ('@extradata_size', '47'),
 ('disposition',
 OrderedDict([('@default', '1'),
 ('@dub', '0'),
 ('@original', '0'),
 ('@comment', '0'),
 ('@lyrics', '0'),
 ('@karaoke', '0'),
 ('@forced', '0'),
 ('@hearing_impaired', '0'),
 ('@visual_impaired', '0'),
 ('@clean_effects', '0'),
 ('@attached_pic', '0'),
 ('@timed_thumbnails', '0'),
 ('@non_diegetic', '0'),
 ('@captions', '0'),
 ('@descriptions', '0'),
 ('@metadata', '0'),
 ('@dependent', '0'),
 ('@still_image', '0')])),
 ('tags',
 OrderedDict([('tag',
 [OrderedDict([('@key', 'language'),
 ('@value', 'und')]),
 OrderedDict([('@key', 'handler_name'),
 ('@value', 'VideoHandler')]),
 OrderedDict([('@key', 'vendor_id'),
 ('@value', '[0][0][0][0]')]),
 OrderedDict([('@key', 'encoder'),
 ('@value',
 'Lavc61.8.100 libx264')])])]))])}



I used 10 seconds by adding this to the bottom of the loop shown above:


if idx >= 200:
    break



-
What is “interoperable TTML”?
1 January 2014, by silvia — I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.
TTML has been specified by the W3C Timed Text Working Group and released as a RECommendation v1.0 in November 2010. Since then, several organisations have tried to adopt it as their caption file format. This includes the SMPTE, the EBU (European Broadcasting Union), and Microsoft.
Both Microsoft and the EBU actually looked at TTML in detail and decided that, in order to make it usable for their use cases, a restriction of its functionality was needed.
EBU-TT
The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).
In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a bunch of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.
The use of namespaces as an extensibility mechanism ensures that EBU-TT files continue to be valid TTML files. However, any vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.
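To illustrate what that dropping amounts to, here is a hypothetical Python sketch (an editorial addition; the file name is made up) of a vanilla consumer that keeps only the namespaces TTML 1.0 itself defines and discards the ebuttm:/ebuttdt: extensions:

import xml.etree.ElementTree as ET

# Namespaces defined by TTML 1.0 itself; anything else (ebuttm:, ebuttdt:, ...)
# is a custom extension a vanilla parser has no semantics for.
TTML_NS = {
    "http://www.w3.org/ns/ttml",
    "http://www.w3.org/ns/ttml#styling",
    "http://www.w3.org/ns/ttml#parameter",
    "http://www.w3.org/ns/ttml#metadata",
}

def ns_of(name):
    """Return the namespace URI of an ElementTree '{uri}local' name."""
    return name[1:].split("}")[0] if name.startswith("{") else None

def drop_foreign(elem):
    """Recursively discard elements/attributes outside the core namespaces,
    roughly what a vanilla TTML consumer sees in an EBU-TT file."""
    for child in list(elem):
        if ns_of(child.tag) not in TTML_NS:
            elem.remove(child)  # e.g. ebuttm:documentMetadata vanishes here
        else:
            drop_foreign(child)
    for attr in list(elem.attrib):
        if ns_of(attr) is not None and ns_of(attr) not in TTML_NS:
            del elem.attrib[attr]

tree = ET.parse("captions_ebutt.xml")  # hypothetical EBU-TT input file
drop_foreign(tree.getroot())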
Simple Delivery Profile
With the intention of making TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.
Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features are different, e.g. Simple Profile supports “display-region”, while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple profile does not.
Thus files created for the Simple Delivery Profile will not work on players that expect EBU-TT, and vice versa.
Fortunately, the Simple Delivery Profile does not introduce any new namespaces or features, so at least it is a strict subset of TTML and not both a restriction and an extension like EBU-TT.
SMPTE-TT
SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for their purposes – it was simply adopted as a complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).
However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.
Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it’s not called a “profile”, that’s actually what it is. It even has its own namespace: “m608:”.
Conclusion
With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode this.
Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on what platform they are targeting, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that players will pick out the things they require and drop the rest on the floor?
Maybe the best way to progress would be to make a list of the “safe” features : those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here’s me hoping that this minimal set of features doesn’t just end up being the usual (starttime, endtime, text) triple.
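As a hypothetical sketch of that lowest common denominator (an editorial addition, assuming nothing beyond begin/end/text can be relied on), a generator of “interoperable TTML” would emit little more than:

import xml.etree.ElementTree as ET

TT = "http://www.w3.org/ns/ttml"

def minimal_ttml(cues):
    """Serialize (starttime, endtime, text) cues using only core TTML
    elements -- the subset every profile discussed above should accept."""
    ET.register_namespace("", TT)
    tt = ET.Element(f"{{{TT}}}tt")
    div = ET.SubElement(ET.SubElement(tt, f"{{{TT}}}body"), f"{{{TT}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TT}}}p", begin=begin, end=end)
        p.text = text
    return ET.tostring(tt, encoding="unicode")

print(minimal_ttml([("00:00:01.000", "00:00:03.000", "Hello world")]))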
UPDATE:
I just found out that UltraViolet have their own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). They are making some SMPTE-TT fields optional, but introduce a new @forcedDisplayMode attribute under their own namespace “cff:”.