
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (67)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)
-
Making the files available
14 April 2011
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding; it only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in several different forms.
All of this happens in the skeleton's configuration page: go to the channel's administration area and choose in the navigation (...)
On other sites (8437)
-
How do I extract the first frame of a video stream as an image using avcpp?
19 March 2023, by AurelaNYT
I'm trying to extract the first frame of a video stream and save it as an image using the avcpp library (https://github.com/h4tr3d/avcpp). I have already written some code, but I'm not sure how to extract the first frame from the stream and save it to a .JPG file. Can someone help me with this?


I have done the following for now and it works fine; however, I am unable to understand how to go on with the conversion of the frame to an image:


#include <iostream>
#include <string>
#include "avcpp/av.h"
#include "avcpp/codec.h"
#include "avcpp/frame.h"
#include "avcpp/packet.h"
#include "avcpp/codeccontext.h"
#include "avcpp/formatcontext.h"

class OpenVHCR {
public:

    static void getFirstFrame(const std::string& stream_uri, const bool debug_log) {
        av::init();
        av::Codec videoCodec;
        av::Stream videoStream;
        int videoStreamID = -1;
        std::error_code errorCode;
        av::FormatContext formatContext;
        av::VideoDecoderContext decoderContext;
        formatContext.openInput(stream_uri);
        if (formatContext.streamsCount() < 1) {
            std::cerr << "[I/O] Failed to find a video stream in the URI provided." << std::endl;
        } else {
            if (debug_log) {
                std::cout << "[I/O] Successfully found " << formatContext.streamsCount() << " streams in the URI provided." << std::endl;
            }
            formatContext.findStreamInfo(errorCode);
            if (errorCode) {
                std::cerr << "[AV] An unexpected error has occurred: " << errorCode.message() << std::endl;
                return;
            }
            for (size_t i = 0; i < formatContext.streamsCount(); ++i) {
                auto st = formatContext.stream(i);
                if (st.mediaType() == AVMEDIA_TYPE_VIDEO) {
                    videoStreamID = static_cast<int>(i);
                    videoStream = st;
                    break;
                }
            }
            if (!videoStream.isNull() && videoStream.isValid()) {
                if (debug_log) {
                    std::cout << "[I/O] Successfully opened a valid video stream." << std::endl;
                }
                decoderContext = av::VideoDecoderContext(videoStream);
                videoCodec = av::findDecodingCodec(decoderContext.raw()->codec_id);
                decoderContext.setCodec(videoCodec);
                decoderContext.setRefCountedFrames(true);
                decoderContext.open({{"threads", "1"}}, av::Codec(), errorCode);
                if (errorCode) {
                    std::cout << "[AV] Failed to find a supported codec for the stream." << std::endl;
                    return;
                } else {
                    while (av::Packet streamPacket = formatContext.readPacket(errorCode)) {
                        if (errorCode) {
                            std::cout << "[AV] Failed to read a packet from the stream." << std::endl;
                            return;
                        } else {
                            std::cout << "[AV] Successfully read a packet from the stream." << std::endl;
                            if (streamPacket.streamIndex() != videoStreamID) {
                                continue;
                            }
                            // TODO: Find a way to get the first frame from the stream and save it as an image.
                            av::VideoFrame videoFrame = decoderContext.decode(streamPacket, errorCode);
                            std::cout << "[AV] Successfully received a frame from the video." << std::endl;
                            if (errorCode) {
                                std::cerr << "[AV] An unexpected error has occurred: " << errorCode.message() << std::endl;
                                return;
                            } else if (!videoFrame) {
                                std::cerr << "[AV] The received video frame seems to be empty." << std::endl;
                            }
                        }
                    }

                    formatContext.close();
                    decoderContext.close();
                }
            } else {
                std::cerr << "[I/O] Failed to open a valid video stream." << std::endl;
            }
        }
    }
};




-
Use HLS from Wifi device as input to stream over 4G
26 February 2017, by pbdev
I'm building an Android app that streams video from a Wifi device to a Wowza server. It should be quite simple, but I can't figure out how to use both Wifi and 4G at the same time. The device I'm using is a Samsung S5 with Android 6.0.1. To sum it up, this is the goal:
- Fetch the video stream from a GoPro device over Wifi.
- Send the video stream to a Wowza server over 4G.
When connected to the GoPro's Wifi network I can ping the GoPro and see the stream in a MediaPlayer. Since I'm connected to a Wifi device that doesn't provide internet access, I can't ping my Wowza server. Once I've disabled Wifi this is no problem; using FFmpeg I can reach the Wowza server over 4G.
This is the FFmpeg command I want to use to copy the stream to the Wowza server, where 10.5.5.9 is the IP address of the GoPro:
ffmpeg -i http://10.5.5.9:8080/live/amba.m3u8 -acodec aac -ar 44100 -ab 48k -vcodec copy -f flv rtmp://username:password@my-wowza-server.com:1935/my-app/my-stream
If I enable Wifi and connect to the GoPro, 10.5.5.9 is reachable but my-wowza-server.com isn't. The Samsung S5 provides a Smart network switch which makes the Wowza server reachable, but then the connection to the GoPro gets lost. Is there any way to bind 10.5.5.9 to the Wifi interface of the phone and my-wowza-server.com to the cellular interface?
-
ffmpeg's invalid input stream fps causes low latency [closed]
9 May 2024, by I have 10 fingers
I have an ffmpeg setup which produces an rtmp stream from a remote rtsp stream. The rtsp stream comes from an ip camera which supports multiple profiles, at 1080p 30 fps and 1080p 15 fps.


The weird thing is that when ffmpeg analyzes its input stream, one profile is recognized as 1 fps, as below:

Input #0, rtsp, from 'rtsp://...':
 Metadata:
 title : Media Presentation
 comment : samsung
 Duration: N/A, start: 0.064144, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 1 fps, 1 tbr, 90k tbn, 2 tbc
 Stream #0:1: Data: none
Output #0, flv, to 'rtmp:...':
 Metadata:
 title : Media Presentation
 comment : samsung
 encoder : Lavf58.29.100
 Stream #0:0: Video: h264 (High) ([7][0][0][0] / 0x0007), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 1 fps, 1 tbr, 1k tbn, 90k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (copy)



When it is played back the tbr is 30, which I think means the real fps of the video stream is 30. No problem, actually; this gives me the lowest latency (2-3 s).


The other stream is a normal 30 fps input with 30 fps output, and it gives me some delay (5-6 s), so I want to know what causes this fps bug. If I can use this bug, I want to change all my streams to reduce the latency.