
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (96)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
To get a working installation, all the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Organise by category
17 May 2013, by
In MédiaSPIP, a section has two names: category and section.
The various documents stored in MédiaSPIP can be filed in different categories. You can create a category by clicking "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so you can build a tree of categories.
When the next document is published, the newly created category will be offered (...) -
Publish on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is version 0.2 or later. If in doubt, contact the administrator of your MédiaSpip to find out.
On other sites (8175)
-
How to stream h265 video from ffmpeg to Amazon S3?
22 June 2020, by herb
I am trying to stream h265 video to AWS S3 from ffmpeg. Here is the command I use:


ffmpeg -f gdigrab -i desktop -r 1 -vframes 5 -c:v libx265 -crf 40 -f mp4 pipe:1 | aws s3 cp - s3://videosbuket-009212/d5.mp4



and the error output:


[mp4 @ 000001c49541bb40] muxer does not support non seekable output
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --



What's wrong here?
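A likely cause, judging from the error text: the regular MP4 muxer finishes by seeking back to the start of the output to write its index (the moov atom), which is impossible on a pipe. Writing a fragmented MP4 avoids that seek. A minimal sketch of the same command with -movflags added (keeping the asker's bucket path):

ffmpeg -f gdigrab -i desktop -r 1 -vframes 5 -c:v libx265 -crf 40 -movflags frag_keyframe+empty_moov -f mp4 pipe:1 | aws s3 cp - s3://videosbuket-009212/d5.mp4

Alternatively, an inherently streamable container such as MPEG-TS (-f mpegts) or Matroska (-f matroska) sidesteps the problem entirely, at the cost of producing a different file format.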


-
Cannot watch RTSP stream from Live555 using FFMPEG
30 October 2023, by bobku123
I am trying to create an RTSP server using Live555, with the stream source being the raw h264 video of my USB webcam, encoded with FFMPEG and sent over UDP.


I have used BasicUDPSource from the Live555 sources as my FramedSource class, and created my own MediaSubsession class as per the Live555 FAQ. Here is the source code I have written so far:


#include "liveMedia.hh"

#include "BasicUsageEnvironment.hh"
#include "announceURL.hh"
#include "FFMPEGH264StreamMediaSubsession.hh"
#include "BasicUDPSource.hh"
#include "H264VideoStreamFramer.hh"
#include "H265VideoRTPSink.hh"

UsageEnvironment* env;

// To make the second and subsequent client for each stream reuse the same
// input stream as the first client (rather than playing the file from the
// start for each client), change the following "False" to "True":
Boolean reuseFirstSource = False;

// To stream *only* MPEG-1 or 2 video "I" frames
// (e.g., to reduce network bandwidth),
// change the following "False" to "True":
Boolean iFramesOnly = False;

static void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
 char const* streamName, char const* inputFileName); // forward


int main(int argc, char** argv) {
 // Begin by setting up our usage environment:
 TaskScheduler* scheduler = BasicTaskScheduler::createNew();
 env = BasicUsageEnvironment::createNew(*scheduler);

 UserAuthenticationDatabase* authDB = NULL;

 // Serve regular RTSP (over a TCP connection):
 RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554, authDB);

 if (rtspServer == NULL) {
 *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
 exit(1);
 }

 char const* descriptionString = "Session streamed by \"testFFMPEGRTSPServer\"";

 {
 char const* streamName = "FFMPEGRTSPStream";
 ServerMediaSession* sms
 = ServerMediaSession::createNew(*env, streamName, streamName,
 descriptionString);
 sms->addSubsession(FFMPEGH264StreamMediaSubsession
 ::createNew(*env, reuseFirstSource));
 rtspServer->addServerMediaSession(sms);

 announceStream(rtspServer, sms, streamName, "ffmpeg");
 }

 env->taskScheduler().doEventLoop(); // does not return
}

static void announceStream(RTSPServer* rtspServer, ServerMediaSession* sms,
 char const* streamName, char const* inputFileName) {
 UsageEnvironment& env = rtspServer->envir();

 env << "\n\"" << streamName << "\" stream, from the file \""
 << inputFileName << "\"\n";
 announceURL(rtspServer, sms);
}

FFMPEGH264StreamMediaSubsession*
 FFMPEGH264StreamMediaSubsession::createNew(UsageEnvironment& env, Boolean reuseFirstSource)
{
 return new FFMPEGH264StreamMediaSubsession(env, reuseFirstSource);
}

FFMPEGH264StreamMediaSubsession::FFMPEGH264StreamMediaSubsession(UsageEnvironment& env, Boolean reuseFirstSource) 
 : OnDemandServerMediaSubsession(env, reuseFirstSource)
{

}

FFMPEGH264StreamMediaSubsession::~FFMPEGH264StreamMediaSubsession()
{

}

FramedSource* FFMPEGH264StreamMediaSubsession::createNewStreamSource(unsigned clientSessionId,
 unsigned& estBitrate)
{
 estBitrate = 500; // kbps, estimate

 // Create the video source:
 // Create a 'groupsock' for the input multicast group,port:
 char const* inputAddressStr = "192.168.1.100";

 NetAddressList inputAddresses(inputAddressStr);
 struct sockaddr_storage inputAddress;
 copyAddress(inputAddress, inputAddresses.firstAddress());

 Port const inputPort(8888);
 unsigned char const inputTTL = 0; // we're only reading from this mcast group

 Groupsock inputGroupsock(envir(), inputAddress, inputPort, inputTTL);

 // Then create a liveMedia 'source' object, encapsulating this groupsock:
 FramedSource* source = BasicUDPSource::createNew(envir(), &inputGroupsock);

 // Create a framer for the Video Elementary Stream:
 return H264VideoStreamFramer::createNew(envir(), source);
} 

RTPSink* FFMPEGH264StreamMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
 unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource)
{
 return H265VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}



I converted my USB webcam output to an h264 stream using the following FFMPEG command:


ffmpeg -video_size 1280x720 -framerate 30 -input_format mjpeg -f v4l2 -i /dev/video0 -c:v h264_nvenc -bf 0 -g 30 -bsf:v 'filter_units=remove_types=35|38-40' -f h264 udp://192.168.1.100:8888



However, when I try to play it with FFPLAY using the following command, I do not get any output:


ffplay rtsp://192.168.1.100:8554/FFMPEGRTSPStream



What am I doing wrong?
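Two things stand out in the posted code, offered as likely causes rather than a confirmed fix. First, createNewRTPSink builds an H265VideoRTPSink even though the source and framer are H264, so the session describes the wrong codec to clients. Second, inputGroupsock is a stack variable inside createNewStreamSource, so the pointer handed to BasicUDPSource::createNew dangles as soon as the method returns; the groupsock has to outlive the source. A minimal sketch of the two methods with those changes, assuming the rest of the class stays as posted:

FramedSource* FFMPEGH264StreamMediaSubsession::createNewStreamSource(unsigned clientSessionId,
                                                                     unsigned& estBitrate)
{
  estBitrate = 500; // kbps, estimate

  char const* inputAddressStr = "192.168.1.100";
  NetAddressList inputAddresses(inputAddressStr);
  struct sockaddr_storage inputAddress;
  copyAddress(inputAddress, inputAddresses.firstAddress());

  Port const inputPort(8888);
  unsigned char const inputTTL = 0; // we're only reading from this group

  // Heap-allocate the groupsock so it outlives this method; the original
  // stack-allocated Groupsock is destroyed on return, leaving the source
  // reading from a dangling pointer:
  Groupsock* inputGroupsock = new Groupsock(envir(), inputAddress, inputPort, inputTTL);

  FramedSource* source = BasicUDPSource::createNew(envir(), inputGroupsock);
  return H264VideoStreamFramer::createNew(envir(), source);
}

RTPSink* FFMPEGH264StreamMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock,
                                                           unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource)
{
  // Match the RTP sink to the H264 framer; the posted code created an
  // H265VideoRTPSink for an H264 stream (requires #include "H264VideoRTPSink.hh"):
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}

It may also be worth confirming that the raw UDP stream itself is playable, e.g. with ffplay -f h264 udp://192.168.1.100:8888 while the RTSP server is stopped, to rule out the capture and encode side.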


-
How to deal with every frame of a video by using ffmpeg?
7 February 2019, by candrwow
I am using tensorflow to do video object segmentation, but so far I only know how to do it on images (png, jpg). I want to do it on a short video (15 seconds). How can I get the frames one by one, in sequence, and process each frame?
Right now I split the mp4 into pngs with ffmpeg, run segmentation on every png, and finally recombine the pngs into an mp4 and delete all the pngs. The split and recombine look like this:

ffmpeg -i video.mp4 -r 24 ./split/%03d.png
ffmpeg -f image2 -i imgFilePath -r 24 videoDesPath

But this solution is not good: it generates many images on disk and needs extra IO, and if the process fails, many pngs may never be cleaned up. I would like a solution like the one below (see the sketch after this list):
- convert a video to a stream (in Java, because I'm working on Android);
- read the stream to get the first frame, do object segmentation on it (this takes 150 ms to 1 s, so it must run in a child thread), and write the segmented frame to a new stream (called the segmented stream);
- repeat the above steps, and finally convert the segmented stream back to a video.
Can you teach me how to convert a video to a stream and get frames from the stream? Thank you!
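A pattern that avoids the intermediate pngs, sketched as a guess at what is wanted here: have one ffmpeg process decode the video to raw frames on stdout, slice that byte stream into frames in memory, and feed the processed frames to a second ffmpeg process on stdin. The resolution, frame rate, and file names below are illustrative assumptions, not taken from the question:

ffmpeg -i video.mp4 -f rawvideo -pix_fmt rgb24 pipe:1
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1280x720 -framerate 24 -i pipe:0 -c:v libx264 out.mp4

In rgb24 every frame is exactly width × height × 3 bytes, so the reading side can split the stream into frames by byte count alone, with no container parsing. On Android, the two processes can be driven from Java through their stdin/stdout pipes (for example via a wrapper such as FFmpegKit), with the segmentation running on a worker thread between the read and the write.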