
Media (91)
-
MediaSPIP Simple: future default graphical theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
config chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
Other articles (56)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
On other sites (10948)
-
FFmpeg - resize by max width/height, keep aspect ratio and avoid "width/height not divisible by 2" error
29 August 2020, by Eduard Unruh
This is the code I'm using to resize the video to either a max width of 640 or a max height of 700 while keeping the aspect ratio:


ffmpeg/bin/ffmpeg.exe -y -i ttt.mp4 -profile:v high -c:v libx264 -filter_complex "scale=iw*min(1\,min(640/iw\,700/ih)):-1" -acodec copy -maxrate 600k -bufsize 300k -crf 18 ttt2.mp4



On some videos I get either
width not divisible by 2
or
height not divisible by 2


I looked it up and found that the solution would be:


-vf "crop=trunc(iw/2)*2:trunc(ih/2)*2"



So I tried:


ffmpeg/bin/ffmpeg.exe -y -i ttt.mp4 -profile:v high -c:v libx264 -filter_complex "scale=iw*min(1\,min(640/iw\,700/ih)):-1" -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" -acodec copy -maxrate 600k -bufsize 300k -crf 18 ttt2.mp4



and get the error:


-vf/-af/-filter and -filter_complex cannot be used together for the same stream



So how can I do this?
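
One common way to avoid that error is to put both steps into a single filter graph instead of passing -filter_complex and -vf together; filters can simply be chained with a comma. A minimal sketch, reusing the command and values from the question above:

ffmpeg/bin/ffmpeg.exe -y -i ttt.mp4 -profile:v high -c:v libx264 -filter_complex "scale=iw*min(1\,min(640/iw\,700/ih)):-1,crop=trunc(iw/2)*2:trunc(ih/2)*2" -acodec copy -maxrate 600k -bufsize 300k -crf 18 ttt2.mp4

Alternatively, the crop step can be dropped by letting scale round to even dimensions itself, e.g. scale=trunc(iw*min(1\,min(640/iw\,700/ih))/2)*2:-2, since a height of -2 keeps the aspect ratio while forcing divisibility by 2.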


-
Connect a remote IP camera as a WebRTC client
5 April 2017, by idosh
I have 2 cameras:
- An internal webcam embedded in my laptop.
- A remote IP camera that is connected to my laptop through Wi-Fi (it transmits raw H264 data over TCP - no container). I'm getting the stream using node.js.
My goal is to create a WebRTC network and connect the remote camera as another client.
I'm trying to figure out possible solutions:
- My naive thinking was that I would stream the remote camera's payload to the browser. But as I came to understand, the browser can't handle the stream without a container. Fair enough. But I don't understand why it does handle the video stream that comes from my internal camera (from the navigator.getUserMedia() function). What's the difference between the two streams? Why can't I mimic the stream from the remote camera as the input?
- To bypass this problem I thought about creating a virtual camera using Manycam (or a Manycam-like app). To accomplish that I need to convert my TCP stream into an RTP stream (in order to feed Manycam). Though I did see some information for the ffmpeg command line, I couldn't find any in its node.js API package "fluent-ffmpeg". Is it possible to do this using fluent-ffmpeg, or only using the command-line tool (see the sketch after this list)? Would it require another RTP server in the middle such as this one?
- The third option I read about is using node.js as a client in WebRTC. I saw it was implemented in "simple-peer". I tried it out using its socket.io integration (socket.io-p2p). Unfortunately I couldn't get it to work: when I try to create a socket/peer on the server, it throws errors, as it expects options that are only available on the client side (like window, location, etc.). Am I doing something wrong? Maybe there is a more suitable framework for this?
- The fourth option is to use a streaming server in the middle, such as Kurento. From my understanding it receives RTP as input and transmits it as a WebRTC client. It feels like the most heavyweight option, but maybe it's not so bad (I have to admit that I haven't investigated it yet).
Any thoughts?
Thanks!
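
Regarding the second option, here is a minimal command-line sketch of repackaging the raw H264 TCP stream as RTP, assuming the camera's byte stream is plain Annex-B H264 that ffmpeg's h264 demuxer accepts (CAMERA_IP, PORT and the local RTP port 5004 are placeholders):

ffmpeg -f h264 -i tcp://CAMERA_IP:PORT -c:v copy -an -f rtp rtp://127.0.0.1:5004

Since fluent-ffmpeg is essentially a wrapper that builds an ffmpeg command line, anything that works as a command like this should be expressible through its input/output option methods as well.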
-
ffmpeg/h265/opencv/c++ - A method to resize frames after decoding on the client side
18 January 2018, by 8793
I've just joined a project to build a real-time video streaming application using ffmpeg/opencv/c++ over a UDP socket. The server transmits 640x480 video to the client; to reduce the amount of data sent over the network, I resize the video to 320x240 and send those frames. On the client side, after receiving a frame, we upscale it back to 640x480. H265 is used for encoding/decoding.
As I am just a beginner with video encoding, I would like to understand how to downsample and upsample the frames on the server and client side in a way that fits in with the video encoder/decoder.
A simple idea that came to mind is that after decoding (AVFrame -> Mat), I will upsample the frame and then display it.
I am not sure whether this idea is right or wrong, and I would like advice from anyone who has experience in this area. Thank you very much!
static void updateFrameCallback(AVFrame *avframe, void* userdata) {
VideoStreamUDPClient* streamer = static_cast<VideoStreamUDPClient*>(userdata);
TinyClient* client = static_cast<TinyClient*>(streamer->userdata);
//Update Frame
pthread_mutex_lock(&client->mtx_updateFrame);
if (streamer->irect.width == client->frameSize.width
&& streamer->irect.height == client->frameSize.height) {
cvtAVFrameYUV4202Frame(&avframe, client->frame);
printf("TinyClient: Received Full Frame\n");
} else {
Mat block;
cvtAVFrameYUV4202Frame(&avframe, block);
block.copyTo(client->frame(streamer->irect));
}
//How to resize the frame before displaying it?
imshow("Frame", client->frame);
waitKey(1);
pthread_mutex_unlock(&client->mtx_updateFrame);
}
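
A minimal sketch of the missing resize step, assuming client->frame holds the received low-resolution image and 640x480 (taken from the question) is the desired display size; showUpscaled and the interpolation choice are illustrative, not part of the original code:

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

// Hypothetical helper: upscale a decoded low-resolution frame to the
// display resolution before showing it.
static void showUpscaled(const cv::Mat& decoded) {
    cv::Mat display;
    // INTER_CUBIC tends to look better than INTER_LINEAR for upscaling,
    // at a modest extra CPU cost.
    cv::resize(decoded, display, cv::Size(640, 480), 0, 0, cv::INTER_CUBIC);
    cv::imshow("Frame", display);
    cv::waitKey(1);
}

Inside updateFrameCallback this would replace the imshow/waitKey pair, e.g. showUpscaled(client->frame), still under the mtx_updateFrame lock.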