
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (31)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats (a short sketch of this kind of conversion follows the article list below).
Video files are encoded to MP4, OGV and WebM for HTML5 playback, with MP4 also used for Flash.
Audio files are encoded to MP3 and Ogg for HTML5 playback, with MP3 also used for Flash.
Where possible, text documents are analyzed to extract the data needed for search-engine indexing, and are then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: PNG, GIF, JPG, BMP and more; audio: MP3, Ogg, WAV and more; video: AVI, MP4, OGV, MPG, MOV, WMV and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
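
To make the conversion described in "Encoding and processing into web-friendly formats" concrete, here is a minimal sketch of that kind of transcoding. It assumes ffmpeg is the encoder behind it (the excerpt does not say which tool MediaSPIP actually uses), and the file names are made up.

import subprocess

SOURCE = "upload/source.mov"  # hypothetical uploaded file

# HTML5-friendly renditions, matching the formats listed in the excerpt above.
renditions = [
    (["-c:v", "libx264", "-c:a", "aac"], "rendition.mp4"),
    (["-c:v", "libvpx", "-c:a", "libvorbis"], "rendition.webm"),
    (["-c:v", "libtheora", "-c:a", "libvorbis"], "rendition.ogv"),
]
for extra_args, output in renditions:
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *extra_args, output], check=True)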
On other sites (6035)
-
Write buffer to ffmpeg stdin and send websocket message after that
6 September 2024, by alpecca. I am working on a streaming application in which users can stream video in real time to my server using WebSocket and MediaRecorder. I have written my backend in Python with FastAPI and set up a WebSocket endpoint that receives buffer data from the frontend every two seconds. The problem I am facing is here:


process.stdin.write(data)
await websocket.send_json({"chunkNumber": dataNumber, "status": 200})



Here, the write to the ffmpeg process can take some time (ffmpeg has to consume the buffer and write it into the MP4), but the websocket send_json call does not wait for that and just sends the message back to the client. This is causing the videos to be too short and corrupted once the user stops the recording.


Here is the full code


@router.websocket("/stream")
async def websocket_endpoint(websocket: WebSocket, token: str = Query(...), videoId: str = Query(...), authorize: AuthJWT = Depends()):
    await manager.connect(websocket)
    dataNumber = 1

    recordingFile = os.path.join(temp_dir, f"recording_{videoId}.mp4")

    command = [
        'ffmpeg',
        '-y',
        '-i',
        '-',
        '-codec:v',
        'copy',
        '-f', 'mp4',
        recordingFile,
        # "-"
        # f'output{queueNumber}.mp4',
    ]

    process = subprocess.Popen(command, stdin=subprocess.PIPE)

    try:
        while True:
            try:
                data = await websocket.receive_bytes()
                if not data:
                    break
                process.stdin.write(data)
                process.stdin.flush()
                await websocket.send_json({"chunkNumber": dataNumber, "status": 200})
                dataNumber = dataNumber + 1
            except RuntimeError:
                break
    except WebSocketDisconnect:
        print(f"Client disconnected: {websocket.client.host}")
    finally:
        manager.disconnect(websocket)
        process.stdin.close()
        process.wait()
        process.terminate()



What I want to do is: for each buffer sent from the client, make sure that ffmpeg has completely written it before sending the websocket message back to the client. Also, since the requests arrive from the client every two seconds no matter what, if the write for the previous message is taking too long, ffmpeg should first finish writing that part to the file (and the corresponding message should be sent) before the newer chunk is handled.
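
A minimal sketch of one way to get that ordering (not the poster's code): run the blocking pipe write in a worker thread with asyncio.to_thread and await it before acknowledging the chunk. Because each chunk is handled sequentially in the same coroutine, a new chunk is only written after the previous write has returned, and the event loop is not blocked while ffmpeg consumes the data. The helper name pump_chunks is made up; it assumes the same websocket and process objects as in the code above.

import asyncio


async def pump_chunks(websocket, process):
    """Forward websocket chunks to ffmpeg's stdin, acknowledging each chunk
    only after the blocking pipe write has completed."""
    chunk_number = 1
    while True:
        data = await websocket.receive_bytes()
        if not data:
            break
        # Blocking calls run in a worker thread; awaiting them keeps the event
        # loop free and guarantees the ack is sent only after the write returns.
        await asyncio.to_thread(process.stdin.write, data)
        await asyncio.to_thread(process.stdin.flush)
        await websocket.send_json({"chunkNumber": chunk_number, "status": 200})
        chunk_number += 1

Note that a completed write() only means ffmpeg has received the bytes, not that the MP4 on disk is finished: with the command above, the file only becomes playable after stdin is closed and process.wait() returns, because the MP4 index (moov atom) is written at the end. Writing fragmented MP4 (for example with -movflags +frag_keyframe+empty_moov) is one way to keep the file valid while recording, if that matters here.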


-
avformat/matroskaenc : write updated STREAMINFO metadata for FLAC streams if available
1 November 2016, by James Almer
FLAC streams originating from the FLAC encoder send updated and more
complete STREAMINFO metadata as part of the last packet, so write that
to CodecPrivate instead of the incomplete one available in extradata
during init.
Signed-off-by: James Almer <jamrial@gmail.com>
-
How do I write audio and video to the same file using FFMPEG and C?
29 June 2018, by benwiz. I am consuming an audio file and a video file using ffmpeg in a C program. I am modifying both the audio and the video data. In the working code below I write each of these streams to its own file. How can I write both streams to the same file?
#include <stdio.h>
#include <stdint.h>
#include <math.h>
// Video resolution
#define W 1280
#define H 720

// Allocate a buffer to store one video frame
unsigned char video_frame[H][W][3] = {0};

int main()
{
    // Audio pipes
    FILE *audio_pipein = popen("ffmpeg -i data/daft-punk.mp3 -f s16le -ac 1 -", "r");
    FILE *audio_pipeout = popen("ffmpeg -y -f s16le -ar 44100 -ac 1 -i - out/daft-punk.mp3", "w");

    // Video pipes
    FILE *video_pipein = popen("ffmpeg -i data/daft-punk.mp4 -f image2pipe -vcodec rawvideo -pix_fmt rgb24 -", "r");
    FILE *video_pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 1280x720 -r 25 -i - -f mp4 -q:v 5 -an -vcodec mpeg4 out/daft-punk.mp4", "w");

    // Audio vars
    int16_t audio_sample;
    int audio_count;
    int audio_n = 0;

    // Video vars
    int x = 0;
    int y = 0;
    int video_count = 0;

    // Read, modify, and write one audio_sample and video_frame at a time
    while (1)
    {
        // Audio
        audio_count = fread(&audio_sample, 2, 1, audio_pipein); // read one 2-byte audio_sample
        if (audio_count == 1)
        {
            ++audio_n;
            audio_sample = audio_sample * sin(audio_n * 5.0 * 2 * M_PI / 44100.0);
            fwrite(&audio_sample, 2, 1, audio_pipeout);
        }

        // Video
        video_count = fread(video_frame, 1, H * W * 3, video_pipein); // Read a frame from the input pipe into the buffer
        if (video_count == H * W * 3)                                 // Only modify and write if frame exists
        {
            for (y = 0; y < H; ++y)     // Process this frame
                for (x = 0; x < W; ++x) // Invert each colour component in every pixel
                {
                    video_frame[y][x][0] = 255 - video_frame[y][x][0]; // red
                    video_frame[y][x][1] = 255 - video_frame[y][x][1]; // green
                    video_frame[y][x][2] = 255 - video_frame[y][x][2]; // blue
                }
            fwrite(video_frame, 1, H * W * 3, video_pipeout); // Write this frame to the output pipe
        }

        // Break if both complete
        if (audio_count != 1 && video_count != H * W * 3)
            break;
    }

    // Close audio pipes
    pclose(audio_pipein);
    pclose(audio_pipeout);

    // Close video pipes
    fflush(video_pipein);
    fflush(video_pipeout);
    pclose(video_pipein);
    pclose(video_pipeout);

    return 0;
}
I took the base for this code from this article.
Thanks!
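
One common way to end up with a single file, sketched here rather than taken from an answer: leave the processing loop above as it is, and once both pipelines have finished (after the pclose calls), mux the two intermediate files into one container with a final ffmpeg run. The sketch is in Python only to keep the added examples in one language; in the C program the equivalent would be a system() or popen() call with the same command line. The combined output path is made up.

import subprocess

# Hypothetical post-processing step: mux the processed video and audio files
# produced by the program above into a single MP4.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "out/daft-punk.mp4",   # processed video from the question
        "-i", "out/daft-punk.mp3",   # processed audio from the question
        "-c:v", "copy",              # keep the already-encoded video as-is
        "-c:a", "aac",               # re-encode audio into an MP4-friendly codec
        "-shortest",                 # stop at the end of the shorter stream
        "out/daft-punk-combined.mp4",
    ],
    check=True,
)

Copying the video stream avoids a second encode; the audio is re-encoded to AAC because MP3-in-MP4, while legal, is less widely supported by players.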