
Media (91)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (41)
-
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a section ("rubrique").
For a category-type document, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Short description
It is also in this configuration section that you can specify the (...)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9123)
-
Could anyone help me understand why moviepy is rendering at 2.5 it/s?
23 December 2023, by tristan
I'm writing a program that uses moviepy to make those weird Reddit-thread videos with MC parkour playing in the background (real original, I know), and everything is good except for rendering the video, which seems to consume a ton of memory and moves really, really slowly, at about 2.5 it/s. Could anyone help? Also, I'm a novice programmer with no sense of what is conventional or proper, so sorry if my code is very bad.


from moviepy.video.fx.all import resize
from moviepy.video.tools.subtitles import SubtitlesClip
from moviepy.editor import (
    CompositeVideoClip,
    AudioFileClip,
    VideoFileClip,
    ImageClip,
    TextClip,
)
import random
import moviepy.config as cfg
import librosa
from imagegenerator import draw_title
from audioeditor import concatenate_audios
import soundfile as sf
import numpy as np

# Constants
VIDEO_FADE_DURATION = 0.4
SPEED_FACTOR = 1.1
TEXT_WIDTH = 600
MINIMUM_FONT_SIZE = 60
FONT_COLOR = "white"
OUTLINE_COLOR = "black"
TITLE_ANIMATION_DURATION = 0.25
ANIMATION_DURATION = 0.2

# Configure imagemagick binary
cfg.change_settings(
    {
        "IMAGEMAGICK_BINARY": "magick/magick.exe"
    }
)

# Ease-out function
def ease_out(t):
    return 1 - (1 - t) ** 2

# Overlap audio files
def overlap_audio_files(audio_path1, audio_path2):
    # Load the first audio file
    audio1, sr1 = librosa.load(audio_path1, sr=None)

    # Load the second audio file
    audio2, sr2 = librosa.load(audio_path2, sr=None)

    # Ensure both audio files have the same sample rate
    if sr1 != sr2:
        raise ValueError("Sample rates of the two audio files must be the same.")

    # Calculate the duration of audio2
    audio2_duration = len(audio2)

    # Tile audio1 to match the duration of audio2
    audio1 = np.tile(audio1, int(np.ceil(audio2_duration / len(audio1))))

    # Trim audio1 to match the duration of audio2
    audio1 = audio1[:audio2_duration]

    # Combine the audio files by superimposing them
    combined_audio = audio1 + audio2

    # Save the combined audio to a new file
    output_path = "temp/ttsclips/combined_audio.wav"
    sf.write(output_path, combined_audio, sr1)

    return output_path

# Generator function for subtitles with centered alignment and outline
def centered_text_generator_white(txt):
    return TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=FONT_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1072, 1682),
        method='caption',  # Draw a caption instead of a title
    )

# Generator function for subtitles with centered alignment and blurred outline
def centered_text_generator_black_blurred_outline(txt, blur_factor=3):
    outline_clip = TextClip(
        txt,
        font=r"fonts/Invisible-ExtraBold.otf",
        fontsize=86,
        color=OUTLINE_COLOR,
        bg_color='transparent',  # Use a transparent background
        align='center',  # Center the text
        size=(1080, 1688),
        method='caption',  # Draw a caption instead of a title
    )

    # Blur the black text (outline) by shrinking and re-enlarging it
    blurred_outline_clip = outline_clip.fx(resize, 1.0 / blur_factor)
    blurred_outline_clip = blurred_outline_clip.fx(resize, blur_factor)

    return blurred_outline_clip

# Compile video function
def compile_video(title_content, upvotes, comments, tone, subreddit, video_num):
    # Height of the video frame (the video is 720x1280)
    height = 1280

    # Concatenate the audios
    concatenate_audios()

    concatenated_audio_path = r"temp/ttsclips/concatenated_audio.mp3"
    title_audio_path = r"temp/ttsclips/title.mp3"

    title_audio = AudioFileClip(title_audio_path)
    concatenated_audio = AudioFileClip(concatenated_audio_path)

    # Calculate the video duration
    title_duration = title_audio.duration
    duration = concatenated_audio.duration

    # Set background
    background_path = "saved_videos/newmcparkour.mp4"
    background = VideoFileClip(background_path)
    background_duration = background.duration
    random_start = random.uniform(0, background_duration - duration)
    background = background.subclip(random_start, random_start + duration)

    # Apply a fade-out effect to the background clip
    background = background.crossfadeout(VIDEO_FADE_DURATION)

    # Generate the background image with rounded corners
    background_image_path = draw_title(title_content, upvotes, comments, subreddit)

    # Load the background image with rounded corners
    background_image = ImageClip(background_image_path)

    # Set the start of the animated title clip
    animated_background_clip = background_image.set_start(0)

    # Set the initial position of the text at the bottom of the screen
    initial_position = (90, height)

    # Calculate the final position of the text at the center of the screen
    final_position = [90, 630]

    # Animate the title clip to slide up over the course of the animation duration
    animated_background_clip = animated_background_clip.set_position(
        lambda t: (
            initial_position[0],
            initial_position[1]
            - (initial_position[1] - final_position[1])
            * ease_out(t / TITLE_ANIMATION_DURATION),
        )
    )

    # Set the duration of the animated title clip
    animated_background_clip = animated_background_clip.set_duration(
        TITLE_ANIMATION_DURATION
    )

    # Assign a start time to the stationary title image
    stationary_background_clip = background_image.set_start(TITLE_ANIMATION_DURATION)

    # Assign a position to the stationary title image
    stationary_background_clip = stationary_background_clip.set_position(final_position)

    # Assign a duration to the stationary title image
    stationary_background_clip = stationary_background_clip.set_duration(
        title_duration - TITLE_ANIMATION_DURATION
    )

    # Select background music
    if tone == "normal":
        music_options = [
            "Anguish",
            "Garden",
            "Limerence",
            "Lost",
            "NoWayOut",
            "Summer",
            "Never",
            "Miss",
            "Touch",
            "Stellar",
        ]
    elif tone == "eerie":
        music_options = [
            "Creepy",
            "Scary",
            "Spooky",
            "Space",
            "Suspense",
        ]
    background_music_choice = random.choice(music_options)
    background_music_path = f"music/eeriemusic/{background_music_choice}.mp3"

    # Create the final audio by overlapping background music and concatenated audio
    final_audio = AudioFileClip(
        overlap_audio_files(background_music_path, concatenated_audio_path)
    )

    # Release the concatenated audio (its duration has been measured and it has been mixed in)
    concatenated_audio.close()

    # Create subtitle clips using the centered text generators
    subtitles = SubtitlesClip(
        "temp/ttsclips/content_speechmarks.srt",
        lambda txt: centered_text_generator_white(txt),
    )
    subtitles_outline = SubtitlesClip(
        "temp/ttsclips/content_speechmarks.srt",
        lambda txt: centered_text_generator_black_blurred_outline(txt),
    )

    # Overlay the subtitles on the background and title clips
    final_clip = CompositeVideoClip(
        [
            background,
            animated_background_clip,
            stationary_background_clip,
            subtitles_outline,
            subtitles,
        ]
    )

    # Set the final video duration and audio, then export the video
    final_clip = final_clip.set_duration(duration)
    final_clip = final_clip.set_audio(final_audio)

    final_clip.write_videofile(
        f"temp/videos/{video_num}.mp4",
        codec="libx264",
        fps=60,
        bitrate="8000k",
        audio_codec="aac",
        audio_bitrate="192k",
        preset="ultrafast",
        threads=8,
    )

    # Release the title audio
    title_audio.close()

    # Release the background video and image
    background.close()
    background_image.close()

    # Release the final audio
    final_audio.close()

    # Release the subtitle clips
    subtitles.close()
    subtitles_outline.close()

    # Release the final video clip
    final_clip.close()



I've tried turning down my settings, like using the "ultrafast" preset and dropping the bitrate, but nothing seems to help. The only thing I can think of now is that I'm doing something wrong with moviepy.


-
ffmpeg GRAY16 stream over network
28 November 2023, by Norbert P.
I'm working on a school project where we need to use depth cameras. The camera produces color and depth (in other words, a 16-bit grayscale image). We decided to use ffmpeg, as compression could be very useful later on. For now we have a basic stream running from one PC to another. These settings include:


- rtmp
- flv as container
- pixel format AV_PIX_FMT_YUV420P
- codec AV_CODEC_ID_H264
The problem we are having is with the grayscale image. Not every codec can cope with this format, and not every protocol works with a given codec. I got some settings "working", but the receiver side just gets stuck in the avformat_open_input() call.
I have also tested it from the command line, with ffmpeg listening for a connection, and the same thing happens.
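As a sanity check, it may help to verify programmatically that the chosen encoder even advertises GRAY16 before wiring up the stream. A minimal sketch, assuming the classic pix_fmts field (very recent FFmpeg releases are moving this information to avcodec_get_supported_config(), so treat this only as a sketch):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Walk the encoder's AV_PIX_FMT_NONE-terminated pix_fmts list; a NULL list
// means the codec does not restrict pixel formats this way.
static bool encoder_supports(AVCodecID id, AVPixelFormat fmt) {
    const AVCodec* c = avcodec_find_encoder(id);
    if (!c || !c->pix_fmts)
        return false;
    for (const AVPixelFormat* p = c->pix_fmts; *p != AV_PIX_FMT_NONE; ++p)
        if (*p == fmt)
            return true;
    return false;
}

// e.g. encoder_supports(AV_CODEC_ID_APNG, AV_PIX_FMT_GRAY16BE) vs.
//      encoder_supports(AV_CODEC_ID_H264, AV_PIX_FMT_GRAY16BE)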


I include a minimal "working" example of the client code. The server side can be tested with "ffmpeg.exe -f apng -listen 1 -i rtmp://localhost:9999/stream/stream1 -c copy -f apng -listen 1 rtmp://localhost:2222/live/l" or with the code below. I get no warnings; ffmpeg is the newest version, installed with "vcpkg install --triplet x64-windows ffmpeg[ffmpeg,ffprobe,zlib]" on Windows or with the package manager on Linux.


The question: did I miss something? How do I get it to work? If you have any better ideas, I would gladly consider them. In the end I need 16 bits of lossless transmission; it could be split between channels etc., which I also tried with the same effect.


Client code that would own the camera and connect to the server:


extern "C" {
#include <libavutil></libavutil>opt.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>channel_layout.h>
#include <libavutil></libavutil>common.h>
#include <libavformat></libavformat>avformat.h>
#include <libavcodec></libavcodec>avcodec.h>
#include <libavutil></libavutil>imgutils.h>
}

int main() {

    std::string container = "apng";
    AVCodecID codec_id = AV_CODEC_ID_APNG;
    AVPixelFormat pixFormat = AV_PIX_FMT_GRAY16BE;

    AVFormatContext* format_ctx;
    AVCodec* out_codec;
    AVStream* out_stream;
    AVCodecContext* out_codec_ctx;
    AVFrame* frame;
    uint8_t* data;

    std::string server = "rtmp://localhost:9999/stream/stream1";

    int width = 1280, height = 720, fps = 30, bitrate = 1000000;

    // Initialize the output format context with the chosen container and no filename
    avformat_alloc_output_context2(&format_ctx, nullptr, container.c_str(), server.c_str());
    if (!format_ctx) {
        return 1;
    }

    // AVIOContext for accessing the resource indicated by the url
    if (!(format_ctx->oformat->flags & AVFMT_NOFILE)) {
        int avopen_ret = avio_open(&format_ctx->pb, server.c_str(),
                                   AVIO_FLAG_WRITE);
        if (avopen_ret < 0) {
            fprintf(stderr, "failed to open stream output context, stream will not work\n");
            return 1;
        }
    }

    const AVCodec* tmp_out_codec = avcodec_find_encoder(codec_id);
    //const AVCodec* tmp_out_codec = avcodec_find_encoder_by_name("hevc");
    out_codec = const_cast<AVCodec*>(tmp_out_codec);
    if (!(out_codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        return 1;
    }

    out_stream = avformat_new_stream(format_ctx, out_codec);
    if (!out_stream) {
        fprintf(stderr, "Could not allocate stream\n");
        return 1;
    }

    out_codec_ctx = avcodec_alloc_context3(out_codec);

    const AVRational timebase = { 60000, fps };
    const AVRational dst_fps = { fps, 1 };
    av_log_set_level(AV_LOG_VERBOSE);
    //codec_ctx->codec_tag = 0;
    //codec_ctx->codec_id = codec_id;
    out_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
    out_codec_ctx->width = width;
    out_codec_ctx->height = height;
    out_codec_ctx->gop_size = 1;
    out_codec_ctx->time_base = timebase;
    out_codec_ctx->pix_fmt = pixFormat;
    out_codec_ctx->framerate = dst_fps;
    out_codec_ctx->time_base = av_inv_q(dst_fps);
    out_codec_ctx->bit_rate = bitrate;
    //if (fctx->oformat->flags & AVFMT_GLOBALHEADER)
    //{
    //    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    //}

    out_stream->time_base = out_codec_ctx->time_base; // will be set afterwards by avformat_write_header to 1/1000

    int ret = avcodec_parameters_from_context(out_stream->codecpar, out_codec_ctx);
    if (ret < 0)
    {
        fprintf(stderr, "Could not initialize stream codec parameters!\n");
        return 1;
    }

    AVDictionary* codec_options = nullptr;
    av_dict_set(&codec_options, "tune", "zerolatency", 0);

    // Open the video encoder
    ret = avcodec_open2(out_codec_ctx, out_codec, &codec_options);
    if (ret < 0)
    {
        fprintf(stderr, "Could not open video encoder!\n");
        return 1;
    }
    av_dict_free(&codec_options);

    out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;
    out_stream->codecpar->extradata = static_cast<uint8_t*>(av_mallocz(out_codec_ctx->extradata_size));
    memcpy(out_stream->codecpar->extradata, out_codec_ctx->extradata, out_codec_ctx->extradata_size);

    av_dump_format(format_ctx, 0, server.c_str(), 1);

    frame = av_frame_alloc();

    int sz = av_image_get_buffer_size(pixFormat, width, height, 32);
#ifdef _WIN32
    data = (uint8_t*)_aligned_malloc(sz, 32);
    if (data == NULL)
        return ENOMEM;
#else
    ret = posix_memalign(reinterpret_cast<void**>(&data), 32, sz);
#endif
    av_image_fill_arrays(frame->data, frame->linesize, data, pixFormat, width, height, 32);
    frame->format = pixFormat;
    frame->width = width;
    frame->height = height;
    frame->pts = 1;
    if (avformat_write_header(format_ctx, nullptr) < 0) // Header making problems!!!
    {
        fprintf(stderr, "Could not write header!\n");
        return 1;
    }

    printf("stream time base = %d / %d \n", out_stream->time_base.num, out_stream->time_base.den);

    double inv_stream_timebase = (double)out_stream->time_base.den / (double)out_stream->time_base.num;
    printf("Init OK\n");
    /* Init phase end */
    int dts = 0;
    int frameNo = 0;

    while (true) {
        // Fill the dummy frame with a moving gradient; each GRAY16 sample
        // is two bytes wide, so write through a uint16_t view of the buffer
        uint16_t* samples = reinterpret_cast<uint16_t*>(data);
        for (int y = 0; y < height; y++) {
            uint16_t color = ((y + frameNo) * 256) % (256 * 256);
            for (int x = 0; x < width; x++) {
                samples[x + y * width] = color;
            }
        }

        memcpy(frame->data[0], data, 1280 * 720 * sizeof(uint16_t));
        AVPacket* pkt = av_packet_alloc();

        int ret = avcodec_send_frame(out_codec_ctx, frame);
        if (ret < 0)
        {
            fprintf(stderr, "Error sending frame to codec context!\n");
            return ret;
        }
        while (ret >= 0) {
            ret = avcodec_receive_packet(out_codec_ctx, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                break;
            else if (ret < 0) {
                fprintf(stderr, "Error during encoding\n");
                break;
            }
            pkt->dts = dts;
            pkt->pts = dts;
            dts += 33;
            av_write_frame(format_ctx, pkt);
            frameNo++;
            av_packet_unref(pkt);
        }
        printf("Streamed %d frames\n", frameNo);
    }
    return 0;
}



And the part of the server that should receive; this is the code where it stops and waits:


extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavformat></libavformat>avio.h>
}

int main() {
    AVFormatContext* fmt_ctx = NULL;
    av_log_set_level(AV_LOG_VERBOSE);
    AVDictionary* options = nullptr;
    av_dict_set(&options, "protocol_whitelist", "file,udp,rtp,tcp,rtmp,rtsp,hls", 0);
    av_dict_set(&options, "timeout", "500000", 0); // Timeout in microseconds

    // The next line hangs
    int ret = avformat_open_input(&fmt_ctx, "rtmp://localhost:9999/stream/stream1", NULL, &options);
    if (ret != 0) {
        fprintf(stderr, "Could not open RTMP stream\n");
        return -1;
    }

    // Find the first video stream
    ret = avformat_find_stream_info(fmt_ctx, nullptr);
    if (ret < 0) {
        return ret;
    }
    //...
}
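For what it's worth, the command-line test relies on "-listen 1"; the programmatic counterpart would presumably be the native RTMP protocol's "listen" option, passed through the same options dictionary as above (a sketch, not verified here):

AVDictionary* options = nullptr;
// Accept one incoming RTMP connection instead of dialing out
// (equivalent of "-listen 1" on the ffmpeg command line)
av_dict_set(&options, "listen", "1", 0);
int ret = avformat_open_input(&fmt_ctx, "rtmp://0.0.0.0:9999/stream/stream1", NULL, &options);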




Edit:
I tried creating an animated PNG and streaming it from one console window to another, to rule out any programming mistakes on my side. The result was the same: I could not get a 16-bit PNG-encoded stream to work. The receiver hung waiting and, when the file ended, closed with zero frames received in total.


I managed to get something else working:
To avoid encoding gray frames as YUV420, I installed ffmpeg with libx264 support (I thought it was the same as H264, which at the code level it is, but it adds support for more pixel formats). I then used H264 again, but with GRAY8, doubling the image width and reconstructing the image on the other side.
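In case it is useful to anyone, the width-doubling trick works because GRAY16 and double-width GRAY8 share the same byte layout, so the conversion is a pure reinterpretation. A minimal sketch of both directions, assuming packed rows (real code must honour each AVFrame's linesize) and matching endianness on both ends:

#include <cstdint>
#include <cstring>

// Sender: view one GRAY16 row (width samples) as a GRAY8 row (2*width samples).
// The bytes are identical, so a copy (or even just a pointer cast) suffices.
static void gray16_row_to_gray8(const uint16_t* src, uint8_t* dst, int width) {
    std::memcpy(dst, src, static_cast<size_t>(width) * sizeof(uint16_t));
}

// Receiver: rebuild the GRAY16 row from the doubled-width GRAY8 row.
static void gray8_row_to_gray16(const uint8_t* src, uint16_t* dst, int width) {
    std::memcpy(dst, src, static_cast<size_t>(width) * sizeof(uint16_t));
}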


Maybe as a side note: I could not get any other formats to work. Is "flv" the only option here? Could I get more performance if I changed it to... what?


-
Announcing our latest open source project: DeviceDetector
This blog post announces our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.
Read on to learn more about this exciting release.
Why did we create DeviceDetector?
Our previous library, UserAgentParser, could only detect operating systems and browsers. But as more and more traffic comes from mobile devices like smartphones and tablets, it is increasingly important to know which devices are used by a website's visitors.
To ensure that device detection within Piwik gets the attention required to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain separately. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!
DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as composer package through Packagist.
How DeviceDetector works
Every client requesting data from a webserver identifies itself by sending a so-called User-Agent within the request to the server. Those User Agents may contain several pieces of information, such as:
- client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
- operating system name and version
- device identifier, which can be used to detect the brand and model.
For example:
Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36
This User Agent contains the following information: the operating system is Android 4.4.2, the client uses the browser Chrome Mobile 32.0.1700.99, and the device is a Google Nexus 5 smartphone.
What DeviceDetector currently detects
DeviceDetector is able to detect bots (like search engines, feed fetchers, site monitors and so on); five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (like mail clients) and mobile apps using the AFNetworking framework; around 80 operating systems; and nine different device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands.
Note: Piwik itself currently does not use the full feature set of DeviceDetector. Client detection is currently not implemented in Piwik (only detected browsers are reported; other clients are marked as Unknown). Client detection will be implemented in Piwik in the future; follow #5413 to stay updated.
Performance of DeviceDetector
Our detections are currently handled by an enormous number of regexes defined in several .yml files. As parsing these .yml files is a bit slow, DeviceDetector can cache the parsed files. By default DeviceDetector uses a static cache, meaning everything is cached in static variables. As that only speeds up repeated detections within one process, there are also adapters for caching in files or memcache, to speed up detection across requests.
How can users help contribute to DeviceDetector?
Submit your devices that are not detected yet
If you own a device that is currently not correctly detected by DeviceDetector, please create an issue on GitHub.
To check whether your device is detected correctly by DeviceDetector, go to your Piwik server, click on the 'Settings' link, then click on 'Device Detection' under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create a ticket.
Submit a list of your User Agents
In order to create new detections or improve existing ones, we need lists of User Agents. If you have a website visited mostly from non-desktop devices, it would be useful if you sent us a list of the User Agents that visited it. To do so you need access to your access logs. The following command will extract the User Agents:
zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt
If you want to help us with this data, please get in touch at devicedetector@piwik.org.
Submit improvements on GitHub
As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.
What's the next big thing for DeviceDetector?
Please check out the list of issues in the device-detector issue tracker.
We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!
Happy Device Detection,