Other articles (112)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Websites built with MediaSPIP
2 May 2011
This page presents some of the websites running MediaSPIP.
You can of course add yours using the form at the bottom of the page. -
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (7935)
-
Using an actual audio recording to filter out noise from a video
9 March 2021, by user2751530
I use my laptop (Ubuntu 18.04 LTS derivative on a Dell XPS 13) for recording videos (these are just narrated presentations) using OBS. After a presentation is done (.flv format), I process it with ffmpeg, using filters that try to reduce background noise, reduce the size of the video, change the encoding to .mp4, insert a watermark, etc. Over several months, this system has worked well.


However, my laptop is now beginning to show its age (it is 4 years old). That means the fan becomes loud - loud enough to notice in a recording, though not loud enough to notice while you are working. So, even after filtering out low frequencies in ffmpeg, there are clicking and other types of sounds left in the video. I am a scientist, though not an audio/video expert. So I was thinking: is it possible for me to simply record the noise coming out of my machine when I am not presenting, and then use that recording to filter out the noise my machine makes during the presentation?


Blanket approaches like filtering out certain ranges of the audio spectrum are unlikely to work, as the power spectrum of the noise likely has many peaks, and these probably extend into the human voice range as well (I can hear them). Further, this is a moving target - the laptop is aging, and in any case the amount and type of noise it makes depend on the load and on how long it has been on. Algorithm:


1. Record actual computer noise (with the added bonus of background noise) while I am not recording - ideally just before starting to record the presentation. This could take the form of a 1-2 minute audio sample.
2. Record the presentation on OBS.
3. Use 1 as a filter to get rid of the noise in 2. I imagine this would involve doing a Fourier analysis of 1 and then removing those peaks from the spectrum of 2 at each time epoch.








I have looked into sox, which is what people somewhat flippantly point you to without giving any details. I do not know how to separate the audio channel out of a video and then interleave it back in (I am not an expert on this software). Other than RTFM, is there any helpful advice anyone could offer? I have searched but have not been able to find a HOWTO. I expect that is probably the fault of my search, since I refuse to believe this is a new idea - it is a standard method used in many fields to get rid of noise, including astronomy.
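A minimal command-line sketch of this idea, using sox's noise-profile effects with ffmpeg handling the demux/remux around them (the file names are placeholders, and the noisered amount is only a starting point to adjust by ear):

# pull the audio track out of the OBS recording as uncompressed WAV
ffmpeg -i presentation.flv -vn -acodec pcm_s16le presentation.wav
# build a noise profile from the 1-2 minute noise-only sample recorded beforehand
sox noise_only.wav -n noiseprof fan_noise.prof
# subtract that profile from the presentation audio
sox presentation.wav presentation_denoised.wav noisered fan_noise.prof 0.21
# put the cleaned audio back under the untouched video stream
ffmpeg -i presentation.flv -i presentation_denoised.wav -map 0:v -map 1:a -c:v copy -c:a aac presentation_clean.mp4

sox's noisered is essentially the spectral approach described in step 3: it estimates the noise spectrum from the profile built in step 1 and attenuates those components in the main recording.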


-
converting a "gif" to video using swift
3 December 2019, by James Woodrow
I've looked around and found a few things here and there, mainly that I should be using AVAssetWriter to do this, but I have zero experience with it and with video editing/creation, so that doesn't help me much, since I can't seem to find anything that does something I can modify easily (or not at my level of knowledge, at least) so that it works as I intend it to.
I have an app which takes n photos every cft seconds (capture frame time, which I get from a backend server; it's a double for obvious reasons). I then display these frames using a UIImageView, and the frames change every dft seconds (display frame time, which I also get from a backend server and which can be different from cft). Up until this point, nothing complicated.
Currently the workflow is that these frames are sent back to a server with any relevant information I want, and the server then uses ImageMagick to create a real gif file and ffmpeg to create a 15-second video using said gif.
The issue is that this keeps my Heroku bills higher than I would like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds I believe (not sure, but longer than I'd like).
So the idea I had was to have the app create the video, since it already has all the information it needs, and then simply upload it with the rest of the frames and relevant data. Using bandwidth nowadays is much cheaper than buying extra processing power on a server:
- it has n frames to loop over
- it has a float value dft representing how long each frame should last
- it has a GPU, or at least a much better CPU than the dynos Heroku has to offer
I've also looked around to see if anyone has made an extensive tutorial on how to use ffmpeg in Swift, but I still didn't find anything at my level. I didn't even find a tutorial per se, only some GitHub projects which were partially completed and/or missing the original tutorial that would explain the thought process.
I would appreciate any tips/code samples/tutorials on the subject.
I'm adding the ffmpeg command-line equivalent of what I would love to be able to do (if I could use ffmpeg directly on iOS, that would be nice too):
ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4
where basically I did 100 / (dft * 100) for the input frame rate and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command to make it run faster without losing quality, I might be able to keep the current Heroku workflow, although I would still prefer an iOS solution.
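Not an iOS answer, but for the server-side command itself: libx264's speed/quality trade-off is controlled by -preset and -crf, so a variant along these lines (the preset and CRF values below are illustrative assumptions, not tuned for this workload) might shorten the encode without a visible quality loss:

# a faster preset encodes quicker at the cost of a somewhat larger file for the same CRF
ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -preset veryfast -crf 23 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4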
-
FFmpeg - MJPEG decoding gives inconsistent values
28 December 2016, by ahmadh
I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:
ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi
When I try to demux the video using the FFmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:
AVFormatContext* fmt_ctx = NULL;
AVCodecContext* cdc_ctx = NULL;
AVCodec* vid_cdc = NULL;
int ret;
unsigned int height, width;
....
// read_nframes is the number of frames to read
output_arr = new unsigned char [height * width * 3 *
sizeof(unsigned char) * read_nframes];
avcodec_open2(cdc_ctx, vid_cdc, NULL);
int num_bytes;
uint8_t* buffer = NULL;
const AVPixelFormat out_format = AV_PIX_FMT_RGB24;
num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));
AVFrame* vid_frame = NULL;
vid_frame = av_frame_alloc();
AVFrame* conv_frame = NULL;
conv_frame = av_frame_alloc();
av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
out_format, width, height, 1);
struct SwsContext *sws_ctx = NULL;
sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
width, height, out_format,
SWS_BILINEAR, NULL,NULL,NULL);
int frame_num = 0;
AVPacket vid_pckt;
while (av_read_frame(fmt_ctx, &vid_pckt) >=0) {
ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
if (ret < 0)
break;
ret = avcodec_receive_frame(cdc_ctx, vid_frame);
if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
break;
if (ret >= 0) {
            // convert the frame from its native format to packed RGB24
sws_scale(sws_ctx, vid_frame->data,
vid_frame->linesize, 0, vid_frame->height,
conv_frame->data, conv_frame->linesize);
unsigned char* r_ptr = output_arr +
(height * width * sizeof(unsigned char) * 3 * frame_num);
unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
unsigned int pxl_i = 0;
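        // de-interleave the packed RGB24 rows into separate planar R, G and B arrays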
for (unsigned int r = 0; r < height; ++r) {
uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
for (unsigned int c = 0; c < width; ++c) {
r_ptr[pxl_i] = avframe_r[0];
g_ptr[pxl_i] = avframe_r[1];
b_ptr[pxl_i] = avframe_r[2];
avframe_r += 3;
++pxl_i;
}
}
++frame_num;
if (frame_num >= read_nframes)
break;
}
}
...
In my experience, around two-thirds of the pixel values are different, each by ±1 (in a range of [0,255]). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly fine. I am sure this has something to do with the libav decoding process, because the MD5 values are consistent between the images and the video:
ffmpeg -i %06d.JPEG -f framemd5 -
ffmpeg -i vid.avi -f framemd5 -
In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten if I had been reading the JPEG images directly. Here is the stand-alone Bitbucket code I used. It includes CMake files to build the code, and a couple of JPEG frames with the converted AVI file to test the problem. (Pass '--filetype png' to test the PNG decoding.)
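One hedged way to narrow down where the ±1 offsets come from (a suggestion, not part of the original post) is to let ffmpeg itself do the JPEG-to-RGB conversion on both inputs, so the same swscale rounding is applied to each; the output file names here are placeholders:

ffmpeg -i %06d.JPEG -pix_fmt rgb24 -f framemd5 images_rgb.md5
ffmpeg -i vid.avi -pix_fmt rgb24 -f framemd5 video_rgb.md5

If these two sets of hashes match each other but differ from the values produced by reading the JPEGs directly, the discrepancy is introduced in the YUV-to-RGB conversion step rather than in the MJPEG decoding itself.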