
Other articles (79)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (11556)

  • ffmpeg pipe process ends right after writing first buffer data to input stream and does not keep running

    6 May, by Taketo Matsunaga

    I have been trying to convert 16-bit PCM (s16le) audio data to webm using ffmpeg in C#.
    But the process ends right after the first buffer of data is written to standard input.
    It exits with status 0, meaning success, but I do not know why.
    Could anyone tell me why?

    


    I appreciate it if you could support me.
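For reference, s16le is a headerless format: just 16-bit little-endian samples back to back. A minimal Python sketch (sample rate and channel count taken from the ffmpeg log further down; everything else is illustrative) shows the byte layout ffmpeg expects on pipe:0:

```python
import struct

sample_rate = 16000  # Hz, as reported in the ffmpeg log
channels = 1         # mono
duration_s = 1

# s16le has no header: each sample is one signed 16-bit little-endian value.
samples = [0] * (sample_rate * channels * duration_s)
pcm = struct.pack("<%dh" % len(samples), *samples)

# 2 bytes per sample -> 32000 bytes/s, i.e. the 256 kb/s ffmpeg reports.
print(len(pcm))
```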

    


    public class SpeechService : ISpeechService
    {
        /// <summary>
        /// Defines the _audioInputStream
        /// </summary>
        private readonly MemoryStream _audioInputStream = new MemoryStream();

        public async Task SendPcmAsWebmViaWebSocketAsync(
            MemoryStream pcmAudioStream,
            int sampleRate,
            int channels)
        {
            string inputFormat = "s16le";

            var ffmpegProcessInfo = new ProcessStartInfo
            {
                FileName = _ffmpegPath,
                Arguments =
                    $"-f {inputFormat} -ar {sampleRate} -ac {channels} -i pipe:0 " +
                    $"-f webm pipe:1",
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true,
            };

            _ffmpegProcess = new Process { StartInfo = ffmpegProcessInfo };

            Console.WriteLine("Starting FFmpeg process...");
            try
            {
                if (!await Task.Run(() => _ffmpegProcess.Start()))
                {
                    Console.Error.WriteLine("Failed to start FFmpeg process.");
                    return;
                }
                Console.WriteLine("FFmpeg process started.");
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine($"Error starting FFmpeg process: {ex.Message}");
                throw;
            }

            var encodeAndSendTask = Task.Run(async () =>
            {
                try
                {
                    using var ffmpegOutputStream = _ffmpegProcess.StandardOutput.BaseStream;

                    byte[] buffer = new byte[8192];     // Temporary buffer to read data
                    byte[] sendBuffer = new byte[8192]; // Buffer to accumulate data for sending
                    int sendBufferIndex = 0;            // Tracks the current size of sendBuffer
                    int bytesRead;

                    Console.WriteLine("Reading WebM output from FFmpeg and sending via WebSocket...");
                    while (true)
                    {
                        if ((bytesRead = await ffmpegOutputStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                        {
                            // Copy data to sendBuffer
                            Array.Copy(buffer, 0, sendBuffer, sendBufferIndex, bytesRead);
                            sendBufferIndex += bytesRead;

                            // If sendBuffer is full, send it via WebSocket
                            if (sendBufferIndex >= sendBuffer.Length)
                            {
                                var segment = new ArraySegment<byte>(sendBuffer, 0, sendBuffer.Length);
                                _ws.SendMessage(segment);
                                sendBufferIndex = 0; // Reset the index after sending
                            }
                        }
                    }
                }
                catch (OperationCanceledException)
                {
                    Console.WriteLine("Encode/Send operation cancelled.");
                }
                catch (IOException ex) when (ex.InnerException is ObjectDisposedException)
                {
                    Console.WriteLine("Stream was closed, likely due to process exit or cancellation.");
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine($"Error during encoding/sending: {ex}");
                }
            });

            var errorReadTask = Task.Run(async () =>
            {
                Console.WriteLine("Starting to read FFmpeg stderr...");
                using var errorReader = _ffmpegProcess.StandardError;
                try
                {
                    string? line;
                    while ((line = await errorReader.ReadLineAsync()) != null)
                    {
                        Console.WriteLine($"[FFmpeg stderr] {line}");
                    }
                }
                catch (OperationCanceledException) { Console.WriteLine("FFmpeg stderr reading cancelled."); }
                catch (TimeoutException) { Console.WriteLine("FFmpeg stderr reading timed out (due to cancellation)."); }
                catch (Exception ex) { Console.Error.WriteLine($"Error reading FFmpeg stderr: {ex.Message}"); }
                Console.WriteLine("Finished reading FFmpeg stderr.");
            });
        }

        public async Task AppendAudioBuffer(AudioMediaBuffer audioBuffer)
        {
            try
            {
                // audio for a 1:1 call
                var bufferLength = audioBuffer.Length;
                if (bufferLength > 0)
                {
                    var buffer = new byte[bufferLength];
                    Marshal.Copy(audioBuffer.Data, buffer, 0, (int)bufferLength);

                    _logger.Info("_ffmpegProcess.HasExited:" + _ffmpegProcess.HasExited);
                    using var ffmpegInputStream = _ffmpegProcess.StandardInput.BaseStream;
                    await ffmpegInputStream.WriteAsync(buffer, 0, buffer.Length);
                    await ffmpegInputStream.FlushAsync(); // flush the buffer
                    _logger.Info("Wrote buffer data.");
                }
            }
            catch (Exception e)
            {
                _logger.Error(e, "Exception happend writing to input stream");
            }
        }


    Starting FFmpeg process...
    FFmpeg process started.
    Starting to read FFmpeg stderr...
    Reading WebM output from FFmpeg and sending via WebSocket...
    [FFmpeg stderr] ffmpeg version 7.1.1-essentials_build-www.gyan.dev Copyright (c) 2000-2025 the FFmpeg developers
    [FFmpeg stderr]   built with gcc 14.2.0 (Rev1, Built by MSYS2 project)
    [FFmpeg stderr]   configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
    [FFmpeg stderr]   libavutil      59. 39.100 / 59. 39.100
    [FFmpeg stderr]   libavcodec     61. 19.101 / 61. 19.101
    [FFmpeg stderr]   libavformat    61.  7.100 / 61.  7.100
    [FFmpeg stderr]   libavdevice    61.  3.100 / 61.  3.100
    [FFmpeg stderr]   libavfilter    10.  4.100 / 10.  4.100
    [FFmpeg stderr]   libswscale      8.  3.100 /  8.  3.100
    [FFmpeg stderr]   libswresample   5.  3.100 /  5.  3.100
    [FFmpeg stderr]   libpostproc    58.  3.100 / 58.  3.100

    [2025-05-06 15:44:43,598][INFO][XbLogger.cs:85] _ffmpegProcess.HasExited:False
    [2025-05-06 15:44:43,613][INFO][XbLogger.cs:85] Wrote buffer data.
    [2025-05-06 15:44:43,613][INFO][XbLogger.cs:85] Wrote buffer data.
    [FFmpeg stderr] [aist#0:0/pcm_s16le @ 0000025ec8d36040] Guessed Channel Layout: mono
    [FFmpeg stderr] Input #0, s16le, from 'pipe:0':
    [FFmpeg stderr]   Duration: N/A, bitrate: 256 kb/s
    [FFmpeg stderr]   Stream #0:0: Audio: pcm_s16le, 16000 Hz, mono, s16, 256 kb/s
    [FFmpeg stderr] Stream mapping:
    [FFmpeg stderr]   Stream #0:0 -> #0:0 (pcm_s16le (native) -> opus (libopus))
    [FFmpeg stderr] [libopus @ 0000025ec8d317c0] No bit rate set. Defaulting to 64000 bps.
    [FFmpeg stderr] Output #0, webm, to 'pipe:1':
    [FFmpeg stderr]   Metadata:
    [FFmpeg stderr]     encoder         : Lavf61.7.100
    [FFmpeg stderr]   Stream #0:0: Audio: opus, 16000 Hz, mono, s16, 64 kb/s
    [FFmpeg stderr]       Metadata:
    [FFmpeg stderr]         encoder         : Lavc61.19.101 libopus
    [FFmpeg stderr] [out#0/webm @ 0000025ec8d36200] video:0KiB audio:1KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: 67.493113%
    [FFmpeg stderr] size=       1KiB time=00:00:00.04 bitrate= 243.2kbits/s speed=2.81x
    Finished reading FFmpeg stderr.
    [2025-05-06 15:44:44,101][INFO][XbLogger.cs:85] _ffmpegProcess.HasExited:True
    [2025-05-06 15:44:44,132][ERROR][XbLogger.cs:67] Exception happend writing to input stream
    System.ObjectDisposedException: Cannot access a closed file.
       at System.IO.FileStream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)
       at System.IO.Stream.WriteAsync(Byte[] buffer, Int32 offset, Int32 count)
       at EchoBot.Media.SpeechService.AppendAudioBuffer(AudioMediaBuffer audioBuffer) in C:\Users\tm068\Documents\workspace\myprj\xbridge-teams-bot\src\EchoBot\Media\SpeechService.cs:line 242


    I am expecting the ffmpeg process to keep running.
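For comparison, any process reading from a pipe exits cleanly once its stdin is closed. A small Python stand-in (using cat in place of ffmpeg, so this is only an analogy, not the asker's setup) reproduces the "exited with status 0" behaviour:

```python
import subprocess

# `cat` stands in for ffmpeg here: it copies stdin to stdout until EOF.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

proc.stdin.write(b"first chunk")
proc.stdin.close()        # closing (disposing) stdin signals EOF to the child
out = proc.stdout.read()  # the child drains its input and exits
proc.wait()

print(proc.returncode)    # 0: a clean exit, even if we meant to keep writing
print(out.decode())
```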


  • Files created with "ffmpeg hevc_nvenc" do not play on TV. (with video codec SDK 9.1 of nvidia)

    29 January 2020, by Dashhh

    Problem

    • Files created with hevc_nvenc do not play on TV. (samsung smart tv, model unknown)
      Details of my ffmpeg build are below.

    FFmpeg build conf

    $ ffmpeg -buildconf
       --enable-cuda
       --enable-cuvid
       --enable-nvenc
       --enable-nonfree
       --enable-libnpp
       --extra-cflags=-I/path/cuda/include
       --extra-ldflags=-L/path/cuda/lib64
       --prefix=/prefix/ffmpeg_build
       --pkg-config-flags=--static
       --extra-libs='-lpthread -lm'
       --extra-cflags=-I/prefix/ffmpeg_build/include
       --extra-ldflags=-L/prefix/ffmpeg_build/lib
       --enable-gpl
       --enable-nonfree
       --enable-version3
       --disable-stripping
       --enable-avisynth
       --enable-libass
       --enable-libfontconfig
       --enable-libfreetype
       --enable-libfribidi
       --enable-libgme
       --enable-libgsm
       --enable-librubberband
       --enable-libshine
       --enable-libsnappy
       --enable-libssh
       --enable-libtwolame
       --enable-libwavpack
       --enable-libzvbi
       --enable-openal
       --enable-sdl2
       --enable-libdrm
       --enable-frei0r
       --enable-ladspa
       --enable-libpulse
       --enable-libsoxr
       --enable-libspeex
       --enable-avfilter
       --enable-postproc
       --enable-pthreads
       --enable-libfdk-aac
       --enable-libmp3lame
       --enable-libopus
       --enable-libtheora
       --enable-libvorbis
       --enable-libvpx
       --enable-libx264
       --enable-libx265
       --disable-ffplay
       --enable-libopenjpeg
       --enable-libwebp
       --enable-libxvid
       --enable-libvidstab
       --enable-libopenh264
       --enable-zlib
       --enable-openssl

    ffmpeg Command

    • Command about FFmpeg encoding
    ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
    -c:v h264_cuvid -i /data/input.mp4 -t 10 \
    -filter_complex "\
    [0:v]hwdownload,format=nv12,format=yuv420p,\
    scale=iw*2:ih*2" -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq -b:v 5000k -maxrate 7000k -bufsize 1000k -acodec aac -ac 2 -dts_delta_threshold 1000 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4

    Full log about This Command - check this full log

    The reason for adding "-color_ " in the command is as follows.

    • I want HDR output, so I create bt2020 + smpte2084 video using the nvidia hardware encoder. (I'm studying how to make HDR videos; I'm not sure if this is right.)

    How can I make a video using ffmpeg hevc_nvenc and have it play on TV?


    Things i’ve done

    Here's what I've researched about why it doesn't work.
    - The header information is not properly included in the resulting video file, so I used a program called nvhsp to add SEI and VUI information inside the video. See below for the commands and logs used.

    nvhsp is an open-source tool for writing VUI and SEI bit strings into raw video. nvhsp link

    # make rawvideo for nvhsp
    $  ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid \
    -i /data/input.mp4 -t 10 \
    -filter_complex "[0:v]hwdownload,format=nv12,\
    format=yuv420p,scale=iw*2:ih*2" \
    -gpu 0 -c:v hevc_nvenc -f rawvideo output_for_nvhsp.265

    # use nvhsp
    $ python nvhsp.py ./output_for_nvhsp.265 -colorprim bt2020 \
    -transfer smpte-st-2084 -colormatrix bt2020nc \
    -maxcll "1000,300" -videoformat ntsc -full_range tv \
    -masterdisplay "G (13250,34500) B (7500,3000 ) R (34000,16000) WP (15635,16450) L (10000000,1)" \
    ./after_nvhsp_proc_output.265

    Parsing the infile:

    ==========================

    Prepending SEI data
    Starting new SEI NALu ...
    SEI message with MaxCLL = 1000 and MaxFall = 300 created in SEI NAL
    SEI message Mastering Display Data G (13250,34500) B (7500,3000) R (34000,16000) WP (15635,16450) L (10000000,1) created in SEI NAL
    Looking for SPS ......... [232, 22703552]
    SPS_Nals_addresses [232, 22703552]
    SPS NAL Size 488
    Starting reading SPS NAL contents
    Reading of SPS NAL finished. Read 448 of SPS NALu data.

    Making modified SPS NALu ...
    Made modified SPS NALu-OK
    New SEI prepended
    Writing new stream ...
    Progress: 100%
    =====================
    Done!

    File nvhsp_after_output.mp4 created.

    # after process
    $ ffmpeg -y -f rawvideo -r 25 -s 3840x2160 -pix_fmt yuv444p16le -color_primaries bt2020 -color_trc smpte2084  -colorspace bt2020nc -color_range tv -i ./1/after_nvhsp_proc_output.265 -vcodec copy  ./1/result.mp4 -hide_banner

    Truncating packet of size 49766400 to 3260044
    [rawvideo @ 0x40a6400] Estimating duration from bitrate, this may be inaccurate
    Input #0, rawvideo, from './1/nvhsp_after_output.265':
     Duration: N/A, start: 0.000000, bitrate: 9953280 kb/s
       Stream #0:0: Video: rawvideo (Y3[0][16] / 0x10003359), yuv444p16le(tv, bt2020nc/bt2020/smpte2084), 3840x2160, 9953280 kb/s, 25 tbr, 25 tbn, 25 tbc
    [mp4 @ 0x40b0440] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
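As a cross-check on the log above, the "Truncating packet of size 49766400" message corresponds exactly to one uncompressed yuv444p16le frame at 3840x2160, which is what the rawvideo demuxer tries to read from the .265 file:

```python
width, height = 3840, 2160
planes = 3            # Y, U and V are all full resolution in yuv444p
bytes_per_sample = 2  # 16-bit samples in yuv444p16le

frame_size = width * height * planes * bytes_per_sample
print(frame_size)  # matches the truncated packet size in the log
```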

    Goal

    • I want to generate metadata normally when encoding a video through hevc_nvenc.

    • I want to create a video through hevc_nvenc and play HDR Video on smart tv with 10bit color depth support.


    Additional

    • Is it normal for ffmpeg hevc_nvenc not to generate metadata in the resulting video file, or is it a bug?

    • Please refer to the image below. (*’알 수 없음’ meaning ’unknown’)

      • if you need more detail file info, check this Gist Link (by ffprobe)
        hevc_nvenc metadata
    • However, if you encode a file in libx265, the attribute information is entered correctly as shown below.

      • if you need more detail file info, check this Gist Link
        libx265 metadata

    However, when using hevc_nvenc, all information is missing.

  • I used the options -show_streams -show_programs -show_format -show_data -of json -show_frames -show_log 56 with ffprobe
  • Small Time DevOps

    1 January 2021, by Multimedia Mike — General

    When you are a certain type of nerd who has been on the internet for long enough, you might run the risk of accumulating a lot of projects and websites. Website-wise, I have this multimedia.cx domain on which I host a bunch of ancient static multimedia documents as well as this PHP/MySQL-based blog. Further, there are 3 other PHP/MySQL-based blogs hosted on subdomains. Also, there is the wiki, another PHP/MySQL web app. A few other custom PHP- and Python-based apps are running around on the server as well.

    While things largely run on auto-pilot, I need to concern myself every now and then with their ongoing upkeep.

    If you ask N different people about the meaning of the term ‘DevOps’, you will surely get N different definitions. However, whenever I have to perform VM maintenance, I like to think I am at least dipping my toes into the DevOps domain. At the very least, the job seems to be concerned with making infrastructure setup and upgrades reliable and repeatable.

    Even if it’s not fully automated, at the very least, I have generated a lot of lists for how to make things work (I’m a big fan of Trello’s Kanban boards for this), so it gets easier every time (ideally, anyway).

    Infrastructure History

    For a solid decade, from 2004 to 2014, everything was hosted on shared, cPanel-based web hosting. In mid-2014, I moved from the shared hosting over to my own VPSs, hosted on DigitalOcean. I must have used Ubuntu 14.04 at the time, as I look down the list of Ubuntu LTS releases. It was with much trepidation that I undertook this task (knowing that anything that might go wrong with the stack, from the OS up to the apps, would all be firmly my fault), but it turned out not to be that bad. The earliest lesson you learn for such a small-time setup is to have a frontend VPS (web server) and a backend VPS (database server). That way, a surge in HTTP requests has no chance of crashing the database server due to depleted memory.

    At the end of 2016, I decided to refresh the VMs. I brought them up to Ubuntu 16.04 at the time.

    Earlier this year, I decided it would be a good idea to refresh the VMs again since it had been more than 3 years. The VMs were getting long in the tooth. Plus, I had seen an article speculating that Azure, another notable cloud hosting environment, might be getting full. It made me feel like I should grab some resources while I still could (resource-hoarding was in this year).

    I decided to use 18.04 for these refreshed VMs, even though 20.04 was available. I think I was a little nervous about 20.04 because I heard weird things about something called snap packages being the new standard for distributing software for the platform and I wasn’t ready to take that plunge.

    Which brings me to this month’s VM refresh in which I opted to take the 20.04 plunge.

    Oh MediaWiki

    I’ve been the maintainer and caretaker of the MultimediaWiki for 15 years now (wow! Where does the time go?). It doesn’t see a lot of updating these days, but I know it still serves as a resource for lots of obscure technical multimedia information. I still get requests for new accounts because someone has uncovered some niche technical data and wants to make sure it gets properly documented.

    MediaWiki is quite an amazing bit of software and it undergoes constant development and improvement. According to the version history, I probably started the MultimediaWiki with the 1.5 series. As of this writing, 1.35 is the latest and therefore greatest lineage.

    This pace of development can make it a bit of a chore to keep up to date. This was particularly true in the old days of the shared hosting when you didn’t have direct shell access and so it’s something you put off for a long time.

    Honestly, to be fair, the upgrade process is pretty straightforward:

    1. Unpack a set of new files on top of the existing tree
    2. Run a PHP script to perform any database table upgrades

    Pretty straightforward, assuming that there are no hiccups along the way, right? And the vast majority of the time, that’s the case. Until it’s not. I had an upgrade go south about a year and a half ago (I wasn’t the only MW installation to have the problem at the time, I learned). While I do have proper backups, it still threw me for a loop and I worked for about an hour to restore the previous version of the site. That experience understandably left me a bit gun-shy about upgrading the wiki.

    But upgrades must happen, especially when security notices come out. Eventually, I created a Trello template with a solid, 18-step checklist for upgrading MW as soon as a new version shows up. It’s still a chore, just not so nerve-wracking when the steps are all enumerated like that.

    As I compose the post, I think I recall my impetus for wanting to refresh from the 16.04 VM. 16.04 used PHP 7.0. I wanted to upgrade to the latest MW, but if I tried to do so, it warned me that it needed PHP 7.4. So I initialized the new 18.04 VM as described above… only to realize that PHP 7.2 is the default on 18.04. You need to go all the way to 20.04 for 7.4 standard. I’m sure it’s possible to install later versions of PHP on 16.04 or 18.04, but I appreciate going with the defaults provided by the distro.

    I figured I would just stay with MediaWiki 1.34 series and eschew 1.35 series (requiring PHP 7.4) for the time being… until I started getting emails that 1.34 would go end-of-life soon. Oh, and there are some critical security updates, but those are only for 1.35 (and also 1.31 series which is still stubbornly being maintained for some reason).

    So here I am with a fresh Ubuntu 20.04 VM running PHP 7.4 and MediaWiki 1.35 series.

    How Much Process ?

    Anyone who decides to host on VPSs vs., say, shared hosting is (or ought to be) versed on the matter that all your data is your own problem, that glitches sometimes happen, and that your VM might just suddenly disappear. (Indeed, I’ve read rants about VMs disappearing and taking entire un-backed-up websites with them, and also watched as the ranters get no sympathy: “yeah, it’s a VM; the data is your responsibility”.) So I like to make sure I have enough notes so that I could bring up a new VM quickly if I ever needed to.

    But the process is a lot of manual steps. Sometimes I wonder if I need to use some automation software like Ansible in order to bring a new VM to life. Why do that if I only update the VM once every 1-3 years? Well, perhaps I should update more frequently in order to ensure the process is solid?

    Seems like a lot of effort for a few websites which really don’t see much traffic in the grand scheme of things. But it still might be an interesting exercise and might be good preparation for some other websites I have in mind.

    Besides, if I really wanted to go off the deep end, I would wrap everything up in containers and deploy using D-O’s managed Kubernetes solution.

    The post Small Time DevOps first appeared on Breaking Eggs And Making Omelettes.