Now that I’ve got a good, reliable mechanism for getting WordPress into my Day One journal, it has got me thinking about using WordPress as a funnel for getting any “public” journal type stuff into Day One.
Letterboxd reviews come to mind. If I were still using Goodreads, that would make sense, too. Is there something from last.fm worth capturing in my Day One? I suspect some kind of weekly or monthly entry would make sense. Or new discoveries?
Funnel that data into WordPress (preferably using some kind of microformat) and then use my python script to collect all of that stuff into my Day One journal.
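As a rough sketch of what that funnel could look like, here's a minimal Python example. The site URL and journal name are placeholders, and the `dayone2` CLI flags are from memory, so double-check them against `dayone2 --help`. It pulls recent posts from the WordPress REST API and hands each one to the Day One command-line tool:

```python
import json
import subprocess
import urllib.request

# Hypothetical site; swap in your own WordPress URL.
WP_SITE = "https://example.com"


def fetch_posts(site, per_page=10):
    """Pull recent posts from the WordPress REST API."""
    url = f"{site}/wp-json/wp/v2/posts?per_page={per_page}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def to_dayone_args(post, journal="Public"):
    """Build the dayone2 CLI invocation for a single post."""
    body = post["title"]["rendered"] + "\n\n" + post["content"]["rendered"]
    date = post["date"]  # e.g. "2024-01-01T10:00:00"
    return ["dayone2", "--journal", journal, "--isoDate", date, "new", body]


def import_posts(site, journal="Public"):
    """Create one Day One entry per WordPress post."""
    for post in fetch_posts(site):
        subprocess.run(to_dayone_args(post, journal), check=True)
```

In practice you'd also want to strip the HTML from `content.rendered` (or convert it to Markdown) and de-duplicate posts you've already imported.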
Food for thought for my next rainy Saturday morning coding session.
The Shaarli docker documentation is like a masterclass in how to use docker and docker-compose. I wish I had known all of this when building up my self-hosted services (navidrome, photoprism, etc.) Great reference: https://shaarli.readthedocs.io/en/master/Docker.html
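For my own reference, here's a minimal `docker-compose.yml` along the lines of what those docs describe. The image tag, port mapping, and volume names here are assumptions; check the linked page for the current recommendations:

```yaml
version: "3"

services:
  shaarli:
    image: shaarli/shaarli:master
    restart: unless-stopped
    ports:
      - "8080:80"   # expose Shaarli on http://localhost:8080
    volumes:
      - shaarli-data:/var/www/shaarli/data
      - shaarli-cache:/var/www/shaarli/cache

volumes:
  shaarli-data:
  shaarli-cache:
```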
The things that you
Believe are true
Well, they’re just not so.
You’re plain mixed up, a bit confused
Hell you don’t know

It’s time for you to leave the stage
Because, hell, you’re blinded by your rage
It’s your twilight time
and the ones taking your place
won’t look like you

You pick and choose your evidence
You hold on tight to your relevance
but it’s fading fast
and you wave that hateful flag for the past

The things that you
Believe are true
Well, they’re just not so.
You’re plain mixed up, a bit confused
Hell you don’t know

So I’ll be me and you be you
But change is coming through and through
The arc may be long
But in the end it bends towards what is true.

We all know you wrote the rules
and tried to make it so you couldn’t lose
But when your time comes, here’s hoping thoughts and prayers will get you through
Here’s hoping thoughts and prayers will get you through
I’ll send my thoughts and prayers out to you
This past year felt like a big one for my musicianship compared with recent years, most likely because it’s the first year in 35+ years of playing guitar that I kept a rehearsal/practice journal and made even the slightest effort to structure my practice around material and technique. In any case, here’s a look back at how my year went from a musicianship POV.
I continued developing my flatpicking skills and bluegrass knowledge
I began learning to play mandolin
I started to use a journal to track what I’m practicing or working through
I started to learn how to play the Telecaster (as distinct from playing any other kind of electric guitar).
Bluegrass – more Stanley Brothers!
On the bluegrass and flatpicking front, I continued to dive into the Stanley Brothers. I read the Ralph Stanley autobiography, Man of Constant Sorrow. It’s a really long book, but chock full of great stories and observations that helped solidify my belief that playing bluegrass music is part gift and part responsibility. Or, perhaps, part honor and part obligation.
Either way, playing bluegrass guitar ties me to a long chain of music and musicians in a way that no other music I’ve played does (though, playing in a Dead cover band, I think, will someday achieve this same level of meaning).
I also enjoyed listening to more current instances of bluegrass music, especially guitarists like Grant Gordy. He’s a far cry from Doc Watson and yet totally connected by the bluegrass thread.
Mandolin
Partly out of reverence for the tradition and partly for my own interest, I took up mandolin in June. By autumn I knew a few scales and could play a half dozen (fiddle) tunes at a pretty good clip. Much of this fast progress was due to having a good friend who 1.) lent me a mandolin and 2.) is an extraordinarily patient teacher. That, and learning straight out of the gate to not approach the mandolin as an upside down guitar. Those all made a huge difference for me.
Some combination of bad typing practice at work and poor left hand technique on the mandolin led to some kind of RSI in my left hand that I’m still recovering from to some degree (heat, massage, and finger exercises, as well as improved typing posture, all helped here). As such, it’s been several weeks since I’ve played the mandolin for any length of time.
Still, I had multiple opportunities to play mandolin with other mandolin players, and I deeply appreciate the sound of two mandolins playing in unison. It is also gratifying to attend a guitar-heavy jam, pull out the mandolin, and chop with some moderate level of competency.
Day One Musical/Rehearsal Journal

I do not know why it has taken me almost 35 years to begin a practice journal, but now that I have, I do not think it is a practice I will ever abandon. I’m not a structured-rehearsal type of player. I play a wide variety of material and have to learn a wide variety of lyrics, styles, etc. On Monday I may be working on a fiddle tune, by Wednesday I’m working on chord substitutions, and on Thursday I’m learning how to sing the melody line on a Stanley Brothers song. With all of those balls in the air, I was regularly forgetting nuanced learnings or finger positions for the fiddle tune, forgetting certain substitutions, and so on.
By writing them down in a dedicated Day One journal and by occasionally revisiting the journal before sitting down to practice, I wasn’t constantly reinventing the wheel. I am tracking which songs I’m working on, notes around different live performance takeaways (e.g. I notice that I move around too much in front of the mic), the way the Osborne Brothers sing the second half of the verse of Kentucky Waltz, etc.
I also started to track certain baseline stats around max tempo/bpm for certain fiddle tunes, etc. Helpful in so many ways. I don’t have any kind of template or model for the journal, though I looked at many online before starting my own.
If you don’t already keep a practice journal, I cannot recommend it enough. It has been enormously helpful. Especially in light of my hand injury recovery. I didn’t play for over two weeks, but by revisiting my journal I was able to pick up right where I left off.
Telecaster
By far, the biggest leap in my knowledge this year (though sadly, not in my proficiency or skills) is around Telecaster guitar. I have been enjoying certain types of post-bluegrass, Bakersfield-type music for several years now (Clarence White, Buck Owens/Don Rich, Chris Hillman, etc.), but this year I suddenly felt a strong urge to play the Telecaster in a very specific way that is unique to the Telecaster tradition: an instrument that bridges the gap between acoustic guitar and pedal steel.

I took a few online lessons and watched a lot of YouTube videos but, mostly, I listened to Telecaster players. James Burton gets referenced a lot as an influential Telecaster player, and digging deeply into his work with Emmylou Harris’ Hot Band is a masterclass in why he is such an important figure in the development of Telecaster technique. There are way too many players to name here, but I’ll mention two I got to see live when I visited Nashville: Stuie French and Luke McQueary. See my Nashville Notes for more on these two amazing players.
2024

I will admit to being a bit scared that the pain in my left hand will prevent me from resuming playing, but I’m getting more confident that it will eventually subside entirely. With that in mind, I think I need to focus my energies a bit more. While I love playing mandolin, I can’t allow it to eat into my guitar practice time, which is already limited enough. I will continue to play mandolin but without any especially lofty goals; just being able to play a few fiddle tunes and chop behind other guitars is plenty satisfying.
On the guitar front, I want to continue learning inversions and substitutions, as they are the key to my understanding and unlocking of the fretboard. The inversions and subs are great on the acoustic but even cooler when applied to the Telecaster, trying to ring 6th chords out with the volume knob to emulate a pedal steel.
I’ll try to do all this while maintaining my current “OK-ness” as a bluegrass flat picker. I don’t want those skills to diminish too much while I explore alternate forms of playing. And, perhaps more importantly, I’d like to do all this while not sacrificing the time or energy required to write original music which didn’t happen nearly as much in 2023 as I’d hoped.
I’ve been trying to remember to bring my camera out with me more often, employing little tricks like keeping my X100F on my kitchen counter near my keys. But even when I remember to bring it, I haven’t been shooting at all.
But for Christmas this year I received a 7artisans 25mm/1.8 lens. This lens is notable for a few reasons:
1.) My X100 has a fixed lens, so in order to use the 7artisans I needed to pull out my older X-E2s, which had been gathering dust on my shelf
2.) It is a 25mm, which on the Fuji sensor makes it much closer to a 35mm, my favorite focal length
3.) It is a fully manual lens in that it doesn’t autofocus or auto-adjust the aperture
4.) It is a very inexpensive lens, especially compared with the Fuji line, but it has some character to it
The X-E2s is older and has an older sensor (X-Trans II) than the X100F (X-Trans III), but I think I prefer it over the X100F. I just enjoy shooting with it more. I’m not sure I can explain why it feels different despite such similar bodies and dials, but I definitely like my X-E2s more.
Anyway, I brought it out last night when we went out to dinner and captured a few snaps in the restaurant:
I’ve been struggling to find a film simulation that works indoors and has an ok white balance. I think this Kodak Chrome simulation from Ritchie Roesch does the trick:
Classic Chrome
Dynamic Range: DR200
Highlight: -1 (Medium-Soft)
Shadow: 0 (Standard)
Color: +1 (Medium-High)
Sharpness: 0 (Standard)
Noise Reduction: -2 (Low)
White Balance: Auto, +2 Red & -2 Blue
ISO: Auto up to ISO 3200
Exposure Compensation: 0 to +2/3 (typically)
I’m going to keep using this one for a while, as I’d really like to settle into a single film sim and really learn it.
I also grabbed a shot as we stopped for gas. I’d intended to use the Cinestill 800 simulation because of its suitability for nighttime shooting, but because I couldn’t quite remember which presets I had assigned, I ended up using more of a Kodak negative-type sim. It still looks cool, though:
Anyway, hope this is the start of me bringing my camera out with me more and remembering to actually shoot with it.
The latches on my Fender TSA flight case are all shot. Rather than toss the case, I want to try to fix it. It took me a while to find a number for Fender. (At the time of this post the number for consumer relations at Fender is 1-800-856-9801).
After a while on hold I spoke with a rep who told me that Fender doesn’t have parts for the case but he referred me to https://www.skbcases.com and they have a few replacement latch options (though not exactly what I need). I reached out to SKB Cases via email with details of what I need replaced, will update when I get a reply if they’re able to help me.
Update: I heard back from customerservice@skbcases.com, who asked me for some photos of the case and latches, and they’re shipping me the parts needed to repair the case. Really great outcome here; I definitely recommend SKB for their great customer service.
Over dinner recently we have been talking about how we are living in a world that our bodies and minds are not designed for. This has been a recurring theme for a variety of reasons. I then serendipitously came across this quote:
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”
Edward O. Wilson
I’m not sure how applicable that quote is but, maybe? It does feel like things are moving so quickly. It does feel like perhaps in our current state we are not equipped to handle the pace at which we are expected to engage with the world. It does feel like we are eating things that our bodies were never made to eat and working in ways that our bodies were not designed to sustain in 8-10 hour clips.
But none of that is to say these things aren’t natural.
I disagree with the proposition that computers and other technology and processed foods and the pace at which we are living are unnatural.
Everything is nature.
We are nature.
The things we build are nature. I’m not sure there is much wisdom in arbitrarily distinguishing between natural and unnatural.
By way of example, I was having a conversation with someone who said that processed foods were unnatural. My first response was that, well, they are nature, but that doesn’t mean processed foods are good for you. This in turn got me thinking about the Tao. Specifically, the Tao does not distinguish.
Instead, what I realized is that processed foods (or AI, or cars, or whatever) are natural and not outside of the Tao. But just because something is not separate from nature doesn’t mean that it serves us.
Whether something is natural or unnatural is irrelevant. What is important is: does X serve us? Answering that question seems to be why the Buddha articulated the Eightfold Path. Not so that we could make false distinctions, but so that we could have a tool set to look at something (a thought, a technology, a practice) and determine: does this serve me?
And ultimately, if taken to its logical conclusion in Buddhism the question does this serve me? really means Does this serve everyone?
We should be able to look at something, ask that question, and realize that objects, technologies, foods, and thoughts are always natural; it is how we use them that determines whether they serve us, and that is the important question.
This article offers a comprehensive exploration of the collaboration between Microsoft and OpenAI, delving into the factors driving this partnership and its implications. Key insights include:
Kevin Scott, Microsoft’s CTO, views AI as a tool that empowers non-programmers to code, a perspective shaped by his upbringing in rural poverty. This highlights AI’s potential to democratize technology.
How Copilot got its name and why it’s such a fitting name
Copilot is the result of Microsoft and OpenAI’s partnership
That Microsoft is pinning its future on Copilot and, in turn, OpenAI
That the primarily academic, non-business makeup of the OpenAI non-profit board means the board either was 100% correct in firing Altman, and we should all be petrified of what is happening there, or its members were totally out of their depth in running the board. Only time will tell which is true.
The challenge of getting users to understand that Copilot isn’t perfect, isn’t always right but that doesn’t mean it can’t be very helpful
I couldn’t put this piece down, in part because it helped me crystallize my understanding of Microsoft’s vision for Copilot. Copilot will be an enormous shift in how we work day to day, but it will also be so integrated and unobtrusive in our workflow that we won’t really feel how much it has changed how we work together until we look back at pre-Copilot days.
It would have been good to get some alternate viewpoints to the “generative AI is amazing and is going to make the world a better place” narrative, and some specific discussion of the dangers. But the challenge there is that:
Scott, though, believed in a more optimistic story. At one point, he told me, about seventy per cent of Americans worked in agriculture. Technological advances reduced those labor needs, and today just 1.2 per cent of the workforce farms. But that doesn’t mean there are millions of out-of-work farmers: many such people became truck drivers, or returned to school and became accountants, or found other paths. “Perhaps to a greater extent than any technological revolution preceding it, A.I. could be used to revitalize the American Dream,” Scott has written.
Scott wanted A.I. to empower the kind of resourceful but digitally unschooled people he’d grown up among. This was a striking argument—one that some technologists would consider willfully naïve, given widespread concerns about A.I.-assisted automation eliminating jobs such as the grocery-store cashier, the factory worker, or the movie extra.
GitHub employees brainstormed names for the product: Coding Autopilot, Automated Pair Programmer, Programarama Automat. Friedman was an amateur pilot, and he and others felt these names wrongly implied that the tool would do all the work. The tool was more like a co-pilot—someone who joins you in the cockpit and makes suggestions, while occasionally proposing something off base. Usually you listen to a co-pilot; sometimes you ignore him. When Scott heard Friedman’s favored choice for a name—GitHub Copilot—he loved it. “It trains you how to think about it,” he told me. “It perfectly conveys its strengths and weaknesses.”
Nine years later, the company created Tay, an A.I. chatbot designed to mimic the inflections and preoccupations of a teen-age girl. The chatbot was set up to interact with Twitter users, and almost immediately Tay began posting racist, sexist, and homophobic content, including the statement “Hitler was right.”
Kevin Scott believes that the discourse around A.I. has been strangely focussed on dystopian scenarios, and has largely ignored its potential to “level the playing field” for people who know what they want computers to do but lack the training to make it happen.
Then, to add yet another layer of protection, Microsoft started running GPT-4 on hundreds of computers and set them to converse with one another—millions of exchanges apiece—with instructions to get other machines to say something untoward. Each time a new lapse was generated, the meta-prompts and other customizations were adjusted accordingly. Then the process began anew. After months of honing, the result was a version of GPT-4 unique to Microsoft’s needs and attitudes, which invisibly added dozens, sometimes hundreds, of instructions to each user inquiry.
The Copilot designers also concluded that they needed to encourage users to essentially become hackers—to devise tricks and workarounds to overcome A.I.’s limitations and even unlock some uncanny capacities.
The dismissed board members, meanwhile, insist that their actions were wise.
“A.I. is one of the most powerful things humans have ever invented for improving the quality of life of everyone,” Scott said. “But it will take time. It should take time.” He added, “We’ve always tackled super-challenging problems through technology. And so we can either tell ourselves a good story about the future or a bad story about the future—and, whichever one we choose, that’s probably the one that’ll come true.”
I’m rewiring my recording setup. I have a pretty good idea of what I want to accomplish but decided to give ChatGPT a chance to document my setup and it did surprisingly well. I also wanted to relocate where my stereo compressor sits in the chain so that I could use it as an outboard compressor in Logic. It also did really well with that. It totally fell down on generating a wiring diagram, but these instructions are pretty good:
Here are the connection details for my Audient iD22, Behringer ADA8200, an outboard stereo compressor, and my iMac:
iMac to Audient iD22:
Connection Type: USB
Details: Connect the Audient iD22 to your iMac using a USB cable. This connection is used for audio data transfer between the iD22 and your iMac, as well as for powering the iD22 if it does not have an external power source. Note: this is incorrect; the iD22 does have an external power supply.
Audient iD22 to Behringer ADA8200:
Connection Type: ADAT Optical
Details: Use an ADAT optical cable to connect the ADAT Out of the ADA8200 to the ADAT In of the iD22. This connection is used to expand the number of inputs available on your iD22 with the preamps from the ADA8200.
Note: I’m using the ADA8200 as the master clock at 48 kHz. The iD22 has a setting to use the digital input as master.
Outboard Stereo Compressor to Audient iD22:
Connection Type: 1/4″ TRS cables
Details:
From iD22 to Compressor: Connect one of the line outputs (e.g., Line Output 1) of the iD22 to the left input of the stereo compressor, and another line output (e.g., Line Output 2) to the right input. This setup is used to send audio from the iD22 to the compressor for processing.
From Compressor to iD22: Connect the left output of the stereo compressor to one of the line inputs (e.g., Line Input 1) on the iD22, and the right output to another line input (e.g., Line Input 2). This setup is used to return the processed audio from the compressor back into the iD22.
Microphones and Instruments:
Connection Type: Various (XLR or 1/4″ TS cables)
Details: Connect your microphones and/or instruments to the available inputs on both the iD22 and the ADA8200, depending on where you need them.
Monitors/Headphones:
Connection Type: Various (1/4″ TRS, XLR, or RCA, depending on your monitors)
Details: Connect your studio monitors and/or headphones to the appropriate outputs on the iD22 for monitoring.
As a note to self here, I couldn’t get all 8 of the ADA8200’s inputs to appear in the Audient software mixer until I launched Logic, then they appeared. Which was weird.
And here’s how to use the Outboard Stereo Compressor in Logic:
To use a stereo outboard compressor during mixing or mastering in Logic Pro, you will need to set up an external effects loop. This setup allows you to route audio from Logic through your outboard compressor and back into Logic for further processing. Here are the specific steps:
Connect the Compressor to Your Audio Interface:
Use two 1/4″ TRS cables to connect two line outputs from your Audient iD22 to the stereo inputs of your compressor.
Use another two 1/4″ TRS cables to connect the stereo outputs of the compressor back to two line inputs on the iD22.
Configure Logic Pro for External Processing:
Open Logic Pro and load your project.
Go to the ‘Mix’ menu and choose ‘I/O Labels’. Here, you can label the inputs and outputs you are using for the compressor. This step is optional but helps keep things organized.
Create an Aux Channel for the Compressor:
Create a new Aux channel strip in your Logic project.
Set the input of this Aux channel to correspond to the line inputs where your compressor is returning its signal. For example, if you’ve connected the compressor’s outputs to inputs 3 and 4 on the iD22, set the Aux input to Input 3-4.
Set the output of this Aux channel to the line outputs you’re using to send audio to the compressor. For instance, if you’re sending audio out through outputs 3 and 4, set the Aux output to Output 3-4.
Send Audio to the Compressor:
Use the sends on your audio tracks or buses to send audio to the new Aux channel. This routes the audio through the compressor.
Adjust the send level to control how much signal is being processed by the compressor.
Monitoring and Adjusting Levels:
Monitor the signal level going into and coming out of the compressor to avoid clipping.
Use the Aux channel’s fader to control the level of the signal returning from the compressor.
Latency Considerations:
Be aware of any latency introduced by this setup. In some cases, you might need to adjust for latency manually in Logic Pro.
Logic Pro X offers a feature called ‘Low Latency Mode’ which can be helpful, but you may still need to adjust the timing of the tracks manually if there is noticeable delay.
Mixing with the Compressor:
Use the compressor on individual tracks, buses, or the master bus as needed during mixing or mastering.
Adjust the compressor settings as you would with a plugin, but remember that these settings are ‘printed’ to the audio in real-time.
Bounce (Export) Your Project:
Once you are satisfied with the sound, bounce (export) the mix or mastered track, including the processing from your outboard compressor.
“Under the worst conditions, what’s the most important thing to have?” He replied: “Friends.” Source: Recording: After the Election – Four Ways We Can Respond […]
"to be clear, I’m a fan of the Bluesky leadership and engineering team. With the VC money as fuel, I expect their next 12 months or so to be golden, with lots of groovy features and mind-blowing growth. But that’s not what I’ll be watching. I’ll be looking for ecosystem growth in directions that enable survival independent of the company. In the way that email is independent of any technology provider or network operator." — Direct link
"I have been impressed with the tools that the open source development community is building to bridge the gap between the AT protocol and ActivityPub, and I’m hopeful that some mixture of Bluesky and Mastodon will eventually serve most of my needs as a social media user and, hopefully, as someone who co-owns a website" — Direct link
Two Teslas, the Model Y and Model S, make the most dangerous cars list despite Tesla’s advanced driver-assist technology. Tesla also has the highest fatal accident rate by brand, followed by Kia, Buick, Dodge, and Hyundai. — Direct link
Whether you’re looking to escape into a cottage-core wonderland, enact fiery revenge on a deadbeat ex, or cosplay as a bigtime magazine editor, there’s a POV playlist for you. — Direct link
Bluesky offers a lot of unique features like algorithmic feeds and custom moderation that can make for a rich, in-control experience for users. — Direct link
Stanford professor and Smule co-founder Ge Wang explains how computers can make music, and the future of creativity in an AI world, on The Vergecast. #music #ai — Direct link