I’ve been working on an analog clock implementation with time zone support, and I wanted to turn this implementation into a QML module. Maybe because I’m not used to it, this wasn’t a straightforward task… And for the life of me, somehow this section of the documentation didn’t catch my eye…
So, I created a minimal sample on GitHub to hopefully save some time for people.
The crucial part is this: in your QML module (in the example above, that would be the rectangles folder),
it matters whether or not you use RESOURCE_PREFIX. When it’s used, you have to manually add
the import path to QQmlEngine; otherwise your QML types will not be found. Your C++ types,
however, will load just fine, which will leave you scratching your head…
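For reference, the fix boils down to one extra call on the engine. Here’s a minimal sketch, assuming a module built with qt_add_qml_module(... RESOURCE_PREFIX "/example" ...); the prefix and module name are made up for illustration:

```cpp
QQmlApplicationEngine engine;
// With RESOURCE_PREFIX "/example", the module is placed under qrc:/example in
// the resource file system, which is not in the engine's default import path.
// Without this call, QML types from the module won't be found, while C++ types
// (registered via the type registration code) still load fine.
engine.addImportPath("qrc:/example");
engine.load(QUrl(u"qrc:/example/MyModule/Main.qml"_qs));
```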
Lessons learned, get a pair of glasses… 😅🤓
Here’s how you would typically customize a Button with Qt Quick Controls styles:
Button {
    id: root

    contentItem: MyLabel {
        text: root.text
    }
}
However, for our use cases, we wouldn’t expect the styler to set the text property. We set that on
the template side. We found that this takes a lot of weight off the shoulders of the styling code.
With that background information, let’s move on to how I abused QML recently…
I was in a situation similar to this:
// MyTemplate.qml
Item {
    id: root

    Column {
        Repeater {
            delegate: root.delegate
        }
    }

    MyDataModel {
        id: myData

        property bool option_1
        property bool option_2
        property bool option_3
    }
}
My goal was to show some CheckBoxes to display some options and when those options change, update
the properties inside MyDataModel so that the underlying code can react to these changes.
I had two problems:
1- I needed to expose a model to a Repeater so that I can use the delegate to display multiple check boxes.
2- I needed access to the checked state of each check box so I could update the properties in MyDataModel.
To expose the model, I could just have a simple solution like this:
ListModel {
    ListElement { text: "Option 1" }
    ListElement { text: "Option 2" }
    ListElement { text: "Option 3" }
}
So, that’s not really hard. The hard part is having access to the checked state…
The way I see it, I could create a MyTemplateItem.qml file that can be styled in whatever way we
want, and then a MyTemplateItemDelegate.qml that inherits from MyTemplateItem.qml and handles
the signals.
Repeater {
    delegate: MyTemplateItemDelegate {
        onCheckedChanged: {
            if (option === "option_1") { /* Do things... */ }
        }
    }
}
This felt like a lot of code: I would need separate files and signal handlers, and checking against a string like this is something I don’t like.
So, I figured maybe I could create a model with some QtObjects in it so that I could use aliases
to take care of updating the properties.
Repeater {
    // NOTE: I think this should work but trying it with 6.8 caused "Cannot assign multiple
    // values to a singular property" error.
    model: [
        QtObject {
            property string text: "Option 1"
            property alias turnedOn: myData.option_1
        }
    ]
}
This, unfortunately, didn’t work. When the required property changes, the underlying data doesn’t
change because this model doesn’t support changing the data.
When you have a sufficiently large QML codebase, you will use the models and views for all sorts of
things and will end up with a lot of these required properties. I have used all the primitive types,
custom objects and various other types as the required properties here.
But… It never occurred to me to use JS function objects! Technically, they should work because
they can be stored in QVariants. I was very curious to try it!
This is what the code looks like:
Repeater {
    model: [
        QtObject {
            property string text: "Option 1"
            property var toggle: () => {
                myData.option_1 = !myData.option_1
            }
        }
    ]

    delegate: MyTemplateItem {
        required property var toggle

        onCheckedChanged: {
            toggle()
        }
    }
}
And just like that, when checkedChanged is emitted, the toggle() function is called and our value
in myData changes accordingly!
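Stripped of QML, the trick is just storing closures as data. Here’s a plain JavaScript sketch of the same idea (the names mirror the QML above but are otherwise made up):

```javascript
// Model entries carry closures that mutate external state, so the "delegate"
// only needs to call toggle() without knowing which option it flips.
const myData = { option_1: false, option_2: false };

const model = [
  { text: "Option 1", toggle: () => { myData.option_1 = !myData.option_1; } },
  { text: "Option 2", toggle: () => { myData.option_2 = !myData.option_2; } },
];

// What the delegate's onCheckedChanged handler effectively does:
model[0].toggle();
console.log(myData.option_1); // true
```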
This isn’t something that I would rely on everywhere. But I think it’s neat that we could have a solution like this for those rare cases to save us some time and typing code.
Over the years, I’ve abused QML in other ways. And I’m hoping to write about those at some point as well.
Here’s the full code for you to mess around with using the qml tool.
import QtQuick
import QtQuick.Controls

ApplicationWindow {
    id: root
    visible: true
    width: 300
    height: 300

    Column {
        Label {
            anchors {
                left: parent.left
                right: parent.right
            }
            text: "Filters"
            font.bold: true
            bottomPadding: 4
        }

        Repeater {
            property list<QtObject> objects: [
                QtObject {
                    property int number: 32
                    property string name: "Invisible Objects"
                    property var toggle: () => {
                        console.log("Invisible objects filter toggled!")
                    }
                },
                QtObject {
                    property int number: 33
                    property string name: "Cropped Objects"
                    property var toggle: () => {
                        console.log("Cropped objects filter toggled!")
                    }
                }
            ]
            model: objects

            delegate: CheckBox {
                id: dlg
                required property int number
                required property string name
                required property var toggle
                text: name + " - " + number
                onCheckedChanged: {
                    dlg.toggle()
                }
            }
        }
    }
}
I’ve written about null-ls before here and briefly talked about how I converted nvim-cmp sources so I can use them with null-ls. Now that I’ve been using it for a while, I thought I’d write an update. I’ve been using the nvim-cmp sources for a long time now, and I’m happy to say that I have not had any problems with them at all.

Completions from cmp-buffer show up along with the spell source for null-ls.
I also haven’t needed to change the code that I wrote to integrate nvim-cmp sources into null-ls ever since I wrote it the first time. It has been working well for my needs, and I don’t rely on a fancy completion plugin but just use my simple one called sekme.nvim.
I’m going to assume that your dotfiles live under $HOME for this short tutorial and that your
Vim configuration is under $HOME/.dotfiles/vim.
Let’s start with adding vim-dirvish to our plug-ins.
cd ~/.dotfiles/
# See :help packages for why we are using this path. I chose "packages" as the subdirectory
# but you can call it `pack/plugins/start/` as well.
git submodule add https://github.com/justinmk/vim-dirvish.git vim/pack/packages/start/vim-dirvish
# Now we've added vim-dirvish to our plugins.
cat .gitmodules
[submodule "vim/pack/packages/start/vim-dirvish"]
    path = vim/pack/packages/start/vim-dirvish
    url = https://github.com/justinmk/vim-dirvish.git
[submodule "vim/pack/packages/opt/firvish.nvim"]
    path = vim/pack/packages/opt/firvish.nvim
    url = https://github.com/Furkanzmc/firvish.nvim.git
# Now let's add an optional package.
git submodule add https://github.com/Furkanzmc/firvish.nvim.git vim/pack/packages/opt/firvish.nvim
git add .gitmodules vim/pack/packages/*
git commit -m "Start using Git as package manager"
# Now we have the plugin in our packages. We only need to make a small change to our Vim config to
# get this to work.
Open your init.vim or init.lua.
set packpath+=~/.dotfiles/vim
Or in Lua for Neovim.
vim.opt.packpath:append(vim.fn.expand("~/.dotfiles/vim/"))
Now, launch Vim and you should be able to use vim-dirvish by default. When you want to use
firvish.nvim, you just need to run :packadd firvish.nvim and the plugin will be available. I
like to make use of optional plugins so that I only enable them for certain file types and don’t
pay their runtime cost when launching Vim.
The beauty of using Git as a package manager is that it acts as a lock file. If you encounter a problem with an updated version of a plugin, you can always come back to the version that works.
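If you’re curious what that rollback looks like in practice, here’s a self-contained sketch using a throwaway repository (the repo names, paths, and commit messages are made up for the demo): pinning a submodule is just checking out a commit inside it and committing the new gitlink in the outer repo.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A stand-in for a plugin repository with two "releases".
git init -q plugin && cd plugin
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "v1"
good=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m "v2 (broken)"
cd ..

# The dotfiles repo consuming the plugin as a submodule.
git init -q dotfiles && cd dotfiles
git -c protocol.file.allow=always submodule --quiet add "$tmp/plugin" pack/packages/start/plugin
git -c user.email=a@b -c user.name=t commit -q -m "add plugin"

# v2 turns out to be broken: pin the submodule back to the known-good commit.
git -C pack/packages/start/plugin checkout -q "$good"
git add pack/packages/start/plugin
git -c user.email=a@b -c user.name=t commit -q -m "pin plugin to v1"
git -C pack/packages/start/plugin log --oneline -1
```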
I have these two tiny aliases in my .gitconfig to see the updates for the plugins and to update
them.
[alias]
    logpretty = log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit
    show-updates = !git submodule foreach 'git fetch --prune && git logpretty HEAD..origin/HEAD && echo "-----"'
    update-packages = !git submodule foreach 'git fetch --prune && git logpretty HEAD..origin/HEAD > updates.git && git logpretty HEAD..origin/HEAD' && git submodule update --remote
When you want to see the updates and then decide if you want to update or not, just run these:
cd ~/.dotfiles
git show-updates
# If you decide you want to update
git update-packages
# Once the packages are updated, you can pick which update you want to keep. You don't have to
# update all your packages.
You can use formatexpr and tagfunc to keep using Vim’s default mappings for formatting and
jumping to a tag.
if resolved_capabilities.goto_definition == true then
    api.nvim_buf_set_option(bufnr, "tagfunc", "v:lua.vim.lsp.tagfunc")
end

if resolved_capabilities.document_formatting == true then
    api.nvim_buf_set_option(bufnr, "formatexpr", "v:lua.vim.lsp.formatexpr()")
    -- Add this <leader> bound mapping so formatting the entire document is easier.
    map("n", "<leader>gq", "<cmd>lua vim.lsp.buf.formatting()<CR>", opts)
end
I like using this approach because it lets me stay closer to default shortcuts without having to
know new ones. Also, I much prefer gqi{ to visually selecting the range and then calling the LSP
range format function on it.
I recently needed to implement a TableView with frozen column support. Qt provides a solid table implementation with good performance and item reuse, but it does not internally support headers or frozen columns.
In order to add a header to your TableView, you can use
HorizontalHeaderView.
This will call QAbstractTableModel::headerData internally when QAbstractTableModel::data is
called with a role. This works great, but still, it does not give you the ability to freeze certain
columns. It will use all the cells that you return from QAbstractTableModel::headerData and then
display them in a separate TableView. It internally uses a specialized version of a proxy model
called QHeaderDataProxyModel to do this. As a result, you end up with two table views on top of
each other, one that displays the data and one that displays the headers.
Since we want to freeze headers, we are going to have our own implementation.
When we are done with our implementation, the use of our FrozenTableView will be as follows:
FrozenTableView {
    frozenColumn: 3
    model: TableModel { }
    horizontalHeaderDelegate: TableViewHeaderCell {
        required property int index
        required property string display

        text: display
        cellIndex: index
    }
    delegate: TableViewCell {
        required property int index
        required property string display

        text: display
    }
}
We need the ability to freeze up to n columns. And once we freeze a column, we need not only
the header to freeze, but also the data that corresponds to it.
However, if you look at the documentation for
TableView,
you will see that it inherits from Flickable and it does not support flicking certain areas of
the table while the rest stays static.
Therefore, we need 4 tables:
1- One that shows the frozen header columns.
2- One that shows the un-frozen header columns.
3- One that shows the frozen data columns.
4- One that shows the un-frozen data columns.

Just because we need to divide the representation of our model into 4, it does not mean that we also need to divide our data into 4 to support freezing columns. That would be crazy. What we need instead is a proxy model that operates on the source model and transforms it into the slice each of the four tables needs.
Let’s start with the fundamentals. We need a table model that we can apply proxies to.
// TableModel.cpp
namespace {
constexpr std::array<char, 26> s_characters{ 'A', 'B', 'C', 'D', 'E', 'F', 'G',
                                             'H', 'I', 'J', 'K', 'L', 'M', 'N',
                                             'O', 'P', 'Q', 'R', 'S', 'T', 'U',
                                             'V', 'W', 'X', 'Y', 'Z' };
}

TableModel::TableModel(QObject* parent)
  : QAbstractTableModel{ parent }
{
}

Qt::ItemFlags TableModel::flags(const QModelIndex& index) const
{
    Q_UNUSED(index);
    return Qt::ItemIsSelectable | Qt::ItemIsUserCheckable | Qt::ItemIsEnabled;
}

QHash<int, QByteArray> TableModel::roleNames() const
{
    return { { Qt::DisplayRole, "display" } };
}

int TableModel::rowCount(const QModelIndex& /*index*/) const
{
    return m_rowCount;
}

int TableModel::columnCount(const QModelIndex& /*index*/) const
{
    return m_columnCount;
}

QVariant TableModel::data(const QModelIndex& index, int role) const
{
    switch (role) {
        case Qt::DisplayRole:
            return QString{ "%1, %2" }.arg(index.column()).arg(index.row());
        default:
            break;
    }

    return {};
}

QVariant TableModel::headerData(int section,
                                Qt::Orientation orientation,
                                int role) const
{
    Q_UNUSED(orientation);
    switch (role) {
        case Qt::DisplayRole:
            return QString{ s_characters.at(section) };
        default:
            break;
    }

    return {};
}

QModelIndex TableModel::index(int row, int column, const QModelIndex& parent) const
{
    // NOTE: We are using the first item in our data as the header row.
    // That's why we need to add one to check for a valid index.
    if (row < rowCount(parent) + 1 && column < columnCount(parent)) {
        return createIndex(row, column);
    }

    return QModelIndex{};
}
Once we have a working table model, we are going to create our proxy model so we can return
different parts of our model depending on our use case. We are going to call this proxy
ModelSlice. We want to use this to tell the proxy that we are interested in getting data in a
certain row/column range rather than everything. We will use this to customize our source model for
our frozen/un-frozen header columns.
Here’s a slimmed down version of the interface of ModelSlice:
class ModelSlice : public QAbstractListModel {
    Q_OBJECT
    Q_PROPERTY(int fromRow READ fromRow WRITE setFromRow NOTIFY fromRowChanged)
    Q_PROPERTY(int toRow READ toRow WRITE setToRow NOTIFY toRowChanged)
    Q_PROPERTY(int fromColumn READ fromColumn WRITE setFromColumn NOTIFY fromColumnChanged)
    Q_PROPERTY(int toColumn READ toColumn WRITE setToColumn NOTIFY toColumnChanged)
    // We are going to be operating on the source. This model will not hold onto
    // any data but returns a slice of the data from our source.
    Q_PROPERTY(QAbstractItemModel* source READ source WRITE setSource NOTIFY sourceChanged)

public:
    explicit ModelSlice(QObject* parent = nullptr);

    // We will return the same role names without altering them.
    [[nodiscard]] QHash<int, QByteArray> roleNames() const override final;

    // Row and column count will depend on the specified row/column range.
    [[nodiscard]] int rowCount(const QModelIndex& parent = QModelIndex{}) const override;
    [[nodiscard]] int columnCount(const QModelIndex& parent = QModelIndex{}) const override;

    // This will return our data in the specified row/column range.
    [[nodiscard]] QVariant data(const QModelIndex& index, int role) const override;

    // In order to make this a general purpose slice, we need to implement this as well.
    [[nodiscard]] QVariant headerData(int section,
                                      Qt::Orientation orientation,
                                      int role = Qt::DisplayRole) const override final;

    // Getters and setters for the Q_PROPERTY declarations above...

signals:
    // Signals for our properties.

private:
    // Member variables.
};
We can now use this implementation to get a certain slice of a source model.
// Get me the first three rows and the first two columns from the source model.
ModelSlice {
    source: root.model
    fromRow: 0
    toRow: 2
    fromColumn: 0
    toColumn: 1
}
Just as a side note, this could have been achieved with a few function calls to our table model as well. But when dealing with QML, it’s very important to make things as declarative as possible.
This ModelSlice only returns the main data though; we also need access to the header data. So,
let’s create another class that inherits from ModelSlice called HeaderModelSlice.
class HeaderModelSlice : public ModelSlice {
    Q_OBJECT
    // This is so that we can support both vertical and horizontal headers,
    // however we will only be returning horizontal headers for simplicity.
    Q_PROPERTY(Qt::Orientation orientation READ orientation WRITE setOrientation NOTIFY
                   orientationChanged)
    // This part is important. Normally, we would auto adjust the row/column
    // range to return only the first row as a header. But in order to freeze
    // header columns, we need to adjust which columns are returned as well.
    // Keeping this here for convenience in case the header is used when
    // there's no need for freezing columns in other places.
    Q_PROPERTY(bool useExplicitRange READ useExplicitRange WRITE setUseExplicitRange NOTIFY
                   useExplicitRangeChanged)

public:
    explicit HeaderModelSlice(QObject* parent = nullptr);

    // This will actually internally call source->headerData so that we can
    // treat this proxy as a gateway to the underlying header.
    [[nodiscard]] QVariant data(const QModelIndex& index, int role) const override final;

    // We have to specialize row/column count here because depending on the
    // orientation, one of them will always be one, e.g. for a horizontal
    // header we return a row count of 1.
    [[nodiscard]] int rowCount(const QModelIndex& parent = QModelIndex{}) const override final;
    [[nodiscard]] int columnCount(const QModelIndex& parent = QModelIndex{}) const override final;

    // Getters and setters for the properties are omitted here...

signals:
    // Property signals.
};
Now that we are done with the C++ side, we can start getting into the QML land. Remember that we established we need 4 tables to accomplish what we want to do. Once we are done, we are going to put all these tables in a layout in such a way that the user can’t tell that we are actually using 4 different table views.
Let’s go over them one by one.
This table will be optionally visible. If we don’t have any frozen columns, we don’t need to show it. Its job is to take a slice out of the header model and show only those cells. This is a 1×n table: we’ll always have a single row since it’s the header, and the number of columns is the number of frozen column headers we want.
This column is frozen, we only show cell "A" in this table.
|___|___|
⬇ ↔ ⬇
+-------+-------+-------+
| A | B | C |
+-------+-------+-------+
| A:1 | B:1 | C:1 |
+-------+-------+-------+
TableView {
    id: columnHeader
    boundsBehavior: Flickable.StopAtBounds
    // We don't want interactivity here since it is frozen. Alternatively, you can enable
    // this and add scroll bars.
    interactive: false
    model: HeaderModelSlice {
        // This is our TableModel.
        source: root.model
        useExplicitRange: true
        fromRow: 0
        toRow: 0
        fromColumn: 0
        // Alternatively, we can disable column freezing, that's why we are setting the
        // minimum value to 0 here. We need the cells that go from 0 to the given
        // frozenColumn.
        toColumn: Math.max(root.frozenColumn - 1, 0)
        orientation: Qt.Horizontal
    }

    function _updateColumnWidth(index: int, width: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }

    function _updateHorizontalHeaderHeight(height: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }
}
This table shows the columns starting with the first column after our frozen columns.
This is where we start showing in this table.
|_______|_______|
⬇ ↔ ⬇
+-------+-------+-------+
| A | B | C |
+-------+-------+-------+
| A:1 | B:1 | C:1 |
+-------+-------+-------+
TableView {
    boundsBehavior: Flickable.StopAtBounds
    interactive: false
    model: HeaderModelSlice {
        source: root.model
        useExplicitRange: true
        fromRow: 0
        toRow: 0
        // Our header data starts from the column where we end the freezing and goes to
        // the end of the model.
        fromColumn: Math.max(root.frozenColumn, 0)
        toColumn: root.model.columnCount
        orientation: Qt.Horizontal
    }

    function _updateColumnWidth(index: int, width: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }

    function _updateHorizontalHeaderHeight(height: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }
}
+-------+-------+-------+
| A | B | C |
+-→ +-------+-------+-------+
| | A:1 | B:1 | C:1 |
| +-------+-------+-------+
| | A:2 | B:2 | C:2 |
+-→ +-------+-------+-------+
⬆ ↔ ⬆
|___|___|
We show this column and the rest of the rows in this table.
TableView {
    id: frozenRows
    boundsBehavior: Flickable.StopAtBounds
    interactive: false
    // Note that this is a plain ModelSlice, not a HeaderModelSlice: this table shows the
    // data of the frozen columns, so we take every row but only the frozen columns.
    model: ModelSlice {
        source: root.model
        fromRow: 0
        toRow: root.model.rowCount
        fromColumn: 0
        toColumn: Math.max(root.frozenColumn - 1, 0)
    }

    // We need to duplicate these functions because a cell that belongs to a frozen
    // column will have a different table view than the one that's not frozen.
    function _updateColumnWidth(index: int, width: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }

    function _updateHorizontalHeaderHeight(height: int) {
        // Skipping this for simplicity. Take a look at the GitHub repository for details.
    }
}
This table will show the data that does not belong to our frozen column.
+-------+-------+-------+
| A | B | C |
+-------+-------+-------+ ←-+
| A:1 | B:1 | C:1 | |
+-------+-------+-------+ |
| A:2 | B:2 | C:2 | |
+-------+-------+-------+ ←-+
⬆ ↔ ⬆
|_______|_______|
We show these cells in this table.
TableView {
    id: tb
    syncView: columnHeader.visible ? columnHeader : null
    syncDirection: Qt.Horizontal
    boundsBehavior: Flickable.StopAtBounds
    clip: true
    model: ModelSlice {
        // We are not using a HeaderModelSlice here because we are no longer interested
        // in ::headerData.
        source: root.model
        fromRow: 0
        toRow: root.model.rowCount
        fromColumn: frozenColumnModel.toColumn
        toColumn: root.model.columnCount
    }
}
Now that we have all 4 of our tables, it’s time to put them together. We are going to use layouts to cleverly position them so that there’s no gap between them and they look like one single table rather than four distinct ones.
import QtQuick 2.15
import QtQuick.Controls 2.15
import QtQuick.Layouts 1.15
import Table 1.0 // This is where our TableModel and ModelSlice types are.

Control {
    id: root

    property TableModel model
    property alias delegate: tb.delegate
    // We are going to use the same delegate for our frozen and unfrozen column headers.
    property alias horizontalHeaderDelegate: columnHeader.delegate
    property alias columnSpacing: columnHeader.columnSpacing
    property alias rowSpacing: columnHeader.rowSpacing
    // NOTE: This is used to ensure that the frozen column table is wide enough to contain
    // all the frozen columns so we can properly calculate cell widths.
    property int defaultCellWidth: 100
    property int frozenColumn: -1

    // NOTE: ColumnLayout does not set implicit size so by default this will evaluate to 0
    // if there's no padding.
    implicitWidth: Math.max(implicitBackgroundWidth + leftInset + rightInset,
                            implicitContentWidth + leftPadding + rightPadding)
    implicitHeight: Math.max(implicitBackgroundHeight + topInset + bottomInset,
                             implicitContentHeight + topPadding + bottomPadding)

    contentItem: ColumnLayout {
        spacing: 0

        RowLayout {
            height: columnHeader.height
            spacing: 0
            Layout.fillWidth: true

            TableView {
                id: frozenColumnTable
                width: privates.frozenCellsCreated
                       ? privates.frozenColumnWidth
                       : root.defaultCellWidth * Math.max(root.frozenColumn - 1, 0)
                boundsBehavior: Flickable.StopAtBounds
                columnSpacing: columnHeader.columnSpacing
                rowSpacing: columnHeader.rowSpacing
                interactive: false
                columnWidthProvider: (column) => privates.columnWidths[column]
                model: HeaderModelSlice {
                    id: frozenColumnModel
                    source: root.model
                    useExplicitRange: true
                    fromRow: 0
                    toRow: 0
                    fromColumn: 0
                    toColumn: Math.max(root.frozenColumn - 1, 0)
                    orientation: Qt.Horizontal
                }
                delegate: root.horizontalHeaderDelegate
                Layout.preferredWidth: width
                Layout.preferredHeight: height

                // We need to duplicate these functions because a header cell that belongs
                // to a frozen column will have a different table view than the one that's
                // not frozen.
                function _updateColumnWidth(index: int, width: int) {
                    privates.columnWidths[index] = width
                    if (!privates.frozenCellsCreated) {
                        privates.frozenCellsCreated = index == frozenColumnModel.toColumn
                    }

                    privates.frozenColumnWidth = privates.calculateFrozenColumnTableWidth()
                    Qt.callLater(frozenRows.forceLayout)
                    Qt.callLater(frozenColumnTable.forceLayout)
                }

                function _updateHorizontalHeaderHeight(height: int) {
                    frozenColumnTable.height = height
                    columnHeader.height = height
                }
            }

            TableView {
                id: columnHeader
                boundsBehavior: Flickable.StopAtBounds
                interactive: false
                clip: true
                columnWidthProvider: (column) => privates.columnWidths[root.frozenColumn > 0 ? column + root.frozenColumn : column]
                model: HeaderModelSlice {
                    source: root.model
                    useExplicitRange: true
                    fromRow: 0
                    toRow: 0
                    fromColumn: Math.max(root.frozenColumn, 0)
                    toColumn: root.model.columnCount
                    orientation: Qt.Horizontal
                }
                Layout.preferredHeight: height
                Layout.fillWidth: true

                function _updateColumnWidth(index: int, width: int) {
                    const columnIndex = root.frozenColumn > 0 ? index + root.frozenColumn : index
                    privates.columnWidths[columnIndex] = width
                    Qt.callLater(tb.forceLayout)
                    Qt.callLater(columnHeader.forceLayout)
                }

                function _updateHorizontalHeaderHeight(height: int) {
                    columnHeader.height = height
                }
            }
        }

        RowLayout {
            spacing: 0
            Layout.fillHeight: true
            Layout.fillWidth: true

            TableView {
                id: frozenRows
                width: frozenColumnTable.width
                boundsBehavior: Flickable.StopAtBounds
                columnSpacing: columnHeader.columnSpacing
                rowSpacing: columnHeader.rowSpacing
                contentY: tb.contentY
                clip: true
                syncView: tb
                syncDirection: Qt.Vertical
                delegate: tb.delegate
                columnWidthProvider: (column) => privates.columnWidths[column]
                model: ModelSlice {
                    source: root.model
                    fromRow: 0
                    toRow: root.model.rowCount
                    fromColumn: 0
                    toColumn: frozenColumnModel.toColumn
                }
                Layout.preferredWidth: width
                Layout.fillHeight: true
            }

            TableView {
                id: tb
                columnSpacing: columnHeader.columnSpacing
                rowSpacing: columnHeader.rowSpacing
                syncView: columnHeader.visible ? columnHeader : null
                syncDirection: Qt.Horizontal
                boundsBehavior: Flickable.StopAtBounds
                clip: true
                columnWidthProvider: (column) => privates.columnWidths[root.frozenColumn > 0 ? column + root.frozenColumn : column]
                model: ModelSlice {
                    source: root.model
                    fromRow: 0
                    toRow: root.model.rowCount
                    fromColumn: frozenColumnModel.toColumn
                    toColumn: root.model.columnCount
                }
                Layout.fillWidth: true
                Layout.fillHeight: true

                ScrollBar.vertical: ScrollBar { }
                ScrollBar.horizontal: ScrollBar { }
            }
        }
    }

    QtObject {
        id: privates

        property var columnWidths: ({})
        property int frozenColumnWidth: 0
        property bool frozenCellsCreated: false

        function calculateFrozenColumnTableWidth() {
            let column = frozenColumnModel.toColumn
            let width = 0
            while (column > -1) {
                const value = privates.columnWidths[column]
                if (value !== undefined) {
                    width += value
                }

                column--
            }

            return width
        }
    }
}
In order to get the full experience, check out the full source code on GitHub.
(Neo)Vim already provides very simple options for us to set up a bare-bones environment for any language that we want. All I want is a way to:
- Run my program and see its errors in the quickfix window.
- Format my code.
- Save my session when I quit so I can pick up where I left off.
Here’s how you do all that with 7 lines of code:
" Put this in .exrc/.nvimrc file in your project's folder.
set makeprg=java\ %
set formatprg=clang-format
set errorformat=%f:%l:\ %trror:\ %m
augroup nvimrc
autocmd!
autocmd VimLeavePre * mksession! session.vim
augroup END
There you go! Now you can run :make to see the output of your program, or see the errors the java
command produces in the quickfix window (see :help quickfix). You can use gq to format a visual
selection or the entire file. And when you quit, Vim will create a session file for you.
You can also use null-ls.nvim to provide diagnostics for you:
local helpers = require("null-ls.helpers")

require("null-ls.sources").register(helpers.make_builtin({
    method = require("null-ls.methods").internal.DIAGNOSTICS,
    filetypes = { "java" },
    generator_opts = {
        command = "java",
        args = { "$FILENAME" },
        to_stdin = false,
        format = "raw",
        from_stderr = true,
        on_output = helpers.diagnostics.from_errorformat([[%f:%l: %trror: %m]], "java"),
    },
    factory = helpers.generator_factory,
}))
The final .nvimrc file looks like this:
set makeprg=java\ %
set formatprg=clang-format
set errorformat=%f:%l:\ %trror:\ %m
augroup nvimrc
autocmd!
autocmd VimLeavePre * mksession! session.vim
augroup END
lua << EOF
local helpers = require("null-ls.helpers")

require("null-ls.sources").register(helpers.make_builtin({
    method = require("null-ls.methods").internal.DIAGNOSTICS,
    filetypes = { "java" },
    generator_opts = {
        command = "java",
        args = { "$FILENAME" },
        to_stdin = false,
        format = "raw",
        from_stderr = true,
        on_output = helpers.diagnostics.from_errorformat([[%f:%l: %trror: %m]], "java"),
    },
    factory = helpers.generator_factory,
}))
EOF
The ones marked with * don’t really provide completion sources but extend Vim’s own completion feature by chaining sources or binding them to one key.
I’m sure I’m missing a few plugins. But take a look at this list! All of this… for completion… The end goal of these plugins is the same: get a list of completion items in a fast way. However, because of the way they are implemented, almost all of these plugins require people to write custom completion sources for them.
Let’s take a look at what completion plugins do these days:
Each of those is a huge task on its own. If one day you wanted to create a new completion plugin, because why not and maybe you have a new idea, you would have to invest hundreds of hours to get to a usable plugin. Take a look at how much time the coq.nvim author spent on the plugin: 1 year!
There must be a way to make it less painful for people to work on these plugins.
In order to solve this, I think a completion plugin should just focus on items #1 to #3. Completion sources and snippets should not be a completion plugin’s business. These are always implementation specific; one source for a plugin is not useful for another.
The good thing is, most completion plugins (namely nvim-cmp and coq.nvim) rely on the LSP specification to extend their sources and snippet support.
Snippets are provided by LSPs, and completion plugins add the fancy UI elements to make them easier for us to use. Completion sources compute their own completion items and then feed them back to their completion engine in the format the LSP specification defines.
Neovim has great built-in LSP support. After I switched to using it, I wanted to explore some completion plugin options. But no matter what I tried, there was always something missing for me. I wasn’t that interested in the additional features they add, I was just interested in getting my completion menu populated with completion items from LSPs and other Vim built-in sources.
completion-nvim provided the best experience for me, but at the time I started using it, it was in its early stages of development. I was very interested in its chain completion feature, and I just ended up implementing a simple version of it in my configuration, and I’ve been using that ever since (I’ve since turned it into a simple plugin, sekme.nvim).
There was another problem: I would sometimes come across an interesting source from one of these completion plugins, but I would need to switch to that plugin in order to leverage it. And whenever I switched to a completion plug-in, I just found myself coming back to my own. I want to make it clear that this in no way means those plug-ins are bad. They are great tools created by smart people. They were just not for me.
So, while looking for solutions, I came across efm-langserver. It is a general purpose LSP that you can extend with your own external commands to provide your own custom completion or hover sources.
So, at first I had a chain of commands providing completion sources:
lspconfig.efm.setup({
    on_attach = setup,
    init_options = { documentFormatting = true },
    settings = {
        languages = {
            ["="] = {
                {
                    completionCommand = 'cat ${INPUT} | rg "\\w+" --no-filename --no-column --no-line-number --only-matching | sort -u',
                    completionStdin = false,
                },
            },
        },
    },
})
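To see what that completionCommand produces, you can run its pipeline by hand on a throwaway "buffer" (grep -oE is used here as a portable stand-in for the rg invocation above; the file name and contents are made up):

```shell
# Every word in the buffer becomes a completion candidate, deduplicated and sorted.
printf 'local foo = bar\nreturn foo\n' > buffer.txt
words=$(cat buffer.txt | grep -oE '\w+' | sort -u)
echo "$words"
# prints: bar, foo, local, return -- one per line
```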
This was simple and fast enough for completing items from the same buffer. I used a similar approach to provide completion sources in different ways. For example, I wrote a little function to get me completion items with mnemonics.
local function get_name()
    return "Jane Doe"
end

gn<Tab> -- This results in get_name appearing in the completion list.
I also had a custom script that returns the description of pylint error codes in Python files.
The benefit of this approach is that I can use any completion plugin I want, I would still get the same result.
Eventually, I switched to using a Python script to return these results and added a few more functionalities on top. But this was restrictive for me. I could do a lot, but I had limited access to Neovim’s API, and on top of that it’s another dependency. It would be so much better to write all my code in Lua, which I also use for my editor configuration, so that it’s easier to maintain.
So, I thought to myself, wouldn’t it be great if we had a general-purpose language server written in Lua? One that I could use to write my own completion sources that would work with any completion plugin, and to create hover sources, all in Lua! All the while having the ability to access any other plug-in’s code, treesitter, or other Lua libraries like plenary.nvim.
This would make things so much easier. No longer would I need to worry about what sources I can or can’t access. There’s a source I’m interested in from nvim-cmp? Then I can just fork it, modify it a little so it doesn’t depend on nvim-cmp any more, and use the code.
I was thinking about this during my hikes in Alberta, and I was looking forward to coming back home to start working on this. But sadly, I didn’t have enough time because of work and my preparation for vimconf.live. I was watching Justin’s talk at vimconf.live, and I noticed he mentioned a plug-in called null-ls.nvim. Immediately after the talk, I went on the GitHub page and tried it out.
null-ls.nvim is exactly what I was imagining I would create. Jose did a fantastic job with the
plug-in. It’s very easy to get started and implement your own formatting and hover sources. He’s
very active, and there are lots of built-in formatting and diagnostics sources.
I immediately switched over my efm-langserver configuration to null-ls. It was pretty straightforward and easier because I could now use Lua. There was one thing missing though: Completion. So, I just built on what Jose created and submitted a PR for the completion support.
I also created two completion sources as samples; you can find them in the PR. On top of that, I wanted to see if it would be trivial to convert nvim-cmp sources to null-ls sources. And it turns out it was pretty easy! As an experiment, I copied the exact same code over to my configuration, and it works perfectly!
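To give a feel for the shape of such a source, here is a rough sketch of a buffer-words completion source for null-ls. This is based on my reading of the API around the time of that PR; the registration method and field names are assumptions and may have changed since:

```lua
local null_ls = require("null-ls")

-- Sketch of a completion source: collect unique words from the buffer and
-- return them as completion items. Field names follow the API as of the PR
-- and may differ in current versions.
null_ls.register({
    name = "buffer-words",
    method = null_ls.methods.COMPLETION,
    filetypes = {}, -- empty means all filetypes
    generator = {
        async = true,
        fn = function(params, done)
            local seen = {}
            local items = {}
            for _, line in ipairs(params.content) do
                for word in line:gmatch("%w+") do
                    if not seen[word] then
                        seen[word] = true
                        table.insert(items, { label = word })
                    end
                end
            end
            done({ { items = items, isIncomplete = false } })
        end,
    },
})
```

Because null-ls presents itself to Neovim as a regular language server, any completion plugin that consumes LSP results can use this source unchanged.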
Here’s where the beauty of null-ls lies. As software engineers, we want to avoid duplication. But with these plugins, there’s so much duplication that benefits only a certain subset of people. Wouldn’t it be great if coq.nvim users could benefit from the sources written for nvim-cmp? It is possible; it’s just a matter of the community deciding to act on it.
I think null-ls.nvim has the potential to be much bigger than it is right now. Imagine how trivial it would be to create a completion source for your own needs, or how easy to have your own project-specific code lens or hover options. This would make it much easier for other plug-in developers to create new solutions. Just off the top of my head, nvim-cmp, coq.nvim, and any plugin that’s built on LSP could benefit. With the treesitter support in Neovim, there’s so much power here.
I already have my small snippets to run my tests. With null-ls, I could go one step further and easily customize which test to run. I could have code actions to run these tests. And many more things that I can’t even think of right now.
Over the next few weeks, I’ll see in what ways I can extend null-ls and how I can improve my workflow. I’ll also try to fork nvim-cmp sources and see if I can port them over to null-ls. If my experiments go well, I’m hoping to engage the nvim-cmp and coq.nvim authors to get their feedback on this idea.
Until then, please share your opinions on this and go give null-ls.nvim a try!
Buffer me once, buffer me twice, buffer me chicken soup with rice. ~~ Todd
:help buffer vs :help window vs :help tab
:help buffer switching
:help buffer-list vs :help argument-list
:help special-buffers, what are they good for?
:help buffers and :help windows
:help filter, :help equalprg, and :help formatprg
:help read, and :help write
:help skeleton
:help firvish.txt
I/O & U
echo -e '#include <stdio.h>\nint main() { \n\tprintf("Hello world!");\n\treturn 0;\n}' > hello.c
cat hello.c | grep world
cat hello.c | sed s/Hello/Ahoy/g
clang hello.c -o hello && ./hello | sed s/Ahoy/Hello/g
def hello():
print("Hello world!")
hello()
:help buffer vs :help window vs :help tab
But I use bufferline!
:help buffer switching
:while !(bufname() =~ "indirect.c$") | bnext | endwhile
:help ls and :buffer 5
:buffer ind*.c<wildchar> or :buffer indir<CR> or :buffer *rect*.c
Tip:
:help wildcard
:[s]bfirst, :[s]bnext, :[s]bprevious, :[s]blast.
Tip: I map these to [b and ]b, or [sb and ]sb
:help buffer-list vs :help argument-list
:help 07.2
:help argglobal
:help arglocal
:bnext != :next
:help special-buffers, what are they good for?
:help quickfix
:help help
:help terminal
:help scratch-buffer
:help buffers and :help windows
Run commands on buffers/windows/tabs.
:help bufdo
:help argdo
:help windo
:help tabdo
:help cexpr
:help cfdo
:help lfdo
Manipulate/Navigate buffers.
:help buffer
:help buffers
:help bdelete
:help edit
:help new
:help badd
:help bwipeout
:help bunload
:help sbuffer
:help bnext
:help bprevious
:help sbnext
:help sbprevious
:help bmodified
:help arglist
Command line:
git diff origin/master --name-only | xargs nvim "+nmap <leader>d :Gdiffsplit<CR>"
Vim:
:args `git diff --name-only`
See
:help backtick-expansion
:args `rg -l linux`
:argdo %s/linux/windows/g
Tip: Use %s/linux/windows/gc to confirm each substitution.
Search in specific files.
:args `fd . -e h`
:argdo grepadd ext2_inode_info %
Or search in all buffers:
:bufdo grepadd ext2_inode_info %
:bufdo normal gggqG
:bufdo !clang-format -i %
:bufdo !black %
:help number in all windows in the tab.
:windo setlocal number
" You need to set errorformat=%f for this to work.
:cexpr system("fd . -t f ext2")
" Now we can find and replace string only in these files
:cfdo %s/ext2/ext32/g
:bufdo write %.backup
:bufdo if !filereadable(bufname()) | bdelete | endif
:help filetype (e.g. C++, Rust) and skip the rest.
:help terminal buffers.
Expect the output of every program to become the input to another, as yet unknown, program…
:help filter, :help formatprg, and :help equalprg
:help write, and :help read
How to do 90% of What Plugins Do: https://www.youtube.com/watch?v=XA2WjJbmmoM
Here’s an unformatted code snippet
if vim.o.loadplugins == true then
print("Loading plugins.") end
if vim.o.loadplugins == true then
print("Loading plugins.") end
Here’s how you would format the code and do more!
:'<,'>!lua-format
:.,+1!lua-format
:.,+1!lua-format | sed s/true/false/g | sed s/Loading/Unloading/g
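A range filter sends the selected lines to the command’s stdin and replaces them with its stdout. Here is a plain-shell sketch of the sed stages from the command above (lua-format is left out, since it may not be installed):

```shell
# The selected lines go to stdin; stdout replaces them in the buffer.
printf 'if vim.o.loadplugins == true then\nprint("Loading plugins.") end\n' \
  | sed -e 's/true/false/g' -e 's/Loading/Unloading/g'
# → if vim.o.loadplugins == false then
# → print("Unloading plugins.") end
```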
:help formatprg and :help equalprg
Here’s the same unformatted code snippet
if vim.o.loadplugins == true then
print("Loading plugins.") end
Format it with lua-format.
:setlocal formatprg=lua-format\ --no-use-tab
:normal! gqj
Another unformatted text:
{ "vimconf": { "live": true } }
" Replace the block with a child object.
:.,.!jq .vimconf
:normal u
" Format it with equalprg
:setlocal equalprg=jq
:normal ==
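What the .vimconf filter does to that line can be shown in plain shell; python3 stands in for jq here, in case jq is not installed:

```shell
# ":.,.!jq .vimconf" pipes the line through the filter and keeps the child object.
echo '{ "vimconf": { "live": true } }' \
  | python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin)["vimconf"]))'
# → {"live": true}
```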
:help read and :help write
" List files
:read !ls -l
See the output of your script:
def hello():
print("Hello world!")
hello()
" See the output
:'<,'>write !python
" Write the output to the same file.
:'<,'>write !python >> %
" Write the output to the clipboard
:'<,'>write !python | pbcopy
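The :write !{cmd} form sends the given range to the command’s stdin without touching the buffer. The same data flow in plain shell looks like this:

```shell
# The selected lines become the script that python runs.
printf 'def hello():\n    print("Hello world!")\n\nhello()\n' | python3
# → Hello world!
```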
:help mark-motions
m{a-zA-Z}, and ; to repeat :help f, :help F, :help t, or :help T motions.
xmap <silent> . :normal .<CR>
xnoremap @ :<C-u>call ExecuteMacroOverVisualRange()<CR>
function! ExecuteMacroOverVisualRange()
echo "@".getcmdline()
execute ":'<,'>normal @" . nr2char(getchar())
endfunction
:help g; and :help g,
:help skeleton
Create templates for your most-used boilerplate code.
// ~/.dotfiles/vim/skeleton/skeleton.h
#ifndef MY_HEADER_H
#define MY_HEADER_H
class MyResourceClass {
public:
MyResourceClass();
~MyResourceClass();
MyResourceClass(const MyResourceClass&);
MyResourceClass(MyResourceClass&&);
MyResourceClass& operator=(const MyResourceClass&);
MyResourceClass& operator=(MyResourceClass&&);
};
#endif
autocmd BufNewFile *Manager.h :0read ./demo/skeleton.h | %s/\(MY_HEADER_H\|MyResourceClass\)//n
Then simply do cgn to rename the include guard and the class name to something more
appropriate.
Or use a snippet engine!
When you are developing plugins, you can design them according to the Vim way of doing things.
See Unix as an IDE: https://blog.sanctum.geek.nz/series/unix-as-ide/
See How to do 90% of What Plugins Do: https://www.youtube.com/watch?v=XA2WjJbmmoM
:help terminal does not allow you to edit the output of your commands.
:help wincmd
:help window-resize
:help dgn
:help previewwindow
:help @=
:help firvish.txt
firvish.nvim is a plugin that provides a collection of functions/commands for manipulating
buffers (:help argument-list): :Rg, :Fd, :Ug, and your own tools in your configuration
file.
Link: https://github.com/furkanzmc/firvish.nvim
Link to the talk: https://youtu.be/rD2eyB9oMqQ
connect(watcher, &SystemThemeWatcher::themeChanged, theme, &Theme::updateTheme);
watcher is our sender. When it emits the themeChanged() signal, our updateTheme() slot will be
called, and as a result our application theme will reflect the system theme.
I won’t go into details of the signal slot system. The official documentation does an excellent job of it. What I want to focus on is the way that the signals are used. Let’s examine the meaning of a signal.
A signal is emitted when a particular event occurs.
This particular event is usually the change of an internal state of a class.
class Theme : public QObject
{
Q_OBJECT
public:
void setTheme(Type type)
{
if (type == Type::Dark && m_currentType != type) {
// Update theme
emit themeChanged();
}
else if (type == Type::Light && m_currentType != type) {
// Update theme
emit themeChanged();
}
}
signals:
void themeChanged();
};
// SomeOtherFile.cpp
void myFunc()
{
Theme::instance().setTheme(Theme::Type::Dark);
}
My internal state in Theme is whether I have the dark or light theme. When I call myFunc(), it
will access our theme singleton and ask it to load the dark theme. There’s a catch here: a signal
is a public member function of a class. That means I can also do the following:
void myBadFunc()
{
emit Theme::instance().themeChanged();
}
What happens if this is called? I probably have dozens of bindings to themeChanged(), and all those
bindings get re-evaluated even though the theme did not actually change. For some other class, this could be an expensive operation.
Since a signal conveys the message “My internal state changed.”, it only makes sense that this internal change can only be known by the class itself. However, since signals are public, there’s no way for us to stop the users of our class from emitting these signals.
Or is there?
I have not been able to find this feature in the official documentation. And it’s briefly mentioned in How Qt Signals and Slots Work - Part 2 - Qt5 New Syntax. But even though it’s undocumented, there’s indeed a way to create private signals. And I think this should be the default way of creating signals for all classes as the users of a class should not know about its internal state.
You can use QPrivateSignal as the last parameter of a signal to indicate that the signal is
supposed to be private. When moc processes your file, it reads the arguments, and if the last one is
QPrivateSignal, it generates a special version of the signal handler that always feeds in a
default-constructed QPrivateSignal for the class. QPrivateSignal is defined inside the Q_OBJECT
macro, so as long as you have Q_OBJECT you have access to private signals.
#define Q_OBJECT
public:
QT_WARNING_PUSH
Q_OBJECT_NO_OVERRIDE_WARNING
static const QMetaObject staticMetaObject;
virtual const QMetaObject *metaObject() const;
virtual void *qt_metacast(const char *);
virtual int qt_metacall(QMetaObject::Call, int, void **);
QT_TR_FUNCTIONS
private:
Q_OBJECT_NO_ATTRIBUTES_WARNING
Q_DECL_HIDDEN_STATIC_METACALL static void qt_static_metacall(
QObject *, QMetaObject::Call, int, void **);
QT_WARNING_POP
struct QPrivateSignal {}; // This is the private data that enables us to create private signals.
QT_ANNOTATE_CLASS(qt_qobject, "")
Here’s the updated version of the above example with the private signal.
class Theme : public QObject
{
Q_OBJECT
public:
void setTheme(Type type)
{
if (type == Type::Dark && m_currentType != type) {
// Update theme
emit themeChanged(QPrivateSignal{});
}
else if (type == Type::Light && m_currentType != type) {
// Update theme
emit themeChanged(QPrivateSignal{});
}
}
signals:
void themeChanged(QPrivateSignal);
};
// SomeOtherFile.cpp
void myFunc()
{
Theme::instance().setTheme(Theme::Type::Dark);
}
With this change, I will not be able to call themeChanged() outside of my Theme class.
void myBadFunc()
{
// ERROR! Cannot compile this.
emit Theme::instance().themeChanged();
}
Interestingly, this feature has been available since 2012. And here’s the commit that introduced this change.
Here’s the generated moc_Theme.cpp file when we are using the public signal:
void Theme::qt_static_metacall(QObject *_o, QMetaObject::Call _c,
int _id, void **_a) {
if (_c == QMetaObject::InvokeMetaMethod) {
auto *_t = static_cast<Theme *>(_o);
(void)_t;
switch (_id) {
case 1:
_t->themeChanged();
break;
}
}
}
And here’s the private one:
void Theme::qt_static_metacall(QObject *_o, QMetaObject::Call _c,
int _id, void **_a) {
if (_c == QMetaObject::InvokeMetaMethod) {
auto *_t = static_cast<Theme *>(_o);
(void)_t;
switch (_id) {
case 1:
_t->themeChanged(QPrivateSignal());
break;
}
}
}
It’s important to note that the visibility of the signal is not actually affected. The declared
signal is still a public member of Theme. But since the first parameter is a private member of
Theme, the user of Theme does not have access to it and cannot provide the correct arguments to
call themeChanged().
I’ve always hated using public signals, and having learned about this, I started using private signals in my projects. It makes things easier to manage because it truly enforces the meaning of a signal: An internal state change notification that only the class can know about.
The signals are always exposed to QML.
Theme {
onThemeChanged: {
}
}
When we make themeChanged() private, the signal will still be accessible from QML. Another
interesting bit is that the private data member in the signal will still be exposed to QML if the
signal has more than one parameter. If there’s only one parameter, which is QPrivateSignal, then it
won’t be exposed.
// Theme.h
signals:
void themeChanged(QPrivateSignal pr);
void themeChangedWithParams(int type, QPrivateSignal pr);
// main.qml
Theme {
onThemeChanged: (pr) => {
console.log("pr is", pr)
// Output: "pr is undefined"
}
onThemeChangedWithParams: (type, pr) => {
console.log("pr is", pr)
// Output: "pr is QVariant(Theme::QPrivateSignal, )"
}
// Since pr is not useful in QML, you should use this handler instead:
onThemeChangedWithParams: (type) => {
}
}