Product Feedback

  1. Add Git support for Visual Studio Team Services for Azure Databricks

    Most people on Azure are likely to also use VSTS as their Git repository. VSTS supports authorization tokens and Git operations just like Git itself, so it should be an easy addition, I would think.

    582 votes  ·  33 comments  ·  Notebooks
  2. TensorFlow 1.13.1 (stable) in ML 5.4

    TensorFlow 1.13.1 contains major bug fixes compared to 1.12.0; here are some highlights:
    * TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
    (This is extremely important for leveraging the full capabilities of V100 GPUs.)
    * Fixes a potential security vulnerability where carefully crafted GIF images can produce a null pointer dereference during decoding.
    * Improves performance of GPU cumsum/cumprod by up to 300x.

    There are loads more, but these three alone are enough to warrant an upgrade, not to mention the compatibility ops and renamings that make models less prone to breaking in TF 2.0.

    3 votes  ·  completed  ·  7 comments  ·  External libraries / applications
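
    A quick way to check what a given runtime currently ships is to print the installed TensorFlow version and whether the binary was built against CUDA. A minimal sketch in Python, assuming TensorFlow is importable on the driver:

      # Inspect the TensorFlow build bundled with the runtime.
      import tensorflow as tf

      print("TensorFlow version:", tf.__version__)             # e.g. 1.12.0 vs 1.13.1
      print("Built with CUDA:", tf.test.is_built_with_cuda())  # relevant for V100 / CUDA 10
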
  3. 1 vote  ·  0 comments  ·  Navigation UI
  4. 2 votes  ·  completed  ·  2 comments  ·  Cluster management
  5. Support Search and Replace within a Notebook across all Cells

    It will really aid productivity if we can search and replace across all cells within a notebook.

    49 votes  ·  8 comments  ·  Notebooks
  6. Cluster configuration templates

    We use a handful of different cluster configurations (e.g. 250 GB on-demand, 100 GB on-demand, 1 TB spot, etc.) to run different operations. In the Clusters tab there would be two areas: live clusters and cluster configurations. If I want to spin up a specific cluster that I've already configured, I can just click a create button for the configuration and it will spin up in the live cluster area.

    This feature will become more useful with job scheduling, so I can map each scheduled job to a cluster configuration so that the correct-size cluster is…

    9 votes  ·  5 comments  ·  Cluster management

    You can now re-start terminated clusters by clicking on the “play” button on the clusters page. That way, you can keep all your previous parameters and use previous cluster settings as templates. You can also clone a terminated cluster by clicking on the copy button on the clusters page.
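
    Until configuration templates are a first-class feature, another workaround is to keep cluster specs as JSON in version control and create clusters from them through the Clusters API. A rough sketch in Python; the workspace URL, token, runtime string, and node type are placeholders, not recommendations:

      # Create a cluster from a saved JSON "template" via the Clusters API.
      import requests

      HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
      TOKEN = "<personal-access-token>"                         # placeholder

      spec = {
          "cluster_name": "etl-250gb-ondemand",
          "spark_version": "5.5.x-scala2.11",   # illustrative runtime string
          "node_type_id": "r4.xlarge",          # illustrative node type
          "num_workers": 8,
      }

      resp = requests.post(f"{HOST}/api/2.0/clusters/create",
                           headers={"Authorization": f"Bearer {TOKEN}"},
                           json=spec)
      resp.raise_for_status()
      print("Created cluster:", resp.json()["cluster_id"])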

  7. Enable JDBC / ODBC

    Please enable server mode, which has industry-standard JDBC and ODBC connectivity options.

    11 votes  ·  completed  ·  1 comment  ·  Other
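
    With this marked completed, a cluster can be queried over ODBC from outside the workspace. A minimal sketch in Python, assuming the Simba Spark ODBC driver is installed and a DSN has been configured for the cluster per the driver documentation (the DSN name below is a placeholder):

      # Run a query against a Databricks cluster through a pre-configured ODBC DSN.
      import pyodbc

      conn = pyodbc.connect("DSN=Databricks", autocommit=True)  # DSN name is a placeholder
      cursor = conn.cursor()
      cursor.execute("SELECT current_date()")
      print(cursor.fetchone())
      conn.close()
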
  8. 1 vote  ·  1 comment  ·  Notebooks
  9. Collapsible Headings

    It would be great to have collapsible markdown headings, just like the "Collapsible Headings" notebook extension in Jupyter. Notebooks tend to become very large very quickly, and this feature would greatly help us.

    72 votes  ·  1 comment  ·  Notebooks
  10. Export Source includes command cell titles

    When a notebook is exported, the command cell titles are lost and cannot be imported.

    This includes exports via the built-in GitHub / Bitbucket integration.

    Perhaps the titles could be included within the MAGIC comments that precede each cell in the source.

    4 votes  ·  completed  ·  1 comment  ·  Notebooks
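
    For context, an exported notebook is a plain .py file in which cells are delimited by comment markers; the request is for cell titles to survive that round trip. A hypothetical sketch of what such an export could look like (the title marker shown is illustrative and may not match the exact marker Databricks emits):

      # Databricks notebook source

      # COMMAND ----------
      # The next line is an illustrative title marker; the exact marker may differ.
      # DBTITLE 1,Load raw events
      raw = spark.read.json("/mnt/raw/events")   # `spark` is the notebook-provided SparkSession

      # COMMAND ----------
      # MAGIC %md
      # MAGIC Clean and deduplicate the raw events.
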
  11. add cell numbering

    It would be easier to say to my colleagues "have a look at cell #15" rather than "scroll down and look for such-and-such title, after such-and-such chart".

    It's OK if the cell numbers change, even though I add and remove cells often. Cell numbering is helpful when I'm asking someone to review something, which lasts about 15 minutes most of the time. If I move the cells around, I'll just tell them which cell again; that's not really a big issue for me.

    3 votes  ·  0 comments  ·  Notebooks

    Cell/command numbers are now supported: under the File menu, enable "Show Command Numbers". You can also click on a command number, which changes the URL, so you can share links to particular cells/commands.

  12. Need to be able to see output in cell BEFORE it completes

    When I'm running Python or other code in a cell, it produces intermediate logs/outputs (e.g. when training a model, it shows how the accuracy is improving after each iteration), but these outputs are only shown AFTER the whole cell completes. If the cell takes 5 minutes to run, we have no clue what it's doing. In a Jupyter notebook, we see output right away. This is a must-have feature.

    3 votes  ·  completed  ·  2 comments  ·  Navigation UI
  13. Define AWS Role to be used on the cluster

    I would like to set up an AWS role on my cluster so that my access permissions to S3 are defined by roles rather than by access and secret keys; this way I don't need to have access/secret keys hardcoded in my notebook.

    3 votes  ·  1 comment  ·  Cluster management
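
    The difference this idea describes, sketched in PySpark (the bucket path and key names are placeholders; `sc` and `spark` are the notebook-provided SparkContext and SparkSession):

      # What the request wants to avoid: credentials hard-coded in the notebook.
      hadoop_conf = sc._jsc.hadoopConfiguration()
      hadoop_conf.set("fs.s3a.access.key", "<ACCESS_KEY>")   # hard-coded -- undesirable
      hadoop_conf.set("fs.s3a.secret.key", "<SECRET_KEY>")

      # With an IAM role / instance profile attached to the cluster, the same read
      # needs no credentials in the code at all:
      df = spark.read.json("s3a://my-bucket/events/")        # placeholder path
      df.show(5)
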
  14. Line Numbers

    Debugging in Databricks is plenty annoying as it is. Letting us see line numbers should be a simple change that would make debugging much less frustrating. (Sometimes I've ended up pasting the contents of a cell into a text editor to find the line producing the error!)

    40 votes  ·  2 comments  ·  Notebooks

    Both line numbers and command numbers are now available. Under the VIEW menu you can show/hide them. Your preferences will be remembered. Command numbers can be clicked on, which gives you a permalink to that particular command. Such permalinks are great for sharing and pointing collaborators to specific commands in notebooks.

  15. run all cells from this point down

    Sometimes you have logic in one cell that fails, but other cells below it rely on it. It would be nice if you could do "run all down", meaning it runs the selected cell plus all remaining cells in the notebook.

    This would be nice in cases where you don't want to re-run the whole notebook, just the part from the failed point on.

    5 votes  ·  0 comments  ·  Notebooks
  16. It would be great to have "search and replace" ability in DB notebooks

    3 votes  ·  0 comments  ·  Notebooks
  17. Keep common python packages up-to-date in default machine image

    Some of the common Python packages (e.g., scipy, numpy, and scikit-learn) that come with the Databricks default machine image are out of date. It would be great if Databricks could keep these commonly used Python packages up to date in the default machine image.

    11 votes  ·  0 comments  ·  External libraries / applications
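
    A quick way to see how far behind the bundled packages are is to print their versions and compare against the latest releases on PyPI. A small sketch in Python (the package list is illustrative):

      # Print versions of a few commonly used packages bundled with the runtime.
      import numpy
      import pandas
      import scipy
      import sklearn

      for name, module in [("numpy", numpy), ("scipy", scipy),
                           ("scikit-learn", sklearn), ("pandas", pandas)]:
          print(f"{name:>14}: {module.__version__}")
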
  18. Databricks should add Plotly support

    Plotly is increasingly becoming one of the most powerful visualization tools around. For my team it has replaced Tableau, Platfora, ggplot2, and Seaborn, surpassing them with superior visualization capabilities and its flexible and comprehensive chart-delivery vectors (e.g. iframes, PDF, etc.).

    11 votes  ·  0 comments  ·  Visualizations
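
    Even without first-class support, Plotly output can usually be rendered in a notebook by converting the figure to HTML and passing it to displayHTML, the Databricks notebook helper for raw HTML. A rough sketch, assuming the plotly package has been attached to the cluster:

      # Render a Plotly figure in a notebook cell via displayHTML.
      import plotly.graph_objs as go
      from plotly.offline import plot

      fig = go.Figure(data=[go.Scatter(x=[1, 2, 3], y=[2, 5, 3])])
      html = plot(fig, output_type="div")   # standalone HTML <div> for the figure
      displayHTML(html)                     # displayHTML is provided by the notebook environment
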
  19. Being able to edit the jar on a job and not have to re-add all of the jar params/class path

    Users should be able (like in Databricks 2.18 and prior) to edit a jar and upload a new one without having to remove it and re-add all of the parameters.

    4 votes  ·  completed  ·  0 comments  ·  Navigation UI
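
    A possible workaround is to update the job's settings programmatically rather than through the UI, so the parameters live in code and only the jar path changes. A rough sketch against the Jobs API; every value below (host, token, job id, jar path, class, parameters) is a placeholder, and note that jobs/reset replaces the job's full settings:

      # Point an existing jar job at a newly uploaded jar by resetting its settings.
      import requests

      HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder
      TOKEN = "<personal-access-token>"                         # placeholder

      new_settings = {
          "name": "nightly-etl",
          "existing_cluster_id": "<cluster-id>",
          "libraries": [{"jar": "dbfs:/jars/etl-assembly-1.4.2.jar"}],   # the new jar
          "spark_jar_task": {
              "main_class_name": "com.example.etl.Main",
              "parameters": ["--date", "2019-06-01"],   # re-stated here, since reset replaces everything
          },
      }

      resp = requests.post(f"{HOST}/api/2.0/jobs/reset",
                           headers={"Authorization": f"Bearer {TOKEN}"},
                           json={"job_id": 123, "new_settings": new_settings})   # placeholder job id
      resp.raise_for_status()
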
  20. Support for Bokeh and Plotly for Python visualization libraries

    I unsuccessfully tried many times to add these libraries for awesome visualizations and wasn't able to make them work.
    For the sake of great visualizations, these two libraries should be included by default, or users should at least be able to use them successfully in notebooks.

    26 votes  ·  5 comments  ·  External libraries / applications
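
    Bokeh can usually be embedded the same way as the Plotly sketch above: render the plot to standalone HTML and hand it to displayHTML. A minimal sketch, assuming the bokeh package has been attached to the cluster:

      # Render a Bokeh plot in a notebook cell via displayHTML.
      from bokeh.embed import file_html
      from bokeh.plotting import figure
      from bokeh.resources import CDN

      p = figure(title="demo")
      p.line([1, 2, 3, 4], [3, 1, 4, 2])
      displayHTML(file_html(p, CDN, "demo"))   # displayHTML is provided by the notebook environment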